<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>AI Working Papers (1971 - 1995)</title>
<link href="https://hdl.handle.net/1721.1/39813" rel="alternate"/>
<subtitle/>
<id>https://hdl.handle.net/1721.1/39813</id>
<updated>2026-04-08T08:56:24Z</updated>
<dc:date>2026-04-08T08:56:24Z</dc:date>
<entry>
<title>Dependency-Directed Backtracking in Non-Deterministic Scheme</title>
<link href="https://hdl.handle.net/1721.1/46712" rel="alternate"/>
<author>
<name>Zabih, Ramin</name>
</author>
<id>https://hdl.handle.net/1721.1/46712</id>
<updated>2019-04-11T00:37:04Z</updated>
<published>1988-08-01T00:00:00Z</published>
<summary type="text">Dependency-Directed Backtracking in Non-Deterministic Scheme
Zabih, Ramin
Non-deterministic LISP can be used to describe a search problem without specifying the method used to solve the problem. We show that SCHEMER, a non-deterministic dialect of SCHEME, can support dependency-directed backtracking as well as chronological backtracking. Full code for a working SCHEMER interpreter that provides dependency-directed backtracking is included.
This is a greatly revised version of a thesis submitted to the Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science on January 2, 1987, in partial fulfillment of the requirements for the degree of Master of Science.
</summary>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mini-Robot Group User's Guide Part 1: The 11/45 System</title>
<link href="https://hdl.handle.net/1721.1/41999" rel="alternate"/>
<author>
<name>Billmers, Meyer A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41999</id>
<updated>2019-04-11T02:59:01Z</updated>
<published>1978-06-01T00:00:00Z</published>
<summary type="text">Mini-Robot Group User's Guide Part 1: The 11/45 System
Billmers, Meyer A.
This USER'S GUIDE is in two parts. Part 1 describes the facilities of the mini-robot group 11/45 and the software available to persons using those facilities. It is intended for those writing their own programs to be run on the 11/45 system.
A.I. Laboratory Working Papers are produced for internal circulation, and may contain information that is, for example, too preliminary or too detailed for formal publication. Although some will be given a limited external distribution, it is not intended that they should be considered papers to which reference can be made in the literature.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Message Passing Instead of the GOTO Construct</title>
<link href="https://hdl.handle.net/1721.1/41998" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41998</id>
<updated>2019-04-10T20:32:31Z</updated>
<published>1978-04-01T00:00:00Z</published>
<summary type="text">Using Message Passing Instead of the GOTO Construct
Hewitt, Carl
This paper advocates a programming methodology using message passing. Efficient programs are derived for fast exponentiation, merging ordered sequences, and path existence determination in a directed graph. The problems have been proposed by John Reynolds as interesting ones to investigate because they illustrate significant issues in programming. The methodology advocated here is directed toward the production of programs that are intended to execute efficiently in a computing environment with many processors. The absence of the GOTO construct does not seem to be constricting in any respect in the development of efficient programs using the programming methodology advocated here.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</summary>
<dc:date>1978-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer Detection of Bent Fingers in Lead Bonding Frames</title>
<link href="https://hdl.handle.net/1721.1/41997" rel="alternate"/>
<author>
<name>Mitnick, Walter L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41997</id>
<updated>2019-04-10T21:08:46Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Computer Detection of Bent Fingers in Lead Bonding Frames
Mitnick, Walter L.
In the production of logic circuits in dual inline packages, various tedious assembly line tasks are performed by human operators using microscopes or television enlargements. One boring and difficult task is the detection of bent fingers in lead bonding frames to which integrated circuit chips are subsequently bonded. Bent fingers can cause stresses which may eventually lead to the failure of circuits. This paper discusses the inspection problem and presents a computerized bent finger detection method which could be adapted to free human operators from this task. More immediately, it presents a method of examining an object and determining whether or not it is in focus based solely on inspection of the object's digitized light intensity profiles.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Fundamental Eel Equations</title>
<link href="https://hdl.handle.net/1721.1/41996" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41996</id>
<updated>2019-04-12T09:45:01Z</updated>
<published>1975-12-01T00:00:00Z</published>
<summary type="text">The Fundamental Eel Equations
Horn, Berthold K.P.
Details of the kinematics, statics, and dynamics of a particularly simple form of locomotory system are developed to demonstrate the importance of understanding the behavior of the mechanical system interposed between the commands to the actuators and the generation of displacements in manipulation and locomotion systems, both natural and artificial.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Intersection Problem</title>
<link href="https://hdl.handle.net/1721.1/41995" rel="alternate"/>
<author>
<name>Fahlman, Scott E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41995</id>
<updated>2019-04-12T09:44:59Z</updated>
<published>1975-11-01T00:00:00Z</published>
<summary type="text">The Intersection Problem
Fahlman, Scott E.
This paper is intended as a supplement to AI MEMO 331, "A System for Representing and Using Real-World Knowledge". It is an attempt to redefine and clarify what I now believe the central theme of the research to be. Briefly, I will present the following points:&#13;
1. The operation of set-intersection, performed upon large pre-existing sets, plays a pivotal role in the processes of intelligence.&#13;
2. Von Neumann machines intersect large sets very slowly. Attempts to avoid or speed up these intersections have obscured and distorted the other, non-intersection AI problems.&#13;
3. The parallel hardware system described in the earlier memo can be viewed as a conceptual tool for thinking about a world in which set-intersection of this sort is cheap. It thus divides many AI problems by factoring out all elements that arise solely due to set-intersection.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>One System for Two Tasks: A Commonsense Algorithm Memory that Solves Problems and Comprehends Language</title>
<link href="https://hdl.handle.net/1721.1/41994" rel="alternate"/>
<author>
<name>Rieger, Chuck</name>
</author>
<id>https://hdl.handle.net/1721.1/41994</id>
<updated>2019-04-10T22:36:41Z</updated>
<published>1975-11-01T00:00:00Z</published>
<summary type="text">One System for Two Tasks: A Commonsense Algorithm Memory that Solves Problems and Comprehends Language
Rieger, Chuck
Plan synthesis and language comprehension, or more generally, the act of discovering how one perception relates to others, are two sides of the same coin, because they both rely on a knowledge of cause and effect - algorithmic knowledge about how to do things and how things work. I will describe a new theory of representation for commonsense algorithmic world knowledge, then show how this knowledge can be organized into larger memory structures, as it has been in a LISP implementation of the theory. The large-scale organization of the memory is based on structures called bypassable causal selection networks. A system of such networks serves to embed thousands of small commonsense algorithm patterns into a larger fabric which is directly usable by both a plan synthesizer and a language comprehender. Because these bypassable networks can adapt to context, so will the plan synthesizer and language comprehender. I will propose that the model is an approximation to the way humans organize and use algorithmic knowledge, and as such, that it suggests approaches not only to problem solving and language comprehension, but also to learning. I'll describe the commonsense algorithm representation, show how the system synthesizes plans using this knowledge, and trace through the process of language comprehension, illustrating how it threads its way through these algorithmic structures.
This is the edited text of the "Computers and Thought Lecture" delivered to the 4th International Conference on Artificial Intelligence, held in Tbilisi, Georgia, USSR, September 1975.&#13;
Work reported herein was conducted partly at the University of Maryland, under support of a University Research Board grant, and partly at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-75-c-0643.
</summary>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Solving The Findspace Problem, or How to Find Out Where Things Aren't ....</title>
<link href="https://hdl.handle.net/1721.1/41993" rel="alternate"/>
<author>
<name>Pfister, Gregory F.</name>
</author>
<id>https://hdl.handle.net/1721.1/41993</id>
<updated>2019-04-11T03:10:24Z</updated>
<published>1973-03-29T00:00:00Z</published>
<summary type="text">On Solving The Findspace Problem, or How to Find Out Where Things Aren't ....
Pfister, Gregory F.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0006.
</summary>
<dc:date>1973-03-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Garbage Collection in a Very Large Address Space</title>
<link href="https://hdl.handle.net/1721.1/41992" rel="alternate"/>
<author>
<name>Bishop, Peter B.</name>
</author>
<id>https://hdl.handle.net/1721.1/41992</id>
<updated>2019-04-11T03:10:23Z</updated>
<published>1975-09-01T00:00:00Z</published>
<summary type="text">Garbage Collection in a Very Large Address Space
Bishop, Peter B.
The address space is broken into areas that can be garbage collected separately. An area is analogous to a file on current systems. Each process has a local computation area for its stack and temporary storage that is roughly analogous to a job core image. A mechanism is introduced for maintaining lists of inter-area links, the key to separate garbage collection. This mechanism is designed to be placed in hardware and does not create much overhead. It could be used in a practical computer system that uses the same address space for all users for the life of the system. It is necessary for the hardware to implement a reference count scheme that is adequate for handling stack frames. The hardware also facilitates implementation of protection by capabilities without the use of unique codes. This is due to elimination of dangling references. Areas can be deleted without creating dangling references.
This research was done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology and was supported by the Office of Naval Research under contract number N00014-75-C-0522.
</summary>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assigning Hierarchical Descriptions to Visual Assemblies of Blocks with Occlusion</title>
<link href="https://hdl.handle.net/1721.1/41991" rel="alternate"/>
<author>
<name>Dunlavey, Michael R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41991</id>
<updated>2019-04-12T09:43:49Z</updated>
<published>1975-10-01T00:00:00Z</published>
<summary type="text">Assigning Hierarchical Descriptions to Visual Assemblies of Blocks with Occlusion
Dunlavey, Michael R.
This memo describes a program for parsing simple two-dimensional piles of blocks into plausible nested subassemblies. Each subassembly must be one of a few types known to the program, such as stack, tower, or arch. Each subassembly has the overall shape of a single block, allowing it to behave as part of another subassembly. Occlusion is represented by an area of the image plane whose contents cannot be seen. Heuristic aspects of the program are concerned with 1) ambiguity among competing subassemblies due to sloppiness of the placement of the blocks, 2) ambiguity due to uncertain measurements of blocks which are partially occluded, and 3) total ambiguity as to the contents of the occluded region.&#13;
Choice among competing subassemblies is accomplished by first making a topological description of the network of conflicts among subassemblies, then considering only the simplest competing subset. If this does not clearly indicate a winner, the system can make an in-depth comparison of the internal structures of the last two competing subassemblies.&#13;
Uncertainty as to measurements of blocks is handled by creation of a disjunction of more certain blocks, each of which participates in the parsing process. If this disjunction results in a pair of competing subassemblies, only one is used, the other being hidden as an alternate to the first, so that the choice of which will be accepted can be deferred. This is a deferrable choice because the alternate subassemblies are so closely similar that the parsing process does not depend on choosing one of them.&#13;
Uncertainty due to occlusion is handled by allowing a potential subassembly to use the occluded area as a "wild card", meaning that if the subassembly can be completed by creating a block which intersects the occluded area, it is so completed. Such an imaginary block may later be consolidated with a real one, or it may remain imaginary.&#13;
The reason for studying this problem is to become acquainted with the program and data structure needed to assign a nested structural description to a complicated visual assembly in which occlusion makes the data incomplete. The extension to 3-dimensional descriptions should be straightforward.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1975-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Data Flow Computation to the Shaded Image Problem</title>
<link href="https://hdl.handle.net/1721.1/41990" rel="alternate"/>
<author>
<name>Strat, Thomas M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41990</id>
<updated>2019-04-12T09:43:49Z</updated>
<published>1978-05-01T00:00:00Z</published>
<summary type="text">Application of Data Flow Computation to the Shaded Image Problem
Strat, Thomas M.
This paper presents a method of producing shaded images of terrain at an extremely fast rate by exploiting parallelism. The architecture of the Data Flow Computer is explained along with an appropriate "program" to compute the images. It is shown how shaded images of terrain can be computed in less than one-tenth of a second using a moderate-sized Data Flow Computer.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What is Delaying the Manipulator Revolution?</title>
<link href="https://hdl.handle.net/1721.1/41989" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41989</id>
<updated>2019-04-11T03:10:23Z</updated>
<published>1978-02-01T00:00:00Z</published>
<summary type="text">What is Delaying the Manipulator Revolution?
Horn, Berthold K.P.
Despite two decades of work on mechanical manipulators and their associated controls, we do not see widespread application of these devices to many of the tasks to which they seem so obviously suited. Somehow, a variety of interacting causes has conspired to prevent them from fulfilling their much talked about potential. In part, this appears to be the result of a research effort that was too small, too fragmented, and too discontinuous in time.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under ONR contract N00014-77-C-0389.
</summary>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hierarchy in Knowledge Representations</title>
<link href="https://hdl.handle.net/1721.1/41988" rel="alternate"/>
<author>
<name>Doyle, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/41988</id>
<updated>2019-04-12T09:45:00Z</updated>
<published>1977-11-01T00:00:00Z</published>
<summary type="text">Hierarchy in Knowledge Representations
Doyle, Jon
This paper discusses a number of problems faced in communicating expertise and common sense to a computer, and the approaches taken by several current knowledge representation languages towards solving these problems. The main topic discussed is hierarchy. The importance of hierarchy is almost universally recognized. Hierarchy forms the backbone of many existing representation languages. We discuss several technical problems raised in constructing hierarchical and almost hierarchical systems as criteria and open problems.
This research was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643.
</summary>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of a Three Degree of Freedom Kinematic Chain</title>
<link href="https://hdl.handle.net/1721.1/41987" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41987</id>
<updated>2019-04-09T16:00:46Z</updated>
<published>1977-10-01T00:00:00Z</published>
<summary type="text">Dynamics of a Three Degree of Freedom Kinematic Chain
Horn, Berthold K.P.
In order to be able to design a control system for high-speed control of mechanical manipulators, it is necessary to understand properly their dynamics. Here we present an analysis of a detailed model of a three-link device which may be viewed as either a "leg" in a locomotory system, or the first three degrees of freedom of an "arm" providing for its gross motions. The equations of motion are shown to be non-trivial, yet manageable.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wumpus Protocol Analysis</title>
<link href="https://hdl.handle.net/1721.1/41986" rel="alternate"/>
<author>
<name>White, Barbara Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/41986</id>
<updated>2019-04-12T09:44:58Z</updated>
<published>1977-08-01T00:00:00Z</published>
<summary type="text">Wumpus Protocol Analysis
White, Barbara Y.
The goal of this research was to assist in the creation of a new, improved Wumpus advisor by taking protocols of ten people learning to play Wumpus with a human coach. It was hoped that by observing these subjects learn Wumpus from a human coach, insights would be gained into how the computer coach could be modified or extended. In particular, attention was paid to the representations subjects used, the goals they pursued, and the problems they had, as well as to the teaching methods used by the human versus the computer coach.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vision Review</title>
<link href="https://hdl.handle.net/1721.1/41985" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41985</id>
<updated>2019-04-12T09:43:49Z</updated>
<published>1978-05-01T00:00:00Z</published>
<summary type="text">Vision Review
Horn, Berthold K.P.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rational Arithmetic For Mini-Computers</title>
<link href="https://hdl.handle.net/1721.1/41984" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41984</id>
<updated>2019-04-12T09:44:00Z</updated>
<published>1977-09-01T00:00:00Z</published>
<summary type="text">Rational Arithmetic For Mini-Computers
Horn, Berthold K.P.
A representation for numbers using two computer words is discussed, where the value represented is the ratio of the corresponding integers. This allows for better dynamic range and relative accuracy than single-precision fixed point, yet is less costly than floating point arithmetic. The scheme is easy to implement and particularly well suited for mini-computer applications that call for a great deal of numerical computation. The techniques described have been used to implement a mathematical function subroutine package for a mini-computer as well as a number of applications programs in the machine vision and machine manipulation area.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AMORD: A Deductive Procedure System</title>
<link href="https://hdl.handle.net/1721.1/41983" rel="alternate"/>
<author>
<name>Sussman, Gerald Jay</name>
</author>
<author>
<name>Steele, Guy L. Jr.</name>
</author>
<author>
<name>Rich, Charles</name>
</author>
<author>
<name>Doyle, Jon</name>
</author>
<author>
<name>de Kleer, Johan</name>
</author>
<id>https://hdl.handle.net/1721.1/41983</id>
<updated>2019-04-11T03:39:30Z</updated>
<published>1977-08-01T00:00:00Z</published>
<summary type="text">AMORD: A Deductive Procedure System
Sussman, Gerald Jay; Steele, Guy L. Jr.; Rich, Charles; Doyle, Jon; de Kleer, Johan
We have implemented an interpreter for a rule-based system, AMORD, based on a non-chronological control structure and a system of automatically maintained data-dependencies. The purpose of this paper is tutorial. We wish to illustrate:&#13;
(1) The discipline of explicit control and dependencies,&#13;
(2) How to use AMORD, and&#13;
(3) One way to implement the mechanisms provided by AMORD.&#13;
This paper is organized into sections. The first section is a short "reference manual" describing the major features of AMORD. Next, we present some examples which illustrate the style of expression encouraged by AMORD. This style makes control information explicit in a rule-manipulable form, and depends on an understanding of the use of non-chronological justifications for program beliefs as a means for determining the current set of beliefs. The third section is a brief description of the Truth Maintenance System employed by AMORD for maintaining these justifications and program beliefs. The fourth section presents a completely annotated interpreter for AMORD, written in SCHEME.
This research was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643.
</summary>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Method, Based on Plans, for Understanding How a Loop Implements a Computation</title>
<link href="https://hdl.handle.net/1721.1/41982" rel="alternate"/>
<author>
<name>Waters, Richard C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41982</id>
<updated>2019-04-12T09:44:58Z</updated>
<published>1977-07-01T00:00:00Z</published>
<summary type="text">A Method, Based on Plans, for Understanding How a Loop Implements a Computation
Waters, Richard C.
The plan method analyzes the structure of a program. The plan which results from applying the method represents this structure by specifying how the parts of the program interact. This paper demonstrates the utility of the plan method by showing how a plan for a loop can be used to help prove the correctness of a loop. The plan does this by providing a convenient description of what the loop does. This paper also shows how a plan for a loop can be developed based on the code for the loop without the assistance of any commentary. This is possible primarily because most loops are built up in stereotyped ways according to a few fundamental plan types. An experiment is presented which supports the claim that a small number of plan types cover a large percentage of actual cases.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A History Keeping Debugging System for PLASMA</title>
<link href="https://hdl.handle.net/1721.1/41981" rel="alternate"/>
<author>
<name>Morrison, Jerry Howard</name>
</author>
<id>https://hdl.handle.net/1721.1/41981</id>
<updated>2019-04-12T09:44:00Z</updated>
<published>1977-05-01T00:00:00Z</published>
<summary type="text">A History Keeping Debugging System for PLASMA
Morrison, Jerry Howard
PLASMA (for PLAnner-like System Modeled on Actors) is a message-passing computer language based on actor semantics. Since every event in the system is the receipt of a message actor by a target actor, a complete history of a computation can be kept by recording these events. The facility to search through and examine such a history, combined with the facility to pre-set breakpoints or stopping points, and the ability to restore side effects, provides a powerful way to debug programs written in PLASMA. The kinds of history-manipulation and breakpoint setting commands needed, and the ways they can be used, particularly on recursive programs without side effects, are presented.
Artificial Intelligence Laboratory Massachusetts Institute of Technology Working papers are informal papers intended for internal use. This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided by the Office of Naval Research under contract N00014-75-C-0522.
</summary>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extracting topographic features from elevation data using contour lines</title>
<link href="https://hdl.handle.net/1721.1/41980" rel="alternate"/>
<author>
<name>Bruss, Anna R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41980</id>
<updated>2019-04-12T09:45:00Z</updated>
<published>1977-05-01T00:00:00Z</published>
<summary type="text">Extracting topographic features from elevation data using contour lines
Bruss, Anna R.
This paper describes a method for finding such topographical features as ridges and valleys in a given terrain. Contour lines are used to obtain the desired result.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Computational Theory of Animation</title>
<link href="https://hdl.handle.net/1721.1/41979" rel="alternate"/>
<author>
<name>Kahn, Kenneth M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41979</id>
<updated>2019-04-11T03:46:10Z</updated>
<published>1977-04-01T00:00:00Z</published>
<summary type="text">A Computational Theory of Animation
Kahn, Kenneth M.
A system is proposed capable of generating narrative computer animation in response to a simple script. The major problem addressed is how to imbed into the system some of the knowledge that animators use when creating animation. Infinitely many animated films can fulfill a single script. The system is faced with the problem of how to make a good one by making decisions in very under-constrained situations. This paper is a total revision of AI Working Paper 119.
The author of this work is supported by an IBM Fellowship. The research described herein is being conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program.
</summary>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Semantic Component of PAL: The Personal Assistant Language Understanding Program</title>
<link href="https://hdl.handle.net/1721.1/41978" rel="alternate"/>
<author>
<name>Bullwinkle, Candace</name>
</author>
<id>https://hdl.handle.net/1721.1/41978</id>
<updated>2019-04-11T04:02:56Z</updated>
<published>1977-03-01T00:00:00Z</published>
<summary type="text">The Semantic Component of PAL: The Personal Assistant Language Understanding Program
Bullwinkle, Candace
This paper summarizes the design and implementation of the "semantics" module of a natural language understanding system for the personal assistant domain. This module includes mappings to deep frames, noun phrase referencing and discourse analysis.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-75-C-0643.
</summary>
<dc:date>1977-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Birthday Party Frame System</title>
<link href="https://hdl.handle.net/1721.1/41977" rel="alternate"/>
<author>
<name>Clemenson, Gregory D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41977</id>
<updated>2019-04-12T09:43:52Z</updated>
<published>1977-02-01T00:00:00Z</published>
<summary type="text">A Birthday Party Frame System
Clemenson, Gregory D.
This paper is an experimental investigation of the utility of the MIT-AI frames system. Using this system, a birthday party planning system was written, representing the basic decisions that comprise such a plan as frames. The planning problem is presented to the user in a way that conforms to his natural planning procedures. The system is able to check the consistency of the plan parts, produces a completed plan for the party, and can supply the user with some valuable summaries, such as a shopping list.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>List Processing in Real Time on a Serial Computer</title>
<link href="https://hdl.handle.net/1721.1/41976" rel="alternate"/>
<author>
<name>Baker, Henry G. Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/41976</id>
<updated>2019-04-09T16:44:09Z</updated>
<published>1977-04-01T00:00:00Z</published>
<summary type="text">List Processing in Real Time on a Serial Computer
Baker, Henry G. Jr.
A real-time list processing system is one in which the time required by each elementary list operation (CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical list processing systems such as LISP do not have this property because a call to CONS may invoke the garbage collector which requires time proportional to the number of accessible cells to finish. The space requirement of a classical LISP system with N accessible cells under equilibrium conditions is (1.5+μ)N or (1+μ)N, depending upon whether a stack is required for the garbage collector, where μ&gt;0 is typically less than 2.&#13;
A list processing system is presented which:&#13;
1) is real-time--i.e. T(CONS) is bounded by a constant independent of the number of cells in use;&#13;
2) requires space (2+2μ)N, i.e. not more than twice that of a classical system;&#13;
3) runs on a serial computer without a time-sharing clock;&#13;
4) handles directed cycles in the data structures;&#13;
5) is fast--the average time for each operation is about the same as with normal garbage collection;&#13;
6) compacts--minimizes the working set;&#13;
7) keeps the free pool in one contiguous block--objects of nonuniform size pose no problem;&#13;
8) uses one phase incremental collection--no separate mark, sweep, relocate phases;&#13;
9) requires no garbage collector stack;&#13;
10) requires no "mark bits", per se;&#13;
11) is simple--suitable for microcoded implementation.&#13;
Extensions of the system to handle a user program stack, compact list representation ("CDR-coding"), arrays of non-uniform size, and hash linking are discussed. CDR-coding is shown to reduce memory requirements for N LISP cells to ≈(1+μ)N. Our system is also compared with another approach to the real-time storage management problem, reference counting, and reference counting is shown to be neither competitive with our system when speed of allocation is critical, nor compatible, in the sense that a system with both forms of garbage collection is worse than our pure one.
Key Words and Phrases: real-time, compacting, garbage collection, list processing, virtual memory, file or database management, storage management, storage allocation, LISP, CDR-coding, reference counting.&#13;
CR Categories: 3.50, 3.60, 3.73, 3.80, 4.13, 4.32, 4.33, 4.35, 4.49&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</summary>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shallow Binding in LISP 1.5</title>
<link href="https://hdl.handle.net/1721.1/41975" rel="alternate"/>
<author>
<name>Baker, Henry G. Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/41975</id>
<updated>2019-04-12T07:43:57Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Shallow Binding in LISP 1.5
Baker, Henry G. Jr.
Shallow binding is a scheme which allows the value of a variable to be accessed in a bounded amount of computation. An elegant model for shallow binding in LISP 1.5 is presented in which context-switching is an environment structure transformation called "re-rooting". Re-rooting is completely general and reversible, and is optional in the sense that a LISP 1.5 interpreter will operate correctly whether or not re-rooting is invoked on every context change. Since re-rooting leaves (ASSOC X A) invariant, for all variables X and all environments A, the programmer can have access to a re-rooting primitive, (SHALLOW), which gives him dynamic control over whether accesses are shallow or deep, and which affects only the speed of execution of a program, not its semantics. So long as re-rooting is an indivisible operation, multiple processes can be active in the same environment structure. The re-rooting scheme is compared to a cache scheme for shallow binding and the two are found to be compatible. Finally, the concept of re-rooting is shown not to depend upon LISP's choice of dynamic instead of lexical binding for free variables; hence it can be used in an Algol interpreter, for example.
Key Words and Phrases: LISP 1.5, environment structures, FUNARGs, shallow and deep binding, multiprogramming, cache.&#13;
CR Categories: 4.13, 4.22, 4.32&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cryptology and Data Communications</title>
<link href="https://hdl.handle.net/1721.1/41974" rel="alternate"/>
<author>
<name>Waters, Richard C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41974</id>
<updated>2019-04-11T04:02:56Z</updated>
<published>1976-12-01T00:00:00Z</published>
<summary type="text">Cryptology and Data Communications
Waters, Richard C.
This paper is divided into two parts. The first part deals with cryptosystems and cryptanalysis. It surveys the basic information about cryptosystems and then addresses two specific questions. Are cryptosystems such as LUCIFER, which are based on the ideas of Feistel and Shannon, secure for all practical purposes? Is the proposed NBS standard cryptosystem secure for all practical purposes? This paper argues that the answer to the first question is "they might well be" and that the answer to the second is "no."&#13;
The second part of this paper considers how a cryptosystem can be used to provide security of data transmission in a computer environment. It discusses the two basic aspects of security: secrecy and authentication. It then describes and discusses a specific proposal by Kent of a set of protocols designed to provide security through encryption. Finally, an alternate proposal is given in order to explore some of the other design choices which could have been made.
Research reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-0643.
</summary>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laws for Communicating Parallel Processes</title>
<link href="https://hdl.handle.net/1721.1/41973" rel="alternate"/>
<author>
<name>Baker, Henry</name>
</author>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41973</id>
<updated>2019-04-11T03:10:31Z</updated>
<published>1976-11-01T00:00:00Z</published>
<summary type="text">Laws for Communicating Parallel Processes
Baker, Henry; Hewitt, Carl
This paper presents some "laws" that must be satisfied by computations involving communicating parallel processes. The laws take the form of stating restrictions on the histories of computations that are physically realizable. The laws are intended to characterize aspects of parallel computations that are independent of the number of physical processors that are used in the computation.
DRAFT COPY ONLY&#13;
Working Papers are informal papers intended for internal use. This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the Office of Naval Research of the Department of Defense under contract N00014-75-C-0522.
</summary>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolving Parallel Programs</title>
<link href="https://hdl.handle.net/1721.1/41972" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41972</id>
<updated>2019-04-12T09:44:58Z</updated>
<published>1979-05-01T00:00:00Z</published>
<summary type="text">Evolving Parallel Programs
Hewitt, Carl
Message passing is directed toward the production of programs that are intended to execute efficiently in a computing environment with a large number of processors. The paradigm attempts to address the computational issues of state change and communication directly with appropriate primitives. Efficient programs are evolved for fast factorial and path existence determination in a directed graph.&#13;
This paper is a contribution to the continuing debate on programming methodology. It advocates that simple initial implementations of programs should be constructed and then the implementations should be evolved to meet their partial specifications where it is anticipated that the partial specifications will themselves evolve with time.&#13;
The programming methodology used in this paper is intended for use with an actor machine which consists of a large number of processors connected by a high bandwidth network. We evolve implementations for factorial and for the path existence problem that execute in the logarithm of the amount of time required on a conventional machine. The implementation (with no redundant exploration) of the path existence problem evolved in this paper is more efficient than any implementation that can be programmed in a dialect of pure LISP that allows the arguments to a function to be evaluated in parallel. This is evidence that applicative programming in languages like pure LISP is less efficient in some practical applications. The efficiency of such applicative languages is important because many computer scientists are proposing to use them on future generation parallel machines whose architectures exploit ultra large scale integration.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</summary>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Position of the Sun</title>
<link href="https://hdl.handle.net/1721.1/41971" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41971</id>
<updated>2019-04-10T21:08:24Z</updated>
<published>1978-03-01T00:00:00Z</published>
<summary type="text">The Position of the Sun
Horn, Berthold K.P.
The appearance of a surface depends dramatically on how it is illuminated. To interpret satellite and aerial imagery properly, it is necessary to know the position of the sun in the sky. This is particularly important if this interpretation is to be done in an automated fashion. Techniques using relatively straightforward methods are presented here for calculating the position of the sun with more than enough accuracy.&#13;
Caution: Do not use this technique for navigational purposes. Correction terms have been omitted; as a result, the ephemeris data calculated may be in error by about one minute of arc, an amount which is of no significance for the application of this data in image analysis.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-75-C-0643.
</summary>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reporter: An Intelligent Noticer</title>
<link href="https://hdl.handle.net/1721.1/41970" rel="alternate"/>
<author>
<name>Rosenberg, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/41970</id>
<updated>2019-04-12T09:44:01Z</updated>
<published>1977-11-15T00:00:00Z</published>
<summary type="text">Reporter: An Intelligent Noticer
Rosenberg, Steven
Some researchers, notably Schank and Abelson (1975), have argued for the existence of large numbers of scripts as a representation for complex events. This paper adopts a different viewpoint. I consider complex events to have no fixed definition. Instead they are defined by a set of target components. At any given time an arbitrarily complex description which contains the target components can be generated from semantic memory. This description provides evidence for a complex event containing the target components. It can be as complex or as simple as the task demands.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. It was supported in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-11-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Incremental Garbage Collection of Processes</title>
<link href="https://hdl.handle.net/1721.1/41969" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<author>
<name>Baker, Henry G. Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/41969</id>
<updated>2019-04-12T09:43:50Z</updated>
<published>1977-06-01T00:00:00Z</published>
<summary type="text">The Incremental Garbage Collection of Processes
Hewitt, Carl; Baker, Henry G. Jr.
This paper investigates some problems associated with an argument evaluation order that we call "future" order, which is different from both call-by-name and call-by-value. In call-by-future, each formal parameter of a function is bound to a separate process (called a "future") dedicated to the evaluation of the corresponding argument. This mechanism allows the fully parallel evaluation of arguments to a function, and has been shown to augment the expressive power of a language.&#13;
We discuss an approach to a problem that arises in this context: futures which were thought to be relevant when they were created become irrelevant through being ignored in the body of the expression where they were bound. The problem of irrelevant processes also appears in multiprocessing problem-solving systems which start several processors working on the same problem but with different methods, and return with the solution which finishes first. This parallel method strategy has the drawback that the processes which are investigating the losing methods must be identified, stopped, and re-assigned to more useful tasks. &#13;
The solution we propose is that of garbage collection. We propose that the goal structure of the solution plan be explicitly represented in memory as part of the graph memory (like Lisp's heap) so that a garbage collection algorithm can discover which processes are performing useful work, and which can be recycled for a new task. &#13;
An incremental algorithm for the unified garbage collection of storage and processes is described.
Key Words and Phrases: garbage collection, multiprocessing systems, processor scheduling, "lazy" evaluation, "eager" evaluation.&#13;
CR Categories: 3.60, 3.80, 4.13, 4.22, 4.32.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.&#13;
This paper was presented at the AI*PL Conference at Rochester, N.Y. in August, 1977.
</summary>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plan Verification in a Programmer's Apprentice</title>
<link href="https://hdl.handle.net/1721.1/41968" rel="alternate"/>
<author>
<name>Shrobe, Howard Elliot</name>
</author>
<id>https://hdl.handle.net/1721.1/41968</id>
<updated>2019-04-11T04:02:58Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Plan Verification in a Programmer's Apprentice
Shrobe, Howard Elliot
Brief Statement of the Problem:&#13;
An interactive programming environment called the Programmer's Apprentice is described. Intended for use by the expert programmer in the process of program design and maintenance, the apprentice will be capable of understanding, explaining and reasoning about the behavior of real-world LISP programs with side effects on complex data-structures. We view programs as engineered devices whose analysis must be carried out at many levels of abstraction. This leads to a set of logical dependencies between modules which explains how and why modules interact to achieve an overall intention. Such a network of dependencies is a teleological structure which we call a plan; the process of elucidating such a plan structure and showing that it is coherent and that it achieves its overall intended behavior we call plan verification.&#13;
This approach to program verification is sharply contrasted with the traditional Floyd-Hoare systems which overly restrict themselves to surface features of the programming language. More similar in philosophy is the evolving methodology of languages like CLU or ALPHARD which stress conceptual layering.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under the Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plan Recognition in a Programmer's Apprentice</title>
<link href="https://hdl.handle.net/1721.1/41967" rel="alternate"/>
<author>
<name>Rich, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/41967</id>
<updated>2019-04-11T04:02:55Z</updated>
<published>1977-05-01T00:00:00Z</published>
<summary type="text">Plan Recognition in a Programmer's Apprentice
Rich, Charles
Brief Statement of the Problem: &#13;
Stated most generally, the proposed research is concerned with understanding and representing the teleological structure of engineered devices. More specifically, I propose to study the teleological structure of computer programs written in LISP which perform a wide range of non-numerical computations. The major theoretical goal of the research is to further develop a formal representation for teleological structure, called plans, which will facilitate both the abstract description of particular programs, and the compilation of a library of programming expertise in the domain of non-numerical computation. Adequacy of the theory will be demonstrated by implementing a system (to eventually become part of a LISP Programmer's Apprentice) which will be able to recognize various plans in LISP programs written by human programmers and thereby generate cogent explanations of how the programs work, including the detection of some programming errors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under the Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Theory of Plans for Electronic Circuits</title>
<link href="https://hdl.handle.net/1721.1/41966" rel="alternate"/>
<author>
<name>de Kleer, Johan</name>
</author>
<id>https://hdl.handle.net/1721.1/41966</id>
<updated>2019-04-09T18:28:07Z</updated>
<published>1977-04-01T00:00:00Z</published>
<summary type="text">A Theory of Plans for Electronic Circuits
de Kleer, Johan
A plan for a device assigns purposes to each of the more primitive components and explains how these components interact to achieve the desired behavior of the composite device. Such an information structure is critically important in analyzing, designing or troubleshooting devices. The first goal of this research is to develop a theory of plans for electronic circuits which can be used for these purposes. The second goal is the construction of a system which can automatically recognize a plan for a circuit from a geometrical representation of the circuit's schematic diagram.&#13;
Recognition is a process which recaptures the plan the designer originally had in mind. A theory of schemata will be introduced in which recognition is viewed as the identification of an instance of a schema in the library with the particular circuit being recognized. This process is guided by topological and geometric evidence extracted from the circuit schematic. Causal reasoning, using the technique of propagation of constraints, provides further evidence. One important use of causal reasoning is the confirmation of tentative instantiations based on topological and geometric evidence alone.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Sentences to Case Frames</title>
<link href="https://hdl.handle.net/1721.1/41965" rel="alternate"/>
<author>
<name>Levin, Beth</name>
</author>
<id>https://hdl.handle.net/1721.1/41965</id>
<updated>2019-04-11T04:02:57Z</updated>
<published>1977-03-01T00:00:00Z</published>
<summary type="text">Mapping Sentences to Case Frames
Levin, Beth
This paper describes a range of phenomena that a case frame system should be able to handle and proposes generalizations to capture this behavior which are formulated as a set of production-like rules. These rules allow the possible surface orders of cases found in English declarative sentences to be generated from a case frame. This is important for the implementation of a case frame builder described here, which requires the ability to determine what cases in a case frame can appear in a grammatical role. The appendix contains a detailed survey of some English verbs which illustrate the types of mapping found in English.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-75-C-0643.
</summary>
<dc:date>1977-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Note on the Optimal Allocation of Spaces in MACLISP</title>
<link href="https://hdl.handle.net/1721.1/41964" rel="alternate"/>
<author>
<name>Baker, Henry G. Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/41964</id>
<updated>2019-04-12T09:43:51Z</updated>
<published>1977-03-16T00:00:00Z</published>
<summary type="text">A Note on the Optimal Allocation of Spaces in MACLISP
Baker, Henry G. Jr.
This note describes a method for allocating storage among the various spaces in the MACLISP Implementation of LISP. The optimal strategy which minimizes garbage collector effort allocates free storage among the various spaces in such a way that they all run out at the same time. In an equilibrium situation, this corresponds to allocating free storage to the spaces in proportion to their usage. &#13;
Methods are investigated by which the rates of usage can be inferred, and a gc-daemon interrupt handler is developed which implements an approximately optimal strategy in MACLISP. Finally, the sensitivity of this method to rapidly varying differential rates of cell usage is discussed.
Key Words and Phrases: garbage collection, list processing, virtual memory, storage management, storage allocation, LISP.&#13;
CR Categories: 3.50, 3.60, 3.73, 3.80, 4.13, 4.22, 4.32, 4.33, 4.35, 4.49&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</summary>
<dc:date>1977-03-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>PSUDOC - A Simple Diagnostic Program</title>
<link href="https://hdl.handle.net/1721.1/41963" rel="alternate"/>
<author>
<name>Lozano-Perez, Tomas</name>
</author>
<id>https://hdl.handle.net/1721.1/41963</id>
<updated>2019-04-11T03:10:32Z</updated>
<published>1976-12-01T00:00:00Z</published>
<summary type="text">PSUDOC - A Simple Diagnostic Program
Lozano-Perez, Tomas
This paper describes PSUDOC, a very simple LISP program to carry out some medical diagnosis tasks. The program's domain is a subset of clinical medicine characterized by patients presenting with edema and/or hematuria. The program's goal is to go from the presenting symptoms to a hypothesis of the underlying disease state. The program uses a variation of simple tree searching strategies called ETS.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laws for Communicating Parallel Processes</title>
<link href="https://hdl.handle.net/1721.1/41962" rel="alternate"/>
<author>
<name>Baker, Henry</name>
</author>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41962</id>
<updated>2019-04-11T03:10:32Z</updated>
<published>1977-05-10T00:00:00Z</published>
<summary type="text">Laws for Communicating Parallel Processes
Baker, Henry; Hewitt, Carl
This paper presents some laws that must be satisfied by computations involving communicating parallel processes. The laws are stated in the context of the actor theory, a model for distributed parallel computation, and take the form of stating plausible restrictions on the histories of parallel computations to make them physically realizable. The laws are justified by appeal to physical intuition and are to be regarded as falsifiable assertions about the kinds of computations that occur in nature rather than as proven theorems in mathematics. The laws are used to analyze the mechanisms by which multiple processes can communicate to work effectively together to solve difficult problems.&#13;
Since the causal relations among the events in a parallel computation do not specify a total order on events, the actor model generalizes the notion of computation from a sequence of states to a partial order of events. The interpretation of unordered events in this partial order is that they proceed concurrently. The utility of partial orders is demonstrated by using them to express our laws for distributed computation.
Key Words and Phrases: parallel processes, parallel or asynchronous computations, partial orders of events, Actor theory.&#13;
CR Categories: 5.21, 5.24, 5.26.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</summary>
<dc:date>1977-05-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Use of Dependency Relationships in the Control of Reasoning</title>
<link href="https://hdl.handle.net/1721.1/41961" rel="alternate"/>
<author>
<name>Doyle, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/41961</id>
<updated>2019-04-12T09:43:52Z</updated>
<published>1976-11-01T00:00:00Z</published>
<summary type="text">The Use of Dependency Relationships in the Control of Reasoning
Doyle, Jon
Several recent problem-solving programs have indicated improved methods for controlling program actions. Some of these methods operate by analyzing the time-independent antecedent-consequent dependency relationships between the components of knowledge about the problem for solution. This paper is a revised version of a thesis proposal which indicates how a general system of automatically maintained dependency relationships can be used to effect many forms of control on reasoning in an antecedent reasoning framework.
Research reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-0643.
</summary>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reasoning By Analogy: A Progress Report</title>
<link href="https://hdl.handle.net/1721.1/41960" rel="alternate"/>
<author>
<name>Brown, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/41960</id>
<updated>2019-04-11T03:10:31Z</updated>
<published>1976-10-01T00:00:00Z</published>
<summary type="text">Reasoning By Analogy: A Progress Report
Brown, Richard
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643. The views expressed are necessarily (and perhaps only) those of the author.
</summary>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Computational Theory to Psychology and Neurophysiology -- a case study from vision</title>
<link href="https://hdl.handle.net/1721.1/41959" rel="alternate"/>
<author>
<name>Marr, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41959</id>
<updated>2019-04-09T18:44:29Z</updated>
<published>1976-08-01T00:00:00Z</published>
<summary type="text">From Computational Theory to Psychology and Neurophysiology -- a case study from vision
Marr, David
The CNS needs to be understood at four nearly independent levels of description: (1) that at which the nature of a computation is expressed; (2) that at which the algorithms that implement a computation are characterised; (3) that at which an algorithm is committed to particular mechanisms; and (4) that at which the mechanisms are realised in hardware. In general, the nature of a computation is determined by the problem to be solved, the mechanisms that are used depend upon the available hardware, and the particular algorithms chosen depend on the problem and on the available mechanisms. Examples are given of theories at each level from current research in vision, and a brief review of the immediate prospects for the field is given.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discourse Structure</title>
<link href="https://hdl.handle.net/1721.1/41958" rel="alternate"/>
<author>
<name>Rosenberg, Steven T.</name>
</author>
<id>https://hdl.handle.net/1721.1/41958</id>
<updated>2019-04-10T20:50:33Z</updated>
<published>1976-08-17T00:00:00Z</published>
<summary type="text">Discourse Structure
Rosenberg, Steven T.
An essential step in understanding connected discourse is the ability to link the meanings of successive sentences together. Given a growing database to which new sentence meanings must be linked, which out of many possible inference chains will succeed? To which items already in a database is a new item relevant? To assure easy understandability of text the amount of processing time spent on unsuccessful linkage attempts must be reduced. This paper develops a preliminary theory of discourse structure. Several newspaper articles were examined in the light of this theory. Two examples were worked out in detail to explore how a hypothetical discourse understander might use the model of discourse structure to represent knowledge gained from processing text.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. It was supported in part by the National Science Foundation under grant C40708X and in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.&#13;
The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the National Science Foundation or the United States Government.
</summary>
<dc:date>1976-08-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Control of a Six-Axis Manipulator</title>
<link href="https://hdl.handle.net/1721.1/41957" rel="alternate"/>
<author>
<name>Blanchard, David C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41957</id>
<updated>2019-04-12T09:44:02Z</updated>
<published>1976-08-01T00:00:00Z</published>
<summary type="text">Digital Control of a Six-Axis Manipulator
Blanchard, David C.
This paper describes a scheme for providing low-level control of a multi-link serial manipulator. The goal was to achieve adaptive behavior without making assumptions about the environment.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Representation and Use of Semantic Categories: A Survey and Prospectus</title>
<link href="https://hdl.handle.net/1721.1/41956" rel="alternate"/>
<author>
<name>Schatz, Bruce R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41956</id>
<updated>2019-04-11T03:10:23Z</updated>
<published>1976-05-01T00:00:00Z</published>
<summary type="text">On the Representation and Use of Semantic Categories: A Survey and Prospectus
Schatz, Bruce R.
This paper is intended as a brief introduction to several issues concerning semantic categories. These are the everyday, factual groupings of world knowledge according to some similarity in characteristics. Some psychological data concerning the structure, formation, and use of categories is surveyed. Then several psychological models (set-theoretic and network) are considered. Various artificial intelligence representations (concerning the symbol mapping and recognition problems) dealing with similar issues are also reviewed. It is argued that these data and representations approach semantic categories at too abstract a level, and a set of guidelines which may be helpful in constructing a microworld is given.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643.
</summary>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hand Eye Coordination</title>
<link href="https://hdl.handle.net/1721.1/41955" rel="alternate"/>
<author>
<name>Speckert, Glen</name>
</author>
<id>https://hdl.handle.net/1721.1/41955</id>
<updated>2019-04-12T09:44:01Z</updated>
<published>1976-07-01T00:00:00Z</published>
<summary type="text">Hand Eye Coordination
Speckert, Glen
This paper describes a simple method of converting visual coordinates to arm coordinates which does not require knowledge of the position of the camera(s). Comparisons are made to other methods and two camera, three dimensional extensions are discussed. The single camera method for converting points on a tabletop is used by Marc Raibert and Glen Speckert in a working hand-eye system which recognizes objects and picks them up under visual guidance. This was implemented on the MIT Micro-Automation PDP 11/45 using a low speed vidicon and a Scheinman arm.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two Simple Algorithms For Displaying Orthographic Projections of Surfaces</title>
<link href="https://hdl.handle.net/1721.1/41954" rel="alternate"/>
<author>
<name>Woodham, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41954</id>
<updated>2019-04-11T03:10:29Z</updated>
<published>1976-08-01T00:00:00Z</published>
<summary type="text">Two Simple Algorithms For Displaying Orthographic Projections of Surfaces
Woodham, Robert J.
Two simple algorithms are described for displaying orthographic projections of surfaces. The first, called RELIEF-PLOT, produces a three-dimensional plot of a surface z = f(x,y). The second, called SHADED-IMAGE, adds information about surface reflectivity and source illumination to produce a grey level image of a surface z = f(x,y).&#13;
Both algorithms demonstrate how a systematic profile expansion can be used to do hidden surface elimination essentially for free.
Work reported herein was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643.
</summary>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structured Planning and Debugging: A Linguistic Approach to Problem Solving</title>
<link href="https://hdl.handle.net/1721.1/41953" rel="alternate"/>
<author>
<name>Miller, Mark L.</name>
</author>
<author>
<name>Goldstein, Ira P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41953</id>
<updated>2019-04-09T16:14:34Z</updated>
<published>1976-06-08T00:00:00Z</published>
<summary type="text">Structured Planning and Debugging: A Linguistic Approach to Problem Solving
Miller, Mark L.; Goldstein, Ira P.
A structured approach to planning and debugging is obtained by using an Augmented Transition Network (ATN) to model the problem solving process. This proves to be a perspicuous representation for planning concepts including techniques of identification, decomposition and reformulation. It also provides an elegant theory of debugging, in which bugs are identified as errors in transitions between states in the ATN. Examples from the Blocks World and elementary graphics programming problems are used to illustrate the theory.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. It was supported in part by the National Science Foundation under grant C40708X and in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643. &#13;
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the National Science Foundation or the United States Government.
</summary>
<dc:date>1976-06-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Symbolic-Evaluation as an Aid to Program Synthesis</title>
<link href="https://hdl.handle.net/1721.1/41952" rel="alternate"/>
<author>
<name>Yonezawa, Akinori</name>
</author>
<id>https://hdl.handle.net/1721.1/41952</id>
<updated>2019-04-10T07:20:55Z</updated>
<published>1976-04-01T00:00:00Z</published>
<summary type="text">Symbolic-Evaluation as an Aid to Program Synthesis
Yonezawa, Akinori
Symbolic-evaluation is the process which abstractly evaluates an actor program and checks to see whether the program fulfills its contract (specification). In this paper, a formalism based on the conceptual representation is proposed as a specification language and a proof system for programs which may include change of behavior (side-effects). The relation between algebraic specifications and the specifications based on the conceptual representation is discussed and the limitation of the current algebraic specifications is pointed out. The proposed formalism can deal with problems of side-effects which have been beyond the scope of Floyd-Hoare proof rules. Symbolic-evaluation is carried out with explicit use of the notion of situation (local state of an actor system). Uses of situational tags in assertions make it possible to state relations holding between objects in different situations. As an illustrative example, an impure actor which behaves like a queue is extensively examined. The verification of a procedure which deals with the queue-actors and the correctness of its implementations are demonstrated by the symbolic-evaluation. Furthermore, how the symbolic-evaluation serves as an aid to program synthesis is illustrated using two different implementations of the queue-actor.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</summary>
<dc:date>1976-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CGOL - an Alternative External Representation For LISP users</title>
<link href="https://hdl.handle.net/1721.1/41951" rel="alternate"/>
<author>
<name>Pratt, Vaughan R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41951</id>
<updated>2019-04-11T03:10:29Z</updated>
<published>1976-03-01T00:00:00Z</published>
<summary type="text">CGOL - an Alternative External Representation For LISP users
Pratt, Vaughan R.
Advantages of the standard external representation of LISP include its simple definition, its economical implementation and its convenient extensibility. These advantages have been gained by trading off syntactic variety for the rigidity of parenthesized prefix notation. This paper describes an approach to increasing the available notational variety in LISP without compromising the above advantages of the standard notation. A primary advantage of the availability of such variety is the extent to which documentation can be incorporated into the code itself, decreasing the chance of mismatches between code and documentation. The approach differs from that of MLISP [4], which attempts to be a self-contained language rather than a notation available immediately on demand to the ordinary LISP user. A striking feature of a MACLISP implementation of this approach, the CGOL notation, is that any LISP user, at any time, without any prior preparation, and without significant compromise of storage or speed, can in mid-stream change to the CGOL notation merely by typing (CGOL) at the LISP he is presently using, even if he has already loaded and begun running his LISP program. Another striking feature is the possibility of notational transparency; a LISP user may ask LISP to read a file without needing to know the notation(s) used within that file.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Actor-Based Computer Animation Language</title>
<link href="https://hdl.handle.net/1721.1/41950" rel="alternate"/>
<author>
<name>Kahn, Kenneth M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41950</id>
<updated>2019-04-12T09:44:59Z</updated>
<published>1976-02-01T00:00:00Z</published>
<summary type="text">An Actor-Based Computer Animation Language
Kahn, Kenneth M.
This paper reproduces an appendix of a doctoral thesis proposal that describes a language based on actor semantics designed especially for animation. The system described herein is built upon MacLisp and is also compatible with Lisp-Logo. The system was implemented to serve two functions: to provide a base system for the knowledge-based animation system which is described in Working Paper 119 (or Logo WP 47) and to experiment with various extensions of Logo to improve its value as an educational tool.
This work was supported in part by the National Science Foundation under grant number GJ-1049 and conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program. Reproduction of this document in whole or in part is permitted for any purpose of the United States Government.
</summary>
<dc:date>1976-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Knowledge-Based Computer Animation System</title>
<link href="https://hdl.handle.net/1721.1/41949" rel="alternate"/>
<author>
<name>Kahn, Kenneth M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41949</id>
<updated>2019-04-10T22:36:42Z</updated>
<published>1976-02-01T00:00:00Z</published>
<summary type="text">A Knowledge-Based Computer Animation System
Kahn, Kenneth M.
This paper reproduces part of a doctoral thesis proposal describing the design of a system capable of generating animated drawings in response to a simple story. The representation and interaction of the various sources of the knowledge necessary to accomplish this are discussed. The appropriateness of an actor formalism for representing the concurrent processes and knowledge of the system is touched upon here and discussed further in Working Paper 120 (or Logo WP 48) "An Actor-Based Computer Animation Language". Finally, the role of the system as an example of a visible intelligent system in education is discussed.
This work was supported in part by the National Science Foundation under grant number GJ-1049 and conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program. Reproduction of this document in whole or in part is permitted for any purpose of the United States Government.
</summary>
<dc:date>1976-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knowledge Driven Recognition of the Human Body</title>
<link href="https://hdl.handle.net/1721.1/41948" rel="alternate"/>
<author>
<name>Speckert, Glen</name>
</author>
<id>https://hdl.handle.net/1721.1/41948</id>
<updated>2019-04-10T22:36:42Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Knowledge Driven Recognition of the Human Body
Speckert, Glen
This paper shows how a good internal model of the subject viewed aids in the visual recognition and following of key parts. The role of knowledge driven top-down tools and methods is shown by recognizing a series of human figures drawn from Eadweard Muybridge's collection of 1887. Knowledge of the subject's structure and actions is used to find the head, shoulder, elbow, hip, knees, and ankles of the subject.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mini-Robot Group User's Guide Part 2: Access From ITS</title>
<link href="https://hdl.handle.net/1721.1/41947" rel="alternate"/>
<author>
<name>Billmers, Meyer A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41947</id>
<updated>2019-04-12T09:44:59Z</updated>
<published>1978-06-01T00:00:00Z</published>
<summary type="text">Mini-Robot Group User's Guide Part 2: Access From ITS
Billmers, Meyer A.
Part 2 of the MINI-ROBOT USER'S GUIDE describes those devices attached to the mini-robot system which may be accessed from ITS, and describes the appropriate software for accessing them. Specifically, the photowriter, photoscanner, vidicon, and Scheinman arm are documented.
A.I. Laboratory Working Papers are produced for internal circulation, and may contain information that is, for example, too preliminary or too detailed for formal publication. Although some will be given a limited external distribution, it is not intended that they should be considered papers to which reference can be made in the literature.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Stored Picture Hacking Facility</title>
<link href="https://hdl.handle.net/1721.1/41939" rel="alternate"/>
<author>
<name>Markowitz, Sidney</name>
</author>
<id>https://hdl.handle.net/1721.1/41939</id>
<updated>2019-04-12T09:44:57Z</updated>
<published>1972-06-01T00:00:00Z</published>
<summary type="text">A Stored Picture Hacking Facility
Markowitz, Sidney
A short description of LISP functions that have been written for use with the stored picture facility. These functions allow one to display an image of a stored scene on the 340 scope, and produce graphs and histograms of intensity functions of portions of the scene.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cognitive Cliches</title>
<link href="https://hdl.handle.net/1721.1/41893" rel="alternate"/>
<author>
<name>Chapman, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41893</id>
<updated>2019-04-09T19:16:24Z</updated>
<published>1986-04-01T00:00:00Z</published>
<summary type="text">Cognitive Cliches
Chapman, David
This paper is an exploration of a wide class of mental structures called cognitive cliches that support intermediate methods: methods that are moderately general purpose, in that a few of them will probably be applicable to any given task, and efficient, but not individually particularly powerful. These structures are useful in representation, learning, and reasoning of various sorts. Together they form a general theory of special cases.&#13;
A cognitive cliche is a pattern that is commonly found in representations and, when recognized, can be exploited by applying the intermediate methods attached to it. The flavor of the idea is perhaps best conveyed by some examples: TRANSITIVITY, CROSS PRODUCTS, SUCCESSIVE APPROXIMATION, CONTAINMENT, ENABLEMENT, PATHS, RESOURCES, and PROPAGATION are all cognitive cliches.
</summary>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shadows and Cracks</title>
<link href="https://hdl.handle.net/1721.1/41512" rel="alternate"/>
<author>
<name>Dowson, Mark</name>
</author>
<author>
<name>Waltz, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41512</id>
<updated>2019-04-12T09:32:51Z</updated>
<published>1971-06-01T00:00:00Z</published>
<summary type="text">Shadows and Cracks
Dowson, Mark; Waltz, David
The VIRGIN program will interpret pictures of crack- and shadow-free scenes by labelling them according to the Clowes/Huffman formalism. This paper indicates methods of extending the program to include cracks and shadows and shows that such an extension makes available heuristics which allow the program to be less simple-minded.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Injection Molding at the MIT Artificial Intelligence Lab</title>
<link href="https://hdl.handle.net/1721.1/41511" rel="alternate"/>
<author>
<name>Binnard, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/41511</id>
<updated>2019-04-14T07:19:36Z</updated>
<published>1995-02-23T00:00:00Z</published>
<summary type="text">Injection Molding at the MIT Artificial Intelligence Lab
Binnard, Michael
This paper describes the injection molding equipment at the MIT Artificial Intelligence Lab and how to use it. Topics covered include mold design, insert molding, safety, and material properties.
</summary>
<dc:date>1995-02-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capture It, Name It, Own it: How to capture re-occurring patterns, name them and turn them into reusable functions via Emacs kbd-macros</title>
<link href="https://hdl.handle.net/1721.1/41510" rel="alternate"/>
<author>
<name>Kozlowski, Stefan N.</name>
</author>
<id>https://hdl.handle.net/1721.1/41510</id>
<updated>2019-04-10T20:54:54Z</updated>
<published>1992-05-01T00:00:00Z</published>
<summary type="text">Capture It, Name It, Own it: How to capture re-occurring patterns, name them and turn them into reusable functions via Emacs kbd-macros
Kozlowski, Stefan N.
The purpose of this talk is not to teach you about Emacs or Emacs kbd-macros, though we will use both as examples. I can teach you everything there is to know about Emacs and kbd-macros in 5 minutes. There are literally only about six commands which govern the majority of the Emacs kbd-macro universe but just knowing the commands is not going to help you much. To borrow an analogy from the introductory 6.001 lecture, I can teach you all the rules of chess in ten minutes but that does not mean that you will be a good chess player in ten minutes. The purpose of this talk is to get you to think about many of the methods and processes we perform each day in our jobs. Hopefully, such an examination will make you realize that we often repeat the same processes over and over. If we can isolate a repeated process, we can often capture it and transform it into a reusable function.&#13;
Today we will be looking at capturing such processes via Emacs kbd-macros, though you should be aware that many of these methods can also be applied to UNIX, other operating systems, editors and languages. The reason we will be examining this topic via Emacs kbd-macros is that it is the easiest and most user-friendly way to approach the subject. We are going to start by looking at very simple examples and progress in complexity. I have written these macros and use some of them on a daily basis. Hopefully some of these examples will directly correlate to duties you perform each day at work and you will be able to use some of them.
(**Note: This text was delivered as a lecture to the AI Lab Support Staff and still appears as such.**)&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-89-J-3202 and by the National Science Foundation under grant number MIP-9001651.
</summary>
<dc:date>1992-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tomorrow's Surgery: Micromotors and Microrobots</title>
<link href="https://hdl.handle.net/1721.1/41509" rel="alternate"/>
<author>
<name>Flynn, Anita M.</name>
</author>
<author>
<name>Udayakumar, K. R.</name>
</author>
<author>
<name>Barrett, David S.</name>
</author>
<id>https://hdl.handle.net/1721.1/41509</id>
<updated>2019-04-09T16:26:36Z</updated>
<published>1992-07-01T00:00:00Z</published>
<summary type="text">Tomorrow's Surgery: Micromotors and Microrobots
Flynn, Anita M.; Udayakumar, K. R.; Barrett, David S.
Surgical procedures have changed radically over the last few years due to the arrival of new technology. What will technology bring us in the future?&#13;
This paper examines a few of the forces whose timing is causing new ideas to congeal from the fields of artificial intelligence, robotics, micromachining and smart materials.&#13;
Intelligent systems for autonomous mobile robots can now enable simple insect-level behaviors in small amounts of silicon. These software breakthroughs, coupled with new techniques for microfabricating miniature sensors and actuators from both silicon and ferroelectric families of materials, offer glimpses of the future where robots will be small, cheap and potentially useful to surgeons.&#13;
In this paper we relate our recent efforts to fabricate piezoelectric micromotors, aiming to develop actuator technologies where brawn matches the scale of the brain. We discuss our experiments with thin-film ferroelectric motors 2mm in diameter and larger 8mm versions machined from bulk ceramic, and sketch possible applications in the surgical field.
</summary>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Lab Faculty</title>
<link href="https://hdl.handle.net/1721.1/41508" rel="alternate"/>
<author>
<name>Torrance, Mark C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41508</id>
<updated>2019-04-10T17:41:48Z</updated>
<published>1992-09-01T00:00:00Z</published>
<summary type="text">AI Lab Faculty
Torrance, Mark C.
This document is meant to introduce new graduate students in the MIT AI Lab to the faculty members of the laboratory and their research interests. Each entry consists of the faculty member's picture, if available, some information on how to reach them, their responses to a few survey questions, and a few paragraphs excerpted from the AI Lab President's Report, as edited by Patrick Winston.
</summary>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A User's Guide to the AI Lab: Getting Started at Tech Square</title>
<link href="https://hdl.handle.net/1721.1/41507" rel="alternate"/>
<author>
<name>Hofmeister, Scott</name>
</author>
<author>
<name>Ruecker, Lukas</name>
</author>
<id>https://hdl.handle.net/1721.1/41507</id>
<updated>2019-04-12T09:32:51Z</updated>
<published>1991-08-18T00:00:00Z</published>
<summary type="text">A User's Guide to the AI Lab: Getting Started at Tech Square
Hofmeister, Scott; Ruecker, Lukas
</summary>
<dc:date>1991-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fine Grained Robotics</title>
<link href="https://hdl.handle.net/1721.1/41506" rel="alternate"/>
<author>
<name>Flynn, Anita M.</name>
</author>
<author>
<name>Barrett, David S.</name>
</author>
<id>https://hdl.handle.net/1721.1/41506</id>
<updated>2019-04-09T18:18:04Z</updated>
<published>1991-02-01T00:00:00Z</published>
<summary type="text">Fine Grained Robotics
Flynn, Anita M.; Barrett, David S.
Fine grained robotics is the idea of solving problems utilizing multitudes of very simple machines in place of one large complex entity. Organized in the proper way, simple machines and simple behaviors can lead to emergent solutions. Just as ants and termites perform useful work and build communal structures, gnat robots can solve problems in new ways. This notion of collective intelligence, married with technologies for mass-producing small robots very cheaply will blaze new avenues in all aspects of everyday life. Building gnat robots involves not only inventing the components from which to put together systems but also developing the technologies to produce the components.&#13;
This paper analyzes prototype microrobotic systems, specifically calculating torque and power requirements for three locomotion alternatives (flying, walking and swimming) for small robots. With target specifications for motors for these systems, we then review technology options and bottlenecks and sort through the tree of possibilities to pick an appropriate path along which we plan to proceed.
</summary>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Evolution of Society</title>
<link href="https://hdl.handle.net/1721.1/41505" rel="alternate"/>
<author>
<name>Inman, Jeff</name>
</author>
<id>https://hdl.handle.net/1721.1/41505</id>
<updated>2019-04-11T04:16:03Z</updated>
<published>1991-08-05T00:00:00Z</published>
<summary type="text">The Evolution of Society
Inman, Jeff
We re-examine the evolutionary stability of the tit-for-tat (tft) strategy in the context of the iterated prisoner's dilemma, as introduced by Axelrod and Hamilton. This environment involves a mixture of populations of "organisms" which interact with each other according to the rules of the prisoner's dilemma, from game theory. The tft strategy is nice, retaliatory and forgiving, and these properties contributed to the success of the strategy in the earlier experiments. However, it turns out that the property of being nice represents a weakness, when competing with an insular strategy, but the reverse is also true, which means that tft is not an evolutionarily stable strategy. In fact, insular strategies prove to be better at resisting incursion. Finally, we consider the implications of this result, in terms of naturally occurring societies.
</summary>
<dc:date>1991-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Correction of Force Errors for Flexible Manipulators in Quasi-Static Conditions</title>
<link href="https://hdl.handle.net/1721.1/41504" rel="alternate"/>
<author>
<name>Bicchi, Antonio</name>
</author>
<author>
<name>Melchiorri, Claudio</name>
</author>
<id>https://hdl.handle.net/1721.1/41504</id>
<updated>2019-04-12T09:32:55Z</updated>
<published>1990-12-01T00:00:00Z</published>
<summary type="text">Correction of Force Errors for Flexible Manipulators in Quasi-Static Conditions
Bicchi, Antonio; Melchiorri, Claudio
This paper deals with the problem of controlling the interactions of flexible manipulators with their environment. For executing a force control task, a manipulator with intrinsic (mechanical) compliance has some advantages over the rigid manipulators commonly employed in position control tasks. In particular, stability margins of the force control loop are increased, and robustness to uncertainties in the model of the environment is improved for compliant arms. On the other hand, the deformations of the arm under the applied load give rise to errors that ultimately reflect in force control errors. This paper addresses the problem of evaluating these errors, and of compensating for them with suitable joint angle corrections. A solution to this problem is proposed under the simplifying assumptions that an accurate model of the arm flexibility is known, and that quasi-static corrections are of interest.
</summary>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experiment in Knowledge Acquisition for Software Requirements</title>
<link href="https://hdl.handle.net/1721.1/41503" rel="alternate"/>
<author>
<name>Lefelhocz, Paul M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41503</id>
<updated>2019-04-12T09:32:57Z</updated>
<published>1990-05-01T00:00:00Z</published>
<summary type="text">An Experiment in Knowledge Acquisition for Software Requirements
Lefelhocz, Paul M.
The Requirements Apprentice (RA) is a demonstration system that assists a human analyst in the requirements-acquisition phase of the software-development process. By applying the RA to another example it has been possible to show some of the range of applicability of the RA. The same disambiguation, formalization, and contradiction-resolution techniques are useful in the air traffic control and library database domains and some clichés are shared between them. In addition, the need for an extension to the RA is seen: summarization of contradictions could be improved.
</summary>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extending 2-D Smoothed Local Symmetries to 3-D</title>
<link href="https://hdl.handle.net/1721.1/41502" rel="alternate"/>
<author>
<name>Braunegg, David J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41502</id>
<updated>2019-04-12T09:32:56Z</updated>
<published>1985-11-01T00:00:00Z</published>
<summary type="text">Extending 2-D Smoothed Local Symmetries to 3-D
Braunegg, David J.
3-D Smoothed Local Symmetries (3-D SLS's) are presented as a representation for three-dimensional shapes. 3-D SLS's make explicit the perceptually salient features of 3-D objects and are especially suited to representing man-made objects. The definition of the 3-D SLS is given as a natural extension of the 2-D Smoothed Local Symmetry (2-D SLS). Analytic descriptions of the 3-D SLS are derived for objects composed of planar and spherical patches. Results of an implementation of the 3-D SLS are presented, along with suggestions for further research.
</summary>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Program Design Assistant</title>
<link href="https://hdl.handle.net/1721.1/41501" rel="alternate"/>
<author>
<name>Tan, Yang Meng</name>
</author>
<id>https://hdl.handle.net/1721.1/41501</id>
<updated>2019-04-10T23:12:25Z</updated>
<published>1989-06-01T00:00:00Z</published>
<summary type="text">A Program Design Assistant
Tan, Yang Meng
The DA will be a design assistant which can assist the programmer in low-level design. The input language of the DA is a cliché-based program description language that allows the specification and high-level design of commonly-written programs to be described concisely. The DA language is high-level in the sense that programmers need not bother with detailed design. The DA will provide automatic low-level design assistance to the programmer in selecting appropriate algorithms and data structures. It will also detect inconsistencies and incompleteness in program descriptions.&#13;
A key related issue in this research is the representation of programming knowledge in a design assistant. The knowledge needed to automate low-level design and the knowledge in specific programming clichés have to be represented explicitly to facilitate reuse.
</summary>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Principles of Knowledge Representation and Reasoning in the FRAPPE System</title>
<link href="https://hdl.handle.net/1721.1/41500" rel="alternate"/>
<author>
<name>Feldman, Yishai A.</name>
</author>
<author>
<name>Rich, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/41500</id>
<updated>2019-04-11T04:16:03Z</updated>
<published>1989-05-01T00:00:00Z</published>
<summary type="text">Principles of Knowledge Representation and Reasoning in the FRAPPE System
Feldman, Yishai A.; Rich, Charles
The purpose of this paper is to elucidate the following four important architectural principles of knowledge representation and reasoning with the example of an implemented system: limited reasoning, truth maintenance, hybrid architecture, and many-sorted logic.
</summary>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decision Representation Language (DRL) and Its Support Environment</title>
<link href="https://hdl.handle.net/1721.1/41499" rel="alternate"/>
<author>
<name>Lee, Jintae</name>
</author>
<id>https://hdl.handle.net/1721.1/41499</id>
<updated>2019-04-12T09:32:50Z</updated>
<published>1989-08-01T00:00:00Z</published>
<summary type="text">Decision Representation Language (DRL) and Its Support Environment
Lee, Jintae
In this report, I describe a language, called Decision Representation Language (DRL), for representing the qualitative aspects of decision making processes such as the alternatives being evaluated, goals to satisfy, and the arguments evaluating the alternatives. Once a decision process is represented in this language, the system can provide a set of services that support people making the decision. These services, together with the interface such as the object and the different presentation formats, form the support environment for using the language. I describe the services that have been so far identified to be useful — the management of dependency, plausibility, viewpoints, and precedents. I also discuss how this work on DRL is related to other studies on decision making.
</summary>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Don't Loop, Iterate</title>
<link href="https://hdl.handle.net/1721.1/41498" rel="alternate"/>
<author>
<name>Amsterdam, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/41498</id>
<updated>2019-04-10T23:12:29Z</updated>
<published>1990-05-01T00:00:00Z</published>
<summary type="text">Don't Loop, Iterate
Amsterdam, Jonathan
I describe an iteration macro for Common Lisp that is clear, efficient, extensible, and in excellent taste.
</summary>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The GSL Cookbook</title>
<link href="https://hdl.handle.net/1721.1/41497" rel="alternate"/>
<author>
<name>Braunegg, David J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41497</id>
<updated>2019-04-12T09:32:55Z</updated>
<published>1989-03-01T00:00:00Z</published>
<summary type="text">The GSL Cookbook
Braunegg, David J.
This cookbook contains recipes prepared for the GSL (Graduate Student Lunch) at the Massachusetts Institute of Technology Artificial Intelligence Laboratory.
</summary>
<dc:date>1989-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determining the Limits of Automated Program Recognition</title>
<link href="https://hdl.handle.net/1721.1/41496" rel="alternate"/>
<author>
<name>Wills, Linda M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41496</id>
<updated>2019-04-12T09:32:54Z</updated>
<published>1989-06-01T00:00:00Z</published>
<summary type="text">Determining the Limits of Automated Program Recognition
Wills, Linda M.
Program recognition is a program understanding technique in which stereotypic computational structures are identified in a program. From this identification and the known relationships between the structures, a hierarchical description of the program's design is recovered. The feasibility of this technique for small programs has been shown by several researchers. However, it seems unlikely that the existing program recognition systems will scale up to realistic, full-sized programs without some guidance (e.g., from a person using the recognition system as an assistant). One reason is that there are limits to what can be recovered by a purely code-driven approach. Some of the information about the program that is useful to know for common software engineering tasks, particularly maintenance, is missing from the code. Another reason guidance must be provided is to reduce the cost of recognition. To determine what guidance is appropriate, therefore, we must know what information is recoverable from the code and where the complexity of program recognition lies. I propose to study the limits of program recognition, both empirically and analytically. First, I will build an experimental system that performs recognition on realistic programs on the order of thousands of lines. This will allow me to characterize the information that can be recovered by this code-driven technique. Second, I will formally analyze the complexity of the recognition process. This will help determine how guidance can be applied most profitably to improve the efficiency of program recognition.
This working paper was submitted as a Ph.D. thesis proposal.
</summary>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating vision modules with coupled MRFs</title>
<link href="https://hdl.handle.net/1721.1/41495" rel="alternate"/>
<author>
<name>Poggio, Tomaso</name>
</author>
<id>https://hdl.handle.net/1721.1/41495</id>
<updated>2019-04-09T18:01:40Z</updated>
<published>1985-12-01T00:00:00Z</published>
<summary type="text">Integrating vision modules with coupled MRFs
Poggio, Tomaso
I outline a project for integrating several early visual modalities based on coupled Markov Random Fields models of the physical processes underlying image formation, such as depth, albedo and orientation of surfaces. The key ideas are:&#13;
a) to use as input data estimates of the various processes and their discontinuities, computed by several different algorithms.&#13;
b) to implement with MRFs the physical and geometrical constraints of local "continuity" of the processes and of their discontinuities. Processes are coupled to each other: the most common form of coupling is a veto — one process vetoing another — as in the case of discontinuities and the associated continuous field.
A. I. Laboratory Working Papers are produced for internal circulation and contain proteins, lipids, cholesterol, polysorbate-80, and other compounds unsuitable for external exposure. It is not intended that material in this paper be applied externally; it is intended for internal consumption only. Serving suggestion: add taco sauce (not included).
</summary>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Construction and Refinement of Justified Causal Models Through Variable-Level Explanation and Perception, and Experimenting</title>
<link href="https://hdl.handle.net/1721.1/41494" rel="alternate"/>
<author>
<name>Doyle, Richard J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41494</id>
<updated>2019-04-11T04:16:02Z</updated>
<published>1985-12-01T00:00:00Z</published>
<summary type="text">Construction and Refinement of Justified Causal Models Through Variable-Level Explanation and Perception, and Experimenting
Doyle, Richard J.
The competence being investigated is causal modelling, whereby the behavior of a physical system is understood through the creation of an explanation or description of the underlying causal relations.&#13;
After developing a model of causality, I show how the causal modelling competence can arise from a combination of inductive and deductive inference employing knowledge of the general form of causal relations and of the kinds of causal mechanisms that exist in a domain.&#13;
The hypotheses generated by the causal modelling system range from purely empirical to more and more strongly justified. Hypotheses are justified by explanations derived from the domain theory and by perceptions which instantiate those explanations. Hypotheses never can be proven because the domain theory is neither complete nor consistent. Causal models which turn out to be inconsistent may be repairable by increasing the resolution of explanation and/or perception.&#13;
During the causal modelling process, many hypotheses may be partially justified and even leading hypotheses may have only minimal justification. An experiment design capability is proposed whereby the next observation can be deliberately arranged to distinguish several hypotheses or to make particular hypotheses more justified. Experimenting is seen as the active gathering of greater justification for fewer and fewer hypotheses.
</summary>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Further Evidence Against the Recovery Theory of Vision</title>
<link href="https://hdl.handle.net/1721.1/41493" rel="alternate"/>
<author>
<name>Marill, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/41493</id>
<updated>2019-04-10T23:12:23Z</updated>
<published>1989-02-01T00:00:00Z</published>
<summary type="text">Further Evidence Against the Recovery Theory of Vision
Marill, Thomas
The problem of three-dimensional vision is generally formulated as the problem of recovering the three-dimensional scene that caused the image.&#13;
We have previously presented a certain line-drawing and shown that it has the following property: the three-dimensional object we see when we look at this line-drawing does not have the line-drawing as its image. It would therefore be impossible for the seen object to be the cause of the image. Such an occurrence constitutes a counterexample to the theory that vision recovers the scene that caused the image.&#13;
Here we show that such a counterexample is not an isolated case, but is the rule rather than the exception. Thus, as a general matter, the three-dimensional scenes we see when we look at line-drawings do not have these drawings as their image. This represents further evidence against the recovery theory.
</summary>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transcendence, Facticity, and Modes of Non-Being</title>
<link href="https://hdl.handle.net/1721.1/41492" rel="alternate"/>
<author>
<name>Donald, B. Randall</name>
</author>
<author>
<name>Canny, J. Francis</name>
</author>
<id>https://hdl.handle.net/1721.1/41492</id>
<updated>2019-04-09T16:59:19Z</updated>
<published>1986-03-01T00:00:00Z</published>
<summary type="text">Transcendence, Facticity, and Modes of Non-Being
Donald, B. Randall; Canny, J. Francis
Research in artificial intelligence has yet to satisfactorily address the primordial fissure between human consciousness and the material order. How is this split reconciled in terms of human reality? By what duality is Bad Faith possible? We show that the answer is quite subtle, and of particular relevance to certain classical A.I. problems in introspection and intensional belief structure. A principled approach to bad faith and the consciousness of the other is suggested. We present ideas for an implementation in the domain of chemical engineering.
A.I. Laboratory working papers are produced for internal circulation, and may contain information that is, for example, too preliminary, too detailed, or too silly for formal publication. This paper handsomely satisfies all three criteria. While it is destined to become a landmark in its genre, readers are cautioned against making reference to this paper in the literature, as the authors would like to rejoin society with a clean slate. This paper could not have been produced without the assistance of many brilliant but unstable individuals who could not be reached for comment, and whose names have been suppressed pending determination of competence.
</summary>
<dc:date>1986-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vision Utilities</title>
<link href="https://hdl.handle.net/1721.1/41491" rel="alternate"/>
<author>
<name>Voorhees, Harry</name>
</author>
<id>https://hdl.handle.net/1721.1/41491</id>
<updated>2019-04-09T18:08:31Z</updated>
<published>1985-12-01T00:00:00Z</published>
<summary type="text">Vision Utilities
Voorhees, Harry
This paper documents a collection of Lisp utilities which I have written while doing vision programming on a Symbolics Lisp machine. Many of these functions are useful both as interactive commands invoked from the Lisp Listener and as "building blocks" for constructing larger programs. Utilities documented here include functions for loading, storing, and displaying images, for creating synthetic images, for convolving and processing arrays, for making histograms, and for plotting data.
</summary>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Counterexample to the Theory that Vision Recovers Three-Dimensional Scenes</title>
<link href="https://hdl.handle.net/1721.1/41490" rel="alternate"/>
<author>
<name>Marill, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/41490</id>
<updated>2019-04-10T23:12:24Z</updated>
<published>1988-11-01T00:00:00Z</published>
<summary type="text">A Counterexample to the Theory that Vision Recovers Three-Dimensional Scenes
Marill, Thomas
The problem of three-dimensional vision is generally formulated as the problem of recovering the three-dimensional scene that caused the image. Here we present a certain line-drawing and show that it has the following property: the three-dimensional object we see when we look at this line-drawing does not have the line-drawing as its image. It would therefore be impossible for the seen object to be the cause of the image. Such an occurrence constitutes a counterexample to the theory that vision recovers the scene that caused the image.
</summary>
<dc:date>1988-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Test Programming by Program Composition and Symbolic Simulation</title>
<link href="https://hdl.handle.net/1721.1/41489" rel="alternate"/>
<author>
<name>Shirley, Mark H.</name>
</author>
<id>https://hdl.handle.net/1721.1/41489</id>
<updated>2019-04-12T09:32:52Z</updated>
<published>1985-11-01T00:00:00Z</published>
<summary type="text">Test Programming by Program Composition and Symbolic Simulation
Shirley, Mark H.
Classical test generation techniques rely on search through gate-level circuit descriptions, which results in long runtimes. In some instances, classical techniques cannot be used because they would take longer than the lifetime of the product to generate tests which are needed when the first devices come off the assembly line. Despite these difficulties, human experts often succeed in writing test programs for very complex circuits. How can we account for their success?&#13;
We take a knowledge engineering approach to this problem by trying to capture in a program techniques gleaned from working with experienced test programmers. From these talks, we conjecture that expert test programming performance relies in part on two aspects of human problem solving.&#13;
First, the experts remember many cliched solutions to test programming problems. The difficulty lies in formalizing the notion of a cliche for this domain. For test programming, we propose that cliches contain goal to subgoal expansions, fragments of test program code, and constraints describing how program fragments fit together. We present an algorithm which uses testing cliches to generate test programs. Second, experts can simulate a circuit at various levels of abstraction and recognize patterns of activity in the circuit which are useful for solving test problems. We argue that symbolic simulation coupled with recognition of which simulated events solve our goals is an effective planning strategy in certain cases. We present a second algorithm which simulates circuit behavior on symbolic inputs at roughly the register transfer level and generates fragments of test programs suitable for use by our first algorithm.
</summary>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Program Recognition: A Proposal</title>
<link href="https://hdl.handle.net/1721.1/41488" rel="alternate"/>
<author>
<name>Zelinka, Linda M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41488</id>
<updated>2019-04-10T16:49:41Z</updated>
<published>1985-12-01T00:00:00Z</published>
<summary type="text">Automated Program Recognition: A Proposal
Zelinka, Linda M.
The key to understanding a program is recognizing familiar algorithmic fragments and data structures in it. Automating this recognition process will make it easier to perform many tasks which require program understanding, e.g., maintenance, modification, and debugging. This paper proposes a recognition system, called the Recognizer, which automatically identifies occurrences of stereotyped computational fragments and data structures in programs. The Recognizer is able to identify these familiar fragments and structures even though they may be expressed in a wide range of syntactic forms. It does so systematically and efficiently by using a parsing technique. Two important advances have made this possible. The first is a language-independent graphical representation for programs and programming structures which canonicalizes many syntactic features of programs. The second is an efficient graph parsing algorithm.
</summary>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to do Research At the MIT AI Lab</title>
<link href="https://hdl.handle.net/1721.1/41487" rel="alternate"/>
<author>
<name>Chapman, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41487</id>
<updated>2019-04-12T09:32:51Z</updated>
<published>1988-10-01T00:00:00Z</published>
<summary type="text">How to do Research At the MIT AI Lab
Chapman, David
This document presumptuously purports to explain how to do research. We give heuristics that may be useful in picking up specific skills needed for research (reading, writing, programming) and for understanding and enjoying the process itself (methodology, topic and advisor selection, and emotional factors).
</summary>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Jordan Form of (i+j over j) over Z[subscript p]</title>
<link href="https://hdl.handle.net/1721.1/41486" rel="alternate"/>
<author>
<name>Strauss, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/41486</id>
<updated>2019-04-09T19:01:52Z</updated>
<published>1985-07-01T00:00:00Z</published>
<summary type="text">Jordan Form of (i+j over j) over Z[subscript p]
Strauss, Nicholas
The Jordan Form over field Z[subscript p] of J[superscript p][subscript p]n is diagonal for p &gt; 3 with characteristic polynomial, ϕ(x) = x[superscript 3] - 1, for p prime, n natural number. These matrices have dimension p[superscript n] x p[superscript n], with entries (i+j over j). I prove these results with the method of generating functions.
</summary>
<dc:date>1985-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>IDEME: A DBMS of Methods</title>
<link href="https://hdl.handle.net/1721.1/41485" rel="alternate"/>
<author>
<name>Lee, Jintae</name>
</author>
<id>https://hdl.handle.net/1721.1/41485</id>
<updated>2019-04-12T09:32:50Z</updated>
<published>1985-08-01T00:00:00Z</published>
<summary type="text">IDEME: A DBMS of Methods
Lee, Jintae
In this paper, an intelligent database management system (DBMS) called IDEME is presented. IDEME is a program that takes as input a task specification and finds a set of methods potentially relevant to solving that task. It does so by matching the task specification to the methods in its database at multiple levels of abstraction. After isolating potentially useful methods, IDEME ranks them by how relevant they might be to the task. From the most relevant method, it checks if its operational demands, i.e. those conditions that have to be satisfied for the method to be applicable, are satisfied by the present task. If so, it presents the algorithm of the method relativized to the present task; otherwise, it goes on to the next method. In this paper, the focus will be on the representation scheme that is used by IDEME to represent methods as well as tasks.
</summary>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Writing and Representation</title>
<link href="https://hdl.handle.net/1721.1/41484" rel="alternate"/>
<author>
<name>Agre, Philip E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41484</id>
<updated>2019-04-10T23:12:23Z</updated>
<published>1988-09-01T00:00:00Z</published>
<summary type="text">Writing and Representation
Agre, Philip E.
This paper collects several notes I've written over the last year in an attempt to work through my dissatisfactions with the ideas about representation I was taught in school. Among these ideas are the notion of a 'world model'; the notion of representations having 'content' independent of the identity, location, attitudes, or activities of any agent; and the notion that a representation is the sort of thing you might implement with datastructures and pointers. Here I begin developing an alternative view of representation whose prototype is a set of instructions written in English on a sheet of paper you're holding in your hand while pursuing some ordinarily complicated concrete project in the everyday world. Figuring out what the markings on this paper are talking about is a fresh problem in every next setting, and solving this problem takes work. Several detailed stories about representation use in everyday activities—such as assembling a sofa from a kit, being taught to fold origami cranes, following stories across pages of a newspaper, filling a photocopier with toner, and keeping count when running laps—illustrate this view. Finally, I address the seeming tension between the necessity of interpreting one's representations in every next setting and the idea that everyday life is fundamentally routine.
</summary>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward a Principle-Based Translator</title>
<link href="https://hdl.handle.net/1721.1/41483" rel="alternate"/>
<author>
<name>Dorr, Bonnie J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41483</id>
<updated>2019-04-12T09:32:49Z</updated>
<published>1985-06-01T00:00:00Z</published>
<summary type="text">Toward a Principle-Based Translator
Dorr, Bonnie J.
A principle-based computational model of natural language translation consists of two components: (1) a module which makes use of a set of principles and parameters to transform the source language into an annotated surface form that can be easily converted into a "base" syntactic structure; and (2) a module which makes use of the same set of principles, but a different set of parameter values, to transform the "base" syntactic structure into the target language surface structure. This proposed scheme of language translation is an improvement over existing schemes since it is based on interactions between principles and parameters rather than on complex interactions between language-specific rules as found in older schemes.&#13;
The background for research of the problem includes: an examination of existing schemes of computerized language translation and an analysis of their shortcomings. Construction of the proposed scheme requires a preliminary investigation of the common "universal" principles and parametric variations across different languages within the framework of current linguistic theory.&#13;
The work to be done includes: construction of a module which uses linguistic principles and source language parameter values to parse and output the corresponding annotated surface structures of source language sentences; creation of procedures which handle the transformation of an annotated surface structure into a "base" syntactic structure; and development of a special purpose generation scheme which converts a "base" syntactic structure into a surface form in the target language.
</summary>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to Use YTEX</title>
<link href="https://hdl.handle.net/1721.1/41482" rel="alternate"/>
<author>
<name>Brotsky, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/41482</id>
<updated>2019-04-11T07:44:48Z</updated>
<published>1986-06-09T00:00:00Z</published>
<summary type="text">How to Use YTEX
Brotsky, Daniel
YTEX—pronounced why-TEX or oops-TEX—is a TEX macro package. YTEX provides both an easy-to-use interface for TEX novices and a powerful macro-creation library for TEX programmers. It is this two-tier structure that makes YTEX more useful to a diverse TEX user community than other macro packages such as Plain or LaTeX.&#13;
This paper contains YTEX instructions intended for novice users. It summarizes the facilities provided by YTEX and concludes with a table of useful commands.&#13;
The version of YTEX documented here is release 2.0.
Work on YTEX was supported by a desire to avoid doing real work, like research.
</summary>
<dc:date>1986-06-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Support for Obviously Synchronizable Series Expressions in Pascal</title>
<link href="https://hdl.handle.net/1721.1/41481" rel="alternate"/>
<author>
<name>Orwant, Jonathan L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41481</id>
<updated>2019-04-10T22:36:40Z</updated>
<published>1988-11-01T00:00:00Z</published>
<summary type="text">Support for Obviously Synchronizable Series Expressions in Pascal
Orwant, Jonathan L.
Obviously synchronizable series expressions enable programmers to write algorithms as straightforward compositions of functions rather than as less comprehensible loops while retaining the significantly higher efficiency of loops. A macro package supporting these expressions in Lisp has been in use since December of 1987.&#13;
However, the theory behind obviously synchronizable series expressions is not restricted to Lisp; in fact, it is applicable to any programming language. Because many people view packages designed in Lisp as dependent on the qualities which make Lisp different from other languages, it was decided to support the macro package in the all-purpose language Pascal. This paper discusses its implementation.
</summary>
<dc:date>1988-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Puma/Cougar Implementor's Guide</title>
<link href="https://hdl.handle.net/1721.1/41480" rel="alternate"/>
<author>
<name>Jones, Joe L.</name>
</author>
<author>
<name>O'Donnell, Patrick A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41480</id>
<updated>2019-04-10T22:36:32Z</updated>
<published>1985-04-01T00:00:00Z</published>
<summary type="text">Puma/Cougar Implementor's Guide
Jones, Joe L.; O'Donnell, Patrick A.
This document is intended to be a guide to assist a programmer in modifying or extending the Lisp Puma system, the Puma PDP-11 system, or the Cougar PDP-11 system. It consists mostly of short descriptions or hints, and is not intended to be a polished manual. The reader is expected to be familiar with the use of the Puma system, as described in "Using the PUMA System," and the Lisp flavor system, as described in the Lisp Machine Manual.
</summary>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using the PUMA System</title>
<link href="https://hdl.handle.net/1721.1/41479" rel="alternate"/>
<author>
<name>Jones, Joe L.</name>
</author>
<author>
<name>O'Donnell, Patrick A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41479</id>
<updated>2019-04-11T07:56:53Z</updated>
<published>1985-04-01T00:00:00Z</published>
<summary type="text">Using the PUMA System
Jones, Joe L.; O'Donnell, Patrick A.
This document describes the operation of the Lisp Machine interface to the Unimation Puma 600 Robot Arm. The interface evolved from a system described in an earlier paper, and much is the same. However, the underlying interface between the Lisp Machine and the Puma has changed and some enhancements have been made. VAL has been replaced with a PDP-11/23, communicating with the Lisp Machine over the Chaosnet.&#13;
The purpose of this document is to provide instruction and information in the programming of the Puma arm from the Lisp Machine. The network protocol is not described here, nor are the internals of the implementation. These details are provided in separate documentation.&#13;
The reader will find in this paper both a tutorial section and a reference section. The tutorial will lead the reader through a sample session using the Puma by directly calling the primitive operations, and will provide an introduction to programming using the primitives. The reference section provides an overview of the network protocol and describes all of the primitive operations provided.&#13;
Please note that this document corresponds to the version of the Puma system in use on 11 March, 1985. The system is still undergoing development and enhancement, and there may be additional features if you are running a newer system. The authors welcome reports of errors, inaccuracies, or suggestions for clarification or improvement in either the documentation or the code for the Puma system. Please send electronic mail to BUG-PUMA@MIT-OZ.
</summary>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing the State Behavior of Programs</title>
<link href="https://hdl.handle.net/1721.1/41478" rel="alternate"/>
<author>
<name>Bawden, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/41478</id>
<updated>2019-04-12T09:44:57Z</updated>
<published>1988-08-01T00:00:00Z</published>
<summary type="text">Analyzing the State Behavior of Programs
Bawden, Alan
It is generally agreed that the unrestricted use of state can make a program hard to understand, hard to compile, and hard to execute, and that these difficulties increase in the presence of parallel hardware. This problem has led some to suggest that constructs that allow state should be banished from programming languages. But state is also a very useful phenomenon: some tasks are extremely difficult to accomplish without it, and sometimes the most perspicuous expression of an algorithm is one that makes use of state. Instead of outlawing state, we should be trying to understand it, so that we can make better use of it.&#13;
I propose a way of modeling systems in which the phenomenon of state occurs. I propose that systems that exhibit state-like behavior are those systems that must rely on their own nonlocal structure in order to function correctly, and I make this notion of nonlocal structure precise. This characterization offers some new insights into why state seems to cause the problems that it does. I propose to construct a compiler that takes advantage of these insights to achieve some of the benefits normally associated with purely functional programming systems.
</summary>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward a Richer Language for Describing Software Errors</title>
<link href="https://hdl.handle.net/1721.1/41477" rel="alternate"/>
<author>
<name>Levitin, Samuel M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41477</id>
<updated>2019-04-12T09:44:53Z</updated>
<published>1985-05-01T00:00:00Z</published>
<summary type="text">Toward a Richer Language for Describing Software Errors
Levitin, Samuel M.
Several approaches to the meaning and uses of errors in software development are discussed. An experiment involving a strong type-checking language, CLU, is described, and the results discussed in terms of the state of the art language for bug description. This method of bug description is found to be lacking sufficient detail to model the progress of software through its entire lifetime. A new method of bug description is proposed, which can describe bug types encountered not only in the current experiment but also in previous experiments. It is expected that this method is robust enough to be independent of the various factors of a software project that influence the realms in which bugs will occur.
</summary>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Proposal for Research With the Goal of Formulating a Computational Theory of Rational Action</title>
<link href="https://hdl.handle.net/1721.1/41476" rel="alternate"/>
<author>
<name>Batali, John</name>
</author>
<id>https://hdl.handle.net/1721.1/41476</id>
<updated>2019-04-12T09:44:47Z</updated>
<published>1985-04-01T00:00:00Z</published>
<summary type="text">A Proposal for Research With the Goal of Formulating a Computational Theory of Rational Action
Batali, John
A theory of rational action can be used to determine the right action to perform in a situation. I will develop a theory of rational action in which an agent has access to an explicit theory of rationality. The agent makes use of this theory when it chooses its actions, including the actions involved in determining how to apply the theory. The Intentional states of the agent are realized in states and processes of its physical body. The body of the agent is a computational entity whose operations are under the control of a program. The agent has full access to that program and controls its actions by manipulating that program. I will illustrate the theory by implementing a system which simulates the actions a rational agent takes in various situations.
</summary>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parallel Flow Graph Matching for Automated Program Recognition</title>
<link href="https://hdl.handle.net/1721.1/41475" rel="alternate"/>
<author>
<name>Ritto, Patrick M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41475</id>
<updated>2019-04-10T22:36:39Z</updated>
<published>1988-07-01T00:00:00Z</published>
<summary type="text">Parallel Flow Graph Matching for Automated Program Recognition
Ritto, Patrick M.
A flow graph matching algorithm has been implemented on the Connection Machine which employs parallel techniques to allow efficient subgraph matching. By constructing many different matchings in parallel, the algorithm is able to perform subgraph matching in polynomial time in the size of the graphs. The automated program recognition system can use this algorithm to help make a more efficient flow graph parser. The process of automated program recognition involves recognizing familiar data structures and algorithmic fragments (called clichés) in a program so that a hierarchical description of the program can be constructed. The recognition is done by representing the program as a flow graph and parsing it with a graph grammar which encodes the clichés. In order to find clichés in the midst of unfamiliar code, it is necessary to run the parser on all possible subgraphs of the graph, thus starting the parser an exponential number of times. This is too inefficient for practical use on large programs, so this algorithm has been implemented to allow the matchings to be performed in polynomial time.
</summary>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exceptional Situations in Lisp</title>
<link href="https://hdl.handle.net/1721.1/41474" rel="alternate"/>
<author>
<name>Pitman, Kent M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41474</id>
<updated>2019-04-10T22:36:24Z</updated>
<published>1985-02-01T00:00:00Z</published>
<summary type="text">Exceptional Situations in Lisp
Pitman, Kent M.
Frequently, it is convenient to describe a program in terms of the normal situations in which it will be used, even if such a description does not describe its complete behavior in all circumstances. This paper surveys the issues surrounding the description of program behavior in exceptional situations.
</summary>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Structures of Everyday Life</title>
<link href="https://hdl.handle.net/1721.1/41473" rel="alternate"/>
<author>
<name>Agre, Philip E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41473</id>
<updated>2019-04-10T22:36:32Z</updated>
<published>1985-02-01T00:00:00Z</published>
<summary type="text">The Structures of Everyday Life
Agre, Philip E.
This note descends from a talk I gave at the AI Lab's Revolving Seminar series in November 1984. I offer it as an informal introduction to some work I've been doing over the last year on common sense reasoning. Four themes wander in and out.&#13;
1) Computation provides an observation vocabulary for introspection. With a little work, you can learn to exhume your models of everyday activities. This method can provide empirical grounding for computational theories of the central systems of mind.&#13;
2) The central systems of mind arise in each of us as a rational response to the impediments to living posed by the laws of computation. One of these laws is that all search problems (theorem proving for example) are intractable. Another is that no one model of anything is good enough for all tasks. Reasoning from these laws can provide theoretical grounding for computational theories of the central systems of mind.&#13;
3) Mental models tend to form mathematical lattices under the relation variously called subsumption or generalization. Your mind puts a lot of effort into maintaining this lattice because it has so many important properties. One of these is that the more abstract models provide a normalized decomposition of world-situations that greatly constrains the search for useful analogies.&#13;
4) I have been using these ideas in building a computational theory of routines, the frequently repeated and phenomenologically automatic rituals of which most of daily life is made. I describe this theory briefly.
</summary>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Partial Mechanical Design Compiler</title>
<link href="https://hdl.handle.net/1721.1/41472" rel="alternate"/>
<author>
<name>Ward, Allen C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41472</id>
<updated>2019-04-12T09:44:53Z</updated>
<published>1987-02-01T00:00:00Z</published>
<summary type="text">A Partial Mechanical Design Compiler
Ward, Allen C.
I have implemented a simple "mechanical design compiler", that is, a program which can convert high-level descriptions of a mechanical design into detailed descriptions. (Human interaction is sometimes required.) The program operates in the domain of power transmission equipment composed of discrete, purchasable components. I describe a semantic theory which assigns meanings to the high-level descriptions, and a set of operations on statements in a "specification language" which perform some of the reasoning required by the "compilation" process.
</summary>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tradeoffs in Designing a Parallel Architecture for the Apiary</title>
<link href="https://hdl.handle.net/1721.1/41221" rel="alternate"/>
<author>
<name>Manning, Carl R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41221</id>
<updated>2019-04-12T09:44:46Z</updated>
<published>1984-12-01T00:00:00Z</published>
<summary type="text">Tradeoffs in Designing a Parallel Architecture for the Apiary
Manning, Carl R.
The Apiary is an abstract computer architecture designed for performing computation based on the idea of message passing between dynamic computational objects called actors. An apiary connotes a community of worker bees busily working together; similarly, the Apiary architecture is made of many workers (processing elements) computing together. The Apiary architecture is designed to exploit the concurrency inherent in the actor model of computation by processing the messages to many different actors in parallel. This paper explores the nature of actor computations and how the Apiary performs computation with actors to give the reader some background before looking at some of the tradeoffs which must be made to design special purpose hardware for the Apiary.
</summary>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Mobile Robot Project</title>
<link href="https://hdl.handle.net/1721.1/41220" rel="alternate"/>
<author>
<name>Brooks, Rodney A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41220</id>
<updated>2019-04-12T09:44:47Z</updated>
<published>1985-02-01T00:00:00Z</published>
<summary type="text">A Mobile Robot Project
Brooks, Rodney A.
We are building a mobile robot which will roam around the AI lab observing and later perhaps doing. Our approach to building the robot and its controlling software differs from that used in many other projects in a number of ways. (1) We model the world as three dimensional rather than two. (2) We build no special environment for our robot and insist that it must operate in the same real world that we inhabit. (3) In order to adequately deal with uncertainty of perception and control we build relational maps rather than maps embedded in a coordinate system, and we maintain explicit models of all uncertainties. (4) We explicitly monitor the computational performance of the components of the control system, in order to refine the design of a real time control system for mobile robots based on a special purpose distributed computation engine. (5) We use vision as our primary sense and relegate acoustic sensors to local obstacle detection. (6) We use a new architecture for an intelligent system designed to provide integration of many early vision processes, and robust real-time performance even in cases of sensory overload, failure of certain early vision processes to deliver much information in particular situations, and computation module failure.
</summary>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spurious Behaviors in Qualitative Prediction</title>
<link href="https://hdl.handle.net/1721.1/41219" rel="alternate"/>
<author>
<name>Hall, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41219</id>
<updated>2019-04-10T22:36:35Z</updated>
<published>1988-03-01T00:00:00Z</published>
<summary type="text">Spurious Behaviors in Qualitative Prediction
Hall, Robert J.
I examine the scope and causes of the spurious behavior problem in two widely different approaches to qualitative prediction, Sacks' PLR and Kuipers' QSIM. QSIM's proliferation of spurious behaviors and PLR's limited applicability and problematic extensibility lead me to propose a third, intermediate approach to qualitative prediction called the Phase Space Geometry approach. This has the potential advantages of predicting far fewer spurious behaviors than QSIM-like approaches and being directly applicable to nonlinear systems of all orders.
This paper was originally an Area Exam report, so it may seem somewhat sketchy and incomplete.
</summary>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Associative Learning of Standard Regularizing Operators in Early Vision</title>
<link href="https://hdl.handle.net/1721.1/41218" rel="alternate"/>
<author>
<name>Poggio, Tomaso</name>
</author>
<author>
<name>Hurlbert, Anya</name>
</author>
<id>https://hdl.handle.net/1721.1/41218</id>
<updated>2019-04-12T09:44:49Z</updated>
<published>1984-12-01T00:00:00Z</published>
<summary type="text">Associative Learning of Standard Regularizing Operators in Early Vision
Poggio, Tomaso; Hurlbert, Anya
Standard regularization methods can be used to solve satisfactorily several problems in early vision, including edge detection, surface reconstruction, the computation of motion and the recovery of color. In this paper, we suggest (a) that quadratic variational principles corresponding to standard regularization methods are equivalent to a linear regularizing operator acting on the data and (b) that this operator can be synthesized through associative learning. The synthesis of the regularizing operator involves the computation of the pseudoinverse of the data. The pseudoinverse can be computed by iterative methods, that can be implemented in analog networks. Possible implications for biological visual systems are also discussed.
</summary>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Role of Intensional and Extensional Representations in Simulation</title>
<link href="https://hdl.handle.net/1721.1/41217" rel="alternate"/>
<author>
<name>Brotsky, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/41217</id>
<updated>2019-04-10T17:17:33Z</updated>
<published>1984-12-01T00:00:00Z</published>
<summary type="text">The Role of Intensional and Extensional Representations in Simulation
Brotsky, Daniel
I review three systems which do simulation in different domains. I observe the following commonality in the representations underlying the simulations:&#13;
• The representations used for individuals tend to be domain-dependent. These representations are highly structured, concentrating in one place all the information concerning any particular individual. I call these representations intensional because two such representations are considered equal if their forms are identical.&#13;
• With important exceptions, the representations used for classes of individuals tend to be domain-independent. These representations are unstructured sets of predications involving the characteristics of class members. I call these representations extensional because two such representations are considered equal if the classes they specify are identical.&#13;
I draw out various ramifications of this dichotomy, and speculate as to its cause. In conclusion, I suggest research into the process of debugging extensional class representations and the development of intensional ones.
This paper was prepared as the author's area examination.
</summary>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Novice's Guide to the UNIX at the AI Laboratory Version 1.0</title>
<link href="https://hdl.handle.net/1721.1/41216" rel="alternate"/>
<author>
<name>Highleyman, Liz A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41216</id>
<updated>2019-04-12T09:44:56Z</updated>
<published>1988-05-01T00:00:00Z</published>
<summary type="text">The Novice's Guide to the UNIX at the AI Laboratory Version 1.0
Highleyman, Liz A.
This is a manual for complete beginners. It requires little knowledge of the MIT computer systems, and assumes no knowledge of the UNIX operating system. This guide will show you how to log onto the AI Lab's SUN system using a SUN III or similar workstation or a non-dedicated terminal. Many of the techniques described will be applicable to other computers running UNIX. You will learn how to use various operating system and network features, send and receive electronic mail, create and edit files using GNU EMACS, process text using YTEX, and print your files.
</summary>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The EIGHT Manual: A System for Geometric Modelling and Three-Dimensional Graphics on the Lisp Machine</title>
<link href="https://hdl.handle.net/1721.1/41215" rel="alternate"/>
<author>
<name>Donald, Bruce R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41215</id>
<updated>2019-04-12T09:44:46Z</updated>
<published>1984-08-01T00:00:00Z</published>
<summary type="text">The EIGHT Manual: A System for Geometric Modelling and Three-Dimensional Graphics on the Lisp Machine
Donald, Bruce R.
We describe a simple geometric modelling system called Eight which supports interactive creation, editing, and display of three-dimensional polyhedral solids. Perspective views of a polyhedral environment may be generated, and hidden surfaces removed. Eight proved useful for creating world models, and as an underlying system for modelling object interaction in robotics research and applications. It is documented here in order to make the facility available to other members of the Artificial Intelligence Laboratory.
</summary>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BUILD -- A System Construction Tool</title>
<link href="https://hdl.handle.net/1721.1/41214" rel="alternate"/>
<author>
<name>Robbins, Richard E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41214</id>
<updated>2019-04-10T22:36:24Z</updated>
<published>1984-08-01T00:00:00Z</published>
<summary type="text">BUILD -- A System Construction Tool
Robbins, Richard E.
BUILD is a proposed tool for constructing systems from existing modules. BUILD system descriptions are composed of module declarations and assertions of how modules refer to each other. An extensible library of information about module types and module interaction types is maintained. The library contains information that allows BUILD to derive construction dependencies from the module declarations and referencing patterns enumerated in system descriptions. BUILD will support facilities not adequately provided by existing tools; including automatic derivation of system descriptions, patching of systems, and incorporation of information about how modules change (e.g. the ability to differentiate between the effect of adding a function definition and the effect of adding a comment).
</summary>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Proposal For An Intelligent Debugging Assistant</title>
<link href="https://hdl.handle.net/1721.1/41213" rel="alternate"/>
<author>
<name>Kuper, Ron I.</name>
</author>
<id>https://hdl.handle.net/1721.1/41213</id>
<updated>2019-04-10T22:36:35Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">A Proposal For An Intelligent Debugging Assistant
Kuper, Ron I.
There are many ways to find bugs in programs. For example, observed input and output values can be compared to predicted values. An execution trace can be examined to locate errors in control flow. The utility of these and other strategies depends on the quality of the specifications available. The Debugging Assistant chooses the most appropriate debugging strategy based on the specification information available and the context of the bug. Particular attention has been given to applying techniques from the domain of hardware troubleshooting to the domain of software debugging. This has revealed two important differences between the two domains: (1) Unlike circuits, programs rarely come with complete specifications of their behavior, and (2) Unlike circuits, the cost of probing inputs and outputs of programs is low.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Routing Thoughts</title>
<link href="https://hdl.handle.net/1721.1/41212" rel="alternate"/>
<author>
<name>Poggio, Tomaso A</name>
</author>
<id>https://hdl.handle.net/1721.1/41212</id>
<updated>2025-07-24T00:06:03Z</updated>
<published>1984-05-01T00:00:00Z</published>
<summary type="text">Routing Thoughts
Poggio, Tomaso A
In a parallel machine with many thousands of processors the routing of information between processors is a key task, which turns out to require as much hardware and perhaps more sophistication than local computing itself. There are at least two basic engineering solutions to the routing problem: one followed by most research projects is of the "packet switching" type, that behaves as a mail service, with data carrying addresses to route the packet through the system. The other, more similar to a traditional telephone system, has connections made and broken (or enabled and disabled) as required for exchanging information. These solutions, based on silicon technology and digital electronics, may be quite different from the routing solutions used by the prototypical parallel machine — the brain.&#13;
This paper asks questions concerning routing information in parallel machines with an eye to biological wetware. It is divided into four disconnected parts that do not contain finished results but consist of suggestions for future speculations:&#13;
1) How to make Infinity Small.&#13;
2) Routers and Brains&#13;
3) Classifying Parallel Machines&#13;
4) The Problem of Remapping
This working paper has been brought to you by the modern wonders of microcassette dictating equipment, through which Professor Poggio can now cough up working papers while doing something else more important.
</summary>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>TEMPEST -- A Template Editor for Structured Text</title>
<link href="https://hdl.handle.net/1721.1/41211" rel="alternate"/>
<author>
<name>Sterpe, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/41211</id>
<updated>2019-04-12T09:43:45Z</updated>
<published>1984-05-01T00:00:00Z</published>
<summary type="text">TEMPEST -- A Template Editor for Structured Text
Sterpe, Peter
This paper proposes an editing tool named TEMPEST (TEMPlate Editor for Structured Text) whose goal is to extend a text editing environment by using templates to incorporate into it some knowledge of the structure of the text that is being edited. TEMPEST's functionality is focused on the structural aspects of text editing that are not well supported by typical text editors. In addition, it uses a text-based approach which affords a wide range of applicability. A scenario is given to illustrate its use.
</summary>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Program Improvement by Automatic Redistribution of Intermediate Results</title>
<link href="https://hdl.handle.net/1721.1/41210" rel="alternate"/>
<author>
<name>Hall, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41210</id>
<updated>2019-04-11T02:59:00Z</updated>
<published>1988-05-01T00:00:00Z</published>
<summary type="text">Program Improvement by Automatic Redistribution of Intermediate Results
Hall, Robert J.
The problem of automatically improving the performance of computer programs has many facets. A common source of program inefficiency is the use of abstraction techniques in program design: general tools used in a specific context often do unnecessary or redundant work. Examples include needless copy operations, redundant subexpressions, multiple traversals of the same datastructure and maintenance of overly complex data invariants. I propose to focus on one broadly applicable way of improving a program's performance: redistributing intermediate results so that computation can be avoided. I hope to demonstrate that this is a basic principle of optimization from which many of the current approaches to optimization may be derived. I propose to implement a system that automatically finds and exploits opportunities for redistribution in a given program. In addition to the program source, the system will accept an explanation of correctness and purpose of the code.&#13;
Beyond the specific task of program improvement, I anticipate that the research will contribute to our understanding of the design and explanatory structure of programs. Major results will include (1) definition and manipulation of a representation of the correctness and purpose of a program's implementation, and (2) definition, construction, and use of a representation of a program's dynamic behavior.
This paper was originally a Ph.D. thesis proposal.
</summary>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chapter and Verse Program Description</title>
<link href="https://hdl.handle.net/1721.1/41209" rel="alternate"/>
<author>
<name>Turrisi, Elizabeth K.</name>
</author>
<id>https://hdl.handle.net/1721.1/41209</id>
<updated>2019-04-11T07:44:48Z</updated>
<published>1984-06-01T00:00:00Z</published>
<summary type="text">Chapter and Verse Program Description
Turrisi, Elizabeth K.
The design of a program is rarely a straightforward mapping from the problem solution to the code. More frequently, fragments of high level concepts are distributed over one or more modules such that it is hard to identify the fragments which belong to one particular concept. These mappings have to be untangled and described in order to give a complete picture of how the program implements the ideas.&#13;
The Chapter and Verse method of program description emphasizes the high level concepts which underlie a program, and the relationship between these concepts and the low level structure of program code. The organization of the description is similar to that of a textbook. The Chapter and Verse description aids in the use, modification, and evaluation of computer programs by promoting a full understanding of the programs.
</summary>
<dc:date>1984-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Switching Between Discrete and Continuous Models To Predict Genetic Activity</title>
<link href="https://hdl.handle.net/1721.1/41208" rel="alternate"/>
<author>
<name>Weld, Daniel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/41208</id>
<updated>2019-04-11T07:56:52Z</updated>
<published>1983-10-01T00:00:00Z</published>
<summary type="text">Switching Between Discrete and Continuous Models To Predict Genetic Activity
Weld, Daniel S.
Molecular biologists use a variety of models when they predict the behavior of genetic systems. A discrete model of the behavior of individual macromolecular elements forms the foundation for their theory of each system. Yet a continuous model of the aggregate properties of the system is necessary for many predictive tasks.&#13;
I propose to build a computer program, called PEPTIDE, which can predict the behavior of moderately complex genetic systems by performing qualitative simulation on the discrete model, generating a continuous model from the discrete model through aggregation, and applying limit analysis to the continuous model. PEPTIDE's initial knowledge of a specific system will be represented with a discrete model which distinguishes between macromolecule structure and function and which uses five atomic processes as its functional primitives. Qualitative Process (QP) theory [Forbus 83] provides the representation for the continuous model.&#13;
Whenever a system has multiple models of a domain, the decision of which model to use at a given time becomes a critically important issue. Knowledge of the relative significance of differing element concentrations and the behavior of process structure cycles will allow PEPTIDE to determine when to switch reasoning modes.
</summary>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Introduction to Using the Window System</title>
<link href="https://hdl.handle.net/1721.1/41207" rel="alternate"/>
<author>
<name>Weinreb, Daniel</name>
</author>
<author>
<name>Moon, David A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41207</id>
<updated>2019-04-10T23:12:22Z</updated>
<published>1982-10-14T00:00:00Z</published>
<summary type="text">Introduction to Using the Window System
Weinreb, Daniel; Moon, David A.
This document is a draft copy of a portion of the Lisp Machine window system manual. It is being published in this form now to make it available, since the complete window system manual is unlikely to be finished in the near future. The information in this document is accurate as of system 67, but is not guaranteed to remain 100% accurate. Understanding some portions of this document may depend on background information which is not contained in any published documentation.&#13;
This paper is a portion of a document which will explain how a programmer may make use of and extend the facilities in the Lisp machine known collectively as the Window System.
</summary>
<dc:date>1982-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Numerical Shape from Shading and Occluding Contours in a Single View</title>
<link href="https://hdl.handle.net/1721.1/41206" rel="alternate"/>
<author>
<name>Ikeuchi, Katsushi</name>
</author>
<id>https://hdl.handle.net/1721.1/41206</id>
<updated>2019-04-12T09:32:25Z</updated>
<published>1979-11-01T00:00:00Z</published>
<summary type="text">Numerical Shape from Shading and Occluding Contours in a Single View
Ikeuchi, Katsushi
An iterative method of using occluding boundary information is proposed to compute surface slope from shading.&#13;
We use a stereographic space rather than the more commonly used gradient space in order to express occluding boundary information. Further, we use "average" smoothness constraints rather than the more obvious "closed loop" smoothness constraints. We develop alternate constraints from the definition of surface smoothness, since the closed loop constraints do not work in the stereographic space. We solve the image irradiance equation iteratively using a Gauss-Seidel method applied to the constraints and boundary information. Numerical experiments show that the method is effective. Finally, we analyze SEM (Scanning Electron Microscope) pictures using this method. Other applications are also proposed.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research under Office of Naval Research contract N00014-77-C-0389.&#13;
Fig. 2-A and Fig. 26 are used from "magnification" by David Scharf under permission of the author.
</summary>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Semantic Description from Drawings of Scenes with Shadows</title>
<link href="https://hdl.handle.net/1721.1/41205" rel="alternate"/>
<author>
<name>Waltz, David L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41205</id>
<updated>2019-04-12T09:32:27Z</updated>
<published>1972-11-01T00:00:00Z</published>
<summary type="text">Generating Semantic Description from Drawings of Scenes with Shadows
Waltz, David L.
The research reported here concerns the principles used to automatically generate three-dimensional representations from line drawings of scenes. The computer programs involved look at scenes which consist of polyhedra and which may contain shadows and various kinds of coincidentally aligned scene features. Each generated description includes information about edge shape (convex, concave, occluding, shadow, etc.), about decomposition of the scene into bodies, about the type of illumination for each region (illuminated, projected shadow, or oriented away from the light source), and about the spatial orientation of regions. The methods used are based on the labeling schemes of Huffman and Clowes; this research provides a considerable extension to their work and also gives theoretical explanation to the heuristic scene analysis work of Guzman, Winston, and others.
This report reproduces a thesis of the same title submitted to the Department of Electrical Engineering, Massachusetts Institute of Technology, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, September 1972.
</summary>
<dc:date>1972-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Heterarchical Program for Recognition of Polyhedra</title>
<link href="https://hdl.handle.net/1721.1/41204" rel="alternate"/>
<author>
<name>Shirai, Yoshiaki</name>
</author>
<id>https://hdl.handle.net/1721.1/41204</id>
<updated>2019-04-11T04:03:10Z</updated>
<published>1972-06-01T00:00:00Z</published>
<summary type="text">A Heterarchical Program for Recognition of Polyhedra
Shirai, Yoshiaki
Recognition of polyhedra by a heterarchical program is presented. The program is based on the strategy of recognizing objects step by step, at each time making use of the previous results. At each stage, the most obvious and simple assumption is made and the assumption is tested. To find a line segment, a range of search is proposed. Once a line segment is found, more of the line is determined by tracking along it. Whenever a new fact is found, the program tries to reinterpret the scene taking the obtained information into consideration. Results of the experiment using an image dissector are satisfactory for scenes containing a few blocks and wedges. Some limitations of the present program and proposals for future development are described.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Reproduction of this document, in whole or in part, is permitted for any purpose of the United States Government.
</summary>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Planning System for Robot Construction Tasks</title>
<link href="https://hdl.handle.net/1721.1/41203" rel="alternate"/>
<author>
<name>Fahlman, Scott E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41203</id>
<updated>2019-04-12T09:32:23Z</updated>
<published>1973-05-01T00:00:00Z</published>
<summary type="text">A Planning System for Robot Construction Tasks
Fahlman, Scott E.
This paper describes BUILD, a computer program which generates plans for building specified structures out of simple objects such as toy blocks. A powerful heuristic control structure enables BUILD to use a number of sophisticated construction techniques in its plans. Among these are the incorporation of pre-existing structure into the final design, pre-assembly of movable sub-structures on the table, and the use of extra blocks as temporary supports and counterweights in the course of construction.&#13;
BUILD does its planning in a modeled 3-space in which blocks of various shapes and sizes can be represented in any orientation and location. The modeling system can maintain several world models at once, and contains modules for displaying states, testing them for inter-object contact and collision, and for checking the stability of complex structures involving frictional forces.&#13;
Various alternative approaches are discussed, and suggestions are included for the extension of BUILD-like systems to other domains. Also discussed are the merits of BUILD's implementation language, CONNIVER, for this type of problem solving.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.&#13;
This report reproduces a thesis of the same title submitted to the Department of Electrical Engineering, Massachusetts Institute of Technology, in partial fulfillment of the requirements for the degree of Bachelor of Science and Master of Science, June 1973.
</summary>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning is Just a Way of Avoiding Figuring Out What To Do Next</title>
<link href="https://hdl.handle.net/1721.1/41202" rel="alternate"/>
<author>
<name>Brooks, Rodney A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41202</id>
<updated>2019-04-10T22:36:34Z</updated>
<published>1987-09-01T00:00:00Z</published>
<summary type="text">Planning is Just a Way of Avoiding Figuring Out What To Do Next
Brooks, Rodney A.
The idea of planning and plan execution is just an intuition-based decomposition. There is no reason it has to be that way. Most likely in the long term, real empirical evidence from systems we know to be built that way (from designing them like that) will determine whether it's a very good idea or not. Any particular planner is simply an abstraction barrier. Below that level we get a choice of whether to slot in another planner or to place a program which does the right thing. Why stop there? Maybe we can go up the hierarchy and eliminate the planners there too. To do this we must move from a state-based way of reasoning to a process-based way of acting.
</summary>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CL1 Manual</title>
<link href="https://hdl.handle.net/1721.1/41201" rel="alternate"/>
<author>
<name>Bawden, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/41201</id>
<updated>2019-04-11T01:54:48Z</updated>
<published>1983-09-01T00:00:00Z</published>
<summary type="text">CL1 Manual
Bawden, Alan
CL1 is a prototyping language for programming a Connection Machine. It supports a model of the Connection Machine as a collection of tiny conventional machines (process elements), each with its own independent program counter.
</summary>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Cooperative Networks</title>
<link href="https://hdl.handle.net/1721.1/41200" rel="alternate"/>
<author>
<name>Marroquin, J. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41200</id>
<updated>2019-04-12T09:32:27Z</updated>
<published>1983-07-01T00:00:00Z</published>
<summary type="text">Design of Cooperative Networks
Marroquin, J. L.
In this paper we analyse several approaches to the design of Cooperative Algorithms for solving a general problem: that of computing the values of some property over a spatial domain, when these values are constrained (but not uniquely determined) by some observations, and by some a priori knowledge about the nature of the solution (smoothness, for example).&#13;
Specifically, we discuss the use of variational techniques; stochastic approximation methods for global optimization; and linear threshold networks. Finally, we present a new approach, based on the interconnection of Winner-take-all networks, for which it is possible to establish precise convergence results, including bounds on the rate of convergence.
</summary>
<dc:date>1983-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MIT Mobile Robots - What's Next?</title>
<link href="https://hdl.handle.net/1721.1/41199" rel="alternate"/>
<author>
<name>Flynn, Anita M.</name>
</author>
<author>
<name>Brooks, Rodney A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41199</id>
<updated>2019-04-12T09:32:26Z</updated>
<published>1987-11-01T00:00:00Z</published>
<summary type="text">MIT Mobile Robots - What's Next?
Flynn, Anita M.; Brooks, Rodney A.
The MIT Mobile Robot Project began in January of 1985 with the objective of building machines that could operate autonomously and robustly in dynamically changing environments. We now have four working robots, each progressively more intelligent and sophisticated. All incorporate some rather novel ideas about how to build a control system that can adequately deal with complex environments. The project has also contributed some innovative and creative technical solutions in terms of putting together sensors, actuators, power supplies and processing power into whole systems that actually work. From our experiences over the past two and a half years, we have gained insight into the real issues and problems and what the goals should be for future robotics research. This paper gives our perspectives on mobile robotics: our objectives, experiences, mistakes and future plans.
</summary>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Differential Operators for Edge Detection</title>
<link href="https://hdl.handle.net/1721.1/41198" rel="alternate"/>
<author>
<name>Torre, V.</name>
</author>
<author>
<name>Poggio, Tomaso A</name>
</author>
<id>https://hdl.handle.net/1721.1/41198</id>
<updated>2025-07-24T00:03:16Z</updated>
<published>1983-03-01T00:00:00Z</published>
<summary type="text">Differential Operators for Edge Detection
Torre, V.; Poggio, Tomaso A
We present several results characterizing two differential operators used for edge detection: the Laplacian and the second directional derivative along the gradient. In particular, (a) we give conditions for coincidence of the zeros of the two operators, and (b) we show that the second derivative along the gradient has the same zeros as the normal curvature in the gradient direction.&#13;
Biological implications are also discussed. An experiment is suggested to test which of the two operators may be used by the human visual system.
</summary>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formalizing Reusable Software Components</title>
<link href="https://hdl.handle.net/1721.1/41197" rel="alternate"/>
<author>
<name>Rich, Charles</name>
</author>
<author>
<name>Waters, Richard C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41197</id>
<updated>2019-04-09T17:28:10Z</updated>
<published>1983-07-01T00:00:00Z</published>
<summary type="text">Formalizing Reusable Software Components
Rich, Charles; Waters, Richard C.
There has been a long-standing desire in computer science for a way of collecting and using libraries of standard software components. Unfortunately, there has been only limited success in actually doing this. We believe that the lack of success stems not from any resistance to the idea, nor from any lack of trying, but rather from the difficulty of choosing an appropriate formalism for representing components. In this paper we define five desiderata for a good formalization of reusable software components and discuss many of the formalisms which have been used for representing components in light of these desiderata. We then briefly describe a formalism we are developing — the Plan Calculus — which seeks to satisfy these desiderata by combining together the best features of prior formalisms.
This paper has been accepted by the ITT Workshop on Reusability in Programming, Newport RI, September 7-9, 1983.
</summary>
<dc:date>1983-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Merging Illustrations and Printing on Big Paper</title>
<link href="https://hdl.handle.net/1721.1/41196" rel="alternate"/>
<author>
<name>Roylance, Gerald</name>
</author>
<id>https://hdl.handle.net/1721.1/41196</id>
<updated>2019-04-10T20:25:41Z</updated>
<published>1987-07-01T00:00:00Z</published>
<summary type="text">Merging Illustrations and Printing on Big Paper
Roylance, Gerald
A how-to guide for some of the printing utilities in the AI Lab. Describes how TEX files are processed and how some illustrations may be merged into the final copy. Also describes how to use TEX to print on 8.5x14 (legal) and 11x17 size paper.
</summary>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Virtual Inclusion</title>
<link href="https://hdl.handle.net/1721.1/41195" rel="alternate"/>
<author>
<name>Chapman, David</name>
</author>
<author>
<name>Agre, Philip E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41195</id>
<updated>2019-04-12T09:32:25Z</updated>
<published>1983-09-01T00:00:00Z</published>
<summary type="text">Virtual Inclusion
Chapman, David; Agre, Philip E.
Several recent knowledge-representation schemes have used virtual copies for storage efficiency. Virtual copies are confusing. In the course of trying to understand, implement, and use Jon Doyle's SDL virtual copy mechanism, we encountered difficulties that led us to define an extension of virtual copies we call virtual inclusion. Virtual inclusion has interesting similarities to the environment structures maintained by a program in a block-structured language. It eliminates the clumsy typed part mechanism of SDL, and handles properly a proposed test of sophisticated virtual copy schemes.
</summary>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Naive Problem Solving and Naive Mathematics</title>
<link href="https://hdl.handle.net/1721.1/41194" rel="alternate"/>
<author>
<name>Chapman, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41194</id>
<updated>2019-04-11T01:54:48Z</updated>
<published>1983-06-01T00:00:00Z</published>
<summary type="text">Naive Problem Solving and Naive Mathematics
Chapman, David
AI problem solvers have almost always been given a complete and correct axiomatization of their problem domain and of the operators available to change it. Here I discuss a paradigm for problem solving in which the problem solver initially is given only a list of available operators, with no indication as to the structure of the world or the behavior of the operators. Thus, to begin it is "blind" and can only stagger about in the world tripping over things until it begins to understand what is going on. Eventually it will learn enough to solve problems in the world as well as if the world had been explained to it initially. I call this paradigm naive problem solving. The difficulty of adequately formalizing all but the most constrained domains makes naive problem solving desirable.&#13;
I have implemented a naive problem solver that learns to stack blocks and to use an elevator. It learns by finding instances of "naive mathematical cliches" which are common mental models that are likely to be useful in any domain.
</summary>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The New Idiot's Guide to OZ</title>
<link href="https://hdl.handle.net/1721.1/41193" rel="alternate"/>
<author>
<name>Highleyman, Liz A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41193</id>
<updated>2019-04-09T18:56:48Z</updated>
<published>1988-02-01T00:00:00Z</published>
<summary type="text">The New Idiot's Guide to OZ
Highleyman, Liz A.
This is a manual for complete beginners. It assumes no knowledge of the MIT computer systems. This guide will teach you how to log onto the computer called OZ, a DEC PDP-20 computer running the TWENEX (TOPS-20) operating system. You will learn how to use various operating system features, send and receive electronic mail, create and edit files using EMACS, process text using YTEX, and print out your files. This manual has a companion on-line directory on OZ, called &lt;LIZ.GUIDE&gt;, which contains sample programs and examples to use in conjunction with this guide.
</summary>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interfacing to the Programmer's Apprentice</title>
<link href="https://hdl.handle.net/1721.1/41192" rel="alternate"/>
<author>
<name>Pitman, Kent</name>
</author>
<id>https://hdl.handle.net/1721.1/41192</id>
<updated>2019-04-10T19:28:18Z</updated>
<published>1983-02-01T00:00:00Z</published>
<summary type="text">Interfacing to the Programmer's Apprentice
Pitman, Kent
In this paper, we discuss the design of a user interface to the Knowledge Based Editor (KBE), a prototype implementation of the Programmer's Apprentice. Although internally quite sophisticated, the KBE hides most of its internal mechanisms from the user, presenting a simplified model of its behavior which is flexible and easy to use. Examples are presented to illustrate the decisions which have led from high-level design principles such as "integration with existing tools" and "simplicity of user model" to a working implementation which is true to those principles.
This paper has been submitted to SoftFair, an IEEE/NBS/SIGSOFT co-sponsored conference on software development tools, techniques, and alternatives, which will be held at the Hyatt Regency Crystal City, Arlington, VA., July 26-28, 1983.
</summary>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representing Change for Common-Sense Physical Reasoning</title>
<link href="https://hdl.handle.net/1721.1/41191" rel="alternate"/>
<author>
<name>Doyle, Richard J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41191</id>
<updated>2019-04-11T01:54:45Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">Representing Change for Common-Sense Physical Reasoning
Doyle, Richard J.
Change pervades every moment of our lives. Much of our success in dealing with a constantly changing world is based in common-sense physical reasoning about processes and physical systems. Processes are the way quantities interact over time. Physical systems can be described as a set of quantities and the processes that operate on them. Representations for causality, time, and quantity are needed to fully characterize change in this domain. Several ideas for these representations are examined and synthesized in this paper towards the goal of constructing a framework to support understanding of, reasoning about, and learning how things work.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Condor Programmer's Manual - Version II</title>
<link href="https://hdl.handle.net/1721.1/41190" rel="alternate"/>
<author>
<name>Narasimhan, Sundar</name>
</author>
<author>
<name>Siegel, David M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41190</id>
<updated>2019-04-12T09:32:22Z</updated>
<published>1987-07-01T00:00:00Z</published>
<summary type="text">The Condor Programmer's Manual - Version II
Narasimhan, Sundar; Siegel, David M.
This is the CONDOR programmer's manual, which describes the hardware and software that form the basis of the real-time computational architecture built originally for the Utah-MIT hand. The architecture has been used successfully to control the hand and the MIT-Serial Link Direct Drive Arm in the past. A number of such systems are being built to address the computational needs of other robotics research efforts in and around the lab. This manual, which is intended primarily for programmers/users of the CONDOR system, represents our effort at documenting the system so that it can be a generally useful research tool.
</summary>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Connection Machine RAM Chip</title>
<link href="https://hdl.handle.net/1721.1/41189" rel="alternate"/>
<author>
<name>Flynn, Anita M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41189</id>
<updated>2019-04-11T01:54:45Z</updated>
<published>1983-01-03T00:00:00Z</published>
<summary type="text">The Connection Machine RAM Chip
Flynn, Anita M.
This document describes the three-transistor NMOS dynamic RAM circuit used in the Connection Machine. It was designed and implemented by Brewster Kahle, with the assistance of Jim Cherry, Danny Hillis and Tom Knight. Prototypes were fabricated through the ARPA MOSIS facility, using both four and three micron design rules. Jim Li and I tested both runs this fall. They work. This document describes how.
</summary>
<dc:date>1983-01-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of Manipulators with Less Than One Degree of Freedom</title>
<link href="https://hdl.handle.net/1721.1/41188" rel="alternate"/>
<author>
<name>Hillis, D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41188</id>
<updated>2019-04-10T19:20:32Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">Dynamics of Manipulators with Less Than One Degree of Freedom
Hillis, D.
We have developed an efficient Lagrangian formulation of manipulators with small numbers of degrees of freedom. The efficiency derives from the lack of velocities, accelerations, and generalized forces. The number of additions and multiplications remains constant, independent of the number of joints, as long as the number of joints remains less than one. While this is a restricted class of manipulators, we believe that it is important to understand it fully before studying more complex systems. Manipulators with less than one degree of freedom are by far the most common manipulators used by industry. We have also noticed that many of the multiple-degree-of-freedom manipulators in our laboratory tend to be used in a zero-degree-of-freedom mode. With this formulation of the dynamics it should be possible in principle to compute the Lagrangian dynamics of manipulators with less than one degree of freedom in real time.
Acknowledgments. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. My thanks to Marvin Minsky, Phil Agre, and David Chapman for pointing out relevant trends in current robotics research. A.I. Laboratory Working Papers are produced for internal circulation, and may contain information that is, for example, too preliminary or too detailed for formal publication. It is not intended that they should be considered papers to which reference can be made in the literature.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Interaction Between Truth Maintenance, Equality, and Pattern-Directed Invocation: Issues of Completeness and Efficiency</title>
<link href="https://hdl.handle.net/1721.1/41187" rel="alternate"/>
<author>
<name>Feldman, Yishai A.</name>
</author>
<author>
<name>Rich, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/41187</id>
<updated>2019-04-12T09:32:24Z</updated>
<published>1987-05-01T00:00:00Z</published>
<summary type="text">The Interaction Between Truth Maintenance, Equality, and Pattern-Directed Invocation: Issues of Completeness and Efficiency
Feldman, Yishai A.; Rich, Charles
We have implemented a reasoning system, called BREAD, which includes truth maintenance, equality, and pattern-directed invocation. This paper reports on the solution of two technical problems arising out of the interaction between these mechanisms. The first result is an algorithm which ensures the completeness of pattern-directed invocation with respect to equality. The second result is an algorithm which reduces a class of redundant proofs.
</summary>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Empirical Study of Program Modification Histories</title>
<link href="https://hdl.handle.net/1721.1/41186" rel="alternate"/>
<author>
<name>Zelinka, Linda M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41186</id>
<updated>2019-04-12T09:32:24Z</updated>
<published>1983-03-01T00:00:00Z</published>
<summary type="text">An Empirical Study of Program Modification Histories
Zelinka, Linda M.
Large programs undergo many changes before they run in a satisfactory manner. For many large programs, modification histories are kept which record every change that is made to the program. By studying these records, patterns of program evolution can be identified. This paper describes a taxonomy of types of changes which was developed by studying several such histories. In addition, it discusses a possible application of this classification in an interactive tool for the updating of user documentation.
</summary>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What to Read: A Biased Guide to AI Literacy for the Beginner</title>
<link href="https://hdl.handle.net/1721.1/41185" rel="alternate"/>
<author>
<name>Agre, Philip E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41185</id>
<updated>2019-04-09T18:39:26Z</updated>
<published>1972-11-01T00:00:00Z</published>
<summary type="text">What to Read: A Biased Guide to AI Literacy for the Beginner
Agre, Philip E.
This note tries to provide a quick guide to AI literacy for the beginning AI hacker and for the experienced AI hacker or two whose scholarship isn't what it should be. Most will recognize it as the same old list of classic papers, give or take a few that I feel to be under- or over-rated. It is not guaranteed to be thorough or balanced or anything like that.
Acknowledgements. It was Ken Forbus' idea, and he, Howie Shrobe, Dan Weld, and John Batali read various drafts. Dan Huttenlocher and Tom Knight helped with the speech recognition section. The science fiction section was prepared with the aid of my SF/AI editorial board, consisting of Carl Feynman and David Wallace, and of the ArpaNet SF-Lovers community. Even so, all responsibility rests with me.
</summary>
<dc:date>1972-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gnat Robots (And How They Will Change Robotics)</title>
<link href="https://hdl.handle.net/1721.1/41184" rel="alternate"/>
<author>
<name>Flynn, A. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41184</id>
<updated>2019-04-12T09:32:24Z</updated>
<published>1987-06-01T00:00:00Z</published>
<summary type="text">Gnat Robots (And How They Will Change Robotics)
Flynn, A. M.
A new concept in mobile robots is proposed, namely that of a gnat-sized autonomous robot with on-board sensors, brains, actuators and power supplies, all fabricated on a single piece of silicon. Recent breakthroughs in computer architectures for intelligent robots, sensor integration algorithms and micromachining techniques for building on-chip micromotors, combined with the ever decreasing size of integrated logic, sensors and power circuitry have led to the possibility of a new generation of mobile robots which will vastly change the way we think about robotics.&#13;
Forget about today's first generation robots: costly, bulky machines with parts acquired from many different vendors. What will appear will be cheap, mass produced, slimmed down, integrated robots that need no maintenance, no spare parts, and no special care. The cost advantages of these robots will create new worlds of applications.&#13;
Gnat robots will offer a new approach in using automation technology. We will begin to think in terms of massive parallelism: using millions of simple, cheap, gnat robots in place of one large complicated robot. Furthermore, disposable robots will even become realistic.&#13;
This paper outlines how to build gnat robots. It discusses the technology thrusts that will be required for developing such machines and sets forth some strategies for design. A close look is taken at the tradeoffs involved in choosing components of the system: locomotion options, power sources, types of sensors and architectures for intelligence.
</summary>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Talking to the Puma</title>
<link href="https://hdl.handle.net/1721.1/41183" rel="alternate"/>
<author>
<name>Sobalvarro, Patrick G.</name>
</author>
<id>https://hdl.handle.net/1721.1/41183</id>
<updated>2019-04-10T23:12:21Z</updated>
<published>1982-09-01T00:00:00Z</published>
<summary type="text">Talking to the Puma
Sobalvarro, Patrick G.
The AI Lab's Unimation Puma 600 is a general-purpose industrial robot arm that has been interfaced to a Lisp Machine for use in robotics projects at the lab. It has been fitted with a force-sensing wrist. The Puma is capable of moving payloads of up to 5 pounds at up to 1 meter per second, with positioning accuracy to within a millimeter.&#13;
This paper is a primer on the control of the Puma from a Lisp Machine. The current Lisp Machine interface is preliminary; the Lisp Machine communicates with the Puma over a serial line in Unimation's VAL language. The interface will probably change over the next year; however, the commands documented in this paper will probably remain much the same.
</summary>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Program Description</title>
<link href="https://hdl.handle.net/1721.1/41182" rel="alternate"/>
<author>
<name>Cyphers, D. Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/41182</id>
<updated>2019-04-10T16:28:14Z</updated>
<published>1982-08-01T00:00:00Z</published>
<summary type="text">Automated Program Description
Cyphers, D. Scott
The Programmer's Apprentice (PA) is an automated program development tool. The PA depends upon a library of common algorithms (cliches) as the source of its knowledge about programming. The PA uses these cliches to understand how a program is implemented. This knowledge may also be used to explain to a user of the PA how the program is implemented.&#13;
The problem with any explanation or description is knowing how much information to present, and how much information to hide. A set of simple heuristics for doing this can be used with the cliche representation of a program to produce reasonable descriptions of parts of programs. The system described combines "canned" phrases corresponding to cliche parts to form explanations. The process is fast and appears to be easily extensible to future versions of the PA and other domains.
</summary>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ACE: A Cliché-based Program Structure Editor</title>
<link href="https://hdl.handle.net/1721.1/41181" rel="alternate"/>
<author>
<name>Tan, Yang Meng</name>
</author>
<id>https://hdl.handle.net/1721.1/41181</id>
<updated>2019-04-12T09:32:23Z</updated>
<published>1987-05-01T00:00:00Z</published>
<summary type="text">ACE: A Cliché-based Program Structure Editor
Tan, Yang Meng
ACE extends the syntax-directed paradigm of program editing by adding support for programming clichés. A programming cliché is a standard algorithmic fragment. ACE supports the rapid construction of programs through the combination of clichés selected from a cliché library.&#13;
ACE is also innovative in the way it supports the basic structure editor operations. Instead of being based directly on the grammar for a programming language, ACE is based on a modified grammar which is designed to facilitate editing. Uniformity of the user interface is achieved by encoding the modified grammar as a set of clichés.
</summary>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Getting Started Computing at the AI Lab</title>
<link href="https://hdl.handle.net/1721.1/41180" rel="alternate"/>
<author>
<name>Stacy, Christopher C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41180</id>
<updated>2019-04-12T09:32:22Z</updated>
<published>1982-09-07T00:00:00Z</published>
<summary type="text">Getting Started Computing at the AI Lab
Stacy, Christopher C.
This document describes the computing facilities at the M.I.T. Artificial Intelligence Laboratory and explains how to get started using them. It is intended as an orientation document for newcomers to the lab, and will be updated by the author from time to time.
</summary>
<dc:date>1982-09-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>TRIG: An Interactive Robotic Teach System</title>
<link href="https://hdl.handle.net/1721.1/41179" rel="alternate"/>
<author>
<name>McLaughlin, James R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41179</id>
<updated>2019-04-12T09:32:21Z</updated>
<published>1982-06-01T00:00:00Z</published>
<summary type="text">TRIG: An Interactive Robotic Teach System
McLaughlin, James R.
Currently, it is difficult for a non-programmer to generate a complex sensor-based robotic program. Most robot programming methods either generate only very simple programs or are useful only to programmers. This paper presents an interactive teach system that will allow non-programmers to create a program for a six-degree-of-freedom mechanical robot. In addition to conventional guiding capabilities, the teach system will allow the user to create complex programs containing sensor-based moves (move until touch), loops, and branches.
</summary>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovery Systems: From AM to CYRANO</title>
<link href="https://hdl.handle.net/1721.1/41178" rel="alternate"/>
<author>
<name>Haase, Ken</name>
</author>
<id>https://hdl.handle.net/1721.1/41178</id>
<updated>2019-04-12T09:32:22Z</updated>
<published>1987-03-01T00:00:00Z</published>
<summary type="text">Discovery Systems: From AM to CYRANO
Haase, Ken
The emergence in 1976 of Doug Lenat's mathematical discovery program AM [Len76] [Len82a] was met with surprise and controversy; AM's performance seemed to bring the dream of super-intelligent machines to our doorstep, with amazingly simple methods to boot. However, the seeming promise of AM was not borne out: no generation of automated super-mathematicians appeared. Lenat's subsequent attempts (with his work on the Eurisko program) to explain and alleviate AM's problems were something of a novelty in Artificial Intelligence research; AI projects are usually 'let lie' after a brief moment in the limelight with a handful of examples. Lenat's work on Eurisko revealed certain constraints on the design of discovery programs; in particular, Lenat discovered that a close coupling of representation syntax and semantics is necessary for a discovery program to prosper in a given domain. After Eurisko, my own work on the discovery program Cyrano has revealed more constraints on discovery processes in general; in particular, work on Cyrano has revealed a requirement of 'closure' in concept formation: the concepts generated by a discovery program's concept formation component must be usable as inputs to that same concept formation component. Beginning with a theoretical analysis of AM's actual performance, this paper presents a theory of discovery and goes on to present the implementation of an experiment (the CYRANO program) based on this theory. (This article is a preliminary version of an invited paper for the First International Symposium on Artificial Intelligence and Expert Systems, to be held in Berlin on May 18-22, 1987.)
</summary>
<dc:date>1987-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Code Generation in the Programmer's Apprentice</title>
<link href="https://hdl.handle.net/1721.1/41177" rel="alternate"/>
<author>
<name>Handsaker, Robert E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41177</id>
<updated>2019-04-11T01:54:44Z</updated>
<published>1982-05-01T00:00:00Z</published>
<summary type="text">Code Generation in the Programmer's Apprentice
Handsaker, Robert E.
The Programmer's Apprentice is a highly interactive program development tool. The user interface to the system relies on program text which is generated from an internal plan representation. The programs generated need to be easy for a programmer to read and understand. This paper describes a design for a code generation module which can be tailored to produce code which reflects the stylistic preferences of individual programmers.
</summary>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aspects of the Rover Problem</title>
<link href="https://hdl.handle.net/1721.1/41176" rel="alternate"/>
<author>
<name>Doyle, Richard J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41176</id>
<updated>2019-04-12T09:32:21Z</updated>
<published>1982-12-01T00:00:00Z</published>
<summary type="text">Aspects of the Rover Problem
Doyle, Richard J.
The basic task of a rover is to move about autonomously in an unknown environment. A working rover must have the following three subsystems, which interact in various ways: 1) locomotion--the ability to move, 2) perception--the ability to determine the three-dimensional structure of the environment, and 3) navigation--the ability to negotiate the environment. This paper will elucidate the nature of the problem in these areas and survey approaches to solving them while paying attention to real-world issues.
</summary>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knowledge-Based Schematics Drafting: Aesthetic Configuration as a Design Task</title>
<link href="https://hdl.handle.net/1721.1/41175" rel="alternate"/>
<author>
<name>Valdes-Perez, Raul E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41175</id>
<updated>2019-04-09T18:19:10Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Knowledge-Based Schematics Drafting: Aesthetic Configuration as a Design Task
Valdes-Perez, Raul E.
Depicting an electrical circuit by a schematic is a tedious task that is a good candidate for automation. Programs that draft schematics with the usual algorithmic approach do not fully exploit knowledge of circuit function, relying mainly on the circuit topology. The extra-topological circuit characteristics are what an engineer uses to understand a schematic; human drafters take these characteristics into account when drawing a schematic.&#13;
This document presents a knowledge base and an architecture for drafting arithmetic digital circuits having a single theme. The relevance and limitations of this architecture and knowledge base for other types of circuit are explored.&#13;
It is argued that the task of schematics drafting is one of aesthetic design. The effect of aesthetic criteria on the program architecture is discussed. The circuit layout constraint language, the program's search regimen, and the backtracking scheme are highlighted and explained in detail.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hidden Cues in Random Line Stereograms</title>
<link href="https://hdl.handle.net/1721.1/41174" rel="alternate"/>
<author>
<name>Nishihara, H. K.</name>
</author>
<author>
<name>Poggio, Tomaso A</name>
</author>
<id>https://hdl.handle.net/1721.1/41174</id>
<updated>2025-07-24T00:04:37Z</updated>
<published>1982-04-01T00:00:00Z</published>
<summary type="text">Hidden Cues in Random Line Stereograms
Nishihara, H. K.; Poggio, Tomaso A
Successful fusion of random-line stereograms with breaks in the vernier acuity range has been interpreted to suggest that the interpolation process underlying hyperacuity is parallel and preliminary to stereomatching. In this paper (a) we demonstrate with computer experiments that vernier cues are not needed to solve the stereomatching problem posed by these stereograms and (b) we provide psychophysical evidence that human stereopsis probably does not use vernier cues alone to achieve fusion of these random-line stereograms.
</summary>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Primer for TEX Users</title>
<link href="https://hdl.handle.net/1721.1/41173" rel="alternate"/>
<author>
<name>Jones, Judi</name>
</author>
<id>https://hdl.handle.net/1721.1/41173</id>
<updated>2019-04-09T16:24:34Z</updated>
<published>1982-03-01T00:00:00Z</published>
<summary type="text">A Primer for TEX Users
Jones, Judi
TEX is our latest text formatter. It is designed specifically for technical text (e.g., mathematics), and produces much higher quality output than other formatters previously available. Donald Knuth designed TEX at Stanford and published a manual, TEX and METAFONT: New Directions in Typesetting, with "Everything you need to know about TEX." The original people who used TEX here set up their own macro files, but now Daniel Brotsky has developed a standardized macro package which does the types of formatting usually desired. This macro package will be referred to as TBase in this document.&#13;
The aim of this memo is to help you create your first TEX file, explain the basic commands for formatting (showing some examples), and clarify possible areas of confusion, giving pointers to the more technical documentation available for the advanced user. It is advisable for someone planning to use TEX to get copies of: INFO;TBASE INFO, NTEXLB;TBASE ORDER, NTEXLB;SAMPLE PRESS, NTEXLB:SAMPLE TEX and a copy of Knuth's manual. This document tries not to duplicate information already explained in the materials just mentioned, only to clarify some areas and set the information forth in an easily digestible manner.
</summary>
<dc:date>1982-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical Analysis of Programming in Societies of Behaviors</title>
<link href="https://hdl.handle.net/1721.1/41172" rel="alternate"/>
<author>
<name>Cudhea, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/41172</id>
<updated>2019-04-10T22:36:33Z</updated>
<published>1986-12-01T00:00:00Z</published>
<summary type="text">Critical Analysis of Programming in Societies of Behaviors
Cudhea, Peter
Programming in societies of behavior-agents is emerging as a promising method for creating mobile robot control systems that are responsive both to internal priorities for action and to external world constraints. It is essentially a new approach to finding modularities in real-time control systems in which module boundaries are sought not between separate information processing functions, but between separate task-achieving units. Task achieving units for complex behaviors are created by merging together the task-achieving units from simpler component behaviors into societies with competing and cooperating parts. This paper surveys the areas of agreement and disagreement in four approaches to programming with societies of behaviors. By analyzing where the systems differ, both on what constitutes a task-achieving unit and on how to merge such units together, this paper hopes to lay the groundwork for future work on controlling robust mobile robots using this approach.
</summary>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report on the Second Workshop on Distributed AI</title>
<link href="https://hdl.handle.net/1721.1/41171" rel="alternate"/>
<author>
<name>Davis, Randall</name>
</author>
<id>https://hdl.handle.net/1721.1/41171</id>
<updated>2019-04-12T09:44:45Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Report on the Second Workshop on Distributed AI
Davis, Randall
On June 24, 1981 twenty-five participants from organizations around the country gathered in MIT's Endicott House for the Second Annual Workshop on Distributed AI. The three-day workshop was designed as an informal meeting, centered mainly around brief research reports presented by each group, along with an invited talk. In keeping with the spirit of the meeting, this report was prepared as a distributed document, with each speaker contributing a summary of his remarks.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Guide to ITS Operations: Useful Spells and Incantations</title>
<link href="https://hdl.handle.net/1721.1/41170" rel="alternate"/>
<author>
<name>Stacy, Christopher C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41170</id>
<updated>2019-04-10T22:36:23Z</updated>
<published>1982-01-27T00:00:00Z</published>
<summary type="text">A Guide to ITS Operations: Useful Spells and Incantations
Stacy, Christopher C.
It is said that it is not wise to dabble in the Arts without care and caution, for the spell is at once subtle and dangerous: Look herein! For if you read carefully and closely, you can incant a Word of Magic, and the system might be revived.&#13;
This working paper describes crash recovery procedures for a DEC KA-10 computer running ITS, the Incompatible Timesharing System. It is intended for people not intimately familiar with the system internals who need to handle emergency operations when a system maintainer is not available.
</summary>
<dc:date>1982-01-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Requirements Analyst's Apprentice: A Proposal</title>
<link href="https://hdl.handle.net/1721.1/41169" rel="alternate"/>
<author>
<name>Reubenstein, Howard</name>
</author>
<id>https://hdl.handle.net/1721.1/41169</id>
<updated>2019-04-09T18:27:27Z</updated>
<published>1986-09-01T00:00:00Z</published>
<summary type="text">A Requirements Analyst's Apprentice: A Proposal
Reubenstein, Howard
The Requirements Analyst's Apprentice (RAAP) partially automates the modeling process involved in creating a software requirement. It uses knowledge of the specific domain and general experience regarding software requirements to guide decisions made in the construction of a requirement. RAAP assists the analyst by maintaining consistency, detecting redundancy of description, and analyzing completeness relative to a known body of requirements experience. RAAP is a tool to be used by an analyst in his dealings with the customer. It helps him translate the customer's informal ideas into a requirements knowledge base. RAAP will have the ability to present its internal representation of the requirement in document form. Document-based requirements analysis is the state of the art. A computer-based, knowledge-based analysis system can provide improvement in quality, efficiency, and maintainability over document-based requirements analysis and thus advance the state of the art towards automatic programming. RAAP takes a new approach to automating software development by concentrating on the modeling process involved in system construction (as opposed to the model translation process). By supporting the intelligent creation of perspicuous models, it is hoped that flaws will become self-revealing and the quality of software can be improved. Assistance is provided for the creation of "correct" models and for the analysis of the implications of modeling decisions.
</summary>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Assq Chip and Its Progeny</title>
<link href="https://hdl.handle.net/1721.1/41168" rel="alternate"/>
<author>
<name>Agre, Philip E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41168</id>
<updated>2019-04-11T07:56:52Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">The Assq Chip and Its Progeny
Agre, Philip E.
The Assq Chip lives on the memory bus of the Scheme-81 chip of Sussman et al. and serves as a utility for the computation of a number of functions concerned with the maintenance of linear tables and lists. Motivated by a desire to apply the design methodology implicit in Scheme-81, it was designed in about two months, has a very simple architecture and layout, and is primarily machine-generated. The chip and the design process are described and evaluated in the context of a proposal to construct a Scheme-to-silicon compiler that automates the design methodology used in the Assq Chip.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Program Understanding through Cliché Recognition</title>
<link href="https://hdl.handle.net/1721.1/41167" rel="alternate"/>
<author>
<name>Brotsky, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/41167</id>
<updated>2019-04-10T19:34:53Z</updated>
<published>1981-12-01T00:00:00Z</published>
<summary type="text">Program Understanding through Cliché Recognition
Brotsky, Daniel
We propose research into automatic program understanding via recognition of common data structures and algorithms (clichés). Our goals are two-fold: first, to develop a theory of program structure which makes such recognition tractable; and second, to produce a program (named Inspector) which, given a Lisp program and a library of clichés, will construct a hierarchical decomposition of the program in terms of the clichés it uses.&#13;
Our approach involves assuming constraints on the possible decompositions of programs according to the teleological relations between their parts. Programs are analyzed by translating them into a language-independent form and then parsing this representation in accordance with a context-free web grammar induced by the library of clichés. Decompositions produced by this analysis will in general be partial, since most programs will not be made up entirely of clichés.&#13;
This work is motivated by the belief that identification of clichés used in program, together with knowledge of their properties, provides a sufficient basis for understanding large parts of that program's behavior. Inspector will become one component of a system of programs known as a programmer's apprentice, in which Inspector's output will be used to assist a programmer with program synthesis, debugging, and maintenance.
</summary>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Readable Layout of Unbalanced N-ary Trees</title>
<link href="https://hdl.handle.net/1721.1/41166" rel="alternate"/>
<author>
<name>Solo, David M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41166</id>
<updated>2019-04-12T09:44:53Z</updated>
<published>1986-08-01T00:00:00Z</published>
<summary type="text">Readable Layout of Unbalanced N-ary Trees
Solo, David M.
The automatic layout of unbounded n-ary tree structures is a problem of subjectively meshing two independent goals: clarity and space efficiency. This paper presents a minimal set of subjective aesthetics which ensures highly readable structures without overly restricting flexibility in the layout of the tree. This flexibility underlies the algorithm's ability to produce readable trees with greater uniformity of node density throughout the display than achieved by previous algorithms, an especially useful characteristic where nodes are labelled with text.
</summary>
<dc:date>1986-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programming Cliches and Cliche Extraction</title>
<link href="https://hdl.handle.net/1721.1/41165" rel="alternate"/>
<author>
<name>Cyphers, D. Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/41165</id>
<updated>2019-04-12T09:44:46Z</updated>
<published>1982-02-01T00:00:00Z</published>
<summary type="text">Programming Cliches and Cliche Extraction
Cyphers, D. Scott
The programmer's apprentice (PA) is an automated program development tool. The PA depends upon a library of common algorithms (cliches) as the source of its knowledge about programming. The PA can be made more usable if programmers not familiar with its implementation can add programming knowledge to the PA's library. This paper describes cliches and a technique for adding them to the library.&#13;
Because cliches often do not correspond to complete code, the library can not simply be a collection of programs. Instead, a plan representation is used. The approach taken for adding knowledge to the library is one of cliche extraction. A program containing a particular cliche is converted to its plan. The plan is pruned, with the results of the pruned plan being displayed in a code-like form. Eventually, only the cliche remains. The cliche is then added to the library.
This paper is a revision of an earlier Bachelor's thesis.
</summary>
<dc:date>1982-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representing Constraint Systems with Omega</title>
<link href="https://hdl.handle.net/1721.1/41164" rel="alternate"/>
<author>
<name>Koton, Phyllis A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41164</id>
<updated>2019-04-10T22:36:23Z</updated>
<published>1981-11-01T00:00:00Z</published>
<summary type="text">Representing Constraint Systems with Omega
Koton, Phyllis A.
This paper considers two constraint systems, that of Steele and Sussman, and Alan Borning's Thinglab. Some functional difficulties in these systems are discussed. A representation of constraint systems using the description system Omega is presented which is free of these difficulties.
</summary>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Primer for the Act-1 Language</title>
<link href="https://hdl.handle.net/1721.1/41163" rel="alternate"/>
<author>
<name>Theriault, Daniel G.</name>
</author>
<id>https://hdl.handle.net/1721.1/41163</id>
<updated>2019-04-10T07:21:24Z</updated>
<published>1981-06-01T00:00:00Z</published>
<summary type="text">A Primer for the Act-1 Language
Theriault, Daniel G.
This document describes the current design of the computer programming language Act-1. It describes the Actor computational model, which Act-1 was designed to support. A perspective is provided from which to view the language, with respect to existing computer language systems and to the computer system and environment under development for support of the language. The language is informally introduced in a tutorial fashion and demonstrated through examples. A programming strategy for the language is described, further illustrating its use.
</summary>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Disciplined Use of Simplifying Assumptions</title>
<link href="https://hdl.handle.net/1721.1/41162" rel="alternate"/>
<author>
<name>Rich, Charles</name>
</author>
<author>
<name>Waters, Richard C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41162</id>
<updated>2019-04-12T09:44:52Z</updated>
<published>1981-12-01T00:00:00Z</published>
<summary type="text">The Disciplined Use of Simplifying Assumptions
Rich, Charles; Waters, Richard C.
Simplifying assumptions — everyone uses them but no one's programming tool explicitly supports them. In programming, as in other kinds of engineering design, simplifying assumptions are an important method for dealing with complexity. Given a complex programming problem, expert programmers typically choose simplifying assumptions which, though false, allow them to arrive rapidly at a program which addresses the important features of the problem without being distracted by all of its details. The simplifying assumptions are then incrementally retracted with corresponding modifications to the initial program. This methodology is particularly applicable to rapid prototyping because the main questions of interest can often be answered using only the initial program.&#13;
Simplifying assumptions can easily be misused. In order to use them effectively two key issues must be addressed. First, simplifying assumptions should be chosen which simplify the design problems significantly without changing the essential character of the program which needs to be implemented. Second, the designer must keep track of all the assumptions he is making so that he can later retract them in an orderly manner. By explicitly dealing with these issues, a programming assistant system could directly support the use of simplifying assumptions as a disciplined part of the software development process.
Submitted to the ACM SIGSOFT Second Software Engineering Symposium: Workshop on Rapid Prototyping. Columbia, Maryland, April 19-21, 1982.
</summary>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Presentation Based User Interfaces</title>
<link href="https://hdl.handle.net/1721.1/41161" rel="alternate"/>
<author>
<name>Ciccarelli, Eugene C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41161</id>
<updated>2019-04-09T16:35:14Z</updated>
<published>1981-07-01T00:00:00Z</published>
<summary type="text">Presentation Based User Interfaces
Ciccarelli, Eugene C.
This research will develop a methodology for designing user interfaces for general-purpose interactive systems. The central concept is the presentation, a structured pictorial or text object conveying information about some abstract object to the user. The methodology models a user interface as a shared communication medium, user and system communicating to each other by manipulating presentations.&#13;
The methodology stresses relations between presentations, especially presentations of the system itself; presentation manipulation by the user; presentation recognition by the system; and how properties of these establish a spectrum of interface styles.&#13;
The methodology suggests a general system base providing mechanisms to support construction of user interfaces. As part of an argument that such a base is feasible and valuable, and to demonstrate the domain independence of the methodology, three test systems will be implemented.
</summary>
<dc:date>1981-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proposal For a Study of Commonsense Physical Reasoning</title>
<link href="https://hdl.handle.net/1721.1/41160" rel="alternate"/>
<author>
<name>Forbus, Kenneth D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41160</id>
<updated>2019-04-11T07:56:55Z</updated>
<published>1981-07-01T00:00:00Z</published>
<summary type="text">Proposal For a Study of Commonsense Physical Reasoning
Forbus, Kenneth D.
Our common sense views of physics are the first coin in our intellectual capital; understanding precisely what they contain could be very important both for understanding ourselves and for making machines more like us. This proposal describes a domain that has been designed for studying reasoning about constrained motion and describes my theories about performing such reasoning. The issues examined include qualitative reasoning about shape and physical processes, as well as ways of using knowledge about motion other than "envisioning". Being a proposal, the treatment of these issues is necessarily cursory and incomplete.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.
</summary>
<dc:date>1981-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GROK Doc: An Image Display Tool</title>
<link href="https://hdl.handle.net/1721.1/41159" rel="alternate"/>
<author>
<name>Little, Jim</name>
</author>
<id>https://hdl.handle.net/1721.1/41159</id>
<updated>2019-04-10T22:36:33Z</updated>
<published>1986-04-14T00:00:00Z</published>
<summary type="text">GROK Doc: An Image Display Tool
Little, Jim
The image display tool GROK provides a facility for displaying images on the black-and-white screen of a Symbolics 3600 monitor. It allows display of images and their manipulation through a special window it manages. Images become objects in that window, and are handled by a variety of routines accessible by mouse selection from window menus. GROK is an outgrowth of two programs: Keith Nishihara's GREY*, which provided the concept of an image manipulation and display program for black-and-white screens, and Margaret Fleck's GREYCROK, which formed the nucleus from which GROK mutated. Many of the functions in GROK are lifted directly from GREYCROK.
</summary>
<dc:date>1986-04-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Logo Turtle Graphics for the Lisp Machine</title>
<link href="https://hdl.handle.net/1721.1/41158" rel="alternate"/>
<author>
<name>Lieberman, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/41158</id>
<updated>2019-04-12T09:44:49Z</updated>
<published>1981-05-05T00:00:00Z</published>
<summary type="text">Logo Turtle Graphics for the Lisp Machine
Lieberman, Henry
This paper is a manual for an implementation of Logo graphics primitives in Lisp on the MIT Lisp Machine. The graphics system provides:&#13;
Simple line drawing and erasing using "turtle geometry"&#13;
Flexible relative and absolute coordinate systems, scaling&#13;
Floating point coordinates&#13;
Drawing points, circles, boxes, text&#13;
Automatically filling closed curves with patterns&#13;
Saving and restoring pictures rapidly as arrays of points&#13;
Drawing on color displays, creating new colors&#13;
Three dimensional perspective drawing, two-color stereo display
</summary>
<dc:date>1981-05-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Step Towards Automatic Documentation</title>
<link href="https://hdl.handle.net/1721.1/41157" rel="alternate"/>
<author>
<name>Frank, Claude</name>
</author>
<id>https://hdl.handle.net/1721.1/41157</id>
<updated>2019-04-09T17:18:26Z</updated>
<published>1980-12-01T00:00:00Z</published>
<summary type="text">A Step Towards Automatic Documentation
Frank, Claude
This paper describes a system which automatically generates program documentation. Starting with a plan generated by analyzing the program, the system computes several kinds of summary information about the program. The most notable are: a summary of the cliched computations performed by the loops in the program, and a summary of the types and uses of the arguments to the program. Based on this information, a few English sentences are produced describing each function analyzed.
*Visiting Scientist on leave from Schlumberger-Doll Research.&#13;
The views and conclusions contained in this paper are those of the author, and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Department of Defense, or the United States Government.
</summary>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guardians for Concurrent Systems</title>
<link href="https://hdl.handle.net/1721.1/41156" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<author>
<name>Attardi, Giuseppe</name>
</author>
<id>https://hdl.handle.net/1721.1/41156</id>
<updated>2019-04-12T09:44:45Z</updated>
<published>1980-12-01T00:00:00Z</published>
<summary type="text">Guardians for Concurrent Systems
Hewitt, Carl; Attardi, Giuseppe
In this paper we survey the current state of the art on fundamental aspects of concurrent systems. We discuss the notion of concurrency and present a model of computation which unifies the lambda calculus model and the sequential stored program model. We develop the notion of a guardian as a module that regulates the use of shared resources by scheduling their access, providing protection, and implementing recovery from hardware failures. A shared checking account is an example of the kind of resource that needs a guardian. We introduce the notions of a customer and a transaction manager for a request and illustrate how to use them to implement arbitrary scheduling policies for a guardian. A proof methodology is presented for proving properties of guardians, such as a guarantee of service for all requests received.
</summary>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report on the Workshop on Distributed AI</title>
<link href="https://hdl.handle.net/1721.1/41155" rel="alternate"/>
<author>
<name>Davis, Randall</name>
</author>
<id>https://hdl.handle.net/1721.1/41155</id>
<updated>2019-04-12T09:44:51Z</updated>
<published>1980-09-01T00:00:00Z</published>
<summary type="text">Report on the Workshop on Distributed AI
Davis, Randall
On June 9-11, 22 people gathered at Endicott House for the first workshop on the newly emerging topic of Distributed AI. They came with a wide range of views on the topic, and indeed a wide range of views of what precisely the topic was.&#13;
In keeping with the spirit of the workshop, this report describing it was prepared in a distributed fashion. Each of the speakers contributed a summary of his comments. Sessions during the workshop included both descriptions of work done or in progress, and group discussions focused on a range of topics. The report reflects the organization, with nine short articles describing research efforts, and four summarizing the informal comments used as the foci for the group discussions.
</summary>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Proposal for Sniffer: a System that Understands Bugs</title>
<link href="https://hdl.handle.net/1721.1/41154" rel="alternate"/>
<author>
<name>Shapiro, Daniel G.</name>
</author>
<id>https://hdl.handle.net/1721.1/41154</id>
<updated>2019-04-10T22:36:22Z</updated>
<published>1980-07-01T00:00:00Z</published>
<summary type="text">A Proposal for Sniffer: a System that Understands Bugs
Shapiro, Daniel G.
This paper proposes an interactive debugging aid that exhibits a deep understanding of a narrow class of bugs. This system, called Sniffer, will be able to find and identify errors, and explain them in terms which are relevant to the programmer. Sniffer is knowledgeable about side-effects. It is capable of citing the data which was in effect at the time an error became manifest.&#13;
The debugging knowledge in Sniffer is organized as a collection of independent experts which know about particular errors. The experts (sniffers) perform their function by applying a feature recognition process to the text of the program, and to the events which took place during the execution of the code. No deductive machinery is involved. The experts are supported by two systems: the cliche finder, which identifies small portions of algorithms from a plan for the code, and the time rover, which provides complete access to all program states that ever existed.&#13;
Sniffer is embedded in a run-time debugging aid. The user of the system interacts with the debugger to focus attention onto a manageable subset of the code, and then submits a complaint to the sniffer system that describes the behavior which was desired. Sniffer outputs a detailed report about any error which is discovered.
</summary>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Synthesis of Language Ideas for AI Control Structures</title>
<link href="https://hdl.handle.net/1721.1/41153" rel="alternate"/>
<author>
<name>Kornfeld, William A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41153</id>
<updated>2019-04-10T22:36:31Z</updated>
<published>1980-07-01T00:00:00Z</published>
<summary type="text">A Synthesis of Language Ideas for AI Control Structures
Kornfeld, William A.
Two well known programming methodologies for artificial intelligence research are compared, the so-called pattern-directed invocation languages and the object-oriented languages. The features and limitations of both approaches are discussed. We show that pattern-directed invocation is a more general formalism, but entails a serious loss of efficiency. We then go on to demonstrate that a language for artificial intelligence research can be created that contains the best features of both approaches.
</summary>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global Time in Actor Computations</title>
<link href="https://hdl.handle.net/1721.1/41152" rel="alternate"/>
<author>
<name>Clinger, Will</name>
</author>
<id>https://hdl.handle.net/1721.1/41152</id>
<updated>2019-04-11T03:49:27Z</updated>
<published>1979-06-01T00:00:00Z</published>
<summary type="text">Global Time in Actor Computations
Clinger, Will
This research was supported by a National Science Foundation Graduate Fellowship in mathematics.
</summary>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolutionary Programming with the Aid of A Programmers' Apprentice</title>
<link href="https://hdl.handle.net/1721.1/41151" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41151</id>
<updated>2019-04-11T07:56:55Z</updated>
<published>1979-05-01T00:00:00Z</published>
<summary type="text">Evolutionary Programming with the Aid of A Programmers' Apprentice
Hewitt, Carl
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</summary>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Better Definition of Transactions</title>
<link href="https://hdl.handle.net/1721.1/41150" rel="alternate"/>
<author>
<name>Kerns, Barbara S.</name>
</author>
<id>https://hdl.handle.net/1721.1/41150</id>
<updated>2019-04-12T09:44:44Z</updated>
<published>1979-05-01T00:00:00Z</published>
<summary type="text">Towards a Better Definition of Transactions
Kerns, Barbara S.
This paper builds on a technical report written by Carl Hewitt and Henry Baker called "Actors and Continuous Functionals". What is called a "goal-oriented activity" in that paper will be referred to in this paper as a "transaction". The word "transaction" brings to mind an object closer in function to what we wish to present than does the word "activity".&#13;
This memo, therefore, presents the definitions of a reply and a transaction as given in Hewitt and Baker's paper and points out some discrepancies in their definitions. That is, the properties of transactions and replies as they were defined did not correspond with our intuitions, and thus the definitions should be changed. The issues of what should constitute a transaction are discussed, and a new definition is presented which eliminates the discrepancies caused by the original definitions. Some properties of the newly defined transactions are discussed, and it is shown that the results of Hewitt and Baker's paper still hold given the new definitions.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</summary>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preliminary Design of the APIARY for VLSI Support of Knowledge-Based Systems</title>
<link href="https://hdl.handle.net/1721.1/41149" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41149</id>
<updated>2019-04-12T09:44:44Z</updated>
<published>1979-06-01T00:00:00Z</published>
<summary type="text">Preliminary Design of the APIARY for VLSI Support of Knowledge-Based Systems
Hewitt, Carl
Knowledge-based applications will require vastly increased computational resources to achieve their goals. We are working on the development of a VLSI Message Passing Architecture to meet this need. As a first step we present the preliminary design of the APIARY system in this paper. The APIARY is currently in an early stage of implementation at the MIT Artificial Intelligence Laboratory.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</summary>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building English Explanations from Function Descriptions</title>
<link href="https://hdl.handle.net/1721.1/41148" rel="alternate"/>
<author>
<name>Roberts, Bruce</name>
</author>
<id>https://hdl.handle.net/1721.1/41148</id>
<updated>2019-04-12T09:44:43Z</updated>
<published>1979-04-01T00:00:00Z</published>
<summary type="text">Building English Explanations from Function Descriptions
Roberts, Bruce
An explanatory component is an important ingredient in any complex AI system. A simple generative scheme to build descriptive phrases from Lisp function calls can produce respectable explanations if explanation generators capitalize on the function decomposition reflected in Lisp programs.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research under Office of Naval Research contract N00014-75-C-0389.
</summary>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Security and Modularity in Message Passing</title>
<link href="https://hdl.handle.net/1721.1/41147" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<author>
<name>Attardi, Giuseppe</name>
</author>
<author>
<name>Lieberman, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/41147</id>
<updated>2019-04-12T09:44:51Z</updated>
<published>1979-02-01T00:00:00Z</published>
<summary type="text">Security and Modularity in Message Passing
Hewitt, Carl; Attardi, Giuseppe; Lieberman, Henry
This paper addresses theoretical issues involved in the implementation of security and modularity in concurrent systems. It explicates the theory behind a mechanism for safely delegating messages to shared handlers in order to increase the modularity of concurrent systems. Our mechanism has the property that the actions caused by delegated messages are atomic. That is, the handling of a message delegated by a client actor appears to be indivisible to other users of the actor. Our mechanism for delegating communications is a generalization suitable for use in concurrent systems of the sub-class mechanism of SIMULA. Our mechanism has the benefit that it easily lends itself to the implementation of efficient flexible access control mechanisms in distributed systems. It is a generalization of the protection mechanisms provided by capability-based systems, access control lists, and the access control mechanisms provided by PDP-10 SIMULA.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under contract N00014-75-C-0522.
</summary>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concurrent Systems Need Both Sequences And Serializers</title>
<link href="https://hdl.handle.net/1721.1/41146" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41146</id>
<updated>2019-04-11T01:54:43Z</updated>
<published>1979-02-01T00:00:00Z</published>
<summary type="text">Concurrent Systems Need Both Sequences And Serializers
Hewitt, Carl
Contemporary concurrent programming languages fall roughly into two classes. Languages in the first class support the notion of a sequence of values and some kind of pipelining operation over the sequence of values. Languages in the second class support the notion of transactions and some way to serialize transactions. In terms of the actor model of computation this distinction corresponds to the difference between serialized and unserialized actors. In this paper the utility of modeling both serialized and unserialized actors in a coherent formalism is demonstrated.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under contract N00014-75-C-0522.
</summary>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The XPRT Description System</title>
<link href="https://hdl.handle.net/1721.1/41145" rel="alternate"/>
<author>
<name>Steels, Luc</name>
</author>
<id>https://hdl.handle.net/1721.1/41145</id>
<updated>2019-04-12T09:32:18Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">The XPRT Description System
Steels, Luc
This paper introduces a frame-based description language and studies methods for reasoning about problems using knowledge expressed in the language.&#13;
The system is based on the metaphor of a society of communicating experts and incorporates within this framework most of the currently known AI techniques, such as pattern-directed invocation, explicit control of reasoning, propagation of constraints, dependency recording, context mechanisms, message passing, conflict resolution, default reasoning, etc.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some Examples of Conceptual Grammar</title>
<link href="https://hdl.handle.net/1721.1/41144" rel="alternate"/>
<author>
<name>Steels, Luc</name>
</author>
<id>https://hdl.handle.net/1721.1/41144</id>
<updated>2019-04-10T22:36:21Z</updated>
<published>1978-12-01T00:00:00Z</published>
<summary type="text">Some Examples of Conceptual Grammar
Steels, Luc
This paper gives some examples of the conceptual grammar approach to the representation of linguistic knowledge.&#13;
First we give a short overview of the language we use to represent knowledge. Then we discuss an example that deals with the expression of verbal parameters (such as voice and aspect) in English verbal groups. Finally we discuss an example of a formal language.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</summary>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Introducing Conceptual Grammar</title>
<link href="https://hdl.handle.net/1721.1/41143" rel="alternate"/>
<author>
<name>Steels, Luc</name>
</author>
<id>https://hdl.handle.net/1721.1/41143</id>
<updated>2019-04-09T15:55:06Z</updated>
<published>1978-11-01T00:00:00Z</published>
<summary type="text">Introducing Conceptual Grammar
Steels, Luc
This paper contains an informal and sketchy overview of a new way of thinking about linguistics and linguistic processing known as conceptual grammar.&#13;
Some ideas are presented on what kind of knowledge is involved in a natural language, how this knowledge is organized and represented and how it is activated and acquired.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</summary>
<dc:date>1978-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Is a Knowledge Representation System Like a Piano?</title>
<link href="https://hdl.handle.net/1721.1/41142" rel="alternate"/>
<author>
<name>Smith, Brian Cantwell</name>
</author>
<id>https://hdl.handle.net/1721.1/41142</id>
<updated>2019-04-10T19:32:53Z</updated>
<published>1978-11-01T00:00:00Z</published>
<summary type="text">How Is a Knowledge Representation System Like a Piano?
Smith, Brian Cantwell
In the summer of 1978 a decision was made to devote a special issue of the SIGART newsletter to the subject of knowledge representation research. To assist in ascertaining the current state of people's thinking on this topic, the editors (Ron Brachman and myself) decided to circulate an informal questionnaire among the representation community. What was originally planned as a simple list of questions eventually developed into the current document, and we have decided to issue it as a report on its own merits. The questionnaire is offered here as a potential aid both for understanding knowledge representation research, and for analysing the philosophical foundations on which that research is based. &#13;
The questionnaire consists of two parts. Part I focuses first on specific details, but moves gradually towards more abstract and theoretical questions regarding assumptions about what knowledge representation is; about the role played by the computational metaphor; about the relationships among model, theory, and program; etc. In Part II, in a more speculative vein, we set forth for consideration nine hypotheses about various open issues in representation research.
The research reported here was supported by National Institutes of Health Grant No. 1 P41 RR 01096-02 from the Division of Research Resources, and was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.
</summary>
<dc:date>1978-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stepping Motor Control System</title>
<link href="https://hdl.handle.net/1721.1/41141" rel="alternate"/>
<author>
<name>Larson, Noble G.</name>
</author>
<id>https://hdl.handle.net/1721.1/41141</id>
<updated>2019-04-11T01:54:41Z</updated>
<published>1979-02-01T00:00:00Z</published>
<summary type="text">Stepping Motor Control System
Larson, Noble G.
This paper describes a hardware system designed to facilitate position and velocity control of a group of eight stepping motors using a PDP-11. The system includes motor driver cards and other interface cards in addition to a special digital control module. The motors can be driven at speeds up to 3000 rpm. Position feedback is provided by shaft encoders, but tachometers are not used.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-77-C-0389.
</summary>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Specifying and Proving Properties of Guardians for Distributed Systems</title>
<link href="https://hdl.handle.net/1721.1/41140" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<author>
<name>Attardi, Giuseppe</name>
</author>
<author>
<name>Lieberman, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/41140</id>
<updated>2019-04-10T23:12:19Z</updated>
<published>1979-05-01T00:00:00Z</published>
<summary type="text">Specifying and Proving Properties of Guardians for Distributed Systems
Hewitt, Carl; Attardi, Giuseppe; Lieberman, Henry
In a distributed system where many processors are connected by a network and communicate using message passing, many users can be allowed to access the same facilities. A public utility is usually an expensive or limited resource whose use has to be regulated. A guardian is an abstraction that can be used to regulate the use of resources by scheduling their access, providing protection, and implementing recovery from hardware failures. We present a language construct called a primitive serializer which can be used to express efficient implementations of guardians in modular fashion. We have developed a proof methodology for proving strong properties of network utilities, e.g., that the utility is guaranteed to respond to each request which it is sent. This proof methodology is illustrated by proving properties of a guardian which manages two hardcopy printing devices.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-c-0522.
</summary>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Looking in the Shadows</title>
<link href="https://hdl.handle.net/1721.1/41139" rel="alternate"/>
<author>
<name>Woodham, Robert J.</name>
</author>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41139</id>
<updated>2019-04-10T07:12:55Z</updated>
<published>1976-05-01T00:00:00Z</published>
<summary type="text">Looking in the Shadows
Woodham, Robert J.; Horn, Berthold K.P.
The registration of an image with a model of the surface being imaged is an important prerequisite to many image understanding tasks. Once registration is achieved, new image analysis techniques can be explored. One approach is to compare the real image with an image synthesized from the surface model. But accurate comparison requires an accurate synthetic image.&#13;
More realistic synthetic images can be obtained once shadow information is included. Accurate shadow regions can be determined when a hidden-surface algorithm is applied to the surface model in order to calculate which surface elements can be seen from the light source. We illustrate this technique using LANDSAT imagery registered with digital terrain models. Once shadow information is included, the effect of sky illumination and atmospheric haze can be measured.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Story Understanding: the Beginning of a Consensus</title>
<link href="https://hdl.handle.net/1721.1/41138" rel="alternate"/>
<author>
<name>McDonald, David D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41138</id>
<updated>2019-04-11T01:54:43Z</updated>
<published>1978-06-01T00:00:00Z</published>
<summary type="text">Story Understanding: the Beginning of a Consensus
McDonald, David D.
This paper is written for an Area Examination on the three papers: "A Framed PAINTING: The Representation of a Common Sense Knowledge Fragment" by Eugene Charniak, "Reporter: An Intelligent Noticer" by Steve Rosenberg, and "Using Plans to Understand Natural Language" by Robert Wilensky. Surprisingly, these papers share a common view of what it means to understand a story. The first part of this paper reviews the previous notions of "understanding", showing the progression to today's consensus. The content of the consensus and how the individual papers fit within it is then described. Finally, unsolved problems not adequately dealt with by any of the approaches are presented briefly.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Control, Multiple Description, and Purpose in the Visual Perception of Complex Scenes: A Progress Report</title>
<link href="https://hdl.handle.net/1721.1/41137" rel="alternate"/>
<author>
<name>Dunlavey, Michael R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41137</id>
<updated>2019-04-11T01:54:42Z</updated>
<published>1975-08-01T00:00:00Z</published>
<summary type="text">Control, Multiple Description, and Purpose in the Visual Perception of Complex Scenes: A Progress Report
Dunlavey, Michael R.
This memo describes a vision program for recognizing simple furniture comprising assemblies of blocks, in which the same item may be composed in diverse ways. As such, it is concerned with three theoretical issues: perceptual processing, suppression of unwanted detail, and segregation and interconnection of information.&#13;
The program's perceptual processing relies on an elaborate, redundant, alterable model of the scene rather than on any clever process structure. This approach aids the interpretation of incomplete, ambiguous portions of the scene as well as simplifies the program. The model is capable of quantitative as well as qualitative alteration, by a constraint-propagation system and a system of frame-shift demons.&#13;
The hierarchical nature of the scene - assemblies of assemblies of blocks - is reflected as hierarchy in the model. Each assembly is represented as having an external aspect, by which it relates to surrounding assemblies, and an internal aspect, listing the parts and relationships composing it. This imposes a natural suppression of detail.&#13;
In addition to the vertical layering of the model there are horizontal subdivisions adapted for different computational purposes. There is a 2D section representing the image, a 3D section representing the shape, and a stability section representing the physical forces and moments acting upon each unit. Each of the sections can be used through any of several indirect reference frames corresponding to different spatial viewpoints. Many computations on the model, such as stability analysis, spatial relationships, and visual matching, are greatly simplified by first selecting the proper spatial viewpoints.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1975-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis by Propagation of Constraints in Elementary Geometry Problem Solving</title>
<link href="https://hdl.handle.net/1721.1/41136" rel="alternate"/>
<author>
<name>Doyle, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/41136</id>
<updated>2019-04-12T09:32:19Z</updated>
<published>1976-06-01T00:00:00Z</published>
<summary type="text">Analysis by Propagation of Constraints in Elementary Geometry Problem Solving
Doyle, Jon
This paper describes GEL, a new geometry theorem prover. GEL is the result of an attempt to transfer the problem solving abilities of the EL electronic circuit analysis program of Sussman and Stallman to the domain of geometric diagrams. Like its ancestor, GEL is based on the concepts of "one-step local deductions" and "macro-elements." The performance of this program raises a number of questions about the efficacy of the approach to geometry theorem proving embodied in GEL, and also illustrates problems relating to algebraic simplification in geometric reasoning.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transparency</title>
<link href="https://hdl.handle.net/1721.1/41135" rel="alternate"/>
<author>
<name>Stefanescu, Dan</name>
</author>
<id>https://hdl.handle.net/1721.1/41135</id>
<updated>2019-04-12T09:32:19Z</updated>
<published>1975-07-01T00:00:00Z</published>
<summary type="text">Transparency
Stefanescu, Dan
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Tracking of Real World Objects</title>
<link href="https://hdl.handle.net/1721.1/41134" rel="alternate"/>
<author>
<name>Speckert, Glen</name>
</author>
<id>https://hdl.handle.net/1721.1/41134</id>
<updated>2019-04-10T22:36:30Z</updated>
<published>1975-07-01T00:00:00Z</published>
<summary type="text">Visual Tracking of Real World Objects
Speckert, Glen
This paper describes the progress made towards tracking an object visually using a PIN diode attached to a dual mirror deflection system which enables the PIN diode to "optically point" to any position in two-space. A helium-neon laser equipped with a similar mirror deflection system was used to point at the object being tracked. Actual objects tracked include a hand, a bouncing ping pong ball, and a white center on a black target attached to a moving metronome.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frame-Based Knowledge Representation</title>
<link href="https://hdl.handle.net/1721.1/41133" rel="alternate"/>
<author>
<name>Steels, Luc</name>
</author>
<id>https://hdl.handle.net/1721.1/41133</id>
<updated>2019-04-12T09:32:18Z</updated>
<published>1978-10-01T00:00:00Z</published>
<summary type="text">Frame-Based Knowledge Representation
Steels, Luc
The paper introduces a language for representing knowledge in a declarative form. With this language it is possible to define knowledge about a certain domain by introducing a number of concepts and by specifying their interrelations.&#13;
The paper is meant to be an informal introduction to the language. We present the available constructs, describe their meaning and present a number of examples.&#13;
In other papers (currently in preparation) we will give a formal semantics of the language, introduce the inference theory and discuss a possible procedural embedding.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</summary>
<dc:date>1978-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How People Execute Handwriting</title>
<link href="https://hdl.handle.net/1721.1/41132" rel="alternate"/>
<author>
<name>Hollerbach, John</name>
</author>
<id>https://hdl.handle.net/1721.1/41132</id>
<updated>2019-04-10T19:58:43Z</updated>
<published>1975-07-01T00:00:00Z</published>
<summary type="text">How People Execute Handwriting
Hollerbach, John
Handwriting is shown to be composed mainly of cup-shaped strokes lasting approximately 200 msec. The strokes are based on a hexagonal pattern, with quantized slopes and lengths. Each side of the hexagon is produced by a 40 msec acceleration burst. Smooth writing is produced by merging and rounding these bursts.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643-0003.
</summary>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Presupposition in Lexical Analysis and Discourse</title>
<link href="https://hdl.handle.net/1721.1/41131" rel="alternate"/>
<author>
<name>Bullwinkle, Candace L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41131</id>
<updated>2019-04-12T09:44:43Z</updated>
<published>1975-07-01T00:00:00Z</published>
<summary type="text">Presupposition in Lexical Analysis and Discourse
Bullwinkle, Candace L.
Recent research in linguistic analysis of presuppositions has provided numerous indications of the role of presupposition in lexical analysis. Still others have argued there is no distinction between meaning and the presupposition of a word. In this paper I discuss both issues of what presuppositions are related to lexical analysis and what happens to these presuppositions in discourse. Finally, I comment on how this knowledge could be made available to a natural language understanding program.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0003.
</summary>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Preliminary Report on a Program for Generating Natural Language</title>
<link href="https://hdl.handle.net/1721.1/41130" rel="alternate"/>
<author>
<name>McDonald, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41130</id>
<updated>2019-04-09T17:11:01Z</updated>
<published>1975-06-01T00:00:00Z</published>
<summary type="text">A Preliminary Report on a Program for Generating Natural Language
McDonald, David
A program framework has been designed in which the linguistic facts and heuristics necessary for generating fluent natural language can be encoded. The linguistic data is represented in annotated procedures and data structures which are designed to make English translations of already formulated messages given in a primary program's internal representation. The messages must include the program's intentions in saying them, in order to adequately specify the grammatical operations required for a translation.&#13;
The pertinent questions in this research have been: what structure does natural language have that allows it to encode multifaceted messages; and how must that structure be taken into account in the design of a generation facility for a computer program.&#13;
This paper describes the control and data structures of the design and their motivation. It is a condensation of my Master's Thesis &lt;1&gt;, to which the reader is referred for further information. Work is presently underway on implementing the design in LISP and developing a grammar for use in one or more of the domains given below.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0003.
</summary>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bargaining Between Goals</title>
<link href="https://hdl.handle.net/1721.1/41129" rel="alternate"/>
<author>
<name>Goldstein, Ira P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41129</id>
<updated>2019-04-10T23:12:18Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Bargaining Between Goals
Goldstein, Ira P.
Bargaining is a process used to modify conflicting demands on an expendable resource so that a satisfactory allocation can be made. In this paper, I consider the design of a bargaining system to handle the problem of scheduling an individual's weekly activities and appointments. The bargaining system is based on the powerful reasoning strategy of producing a simplified linear plan by considering the various constraints independently and then debugging the resulting conflicts.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0003.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-evaluation of Actors with Side-effects</title>
<link href="https://hdl.handle.net/1721.1/41128" rel="alternate"/>
<author>
<name>Yonezawa, Akinori</name>
</author>
<id>https://hdl.handle.net/1721.1/41128</id>
<updated>2019-04-10T22:36:29Z</updated>
<published>1975-06-01T00:00:00Z</published>
<summary type="text">Meta-evaluation of Actors with Side-effects
Yonezawa, Akinori
Meta-evaluation is a process which symbolically evaluates an actor and checks to see whether the actor fulfills its contract (specification). A formalism for writing contracts for actors with side-effects which allow sharing of data is presented. Typical examples of actors with side-effects are the cell, actor counterparts of the LISP functions rplaca and rplacd, and procedures whose computation depends upon their input history. Meta-evaluation of actors with side-effects is carried out by using situational tags which denote a situation (the local state of an actor system at the moment of the transmission of messages). It is illustrated how the situational tags are used for proving the termination of the activation of actors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N000-14-74-C-0643.
</summary>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-evaluation of Actors with Side-effects</title>
<link href="https://hdl.handle.net/1721.1/41127" rel="alternate"/>
<author>
<name>Yonezawa, Akinori</name>
</author>
<id>https://hdl.handle.net/1721.1/41127</id>
<updated>2019-04-12T09:32:18Z</updated>
<published>1975-06-01T00:00:00Z</published>
<summary type="text">Meta-evaluation of Actors with Side-effects
Yonezawa, Akinori
Meta-evaluation is a process which symbolically evaluates an actor and checks to see whether the actor fulfills its contract (specification). A formalism for writing contracts for actors with side-effects is presented. Meta-evaluation of actors with side-effects is carried out by using situational tags which denote a situation (the local state of an actor system at the moment of the transmission of messages). It is also illustrated how the situational tags are used for proving the termination of the activation of actors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0004.
</summary>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Application of Linear Systems Analysis to Image Processing. Some Notes.</title>
<link href="https://hdl.handle.net/1721.1/41126" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<author>
<name>Sjoberg, Robert W.</name>
</author>
<id>https://hdl.handle.net/1721.1/41126</id>
<updated>2019-04-12T09:43:47Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">The Application of Linear Systems Analysis to Image Processing. Some Notes.
Horn, Berthold K.P.; Sjoberg, Robert W.
The Fourier transform is a convenient tool for analyzing the performance of an image-forming system, but must be treated with caution. One of its major uses is turning convolutions into products. It is also used to transform a problem that is more naturally thought of in terms of frequency than time or space. We define the point-spread function and modulation transfer function in a two-dimensional linear system as analogues of the one-dimensional impulse response and its Fourier transform, the frequency response, respectively. For many imaging devices, the point-spread function is rotationally symmetric. Useful transforms are developed for the special cases of a "pill box", a gaussian blob, and an inverse scatter function.&#13;
Fourier methods are appropriate in the analysis of a defocused imaging system. We define a focus function as a weighted sum of high frequency terms in the spectrum of the system. This function will be a maximum when the image is in focus, and we can hill-climb on it to determine the best focus. We compare this function against two others, the sum of squares of intensities, and the sum of squares of first differences, and show it to be superior.&#13;
Another use of the Fourier transform is in optimal filtering, that is, of filtering to separate additive noise from a desired signal. We discuss the theory for the two-dimensional case, which is actually easier than for a single dimension since causality is not an issue. We show how to construct a linear, shift-invariant filter for imaging systems given only the input power spectrum and cross-power spectrum of input versus desired output.&#13;
Finally, we present two ways to calculate the line-spread function given the point-spread function.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kinematics, Statics, and Dynamics of Two-D Manipulators</title>
<link href="https://hdl.handle.net/1721.1/41125" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41125</id>
<updated>2019-04-10T22:36:28Z</updated>
<published>1975-06-01T00:00:00Z</published>
<summary type="text">Kinematics, Statics, and Dynamics of Two-D Manipulators
Horn, Berthold K.P.
In order to get some feeling for the kinematics, statics, and dynamics of manipulators, it is useful to separate the problem of visualizing linkages in three-space from the basic mechanics. The general-purpose two-dimensional manipulator is analyzed in this paper in order to gain a basic understanding of the issues without the complications of three-dimensional geometry.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</summary>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Notes Relating to the Design of a High Quality Image Sensor</title>
<link href="https://hdl.handle.net/1721.1/41124" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41124</id>
<updated>2019-04-10T19:44:41Z</updated>
<published>1975-06-01T00:00:00Z</published>
<summary type="text">Notes Relating to the Design of a High Quality Image Sensor
Horn, Berthold K.P.
Some of the information that was used in arriving at a design for a high quality image input device is documented. The device uses a PIN photo-diode directly coupled to an FET-input op-amp as the sensor and two moving-iron galvanometer-driven mirrors as the deflection system. The disadvantages of a system like this are its long random access time (about 4 milli-seconds) and the long settling time of the diode-amplifier system (about 1 milli-second). In almost all other respects such a sensor is superior to other known image sensors. Pictures taken with this device have shown that some of the difficulties experienced in image analysis can be directly traced to the low quality of images read in through vidicons and image dissectors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</summary>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Facts of Light</title>
<link href="https://hdl.handle.net/1721.1/41123" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41123</id>
<updated>2019-04-12T09:32:19Z</updated>
<published>1975-05-01T00:00:00Z</published>
<summary type="text">The Facts of Light
Horn, Berthold K.P.
This is a random collection of facts about radiant and luminous energy. Some of this information may be useful in the design of photo-diode image sensors, in the set-up of lighting for television microscopes and the understanding of the characteristics of photographic image output devices. A definition of the units of measurement and the properties of lambertian surfaces is included.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</summary>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representing the Semantics of Natural Language as Constraint Expressions</title>
<link href="https://hdl.handle.net/1721.1/41122" rel="alternate"/>
<author>
<name>Grossman, Richard W.</name>
</author>
<id>https://hdl.handle.net/1721.1/41122</id>
<updated>2019-04-12T09:32:17Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Representing the Semantics of Natural Language as Constraint Expressions
Grossman, Richard W.
The issue of how to represent the "meaning" of an utterance is central to the problem of computer understanding of natural language. Rather than relying on ad-hoc structures or forcing the complexities of natural language into mathematically elegant but computationally cumbersome representations (such as first-order logic), this paper presents a novel representation which has many desirable computational and logical properties. It is proposed to use this representation to structure the "world knowledge" of a natural-language understanding system.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ideas About Management of LISP Data Bases</title>
<link href="https://hdl.handle.net/1721.1/41121" rel="alternate"/>
<author>
<name>Sandewall, Erik</name>
</author>
<id>https://hdl.handle.net/1721.1/41121</id>
<updated>2019-04-11T03:43:28Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Ideas About Management of LISP Data Bases
Sandewall, Erik
The trend toward larger data bases in A.I. programs makes it desirable to provide program support for the activity of building and maintaining LISP data bases. Many techniques can be drawn from present and proposed systems for supporting program maintenance, but there are also a variety of additional problems and possibilities. Most importantly, a system for supporting data base development needs a formal description of the user's data base. The description must at least partly be contributed by the user. The paper discusses the operation of such a support system, and describes some ideas that have been useful in a prototype system.
Work reported herein was conducted partly at Uppsala University, Sweden, with support from the Swedish Board of Technical Development, and partly at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some Issues for a Dynamic Vision System</title>
<link href="https://hdl.handle.net/1721.1/41120" rel="alternate"/>
<author>
<name>Lavin, Mark A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41120</id>
<updated>2019-04-10T19:17:20Z</updated>
<published>1974-12-01T00:00:00Z</published>
<summary type="text">Some Issues for a Dynamic Vision System
Lavin, Mark A.
This paper is a thesis-proposal-proposal: a discussion of some issues which seem relevant to the problem of dealing with visual scenes undergoing change. The problem area is broadly stated, some relevant points are noted, and a possible scenario for a thesis is discussed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Evolution of Procedural Knowledge</title>
<link href="https://hdl.handle.net/1721.1/41119" rel="alternate"/>
<author>
<name>Miller, Mark L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41119</id>
<updated>2019-04-10T22:36:27Z</updated>
<published>1975-01-16T00:00:00Z</published>
<summary type="text">The Evolution of Procedural Knowledge
Miller, Mark L.
A focus on planning and debugging procedures underlies the enhanced proficiency of recent programs which solve problems and acquire new skills. By describing complex procedures as constituents of evolutionary sequences of families of simpler procedures, we can augment our understanding of how they were written and how they accomplish their goals, as well as improving our ability to debug them. To the extent that properties of such descriptions are task independent, we ought to be able to create a computational analogue for genetic epistemology, a theory of procedural ontogeny. Since such a theory ought to be relevant to the teaching of procedures and modelling of the learner, it is proposed that an educational application system be implemented, to help to clarify these ideas. The system would provide assistance to students solving geometry construction problems.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1975-01-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Protection and Synchronization in Actor Systems</title>
<link href="https://hdl.handle.net/1721.1/41118" rel="alternate"/>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41118</id>
<updated>2019-04-09T15:54:40Z</updated>
<published>1974-11-01T00:00:00Z</published>
<summary type="text">Protection and Synchronization in Actor Systems
Hewitt, Carl
This paper presents a unified method [called ENCASING] for dealing with the closely related issues of synchronization and protection in actor systems [Hewitt et al. 1973a, 1973b, 1974a; Greif and Hewitt 1975]. Actors are a semantic concept in which no active process is ever allowed to treat anything as an object. Instead a polite request must be extended to accomplish what the activator [process] desires. Actors enable us to define effective and efficient protection schemes. Vulnerable actors can be protected before being passed out by ENCASING their behavior in a guardian which applies appropriate checks before invoking the protected actor. Protected actors can be freely passed out since they work only for actors which have the authority to use them where authority can be decided by an arbitrary procedure. Synchronization can be viewed as a [time-variant] kind of protection in which access is only allowed to the encased actor when it is safe to do so.
</summary>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding LISP Programs: Towards a Programmer's Apprentice</title>
<link href="https://hdl.handle.net/1721.1/41117" rel="alternate"/>
<author>
<name>Rich, Charles</name>
</author>
<author>
<name>Shrobe, Howard E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41117</id>
<updated>2019-04-10T17:17:33Z</updated>
<published>1974-12-01T00:00:00Z</published>
<summary type="text">Understanding LISP Programs: Towards a Programmer's Apprentice
Rich, Charles; Shrobe, Howard E.
Several attempts have been made to produce tools which will help the programmer of complex computer systems. A new approach is proposed which integrates the programmer's intentions, the program code, and the comments, by relating them to a knowledge base of programming techniques. Our research will extend the work of Sussman, Goldstein, and Hewitt on program description and annotation. A prototype system will be implemented which answers questions and detects bugs in simple LISP programs.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Actor Semantics of PLANNER-73</title>
<link href="https://hdl.handle.net/1721.1/41116" rel="alternate"/>
<author>
<name>Greif, Irene</name>
</author>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41116</id>
<updated>2019-04-10T22:36:29Z</updated>
<published>1974-11-01T00:00:00Z</published>
<summary type="text">Actor Semantics of PLANNER-73
Greif, Irene; Hewitt, Carl
Work on PLANNER-73 and actors has led to the development of a basis for semantics of programming languages. Its value in describing programs with side-effects, parallelism, and synchronization is discussed. Formal definitions are written and explained for sequences, cells, and a simple synchronization primitive. In addition there is discussion of the implications of actor semantics for the controversy over elimination of side-effects.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</summary>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CONS</title>
<link href="https://hdl.handle.net/1721.1/41115" rel="alternate"/>
<author>
<name>Knight, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/41115</id>
<updated>2019-04-11T01:54:41Z</updated>
<published>1974-11-01T00:00:00Z</published>
<summary type="text">CONS
Knight, Thomas
DRAFT: Comments and corrections, technical or typographical, are solicited.&#13;
This work was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The LISP Machine</title>
<link href="https://hdl.handle.net/1721.1/41114" rel="alternate"/>
<author>
<name>Greenblatt, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/41114</id>
<updated>2019-04-10T22:36:26Z</updated>
<published>1974-11-01T00:00:00Z</published>
<summary type="text">The LISP Machine
Greenblatt, Richard
This work was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FED, the Font "EDitor" and Font Formats</title>
<link href="https://hdl.handle.net/1721.1/41113" rel="alternate"/>
<author>
<name>Cohen, Joseph D.</name>
</author>
<author>
<name>Jarvis, J. Pitts</name>
</author>
<id>https://hdl.handle.net/1721.1/41113</id>
<updated>2019-04-12T09:44:43Z</updated>
<published>1974-10-01T00:00:00Z</published>
<summary type="text">FED, the Font "EDitor" and Font Formats
Cohen, Joseph D.; Jarvis, J. Pitts
This memo describes FED, a program used for compiling and inspecting fonts; AST font format, a text format which can be used to create and edit fonts; and KST font format, the binary format used by SCRIMP, TJ6, and PUB.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MAPPER Information</title>
<link href="https://hdl.handle.net/1721.1/41112" rel="alternate"/>
<author>
<name>Taenzer, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41112</id>
<updated>2019-04-12T09:44:42Z</updated>
<published>1974-09-01T00:00:00Z</published>
<summary type="text">MAPPER Information
Taenzer, David
This working paper describes a program on the Mini-Robot PDP-11 which is used for looking at picture files created by the VIDIN program. It may be used by ITS vision programmers to examine Vidicon picture files before sending them over to ITS.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conversations Between Programs</title>
<link href="https://hdl.handle.net/1721.1/41111" rel="alternate"/>
<author>
<name>McDonald, David D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41111</id>
<updated>2019-04-12T09:44:51Z</updated>
<published>1974-09-01T00:00:00Z</published>
<summary type="text">Conversations Between Programs
McDonald, David D.
This paper discusses the problem of getting a computer to speak, generating natural language that is appropriate to the situation and is what it wants to say. It describes, at a general level, a program which will embody a theory of how the various types of available information are used in the linguistic process as well as the possible packaging for some of that information and the experimental situation in which the program will be developed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wait-and-See Strategies for Parsing Natural Language</title>
<link href="https://hdl.handle.net/1721.1/41110" rel="alternate"/>
<author>
<name>Marcus, Mitchell P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41110</id>
<updated>2019-04-09T19:01:03Z</updated>
<published>1974-08-01T00:00:00Z</published>
<summary type="text">Wait-and-See Strategies for Parsing Natural Language
Marcus, Mitchell P.
The intent of this paper is to convey one idea central to the structure of a natural language parser currently under development, the notion of wait-and-see strategies. This notion will hopefully allow the recognition of the structure of natural language input by a process that is deterministic and "backupless", that can have strong expectations but still be immediately responsive to the actual structure of the input. The notion is also discussed as a paradigm for recognition processes in general.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis of a Network with a Given System Function</title>
<link href="https://hdl.handle.net/1721.1/41109" rel="alternate"/>
<author>
<name>Sussman, Gerald Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/41109</id>
<updated>2019-04-12T09:44:50Z</updated>
<published>1974-06-01T00:00:00Z</published>
<summary type="text">Synthesis of a Network with a Given System Function
Sussman, Gerald Jay
I have just completed teaching two sections of 6.011 (Elementary Network Theory). One of the topics covered was synthesis of active filters by the "method of unilateral 2-ports". The explanation of this technique by the lecturer, John Kassakian, is of interest to those of us studying problem solving and the evolution of expertise. The evolution of the method of unilateral 2-ports seems to fit beautifully into the paradigm of synthesis of the solution to a problem by debugging of an almost-right plan. Of course, skill is acquired by incorporating the results of debugging, as we expect.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Another Approach to English</title>
<link href="https://hdl.handle.net/1721.1/41108" rel="alternate"/>
<author>
<name>Brooks, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/41108</id>
<updated>2019-04-12T09:44:42Z</updated>
<published>1974-06-01T00:00:00Z</published>
<summary type="text">Another Approach to English
Brooks, Martin
A new approach to building descriptions of English is outlined and programs implementing the ideas for sentence-sized fragments are demonstrated.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>XGP Font Catalog</title>
<link href="https://hdl.handle.net/1721.1/41107" rel="alternate"/>
<author>
<name>Knight, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/41107</id>
<updated>2019-04-11T01:54:40Z</updated>
<published>1974-05-24T00:00:00Z</published>
<summary type="text">XGP Font Catalog
Knight, Thomas
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1974-05-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advice on the Fast-paced World of Electronics</title>
<link href="https://hdl.handle.net/1721.1/41106" rel="alternate"/>
<author>
<name>McDermott, Drew</name>
</author>
<id>https://hdl.handle.net/1721.1/41106</id>
<updated>2019-04-12T09:44:10Z</updated>
<published>1974-05-01T00:00:00Z</published>
<summary type="text">Advice on the Fast-paced World of Electronics
McDermott, Drew
This paper is a reprint of a sketch of an electronic-circuit-designing program, submitted as a Ph.D. proposal. It describes the electronic design problem with respect to the classic trade-off between expertise and generality. The essence of the proposal is to approach the electronics domain indirectly, by writing an "advice-taking" program (in McCarthy's sense) which can be told about electronics, including heuristic knowledge about the use of specific electronics expertise. The core of this advice taker is a deductive program capable of deducing what its strategies should be.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grey Scale Display Slave</title>
<link href="https://hdl.handle.net/1721.1/41105" rel="alternate"/>
<author>
<name>Beeler, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/41105</id>
<updated>2019-04-12T09:44:10Z</updated>
<published>1974-05-01T00:00:00Z</published>
<summary type="text">Grey Scale Display Slave
Beeler, Michael
The programs SNAP and ZSLAVE are components of a new grey scale display system. The object is to produce photographs, from a computer display, which have grey scale resolution comparable to that of the visual input devices and the vision data at the A.I. Lab.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kinematics of the MIT-AI-VICARM Manipulator</title>
<link href="https://hdl.handle.net/1721.1/41104" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<author>
<name>Inoue, Hirochika</name>
</author>
<id>https://hdl.handle.net/1721.1/41104</id>
<updated>2019-04-12T09:44:48Z</updated>
<published>1974-05-01T00:00:00Z</published>
<summary type="text">Kinematics of the MIT-AI-VICARM Manipulator
Horn, Berthold K.P.; Inoue, Hirochika
This paper describes the basic geometry of the electric manipulator designed for the Artificial Intelligence Laboratory by Victor Scheinman while on leave from Stanford University. The procedure for finding a set of joint angles that will place the terminal device in a desired position and orientation is developed in detail. This is one of the basic primitives that an arm controller should have. The orientation is specified in terms of Euler angles. Typically eight sets of joint angles will produce the same terminal device position and orientation.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</summary>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>X-Y Table User's Manual</title>
<link href="https://hdl.handle.net/1721.1/41103" rel="alternate"/>
<author>
<name>Larson, Noble</name>
</author>
<id>https://hdl.handle.net/1721.1/41103</id>
<updated>2019-04-10T23:12:18Z</updated>
<published>1974-05-01T00:00:00Z</published>
<summary type="text">X-Y Table User's Manual
Larson, Noble
This working paper describes the mini-robot group's X-Y table and associated hardware.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some Projects in Automatic Programming</title>
<link href="https://hdl.handle.net/1721.1/41102" rel="alternate"/>
<author>
<name>Goldstein, Ira</name>
</author>
<author>
<name>Sussman, Gerald Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/41102</id>
<updated>2019-04-12T09:44:09Z</updated>
<published>1974-04-01T00:00:00Z</published>
<summary type="text">Some Projects in Automatic Programming
Goldstein, Ira; Sussman, Gerald Jay
This paper proposes three research topics within the general framework of Automatic Programming. The projects are designing (1) a student programmer, (2) a robot programmer and (3) a physicist's helper. The purpose of these projects is both to explore fundamental ideas regarding the nature of programming and to propose practical applications of AI research. The reason for offering this discussion as a Working Paper is to suggest possible research topics which members of the laboratory may be interested in pursuing.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Application of Line-labeling and other Scene-analysis Techniques to the Problem of Hidden-line Removal</title>
<link href="https://hdl.handle.net/1721.1/41101" rel="alternate"/>
<author>
<name>Lavin, Mark A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41101</id>
<updated>2019-04-10T22:36:20Z</updated>
<published>1974-03-01T00:00:00Z</published>
<summary type="text">An Application of Line-labeling and other Scene-analysis Techniques to the Problem of Hidden-line Removal
Lavin, Mark A.
The problem of hidden-line drawings of scenes composed of opaque polyhedra is considered. The use of Huffman labeling is suggested as a method of simplifying the task and increasing its intuitive appeal. The relation between the hidden-line problem and scene recognition is considered. Finally, an extension to the hidden-line processor, allowing dynamic viewing of changing scenes, is suggested. That process can be made far more efficient through the use of Change-Driven Processing, where computations on unchanging inputs are not repeated.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Artificial Intelligence Approaches to Medical Diagnosis</title>
<link href="https://hdl.handle.net/1721.1/41100" rel="alternate"/>
<author>
<name>Rubin, Andee</name>
</author>
<id>https://hdl.handle.net/1721.1/41100</id>
<updated>2019-04-12T09:32:16Z</updated>
<published>1974-03-01T00:00:00Z</published>
<summary type="text">Artificial Intelligence Approaches to Medical Diagnosis
Rubin, Andee
The differential diagnosis of hematuria, blood in the urine, is studied from the point of view of identifying crucial structures and processes in medical diagnosis. The thesis attempts to fit the problem of medical diagnosis into the framework of other A.I. problems and paradigms and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, plausibility and the structure of hypotheses within the world of kidney disease.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mini-Robot Group User's Guide</title>
<link href="https://hdl.handle.net/1721.1/41099" rel="alternate"/>
<author>
<name>Billmers, Meyer A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41099</id>
<updated>2019-04-10T23:12:17Z</updated>
<published>1974-03-01T00:00:00Z</published>
<summary type="text">Mini-Robot Group User's Guide
Billmers, Meyer A.
This working paper describes the facilities of the mini-robot group and the software available to persons using those facilities.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Hypothesis-Driven Recognition System for the Blocks World</title>
<link href="https://hdl.handle.net/1721.1/41098" rel="alternate"/>
<author>
<name>Kuipers, Benjamin J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41098</id>
<updated>2019-04-12T09:32:17Z</updated>
<published>1974-03-01T00:00:00Z</published>
<summary type="text">An Hypothesis-Driven Recognition System for the Blocks World
Kuipers, Benjamin J.
This paper presents a visual recognition program in which the recognition process is driven by hypotheses about the object being recognized. The hypothesis suggests which features to examine next, refines its predictions based on observed information, and selects a new hypothesis when observations contradict its predictions. After presenting the program, the paper identifies and discusses a number of theoretical issues raised by this work.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knowledge About Interfacing Descriptions</title>
<link href="https://hdl.handle.net/1721.1/41097" rel="alternate"/>
<author>
<name>Dunlavey, Michael R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41097</id>
<updated>2019-04-12T09:44:09Z</updated>
<published>1974-03-01T00:00:00Z</published>
<summary type="text">Knowledge About Interfacing Descriptions
Dunlavey, Michael R.
This paper concentrates on interactions between knowledge stated in diverse representations. It proposes a vision program that classifies any complicated object as an elaborated instance of a simple one it already understands. The resulting global-local connections facilitate evaluation of overall properties, such as visual shape and ability to support other objects.&#13;
Flexibility is achieved through simultaneous use of multiple equivalent representations. These are coordinated via interfacing rules for giving hints, constraining choices, and filling in missing detail, making use of the great redundancy in most visual scenes.&#13;
An important feature of the system consists of domain-dependent rules for guiding the flow of control and choosing hypotheses.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Qualitative Knowledge, Causal Reasoning, and the Localization of Failures</title>
<link href="https://hdl.handle.net/1721.1/41096" rel="alternate"/>
<author>
<name>Brown, Allen L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41096</id>
<updated>2019-04-10T16:35:31Z</updated>
<published>1974-03-01T00:00:00Z</published>
<summary type="text">Qualitative Knowledge, Causal Reasoning, and the Localization of Failures
Brown, Allen L.
A research program is proposed, the goal of which is a computer system that embodies the knowledge and methodology of a competent radio repairman.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Video Ergo Scio</title>
<link href="https://hdl.handle.net/1721.1/41095" rel="alternate"/>
<author>
<name>Marr, David</name>
</author>
<author>
<name>Hewitt, Carl</name>
</author>
<id>https://hdl.handle.net/1721.1/41095</id>
<updated>2019-04-10T17:17:34Z</updated>
<published>1973-11-01T00:00:00Z</published>
<summary type="text">Video Ergo Scio
Marr, David; Hewitt, Carl
An approach to vision research is described that combines ideas about low level processing with more abstract notions about the representation of knowledge in intelligent systems. A particular problem, of the representation of knowledge about the three-dimensional world, is discussed: the outline of a solution is given, and an experimental world of simple mechanical assemblies is described, in which the solution may be implemented and tested. A tentative summary is given of the knowledge that is required for operating in this world, and a research project is proposed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1973-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GT40 Utility Programs and the LISP Display Slave</title>
<link href="https://hdl.handle.net/1721.1/41094" rel="alternate"/>
<author>
<name>Beeler, Michael</name>
</author>
<author>
<name>Cohen, Joseph D.</name>
</author>
<author>
<name>White, John L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41094</id>
<updated>2019-04-12T09:44:08Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">GT40 Utility Programs and the LISP Display Slave
Beeler, Michael; Cohen, Joseph D.; White, John L.
This memo describes two GT40 programs: URUG, an octal micro-debugger; and VT07, a Datapoint simulator and general display package. There is also a description of the MITAI LISP display slave, and how it uses VT07 as a remote graphics slave.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Functions and Frames in the Learning of Structures</title>
<link href="https://hdl.handle.net/1721.1/41092" rel="alternate"/>
<author>
<name>Freiling, Michael J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41092</id>
<updated>2019-04-12T09:43:47Z</updated>
<published>1973-12-01T00:00:00Z</published>
<summary type="text">Functions and Frames in the Learning of Structures
Freiling, Michael J.
This paper discusses methods for enhancing the learning abilities of the Winston program, first by representing functional properties of the objects considered, and secondly by embedding individual models in a hierarchically organized system to provide for economy of recognition. An example is presented illustrating the use of these methods.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hypothesis-Frame System for Recognition Problems</title>
<link href="https://hdl.handle.net/1721.1/41091" rel="alternate"/>
<author>
<name>Fahlman, Scott E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41091</id>
<updated>2019-04-10T17:17:34Z</updated>
<published>1973-12-01T00:00:00Z</published>
<summary type="text">A Hypothesis-Frame System for Recognition Problems
Fahlman, Scott E.
This paper proposes a new approach to a broad class of recognition problems ranging from medical diagnosis to vision. The features of this approach include a top-down hypothesize-and-test style and the use of a great deal of high-level knowledge about the subject. This knowledge is packaged into small groups of related facts and procedures called frames.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Circular Scan</title>
<link href="https://hdl.handle.net/1721.1/41090" rel="alternate"/>
<author>
<name>Winston, Patrick H.</name>
</author>
<author>
<name>Lerman, Jerome B.</name>
</author>
<id>https://hdl.handle.net/1721.1/41090</id>
<updated>2019-04-10T22:36:25Z</updated>
<published>1972-03-01T00:00:00Z</published>
<summary type="text">Circular Scan
Winston, Patrick H.; Lerman, Jerome B.
Previous feature point detectors have been local in their support and have been universally designed for objects without appreciable texture. We have invented (or perhaps reinvented) a scheme using correlation between concentric or osculating circles which shows some promise of being a first step into the texture domain.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1972-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some Aspects of Medical Diagnosis</title>
<link href="https://hdl.handle.net/1721.1/41089" rel="alternate"/>
<author>
<name>Sussman, Gerald J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41089</id>
<updated>2019-04-11T03:10:21Z</updated>
<published>1973-12-01T00:00:00Z</published>
<summary type="text">Some Aspects of Medical Diagnosis
Sussman, Gerald J.
Since mid July Steve Pauker, Jerome Kassirer, and I (Gerald Jay Sussman) have been observing the diagnostic process of expert physicians with the goal of abstracting the underlying procedures being followed. One purpose of this position paper is to summarize our preliminary conclusions. I will attempt to pinpoint those aspects of the process we feel we understand, and where we are confused or unsure. I will also attempt to indicate some possible theoretical underpinnings of our ideas. Finally, I will propose what I consider to be a coherent research protocol for the development of these ideas.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</summary>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative Aspects of the Computation Performed by Visual Cortex in the Cat, With a Note on a Function of Lateral Inhibition</title>
<link href="https://hdl.handle.net/1721.1/41088" rel="alternate"/>
<author>
<name>Marr, David</name>
</author>
<author>
<name>Pettigrew, J. D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41088</id>
<updated>2019-04-10T20:02:48Z</updated>
<published>1973-12-01T00:00:00Z</published>
<summary type="text">Quantitative Aspects of the Computation Performed by Visual Cortex in the Cat, With a Note on a Function of Lateral Inhibition
Marr, David; Pettigrew, J. D.
A quantitative summary is given of the computation that is performed by visual cortex in the cat. Part of this computation seems to be achieved using a sample-and-average technique; some quantitative features of this technique are briefly set out.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Working Papers are informal papers intended for internal use.
</summary>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Scenario of Planning and Debugging in Electronic Circuit Design</title>
<link href="https://hdl.handle.net/1721.1/41087" rel="alternate"/>
<author>
<name>Sussman, Gerald J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41087</id>
<updated>2019-04-11T03:10:21Z</updated>
<published>1973-12-01T00:00:00Z</published>
<summary type="text">A Scenario of Planning and Debugging in Electronic Circuit Design
Sussman, Gerald J.
The purpose of this short document is to exhibit how a HACKER-like top-down planning and debugging system can be applied to the problem of the design and debugging of simple analog electronic circuits. I believe, and I hope to establish, that this kind of processing goes on at all levels of the problem-solving process--from specific, concrete applications, like Electronic Design, through abstract piecing together and debugging of problem-solving strategies.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Working Papers are informal papers intended for internal use.
</summary>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Active Knowledge</title>
<link href="https://hdl.handle.net/1721.1/41086" rel="alternate"/>
<author>
<name>Freuder, Eugene C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41086</id>
<updated>2019-04-11T03:10:19Z</updated>
<published>1973-10-01T00:00:00Z</published>
<summary type="text">Active Knowledge
Freuder, Eugene C.
A progress report on the work described in Vision Flashes 33 and 43 on recognition of real objects. Emphasis is on the "active" use of knowledge in directing the flow of visual processing.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracking Wires on Printed Circuit Boards</title>
<link href="https://hdl.handle.net/1721.1/41085" rel="alternate"/>
<author>
<name>Finin, Tim</name>
</author>
<id>https://hdl.handle.net/1721.1/41085</id>
<updated>2019-04-11T03:10:18Z</updated>
<published>1973-10-01T00:00:00Z</published>
<summary type="text">Tracking Wires on Printed Circuit Boards
Finin, Tim
This working paper describes a collection of LISP programs written to examine the backs of printed circuit boards. These programs find and trace the conductive wires plated on the insulating material. The "pads", or solder connections between these plated wires and leads from components on the front of the board, are also recognized and located by these programs.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finding Components on a Circuit Board</title>
<link href="https://hdl.handle.net/1721.1/41083" rel="alternate"/>
<author>
<name>Lozano-Perez, Tomas</name>
</author>
<id>https://hdl.handle.net/1721.1/41083</id>
<updated>2019-04-09T18:29:34Z</updated>
<published>1973-09-01T00:00:00Z</published>
<summary type="text">Finding Components on a Circuit Board
Lozano-Perez, Tomas
This paper describes a set of programs written in LISP that recognize resistors on circuit boards. The approach leans heavily on a thorough examination of the features found in representative intensity arrays and on representing the important points procedurally. The programs attempt to exploit evidence as it is gathered. The issues of hypothesis formation and change are considered. This paper represents a continuation of research described in an S.B. thesis of the same title submitted at M.I.T. in June 1973.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Does Vision Need a Special-purpose Language?</title>
<link href="https://hdl.handle.net/1721.1/41082" rel="alternate"/>
<author>
<name>Fahlman, Scott E.</name>
</author>
<id>https://hdl.handle.net/1721.1/41082</id>
<updated>2019-04-11T03:10:18Z</updated>
<published>1973-09-01T00:00:00Z</published>
<summary type="text">Does Vision Need a Special-purpose Language?
Fahlman, Scott E.
This paper briefly discusses the following questions: What are the benefits of special-purpose languages? When is a field ready for such a language? Are any parts of our current vision research ready?
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The TRACK Program Package</title>
<link href="https://hdl.handle.net/1721.1/41081" rel="alternate"/>
<author>
<name>Lerman, Jerome B.</name>
</author>
<author>
<name>Woodham, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41081</id>
<updated>2019-04-11T03:10:17Z</updated>
<published>1973-08-01T00:00:00Z</published>
<summary type="text">The TRACK Program Package
Lerman, Jerome B.; Woodham, Robert J.
A collection of LISP functions has been written to provide vidisector users with the following three line-oriented vision primitives:&#13;
(i) given an initial point and an estimated initial direction, track a line in that direction until the line terminates.&#13;
(ii) given two points, verify the existence of a line joining those two points.&#13;
(iii) given the location of a vertex, find suspect directions for possible lines emanating from that vertex.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structured Descriptions</title>
<link href="https://hdl.handle.net/1721.1/41080" rel="alternate"/>
<author>
<name>Gabriel, Richard P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41080</id>
<updated>2019-04-10T17:17:20Z</updated>
<published>1973-08-01T00:00:00Z</published>
<summary type="text">Structured Descriptions
Gabriel, Richard P.
A descriptive formalism, along with a philosophy for its use and expansion, is presented wherein descriptions are of a highly structured nature. This descriptive system and the method of recognition are extended to the rudiments of a general system of machine vision.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hierarchy in Descriptions</title>
<link href="https://hdl.handle.net/1721.1/41079" rel="alternate"/>
<author>
<name>Dunlavey, Michael R.</name>
</author>
<id>https://hdl.handle.net/1721.1/41079</id>
<updated>2019-04-11T03:10:17Z</updated>
<published>1973-05-01T00:00:00Z</published>
<summary type="text">Hierarchy in Descriptions
Dunlavey, Michael R.
Organization of knowledge requires the flexible use of hierarchy in descriptions. This memo attempts to catalog the issues related to recognizing and executing such descriptions, drawing examples primarily from the blocks world.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Package of LISP Functions for Making Movies and Demos</title>
<link href="https://hdl.handle.net/1721.1/41078" rel="alternate"/>
<author>
<name>Lerman, Jerome B.</name>
</author>
<id>https://hdl.handle.net/1721.1/41078</id>
<updated>2019-04-11T04:03:09Z</updated>
<published>1972-06-01T00:00:00Z</published>
<summary type="text">A Package of LISP Functions for Making Movies and Demos
Lerman, Jerome B.
A collection of functions has been written to allow LISP users to record display calls in a disk file. This file can be UREAD into a small LISP to reproduce the display effects of the program without doing the required computations. Such a file can be regarded as a 'movie' or 'demo' file and can easily be used with the KODAK movie camera to produce a hard copy.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Suggestions and Advice</title>
<link href="https://hdl.handle.net/1721.1/41077" rel="alternate"/>
<author>
<name>Freuder, Eugene C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41077</id>
<updated>2019-04-11T07:56:51Z</updated>
<published>1973-03-01T00:00:00Z</published>
<summary type="text">Suggestions and Advice
Freuder, Eugene C.
Results of scene analysis, as they are achieved, direct and advise the flow of subsequent processing.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanical Arm Control</title>
<link href="https://hdl.handle.net/1721.1/41076" rel="alternate"/>
<author>
<name>Waters, Richard C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41076</id>
<updated>2019-04-10T22:36:26Z</updated>
<published>1973-03-19T00:00:00Z</published>
<summary type="text">Mechanical Arm Control
Waters, Richard C.
This paper discusses three main problems associated with the control of the motion of a mechanical arm.&#13;
1) Transformation between different coordinate systems used to describe the state of the arm.&#13;
2) Calculation of detailed trajectories for the arm to follow when moving from point A to B.&#13;
3) Calculation of the forces that must be applied to the joints of the arm to make it move along a specified path.&#13;
Each of the above problems is amenable to exact solution; however, the resulting equations are, in general, too complex to be used in a real-time application. Throughout this paper we investigate methods for obtaining approximate solutions to these equations.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-03-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Gloss of Glossy Things</title>
<link href="https://hdl.handle.net/1721.1/41075" rel="alternate"/>
<author>
<name>Lavin, Mark A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41075</id>
<updated>2019-04-09T18:19:31Z</updated>
<published>1973-03-01T00:00:00Z</published>
<summary type="text">The Gloss of Glossy Things
Lavin, Mark A.
This paper discusses the visual phenomenon of gloss. It is shown that the perception of this phenomenon derives from two effects: (1) that the image reflected by a glossy surface lies in a different plane from the surface, and (2) that the highlights in a glossy scene are abnormally bright. The perception of gloss seems to arise as a side effect of depth perception and lightness judgment.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Review of Human Vision Facts</title>
<link href="https://hdl.handle.net/1721.1/41074" rel="alternate"/>
<author>
<name>Ankcorn, John</name>
</author>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<author>
<name>Winston, Patrick H.</name>
</author>
<id>https://hdl.handle.net/1721.1/41074</id>
<updated>2019-04-12T09:43:46Z</updated>
<published>1973-03-20T00:00:00Z</published>
<summary type="text">Review of Human Vision Facts
Ankcorn, John; Horn, Berthold K.P.; Winston, Patrick H.
This note is a collection of well known interesting facts about human vision. All parameters are approximate. Some may be wrong. There are sections on retina physiology, eye optics, light adaptation, psychological curios, color and eyeball movement.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-03-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Description of Visual Texture by Computers</title>
<link href="https://hdl.handle.net/1721.1/41073" rel="alternate"/>
<author>
<name>Gaschnig, John Gary</name>
</author>
<id>https://hdl.handle.net/1721.1/41073</id>
<updated>2019-04-10T17:17:32Z</updated>
<published>1973-03-09T00:00:00Z</published>
<summary type="text">Description of Visual Texture by Computers
Gaschnig, John Gary
Some general properties of textures are discussed for a restricted class of textures. A program is described which inputs a scene using a vidisector camera, discerns the texture elements, calculates values for a set of descriptive features for each texture element, and displays the distribution of each feature. The results of the experiments indicate that the descriptive method used may be useful in characterizing more complex textures. This is essentially the content of a Bachelor's thesis completed in June, 1972.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-03-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climber: A Vertex-Finder</title>
<link href="https://hdl.handle.net/1721.1/41072" rel="alternate"/>
<author>
<name>Slesinger, Steve</name>
</author>
<id>https://hdl.handle.net/1721.1/41072</id>
<updated>2019-04-11T07:56:51Z</updated>
<published>1973-02-01T00:00:00Z</published>
<summary type="text">Climber: A Vertex-Finder
Slesinger, Steve
A LISP program has been written which returns the location of a vertex in a suspected region, as well as an indication of the certainty of success.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Projective Approach to Object Description</title>
<link href="https://hdl.handle.net/1721.1/41071" rel="alternate"/>
<author>
<name>Hollerbach, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41071</id>
<updated>2019-04-12T09:44:08Z</updated>
<published>1972-12-15T00:00:00Z</published>
<summary type="text">The Projective Approach to Object Description
Hollerbach, John M.
A methodology is presented for generating descriptions of objects from line drawings. Using projection of planes, objects in a scene can be parsed and described at the same time. The descriptions are hierarchical, and lend themselves well to approximation. Possible application to curved objects is discussed.
This paper reproduces a thesis proposal of the same title submitted to the EE Department for the M.S. degree.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-12-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>DDD: Density Distribution Determination</title>
<link href="https://hdl.handle.net/1721.1/41069" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41069</id>
<updated>2019-04-12T09:43:46Z</updated>
<published>1973-03-08T00:00:00Z</published>
<summary type="text">DDD: Density Distribution Determination
Horn, Berthold K.P.
This paper presents a solution to the problem of determining the distribution of an absorbing substance inside a non-opaque non-scattering body from images or ray samplings. It simultaneously solves the problem of determining the distribution of emitting substance in a transparent non-scattering medium. The relation to more common vision problems is discussed.
This is largely a cleaned-up version of a solution found some time ago when two other related problems were of interest. One is the special situation when the density can have only two values, which has been solved for special cases by J. Kloustad. The other is the problem of shape determination from silhouettes, that is, when the density is infinite in a simple region.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</summary>
<dc:date>1973-03-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>VISHEM: A bag of "robotics" formulae</title>
<link href="https://hdl.handle.net/1721.1/41068" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41068</id>
<updated>2019-04-12T09:43:46Z</updated>
<published>1972-12-01T00:00:00Z</published>
<summary type="text">VISHEM: A bag of "robotics" formulae
Horn, Berthold K.P.
Here collected you will find a number of methods for solving certain kinds of "algebraic" problems found in vision and manipulation programs for our AMF arm and our TVC eye. They are collected here to avoid the need to regenerate them when needed and because I wanted to get rid of a large number of loose sheets of paper in my desk. Documented are various methods hidden in a number of old robotics and vision programs. Some are due to Tom Binford and Bill Gosper.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Working papers are informal papers intended for internal use.
</summary>
<dc:date>1972-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recognition of Real Objects</title>
<link href="https://hdl.handle.net/1721.1/41067" rel="alternate"/>
<author>
<name>Freuder, Eugene C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41067</id>
<updated>2019-04-11T07:56:50Z</updated>
<published>1972-10-01T00:00:00Z</published>
<summary type="text">Recognition of Real Objects
Freuder, Eugene C.
High level semantic knowledge will be employed in the development of a machine vision program flexible enough to deal with a class of "everyday objects" in varied environments.&#13;
This report is in the nature of a thesis proposal for future work.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Feedback in a Coordinated Hand-Eye System</title>
<link href="https://hdl.handle.net/1721.1/41066" rel="alternate"/>
<author>
<name>Woodham, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41066</id>
<updated>2019-04-12T09:44:07Z</updated>
<published>1972-08-01T00:00:00Z</published>
<summary type="text">Visual Feedback in a Coordinated Hand-Eye System
Woodham, Robert J.
A system is proposed for the development of new techniques for the control and monitoring of a mechanical arm-hand. The use of visual feedback is seen to provide new interactive capabilities in a machine hand-eye system. The proposed system explores the use of visual feedback in such operations as the pouring and stirring of liquids, the location of objects for grasping, and the simple rote learning of new arm motions.
This paper reproduces a thesis proposal of the same title submitted to the Dept. of Electrical Engineering for the degree of Master of Science.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Approach to Three-Dimensional Decomposition and Description of Polyhedra</title>
<link href="https://hdl.handle.net/1721.1/41065" rel="alternate"/>
<author>
<name>Hollerbach, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41065</id>
<updated>2019-04-11T07:56:50Z</updated>
<published>1972-07-01T00:00:00Z</published>
<summary type="text">An Approach to Three-Dimensional Decomposition and Description of Polyhedra
Hollerbach, John M.
This paper presents a description methodology for trihedral planar solids that, as in Roberts' approach, decomposes an object into simpler components. The present approach, however, is more sophisticated and results in a more natural description. Hidden vertices are located in the process of description generation. Also, it is shown how the 3-D coordinates of the vertices can be obtained from the 2-D coordinates.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Summary of Selected Vision Topics</title>
<link href="https://hdl.handle.net/1721.1/41064" rel="alternate"/>
<author>
<name>Winston, Patrick H.</name>
</author>
<id>https://hdl.handle.net/1721.1/41064</id>
<updated>2019-04-14T07:15:08Z</updated>
<published>1972-07-01T00:00:00Z</published>
<summary type="text">Summary of Selected Vision Topics
Winston, Patrick H.
This is an introduction to some of the MIT AI vision work of the last few years. The topics discussed are 1) Waltz's work on line drawing semantics, 2) heterarchy, 3) the ancient learning business and 4) copying scenes. All topics are discussed in more detail elsewhere in working papers or theses.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Working papers are informal papers intended for internal use.
</summary>
<dc:date>1972-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shedding Light on Shadows</title>
<link href="https://hdl.handle.net/1721.1/41063" rel="alternate"/>
<author>
<name>Waltz, David L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41063</id>
<updated>2019-04-12T09:44:07Z</updated>
<published>1972-06-01T00:00:00Z</published>
<summary type="text">Shedding Light on Shadows
Waltz, David L.
This paper describes methods which allow a program to analyze and interpret a variety of scenes made up of polyhedra with trihedral vertices. Scenes may contain shadows, accidental edge alignments, and some missing lines. This work is based on ideas proposed initially by Huffman and Clowes; I have added methods which enable the program to use a number of facts about the physical world to constrain the possible interpretations of a line drawing, and have also introduced a far richer set of descriptions than previous programs have used.
This paper replaces Vision Flash 21.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Program to Output Stored Pictures</title>
<link href="https://hdl.handle.net/1721.1/41062" rel="alternate"/>
<author>
<name>Woodham, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/41062</id>
<updated>2019-04-11T04:03:11Z</updated>
<published>1972-06-01T00:00:00Z</published>
<summary type="text">A Program to Output Stored Pictures
Woodham, Robert J.
A program called LPTSEE has been written for use with the MIT vision system. LPTSEE makes use of the overprint capability of the line printer to allow the user to output a stored picture image.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using the Vidisector and the Store Picture Facility</title>
<link href="https://hdl.handle.net/1721.1/41061" rel="alternate"/>
<author>
<name>Lerman, Jerome B.</name>
</author>
<id>https://hdl.handle.net/1721.1/41061</id>
<updated>2019-04-11T07:56:49Z</updated>
<published>1972-06-01T00:00:00Z</published>
<summary type="text">Using the Vidisector and the Store Picture Facility
Lerman, Jerome B.
The stored picture facility (FAKETV) allows LISP users, and to some extent machine language users, to access a library of stored images rather than live vidisector scenes. The vidisector functions in LISP have been slightly restructured so that input from stored images or live images can be handled with no changes to the user's program. The procedure for creating stored images is also described.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Vision Potpourri</title>
<link href="https://hdl.handle.net/1721.1/41060" rel="alternate"/>
<author>
<name>Finin, Tim</name>
</author>
<id>https://hdl.handle.net/1721.1/41060</id>
<updated>2019-04-12T09:44:50Z</updated>
<published>1972-06-01T00:00:00Z</published>
<summary type="text">A Vision Potpourri
Finin, Tim
This paper discusses some recent changes and additions to the vision system. Among the additions are the ability to use visual feedback when trying to accurately position an object and the ability to use the arm as a sensory device. Also discussed are some ideas and a description of preliminary work on a particular sort of high level three-dimensional reasoning.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</summary>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Position Extraction Using Stereo Eye Systems with a Relative Rotational Motion Capability</title>
<link href="https://hdl.handle.net/1721.1/41059" rel="alternate"/>
<author>
<name>Corwin, Daniel W.</name>
</author>
<id>https://hdl.handle.net/1721.1/41059</id>
<updated>2019-04-11T04:03:09Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Visual Position Extraction Using Stereo Eye Systems with a Relative Rotational Motion Capability
Corwin, Daniel W.
This paper discusses the problem of context-free position estimation using a stereo vision system with moveable eyes. Exact and approximate equations are developed linking position to measurable quantities of the image space, and an algorithm is given in rough form. An estimate of errors and resolution limits is provided.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Scenes With Shadows</title>
<link href="https://hdl.handle.net/1721.1/41058" rel="alternate"/>
<author>
<name>Waltz, David L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41058</id>
<updated>2019-04-10T22:27:14Z</updated>
<published>1971-11-01T00:00:00Z</published>
<summary type="text">Understanding Scenes With Shadows
Waltz, David L.
The basic problem of this research is to find methods which will enable a program to construct a three dimensional interpretation from the line drawing of a scene, where the scene may have shadows and various degeneracies. These methods differ from those used in earlier related programs in that they use region information extensively, and include formalisms for eye and lighting position. The eventual result of this research will be a program which should be able to successfully treat scenes with far fewer restrictions than present programs will tolerate.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Progress in Extending the VIRGIN Program</title>
<link href="https://hdl.handle.net/1721.1/41053" rel="alternate"/>
<author>
<name>Dowson, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/41053</id>
<updated>2019-04-11T01:54:19Z</updated>
<published>1971-09-01T00:00:00Z</published>
<summary type="text">Progress in Extending the VIRGIN Program
Dowson, Mark
The VIRGIN program will interpret pictures of simple scenes. This paper describes a program, SINNER, which will deal with pictures which contain cracks and shadows. In addition to handling pictures of this richer world, SINNER employs heuristics which use knowledge about the structure of the three dimensional world to reduce the number of interpretations of some pictures and to augment the efficiency of the parsing process.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finding the Skeleton of a Brick*</title>
<link href="https://hdl.handle.net/1721.1/41052" rel="alternate"/>
<author>
<name>Finin, Tim</name>
</author>
<id>https://hdl.handle.net/1721.1/41052</id>
<updated>2019-04-14T07:19:35Z</updated>
<published>1973-03-01T00:00:00Z</published>
<summary type="text">Finding the Skeleton of a Brick*
Finin, Tim
TC-SKELETON's duty is to help find the dimensions of brick-shaped objects by searching for sets of three complete edges, one for each dimension. The program was originally written by Patrick Winston and then refined and improved by Tim Finin.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense, and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Reproduction of this document, in whole or in part, is permitted for any purpose of the United States Government.&#13;
This memo was first issued in August 1971 as A.I. Vision Flash 19.
</summary>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The FINDSPACE Problem</title>
<link href="https://hdl.handle.net/1721.1/41051" rel="alternate"/>
<author>
<name>Sussman, Gerald Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/41051</id>
<updated>2019-04-10T22:27:15Z</updated>
<published>1971-08-03T00:00:00Z</published>
<summary type="text">The FINDSPACE Problem
Sussman, Gerald Jay
The FINDSPACE problem is that of establishing a volume in space where an object of specified dimensions will fit. The problem seems to have two subproblems: the hypothesis generation problem of finding a likely spot to try, and the verification problem of testing that spot for occupancy by other objects. This paper treats primarily the verification problem.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research contract number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resolving Visual Ambiguity with a Probe</title>
<link href="https://hdl.handle.net/1721.1/41050" rel="alternate"/>
<author>
<name>Gaschnig, John</name>
</author>
<id>https://hdl.handle.net/1721.1/41050</id>
<updated>2019-04-10T22:27:13Z</updated>
<published>1971-07-01T00:00:00Z</published>
<summary type="text">Resolving Visual Ambiguity with a Probe
Gaschnig, John
The eye-hand robot at the Artificial Intelligence Laboratory now possesses the ability to occasionally copy simple configurations of blocks, using spare parts about whose presence it knows. One problem with which it cannot cope well is that of ambiguous scenes. This paper studies two types of ambiguity present in some scenes -- occlusion and illusion -- and proposes some ideas about effectively resolving the ambiguities through the use of the hand as an information detection device to work in conjunction with the eye.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Binford-Horn LINEFINDER</title>
<link href="https://hdl.handle.net/1721.1/41049" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41049</id>
<updated>2019-04-10T22:27:14Z</updated>
<published>1971-07-01T00:00:00Z</published>
<summary type="text">The Binford-Horn LINEFINDER
Horn, Berthold K.P.
This paper briefly describes the processing performed in the course of producing a line drawing from vidisector information.
</summary>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wandering About the Top of the Robot</title>
<link href="https://hdl.handle.net/1721.1/41048" rel="alternate"/>
<author>
<name>Winston, Patrick H.</name>
</author>
<id>https://hdl.handle.net/1721.1/41048</id>
<updated>2019-04-09T19:09:30Z</updated>
<published>1971-07-01T00:00:00Z</published>
<summary type="text">Wandering About the Top of the Robot
Winston, Patrick H.
Part I of this paper describes some of the new functions in the system. The discussion is seasoned here and there with parenthetical code fragments that may be ignored by readers unfamiliar with PLANNER.&#13;
Part II discusses the scenario evoked in a simple sample copy effort, and Part III provides some technical notes helpful to those who wish to use the system.
</summary>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What Corners Look Like</title>
<link href="https://hdl.handle.net/1721.1/41047" rel="alternate"/>
<author>
<name>Dowson, Mark</name>
</author>
<author>
<name>Waltz, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41047</id>
<updated>2019-04-11T01:54:18Z</updated>
<published>1971-06-01T00:00:00Z</published>
<summary type="text">What Corners Look Like
Dowson, Mark; Waltz, David
An algorithm is presented which provides a way of telling what a given trihedral corner will look like if viewed from a particular angle. The resulting picture is a junction of two or more lines each labelled according to Huffman's convention. Possible extensions of the algorithm are discussed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two Problems in Analyzing Scenes</title>
<link href="https://hdl.handle.net/1721.1/41046" rel="alternate"/>
<author>
<name>Finin, Tim</name>
</author>
<id>https://hdl.handle.net/1721.1/41046</id>
<updated>2019-04-12T09:44:07Z</updated>
<published>1971-06-01T00:00:00Z</published>
<summary type="text">Two Problems in Analyzing Scenes
Finin, Tim
This paper is based on a B.S. thesis supervised by Patrick Winston. It deals with some previously unexplored problems in the analysis of visual scenes. The scenes consist of two-dimensional line drawings of simple objects such as blocks and wedges. The problems have come out of the work that Patrick Winston has done, and in discussing them I will be assuming the environment of his system. The first problem asks the questions "When is an object standing? When is it lying?" In the course of answering this question a method is developed for determining the relative true dimensions of an object from its two-dimensional oblique projection. The second problem develops methods for discovering when an object is in front of another in situations where previous methods have failed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of Circular Array Sensors</title>
<link href="https://hdl.handle.net/1721.1/41045" rel="alternate"/>
<author>
<name>Trawick, Charles D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41045</id>
<updated>2019-04-12T09:44:45Z</updated>
<published>1980-04-01T00:00:00Z</published>
<summary type="text">Applications of Circular Array Sensors
Trawick, Charles D.
The applications of the Reticon RO-64 annular photo-diode array to the tasks of optical tracking of special targets, direct optical focusing, and automatic printed circuit board inspection were studied. In order to facilitate this work, a digital camera unit incorporating the array was designed and constructed.&#13;
Of the three applications investigated, the tracking task proved to be the most successful, since multiple targets were tracked in real time using the array. In the focusing application, the digital approach was found to be too slow for real-time use, and suggestions were made for the analog implementation of a focusing algorithm using the array. The printed circuit board inspection algorithm detected errors successfully, but the inefficiency of image acquisition with the array is a serious drawback, leading to the conclusion that linear arrays of similar design would provide faster and less expensive inspection.&#13;
Thus the annular geometry is best suited to the one-time sampling of points on a circle in an image, as in the case of the tracking and focusing tasks. The focusing task suffers mainly from the amount of computation required to achieve focus, and from its competition with more established indirect focusing techniques.
Submitted to the Department of Electrical Engineering and Computer Science on January 18, 1980 in partial fulfillment of the requirements for the Degree of Master of Science in Electrical Engineering and Computer Science.
</summary>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Suggestions for Genetic A.I.</title>
<link href="https://hdl.handle.net/1721.1/41044" rel="alternate"/>
<author>
<name>Drescher, Gary L.</name>
</author>
<id>https://hdl.handle.net/1721.1/41044</id>
<updated>2019-04-11T07:56:56Z</updated>
<published>1980-02-01T00:00:00Z</published>
<summary type="text">Suggestions for Genetic A.I.
Drescher, Gary L.
This paper presents suggestions for "Genetic A.I.": an attempt to model the genesis of intelligence in human infants, particularly as described by Piaget's theory of the Sensorimotor period. The paper includes a synopsis of Sensorimotor intelligence, followed by preliminary suggestions for a mechanism (the "Schema mechanism") for its development, and a hypothetical Scenario which partially reinterprets Sensorimotor development in terms of that mechanism.&#13;
The Schema mechanism focuses on Piaget's concept of the competition and evolution of mental "schemas." The schema is modelled here as an assertion that one partial state of the mechanism's world-representation is transformable to another via a given action, taken when the schema is "activated". A proposed process of "correlation" allows a schema's assertion to be extended or revised in response to empirically-observed effects of the schema's activation. Correlation uses the formation and activation of schemas to propose and test hypotheses, in contrast with the passive tabulation characteristic of associationist mechanisms. Further features are proposed to enable schemas to become coordinated into composite structures, "compound actions", which can be used by other schemas; and to synthesize new "items" (state-elements) when existing ones prove inadequate to model the world.&#13;
The Scenario outlines how the Schema mechanism might begin to make its way through the progression of Sensorimotor stages; development culminating in Piaget's third stage is discussed. This development includes learning about the visual and tactile effects of eye and hand motions -- e.g., learning how to look directly at an object, or to move a hand into view; and the organization of that knowledge to designate the tactile properties of "visual objects", and vice versa -- e.g., knowing how to touch an object which is seen -- paving the way to a sensory-modality-invariant representation of objects and space.&#13;
The Schema mechanism attempts to "learn from scratch", without built-in expertise or built-in structure in its learning domains. In the past there has been little success among AI programs of this genre. But many such attempts have suffered from mechanisms which were trivial in that they placed the full burden of acquiring and structuring knowledge on one or two simple tricks, whereas, I claim, the present effort shows a willingness to incorporate a multiplicity of elements in a complicated mechanism. In addition, the Schema mechanism benefits from its orientation around a nontrivial theory of development. Piaget gives a comprehensive account of the infant's evolution of primitive problem-solving and domain-specific (chiefly object-manipulation) knowledge; this account is used here as a roadmap that describes the proper course for the mechanism to follow. Thus, there is a nontrivial (or at least nonarbitrary) sequence of target abilities to use as a framework for evaluating and revising the mechanism's performance.
</summary>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formalizing the Expertise of the Assembly Language Programmer</title>
<link href="https://hdl.handle.net/1721.1/41043" rel="alternate"/>
<author>
<name>Duffey, Roger DuWayne II</name>
</author>
<id>https://hdl.handle.net/1721.1/41043</id>
<updated>2019-04-11T07:44:47Z</updated>
<published>1980-09-01T00:00:00Z</published>
<summary type="text">Formalizing the Expertise of the Assembly Language Programmer
Duffey, Roger DuWayne II
A novel compiler strategy for generating high quality code is described. The quality of the code results from reimplementing the program in the target language using knowledge of the program's behavior. The research is a first step towards formalizing the expertise of the assembly language programmer. The ultimate goal is to formalize code generation and implementation techniques in the same way that parsing techniques have been formalized. An experimental code generator based on the reimplementation strategy will be constructed. The code generator will provide a framework for analyzing the costs, applicability, and effectiveness of various implementation techniques. Several common code generation problems will be studied. Code written by experienced programmers and code generated by a conventional optimizing compiler will provide standards of comparison.
</summary>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operating the Lisp Machine</title>
<link href="https://hdl.handle.net/1721.1/41042" rel="alternate"/>
<author>
<name>Moon, David A.</name>
</author>
<author>
<name>Wechsler, Allan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/41042</id>
<updated>2019-04-09T18:39:13Z</updated>
<published>1981-04-01T00:00:00Z</published>
<summary type="text">Operating the Lisp Machine
Moon, David A.; Wechsler, Allan C.
This document is a draft copy of a portion of the Lisp Machine window system manual. It is being published in this form now to make it available, since the complete window system manual is unlikely to be finished in the near future. The information in this document is accurate as of system 67, but is not guaranteed to remain 100% accurate.&#13;
This document explains how to use the Lisp Machine from a non-programmer's point of view. It explains the general characteristics of the user interface, particularly the window system and the program-control commands. This document is intended to tell you everything you need to know to sit down at a Lisp machine and run programs, but does not deal with the writing of programs. Many arcane commands and user-interface features are also documented herein, although the beginning user can safely ignore them.
</summary>
<dc:date>1981-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Use of Thread Memory in Amnesic Aphasia and Concept Learning.(note 0)</title>
<link href="https://hdl.handle.net/1721.1/41041" rel="alternate"/>
<author>
<name>Vaina, Lucia M.</name>
</author>
<author>
<name>Greenblatt, Richard D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41041</id>
<updated>2019-04-10T22:36:22Z</updated>
<published>1979-09-05T00:00:00Z</published>
<summary type="text">The Use of Thread Memory in Amnesic Aphasia and Concept Learning.(note 0)
Vaina, Lucia M.; Greenblatt, Richard D.
We propose a new type of semantic memory, called thread memory. The primitives of this memory are threads, defined as keyed multilink, loop-free chains, which link semantic nodes. All links run from superordinate categories to subordinate categories. This is the opposite direction to those in the usual tree structure, in which brother nodes share the structure above their common ancestors. The most valuable feature of the thread memory is its capacity to learn. A program which can learn concepts, using children's primer books as data, was written by R. Greenblatt and runs on the LISP-MACHINE at the MIT-AI Laboratory. We have considered the thread memory as a working hypothesis for exploring the mechanisms of naming deficits in aphasia and the ways of rehabilitation.
</summary>
<dc:date>1979-09-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lisp Machine Choice Facilities</title>
<link href="https://hdl.handle.net/1721.1/41040" rel="alternate"/>
<author>
<name>Moon, David A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41040</id>
<updated>2019-04-12T09:44:52Z</updated>
<published>1981-06-01T00:00:00Z</published>
<summary type="text">Lisp Machine Choice Facilities
Moon, David A.
This document is a draft copy of a portion of the Lisp Machine window system manual. It is being published in this form now to make it available, since the complete window system manual is unlikely to be finished in the near future. The information in this document is accurate as of system 70, but is not guaranteed to remain 100% accurate. Understanding some portions of this document may depend on background information which is not contained in any published documentation.&#13;
The window system contains several facilities to allow the user to make choices. These all work by displaying some arrangement of choices in a window; by pointing to one with the mouse the user can select it. This document explains what the various facilities are, how to use them, and how to customize them for your own purposes.
</summary>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conceptual Phrases and Deterministic English Parsing</title>
<link href="https://hdl.handle.net/1721.1/41039" rel="alternate"/>
<author>
<name>Dill, David</name>
</author>
<id>https://hdl.handle.net/1721.1/41039</id>
<updated>2019-04-12T09:44:44Z</updated>
<published>1979-08-01T00:00:00Z</published>
<summary type="text">Conceptual Phrases and Deterministic English Parsing
Dill, David
The grammar of many of the lower-level constituents of grammatical structures in English has not been an area of exciting new linguistic discovery, in contrast with the study of clause-level constituents. The syntax of these conceptual phrases, as they are termed here, seems to be somewhat ad hoc, which presents problems for their specification for the purpose of computer understanding of natural language.&#13;
This report concludes that their irregular behavior stems from a closer relationship between syntax and semantics than is found in other English constructs. Conceptual phrases all correspond to objects in a single, tightly constrained semantic class, and as a result, semantic knowledge about them can be used to 'optimize' the process of communicating them.&#13;
The unique nature of conceptual phrases is exploited to provide a combined syntactic and semantic description for them, consisting of syntactically augmented frames, that is much simpler than individual syntactic or semantic descriptions. An example representation for numbers is given, along with an analysis of some problems that occur when a practical implementation is attempted.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1979-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exact Reproduction of Colored Images</title>
<link href="https://hdl.handle.net/1721.1/41038" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41038</id>
<updated>2019-04-10T22:36:27Z</updated>
<published>1980-12-01T00:00:00Z</published>
<summary type="text">Exact Reproduction of Colored Images
Horn, Berthold K.P.
The problem of producing a colored image from a colored original is analyzed. Conditions are determined for the production of an image, in which the colors cannot be distinguished from those in the original by a human observer. If the final image is produced by superposition of controlled amounts of colored lights, only a simple linear transform need be applied to the outputs of the image sensors to produce the control inputs required for the image generators. In systems which depend instead on control of the concentration or fractional area covered by colored dyes, a more difficult computation is called for. This calculation may for practical purposes be expressed in table look-up form.&#13;
The conditions for exact reproduction of colored images should prove useful in the design and analysis of image processing systems whose final output is intended for human viewing. Judging by the design of many existing systems, these rules are not generally known or adhered to. Modern computational techniques make it practical to tackle this problem now. Adherence to the design constraints developed here is of particular importance where colors are to be judged when the original is not directly accessible to the observer as, for example, when it is on another planet.
</summary>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steps Toward a Psycholinguistic Model of Language Production</title>
<link href="https://hdl.handle.net/1721.1/41037" rel="alternate"/>
<author>
<name>McDonald, David D.</name>
</author>
<id>https://hdl.handle.net/1721.1/41037</id>
<updated>2019-04-10T22:36:30Z</updated>
<published>1979-04-01T00:00:00Z</published>
<summary type="text">Steps Toward a Psycholinguistic Model of Language Production
McDonald, David D.
This paper discusses what it would mean to have a psychological model of the language production process: what such a model would have to account for, what it would use as evidence. It outlines and motivates one particular model including: presumptions about the input to the process, a characterization of language production as a process of selection under constraint, and the principal stipulations of the model. This paper is an introduction, which is largely nontechnical and uses only simple examples. A detailed presentation of the architecture of the model, its grammar, and its interface to the speaker will be forthcoming in other papers.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating a Semantic Network in LMS</title>
<link href="https://hdl.handle.net/1721.1/41036" rel="alternate"/>
<author>
<name>Koton, Phyllis A.</name>
</author>
<id>https://hdl.handle.net/1721.1/41036</id>
<updated>2019-04-11T07:56:54Z</updated>
<published>1980-09-29T00:00:00Z</published>
<summary type="text">Simulating a Semantic Network in LMS
Koton, Phyllis A.
A semantic network is a collection of nodes and the links between them. The nodes represent concepts, functions and entities, and the links represent relationships between various nodes. Any semantic network must be supplied with a language of conventions for representing knowledge as nodes and links in the network, so that storage and retrieval of knowledge can be carried out efficiently.&#13;
This thesis examines two approaches to the problem of representing real-world knowledge in a computer: one designed for use on serial computers, the other designed to run on a parallel network machine. The two formalisms are shown to be nearly identical, and a simulation of the parallel language in the serial language is given.
Submitted to the Department of Electrical Engineering and Computer Science on January 1, 1980 in partial fulfillment of the requirements for the Degree of Bachelor of Science.
</summary>
<dc:date>1980-09-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Logical Control Theory Applied to Mechanical Arms</title>
<link href="https://hdl.handle.net/1721.1/41035" rel="alternate"/>
<author>
<name>Pankiewicz, Ronald Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/41035</id>
<updated>2019-04-12T09:44:48Z</updated>
<published>1979-02-01T00:00:00Z</published>
<summary type="text">Logical Control Theory Applied to Mechanical Arms
Pankiewicz, Ronald Joseph
A new control algorithm based upon Logical Control Theory is developed for mechanical manipulators. The controller uses discrete tesselations of state space and a finite set of fixed torques to regulate non-rehearsed movements in real time. Varying effective inertia, coupling between degrees of freedom, and frictional, gravitational and Coriolis forces are readily handled. A logical controller was implemented on a mini-computer for the MIT Scheinman Vicarm. The controller's performance compares favorably with that of controllers designed according to existing methodologies as used, for example, in the control of present day industrial manipulators.
Submitted to the Department of Electrical Engineering and Computer Science on January 19, 1979 in partial fulfillment of the requirements for the Degrees of Master of Science and Electrical Engineer.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.&#13;
Thesis supervisor:&#13;
Berthold K. P. Horn,&#13;
Associate Professor of Electrical Engineering and Computer Science
</summary>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Last Whole XGP Font Catalog</title>
<link href="https://hdl.handle.net/1721.1/41034" rel="alternate"/>
<author>
<name>Christman, David P.</name>
</author>
<author>
<name>Sjoberg, Robert W.</name>
</author>
<id>https://hdl.handle.net/1721.1/41034</id>
<updated>2019-04-09T16:24:25Z</updated>
<published>1980-03-01T00:00:00Z</published>
<summary type="text">The Last Whole XGP Font Catalog
Christman, David P.; Sjoberg, Robert W.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</summary>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Numerical Method for Shape-From-Shading From A Single Image</title>
<link href="https://hdl.handle.net/1721.1/41033" rel="alternate"/>
<author>
<name>Strat, Thomas M.</name>
</author>
<id>https://hdl.handle.net/1721.1/41033</id>
<updated>2019-04-09T16:30:04Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">A Numerical Method for Shape-From-Shading From A Single Image
Strat, Thomas M.
The shape of an object can be determined from the shading in a single image by solving a first-order, non-linear partial differential equation. The method of characteristics can be used to do this, but it suffers from a number of theoretical difficulties and implementation problems. This thesis presents an iterative relaxation algorithm for solving this equation on a grid of points. Here, repeated local computations eventually lead to a global solution.&#13;
The algorithm solves for the surface orientation at each point by employing an iterative relaxation scheme. The constraint of surface smoothness is achieved while simultaneously satisfying the constraints imposed by the equation of image illumination. The algorithm has the distinct advantage of being capable of handling any reflectance function whether analytically or empirically specified.&#13;
Included are brief overviews of some of the more important shape-from-shading algorithms in existence and a list of potential applications of this iterative approach to several image domains including scanning electron microscopy, remote sensing of topography and industrial inspection.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.&#13;
Thesis Supervisor: Berthold K. P. Horn&#13;
Title: Associate Professor of Electrical Engineering and Computer Science
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Worms of Ganymedes - Hazards of Image "Restoration"</title>
<link href="https://hdl.handle.net/1721.1/41032" rel="alternate"/>
<author>
<name>Horn, Berthold K.P.</name>
</author>
<id>https://hdl.handle.net/1721.1/41032</id>
<updated>2019-04-11T07:56:53Z</updated>
<published>1980-09-01T00:00:00Z</published>
<summary type="text">Worms of Ganymedes - Hazards of Image "Restoration"
Horn, Berthold K.P.
</summary>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fair Power Domain for Actor Computations</title>
<link href="https://hdl.handle.net/1721.1/40809" rel="alternate"/>
<author>
<name>Clinger, Will</name>
</author>
<id>https://hdl.handle.net/1721.1/40809</id>
<updated>2019-04-11T03:54:20Z</updated>
<published>1979-06-01T00:00:00Z</published>
<summary type="text">A Fair Power Domain for Actor Computations
Clinger, Will
Actor-based languages feature extreme concurrency, allow side effects, and specify a form of fairness which permits unbounded nondeterminism. This makes it difficult to provide a satisfactory mathematical foundation for the semantics.&#13;
Due to the high degree of parallelism, an oracle semantics would be intractable. A weakest precondition semantics is out of the question because of the possibility of unbounded nondeterminism. The most attractive approach, fixed point semantics using power domains, has not been helpful because the available power domain constructions, although very general, seemed to deal inadequately with fairness.&#13;
By taking advantage of the relatively complex structure of the actor computation domain C, however, a power domain P(C) can be defined which is similar to Smyth's weak power domain but richer. Actor systems, which are collections of mutually recursive primitive actors with side effects, may be assigned meanings as least fixed points of their associated continuous functions acting on this power domain. Given a denotation A ∈ P(C), the set of possible complete computations of the actor system it represents is the set of least upper bounds of a certain set of "fair" chains in A, and this set of chains is definable within A itself without recourse to oracles or an auxiliary interpretive semantics.&#13;
It should be emphasized that this power domain construction is not nearly as generally applicable as those of Plotkin [Pl] and Smyth [Sm], which can be used with any complete partial order. Fairness seems to require that the domain from which the power domain is to be constructed contain sufficient operational information.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</summary>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The J%JOIN Package</title>
<link href="https://hdl.handle.net/1721.1/40802" rel="alternate"/>
<author>
<name>Griffith, Arnold K.</name>
</author>
<id>https://hdl.handle.net/1721.1/40802</id>
<updated>2019-04-12T09:44:50Z</updated>
<published>1971-04-02T00:00:00Z</published>
<summary type="text">The J%JOIN Package
Griffith, Arnold K.
The J%JOIN program creates links between the elements of a set of line segments on the basis of their geometric proximity. According to the value of the third argument, (T or NIL), the program will either place a set of links in an array, suitable for use by the program P%PURPOSE, or will return a set of "re-adjusted" line segments with the property that lines apparently converging on a common vertex are assigned identical end points at the appropriate ends. Twelve geometric parameters are used to control the joining procedure.&#13;
Starred sections (*) are for reference only; J%JOIN may be successfully used by someone familiar with only the unstarred sections of this memo.
Work reported herein was supported by the Artificial Intelligence Laboratory, an M.I.T. research program sponsored by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-04-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Line Proposer P%PROPOSE1, and Additional Notes on "F%FEATUREPOINTS" and "GVERIFY1"</title>
<link href="https://hdl.handle.net/1721.1/40801" rel="alternate"/>
<author>
<name>Griffith, Arnold K.</name>
</author>
<id>https://hdl.handle.net/1721.1/40801</id>
<updated>2019-04-12T09:44:48Z</updated>
<published>1971-04-02T00:00:00Z</published>
<summary type="text">The Line Proposer P%PROPOSE1, and Additional Notes on "F%FEATUREPOINTS" and "GVERIFY1"
Griffith, Arnold K.
The line proposer P%PROPOSE1 is described in the first part of this memo. It makes use of links provided by the J%JOIN program, in proposing possibly missing lines in a line drawing of simple plane-faced objects. The remainder of this paper updates the descriptions of "F%FEATUREPOINTS" and "GVERIFY1" given in flashes #3 and #2 respectively.
Work reported herein was supported by the Artificial Intelligence Laboratory, an M.I.T. research program sponsored by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-04-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>What's What</title>
<link href="https://hdl.handle.net/1721.1/40800" rel="alternate"/>
<author>
<name>Winston, Patrick H.</name>
</author>
<id>https://hdl.handle.net/1721.1/40800</id>
<updated>2019-04-12T09:44:06Z</updated>
<published>1971-03-01T00:00:00Z</published>
<summary type="text">What's What
Winston, Patrick H.
An outline of the modules used in the copy demonstration, the reasons for doing robotics, and some possible directions for further work.
</summary>
<dc:date>1971-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heterarchy in the M.I.T. Robot</title>
<link href="https://hdl.handle.net/1721.1/40799" rel="alternate"/>
<author>
<name>Winston, Patrick H.</name>
</author>
<id>https://hdl.handle.net/1721.1/40799</id>
<updated>2019-04-12T09:44:49Z</updated>
<published>1971-03-01T00:00:00Z</published>
<summary type="text">Heterarchy in the M.I.T. Robot
Winston, Patrick H.
Work reported herein was conducted at the Artificial Intelligence Laboratory, an M.I.T. research program supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</summary>
<dc:date>1971-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to Use .VSCAN</title>
<link href="https://hdl.handle.net/1721.1/40798" rel="alternate"/>
<author>
<name>Griffith, Arnold K.</name>
</author>
<id>https://hdl.handle.net/1721.1/40798</id>
<updated>2019-04-09T18:44:33Z</updated>
<published>1971-03-01T00:00:00Z</published>
<summary type="text">How to Use .VSCAN
Griffith, Arnold K.
</summary>
<dc:date>1971-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Views on Vision</title>
<link href="https://hdl.handle.net/1721.1/39837" rel="alternate"/>
<author>
<name>Freuder, Eugene C.</name>
</author>
<id>https://hdl.handle.net/1721.1/39837</id>
<updated>2019-04-11T04:03:08Z</updated>
<published>1971-02-01T00:00:00Z</published>
<summary type="text">Views on Vision
Freuder, Eugene C.
</summary>
<dc:date>1971-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Object Partition Problem</title>
<link href="https://hdl.handle.net/1721.1/39836" rel="alternate"/>
<author>
<name>Freuder, Eugene C.</name>
</author>
<id>https://hdl.handle.net/1721.1/39836</id>
<updated>2019-04-11T04:03:07Z</updated>
<published>1971-02-01T00:00:00Z</published>
<summary type="text">The Object Partition Problem
Freuder, Eugene C.
</summary>
<dc:date>1971-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feature Point Generation Programs</title>
<link href="https://hdl.handle.net/1721.1/39835" rel="alternate"/>
<author>
<name>Griffith, Arnold K.</name>
</author>
<id>https://hdl.handle.net/1721.1/39835</id>
<updated>2019-04-11T07:56:54Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Feature Point Generation Programs
Griffith, Arnold K.
The programs in this set extract, from a raster of intensity values over some scene, a set of points which are adjudged to lie along the boundaries of objects in the scene. Intensities may be obtained directly from the new vidissector, or from a previously created file of intensity values.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Line Verifier GVERIFY1</title>
<link href="https://hdl.handle.net/1721.1/39834" rel="alternate"/>
<author>
<name>Griffith, Arnold K.</name>
</author>
<id>https://hdl.handle.net/1721.1/39834</id>
<updated>2019-04-12T09:44:06Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">The Line Verifier GVERIFY1
Griffith, Arnold K.
A line verifier is presented which, given the co-ordinates of the end points of a hypothesized line, returns a (possibly) more accurate version of the end points, together with an estimate of the probability that there is a line in the region between the given end points. No estimate is given of the actual extent of the line: the increased accuracy of the returned end points lies in the accuracy of the slope and intercept of the line through them.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The L%LINES Package</title>
<link href="https://hdl.handle.net/1721.1/39814" rel="alternate"/>
<author>
<name>Griffith, Arnold K.</name>
</author>
<id>https://hdl.handle.net/1721.1/39814</id>
<updated>2019-04-09T16:57:38Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">The L%LINES Package
Griffith, Arnold K.
The program (L%LINES X Y) takes feature point output from the FP%FPOINTS program (q.v.) for horizontal and vertical scans (X and Y respectively), and outputs a list consisting of two lists of line segments, represented in an obvious manner, obtained from the respective arguments. "Feature points" are points in the field of view which seem to lie along some edge in the scene. The line segments output by L%LINES are obtained by examining a set of feature points for straight chains of points.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
</feed>
