To perform effectively in many environments, an agent must be able to manage multiple tasks in a complex, time-pressured, and partially uncertain world. For example, air traffic control consists almost entirely of routine activity, with its complexity arising largely from the need to manage multiple tasks. A task like guiding a plane to a destination airport typically involves issuing a series of standard turn and descent authorizations. Since such routines must be carried out over tens of minutes, the task of handling any individual plane must be periodically interrupted to handle new arrivals or to resume a previously interrupted plane-handling task. This talk describes APEX, an agent architecture designed specifically to cope with the time pressure and uncertainty inherent in such environments. APEX incorporates and builds on multitask management capabilities found in previous systems, but it also introduces novel features.
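The interrupt-and-resume pattern described above can be illustrated with a toy agenda. This is a sketch only: the names `Task` and `MultitaskManager` and the numeric-priority scheme are invented for illustration and do not reflect APEX's actual mechanisms.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    priority: int                       # lower number = more urgent
    name: str = field(compare=False)
    steps_left: int = field(compare=False)

class MultitaskManager:
    """Toy agenda: on every cycle the most urgent pending task gets one
    step of work; an unfinished task keeps its remaining steps and is
    pushed back to be resumed later."""
    def __init__(self):
        self.agenda = []

    def add(self, task):
        heapq.heappush(self.agenda, task)

    def run(self):
        trace = []
        while self.agenda:
            task = heapq.heappop(self.agenda)
            task.steps_left -= 1
            trace.append(task.name)
            if task.steps_left > 0:
                heapq.heappush(self.agenda, task)   # interrupted, resume later
        return trace
```

With a routine plane-handling task in progress and a more urgent arrival added, the trace interleaves: the urgent task preempts, and the routine task is resumed afterward.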
Emotion is nature's unique solution to challenges faced in the design of any intelligent system. Findings in psychology and neuroscience have overturned long-standing views that emotion is in conflict with rational thought and have worked out a number of the mechanisms through which emotion helps an organism adapt to its environment. In this talk, I will discuss how an architectural examination of these findings can begin to abstract the function emotion plays in human information processing and map it to issues in agent architectural design. In contrast to contemporary computational models of emotion that have focused on external behavior, I will harken back to earlier architectural models such as those of Simon and of Johnson-Laird and Oatley, but reexamine the issues they raised in light of recent psychological findings. I will relate some of our initial work on the architectural implications of recent psychological theories. Cognitive appraisal theory dominates recent psychological thought on emotion and emphasizes the adaptive function of emotion and the close relationship between cognition, emotion, and motivation. I will describe this one theory in some detail to give a flavor of how contemporary research on human emotion can influence autonomous agent design and can begin to address some of the central areas of concern for the development of cognitive systems.
Our perspective on cognitive architectures arises primarily from our experience in the research and development of "heavy" agents and behavior models that are long-lived and exhibit high degrees of competence while interacting with complex environments. We have experience with a variety of existing architectures, and are designing a new agent architecture with the aim of directly supporting development of such agents. Industrial-strength heavy agents add a number of requirements to the long list that already exists for cognitive architectures. In particular, such agents may be applied across a wide variety of tasks and situations, requiring comprehensive support for many architectural options. The underlying architecture must scale to support agents that run efficiently, for a long time, with very large knowledge bases. It must be engineered to provide a robust and stable platform for applications with low tolerance for faults. Finally, in order to be successful, it must support maximal ease of use for architectural researchers, agent developers, subject-matter experts, and end users. We feel all of these concerns must be designed into the architecture from the start. The presentation will discuss some of the initial efforts we have made to lay out such a design.
EPIC consists of a multithread-capable production system surrounded by perceptual-motor modules that represent fundamental properties of human performance. Models built with EPIC reveal the critical role of perceptual-motor constraints and executive strategies in multitask performance, and provide a framework for new theoretical work on working memory and the learning of both simple and multiple-task skills. Results thus far suggest new issues for learning mechanisms in production system architectures.
Hierarchical, hybrid control architectures attempt to capture the strengths of both deliberative and reactive control approaches. They have a continuous/reactive control layer, a symbolic/deliberative control layer and a middle layer that mediates between continuous and symbolic control. This talk will describe a particular instance of such an architecture called 3T, which is being developed at NASA Johnson Space Center and has been used to control over a dozen robotic and non-robotic applications. Each control layer of 3T will be described in detail, including representations and reasoning mechanisms. Motivation is derived from external goals that can come into any control layer. 3T embodies adjustable autonomy, which means that user interaction can be fine-grained or coarse-grained. Examples of adjustable autonomy will be given.
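The three-layer organization described above can be sketched as a minimal skeleton. This is illustrative only: the class names, the `dock` task, and the primitive library are invented here and are not drawn from the actual 3T implementation.

```python
class Deliberator:
    """Top layer: symbolic planning orders tasks toward a goal
    (trivially a single task in this sketch)."""
    def plan(self, goal):
        return [goal]

class Sequencer:
    """Middle layer: mediates between symbolic and continuous control
    by expanding a symbolic task into ordered primitives."""
    LIBRARY = {"dock": ["approach", "align", "latch"]}
    def expand(self, task):
        return self.LIBRARY[task]

class ReactiveLayer:
    """Bottom layer: tight sense-act loop, here one primitive per call."""
    def execute(self, primitive, state):
        state[primitive] = "done"
        return state

def run(goal):
    state = {}
    for task in Deliberator().plan(goal):
        for prim in Sequencer().expand(task):
            state = ReactiveLayer().execute(prim, state)
    return state
```

In the real architecture, goals (and user interaction, under adjustable autonomy) can enter at any of the three layers; this sketch only shows the top-down flow.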
I start this talk with a review of Soar and the constraints inherent in incorporating learning into a general cognitive architecture, illustrated by chunking. I will discuss why I am now investigating adding learning mechanisms to Soar, and I will present our preliminary design and implementation of episodic and reinforcement learning in Soar. I will conclude with a discussion of the promises and challenges of integrating these learning mechanisms.
After describing the latest version of ACT-R, I will present new work on the chunking of goal hierarchies that builds on three recent developments in ACT-R: a production composition mechanism (Taatgen, 2001), the abandonment of architectural support for subgoaling, and the incorporation of perceptual/motor components (Byrne & Anderson, 2001). I will present preliminary results from a simple HCI task model that embodies a theory of learning hierarchically controlled behavior. The model does not initially contain task-specific production rules but learns them via a general interpretive process working over a declarative goal hierarchy. Learning gradually collapses the hierarchy and creates new control symbols in a way quite similar to the original Rosenbloom and Newell (1986) work on chunking, but without architecturally distinguished subgoals. The model is supported qualitatively by new psychological data that provides direct evidence for both the role of the hierarchy in controlling behavior and its gradual collapse. I will draw some tentative lessons from the work for symbolic cognitive architectures more generally.
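The gradual collapse of a goal hierarchy can be illustrated with a toy model. This sketch is hypothetical: the phone-dialing hierarchy and the one-level-per-pass `compose` function are invented stand-ins for ACT-R's production composition, not its actual mechanism.

```python
# A declarative goal hierarchy: each goal maps to an ordered list of
# subgoals; anything not listed is a primitive action.
HIERARCHY = {
    "make-call": ["dial", "talk"],
    "dial": ["pick-up", "press-digits"],
}

def interpret(goal, hierarchy):
    """General interpretive process: walk the hierarchy down to primitives."""
    if goal not in hierarchy:
        return [goal]
    steps = []
    for sub in hierarchy[goal]:
        steps += interpret(sub, hierarchy)
    return steps

def compose(hierarchy):
    """One learning pass: each goal's subgoals are replaced by their
    expansions, flattening one level of the hierarchy -- a crude
    stand-in for composing interpretive rules into task-specific ones."""
    return {goal: [s for sub in subs for s in interpret(sub, hierarchy)]
            for goal, subs in hierarchy.items()}
```

After composition the behavior is unchanged, but the top-level goal now expands directly to primitives, mirroring how learning collapses the hierarchy while preserving the controlled behavior.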
I will describe recent research we have done in collaboration with USC-ISI on agent-based systems that can access and interact with Web sites. We have developed several systems that act as "intelligent assistants" for information gathering and planning tasks. Our work has focused on developing methods for automatically learning the structure of web sites, so that an agent can extract data from a site and/or execute transactions against a site. One of the critical aspects of our research is how to break up the learning problem into modular, learnable subcomponents. This talk will focus on the issues of modularity and abstraction, and how these issues impact the cognitive architecture one would ideally design for Web agents.
This presentation will describe CIRCA, the Cooperative Intelligent Real-Time Control Architecture, including a detailed discussion of the core planning system and several recent variations. CIRCA is designed to control autonomous systems operating in hazardous or mission-critical environments, including adversarial domains. To meet real-time deadlines, CIRCA provides guaranteed timeliness and formally verifiable correctness. The latter part of the talk will focus on CIRCA's motivation mechanisms: the specification of goals and threats to system safety.
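The notion of guaranteed timeliness can be conveyed with a toy schedulability check. This is a deliberately simplistic sketch: the function name, the worst-case-execution-time tuples, and the single-deadline model are invented for illustration and do not describe CIRCA's actual planning or verification machinery.

```python
def guaranteed_schedulable(actions, deadline):
    """Toy timeliness check: accept a plan only if the sum of the
    worst-case execution times (WCETs) of its actions provably fits
    within the hard deadline."""
    return sum(wcet for _name, wcet in actions) <= deadline
```

A plan whose worst-case cost exceeds the deadline is rejected outright rather than run on a best-effort basis, which is the essence of a guarantee as opposed to a heuristic.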
This talk takes a tour of an extremely large-scale application, with concomitant sub-challenges to cognitive architectures in the areas of planning, attention management, representation and understanding, and interaction.
In this talk, we describe Icarus, an integrated architecture for intelligent agents in which affective values play a central role. The framework incorporates long-term and short-term memories for concepts and skills, and it includes mechanisms for recognizing concepts, calculating reward, nominating and selecting skills based on expected values, executing those skills in a reactive manner, repairing these skills when they fail, and abandoning skills when their expected values are low. We illustrate these processes with examples from the domain of highway driving, and we relate Icarus' assumptions to five principles of architectural design.
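The nominate-select-abandon cycle described above can be sketched in a few lines. This is illustrative only: the function name, the numeric values, and the threshold are invented here, not taken from Icarus.

```python
def select_skill(skills, threshold=0.0):
    """Nominate skills whose expected value clears the threshold, then
    select the one with the highest expected value; a skill whose value
    falls below the threshold is effectively abandoned."""
    candidates = {skill: value for skill, value in skills.items()
                  if value >= threshold}
    if not candidates:
        return None                       # nothing worth pursuing
    return max(candidates, key=candidates.get)
```

For instance, with hypothetical highway-driving values such as `{"keep-lane": 0.7, "change-lane": 0.4, "tailgate": -0.5}`, the agent selects `keep-lane` and abandons `tailgate`.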
I introduce CogAff, a framework for thinking about integrated architectures that combines different forms of representation and mechanisms. I also present a special instance of this framework, H-CogAff, a conjectured architecture for human-like systems that incorporates diverse concurrently active components and accommodates many features of human mental function, including emotions and learning. H-CogAff supports more varieties of emotion and learning than are normally considered, along with many affective states, including desires, pleasures, pains, attitudes, and moods. These ideas have implications both for applications of AI (e.g., in digital entertainment or learning environments) and for scientific theories about human minds and brains.
We investigate the interaction between implicit embodied skills and explicit generic knowledge in skill learning, and we report computational models that account for their relations. In particular, we study ways in which the interaction between these two processes improves or hampers learning, and we describe CLARION, an architecture that captures a wide range of quantitative data about such effects. We focus on a "bottom-up" approach that first acquires implicit knowledge and then builds explicit knowledge upon that base. We are also carrying out experiments with human subjects to further explicate the interaction between implicit and explicit processes and to further test our cognitive architecture.
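The bottom-up approach can be sketched in miniature: an implicit level learns state-action values by trial and error, and an explicit level extracts a symbolic rule once an action's value becomes reliably positive. This is a hypothetical illustration; the function names, the update rule, and the threshold are invented here and are far simpler than CLARION's actual two-level mechanisms.

```python
def learn_implicit(episodes, alpha=0.5):
    """Implicit level: incremental value learning from (state, action,
    reward) experiences, with learning rate alpha."""
    q = {}
    for state, action, reward in episodes:
        key = (state, action)
        q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))
    return q

def extract_rules(q, threshold=0.5):
    """Explicit level: keep a rule 'in state s, do a' only when the
    implicit value of (s, a) has grown past the threshold."""
    return {state: action for (state, action), value in q.items()
            if value > threshold}
```

Successful implicit experience thus crystallizes into explicit rules, while unsuccessful actions never surface at the explicit level.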