Soar News and Announcements
- New Soar publication: Kirk, J., Mininger, A., Laird, J. 2016: Learning task goals interactively with visual demonstrations. Biologically Inspired Cognitive Architectures. New York, New York, 2016.
- The proceedings for the 36th Soar Workshop, held June 7-10 at the University of Michigan, are now online at this page.
- New Soar publication:Kirk, J. and Laird, J. 2016: Learning General and Efficient Representations of Novel Games Through Interactive Instruction. In Proceedings of the Fourth Annual Conference on Advances in Cognitive Systems. Evanston, Illinois
- New Soar publication: Jones, S. J., Wandzel, A. R., Laird, J. E. 2016: Efficient Computation of Spreading Activation Using Lazy Evaluation. Proceedings of the 14th International Conference on Cognitive Modeling (ICCM). University Park, Pennsylvania
- New Soar publication: Mininger, A., & Laird, J. 2016: Interactively Learning Strategies for Handling References to Unseen or Unknown Objects. In Proceedings of the Fourth Annual Conference on Advances in Cognitive Systems.
- New Soar publication: Lindes, Peter and John E. Laird (2016). Toward Integrating Cognitive Linguistics and Cognitive Language Processing. Proceedings of the 14th International Conference on Cognitive Modeling (ICCM). University Park, Pennsylvania.
- New Soar publication: Li, J., Jones, S. J., Mohan, S., Derbinsky, N. 2016: Architectural Mechanisms for Mitigating Uncertainty during Long-Term Declarative Knowledge Access. Proceedings of the 4th Annual Conference on Advances in Cognitive Systems (ACS). Evanston, Illinois.
- New Soar publication: Mohan, S., 2015: From Verbs to Tasks: An Integrated Account of Learning Tasks from Situated Interactive Instruction. Ph.D. Thesis, University of Michigan, 2015.
- New Soar publication: Mohan, S., Kirk, J., Mininger, A., Laird, J. E., 2015 : Agent Requirements for Effective and Efficient Task-Oriented Dialog. AAAI 2015 Fall Symposium Series, 2015.
- New Tool: First version of SublimeText Soar Tools now released. This extension allows the cross-platform editor SublimeText to provide Soar-specific functionality.
- Soar 9.5.0 beta is now available for download! This release of Soar includes a new, more powerful version of chunking, which we call explanation-based chunking (EBC). It also includes several important bug fixes, new commands for semantic memory, and a new reinforcement learning policy. This version is still in beta; the official stable version remains 9.4.0.
- Soar 9.4.0 is now available for download! This release of Soar includes the new spatial visual system (SVS).
If you are new to Soar and just getting started, please check out the Soar Tutorial page here . It includes a binary distribution of Soar 9.4, several test environments, demo agents, the Soar manual and a nine-section tutorial that will touch on all the core aspects of Soar, from basic concepts like rules and the decision cycle to advanced topics like chunking, reinforcement learning, episodic and semantic memory.
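To give a flavor of what the tutorial covers, Soar agents encode procedural knowledge as productions (rules). The sketch below shows the classic hello-world rule from the first tutorial chapter: it matches any state and writes a greeting. The exact syntax and semantics are explained step by step in the tutorial itself.

```soar
sp {hello-world
   (state <s> ^type state)       # match any state in working memory
-->
   (write |Hello World|)         # print to the console
   (halt)}                       # stop the agent
```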
The official Soar manual can be found here.
If you have questions about running/building Soar or writing Soar agents, you can send a message to the soar-help mailing list, which is read by many helpful members of the community. You can join the mailing list at this page and then send your question to firstname.lastname@example.org.
Welcome to the Soar Home Page
What is Soar?
Soar is a general cognitive architecture for developing systems that exhibit intelligent behavior. Researchers all over the world, in both artificial intelligence and cognitive science, are using Soar for a variety of tasks. It has been in use since 1983, evolving through many different versions to its current form, Soar, Version 9.
We intend ultimately to enable the Soar architecture to:
- work on the full range of tasks expected of an intelligent agent, from highly routine to extremely difficult, open-ended problems
- represent and use appropriate forms of knowledge, such as procedural, semantic, episodic, and iconic
- employ the full range of problem solving methods
- interact with the outside world, and
- learn about all aspects of the tasks and its performance on them.
In other words, our intention is for Soar to support all the capabilities required of a general intelligent agent.
The ultimate in intelligence would be complete rationality, which would imply the ability to use all available knowledge for every task the system encounters. Unfortunately, the computational cost of retrieving relevant knowledge puts this goal out of reach as the body of knowledge increases, the tasks become more diverse, and the requirements on system response time grow more stringent. The best that can currently be obtained is an approximation of complete rationality, and the design of Soar can be seen as an investigation of one such approximation. Below is the primary principle that forms the basis of Soar's design and guides its attempt to approximate rational behavior.
- All decisions are made through the combination of relevant knowledge at run-time. In Soar, every decision is based on the current interpretation of sensory data, the contents of working memory created by prior problem solving, and any relevant knowledge retrieved from long-term memory. Decisions are never precompiled into uninterruptible sequences.
For many years, a secondary principle has been that the number of distinct architectural mechanisms should be minimized. Through Soar 8, there has been a single framework for all tasks and subtasks (problem spaces), a single representation of permanent knowledge (productions), a single representation of temporary knowledge (objects with attributes and values), a single mechanism for generating goals (automatic subgoaling), and a single learning mechanism (chunking). We have revisited this assumption as we attempt to ensure that all available knowledge can be captured at runtime without disrupting task performance. This is leading to multiple learning mechanisms (chunking, reinforcement learning, episodic learning, and semantic learning), and multiple representations of long-term knowledge (productions for procedural knowledge, semantic memory, and episodic memory).
Two additional principles that guide the design of Soar are functionality and performance. Functionality involves ensuring that Soar has all of the primitive capabilities necessary to realize the complete suite of cognitive capabilities used by humans, including, but not limited to reactive decision making, situational awareness, deliberate reasoning and comprehension, planning, and all forms of learning. Performance involves ensuring that there are computationally efficient algorithms for performing the primitive operations in Soar, from retrieving knowledge from long-term memories, to making decisions, to acquiring and storing new knowledge.
For further background on Soar, we recommend The Soar Cognitive Architecture Laird, J. E.(2012), The Soar Papers: Readings on Integrated Intelligence, Rosenbloom, Laird, and Newell (1993), A Gentle Introduction to Soar: 2006 update, and Unified Theories of Cognition, Newell (1990). Also available are Soar: A Functional Approach to General Intelligence and Soar: A comparison with Rule-Based Systems. A full list of publications is available on the Soar publications page. Entries on the Soar Knowledge Base and the older Soar FAQ also provide answers to many common questions about Soar.
We would like to extend a special thank you to DARPA, ONR and AFOSR for their continued support of Soar and projects related to Soar.
Command-Line Options for the Java Debugger and CLI
-remote Use a remote connection (with default IP and port values)
-ip xxx Use this IP address (implies a remote connection)
-port ppp Use this port (implies a remote connection; without any remote options a local kernel is started)
-agent <name> On a remote connection, select this agent as the initial agent
-agent <name> On a local connection...
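As an illustration, the options above might be combined as shown below. The jar name, addresses, and port number here are assumptions for the sake of the sketch; check your Soar distribution for the actual jar location and the port your kernel listens on.

```
# Start the debugger with a local kernel (no remote options given)
java -jar SoarJavaDebugger.jar

# Connect to an already-running kernel using the default IP and port
java -jar SoarJavaDebugger.jar -remote

# Connect to a kernel on another machine and pick the initial agent
java -jar SoarJavaDebugger.jar -ip 192.168.1.10 -port 12121 -agent my-agent
```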