Using the explain command without any arguments will display a summary of
which rule firings the explainer is watching. It also shows which
chunk or justification the user has specified is the current focus of its
output, i.e. the chunk being discussed.
Tip: This is a good way to get a chunk id so that you don't have to type or
paste in a chunk name.
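For example, entering the command by itself prints the summary:

    explain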
explain formation provides an explanation of the initial rule that fired to
create a result. This rule is called the 'base instantiation' and is what led
to the chunk being learned. Other rules may also be base instantiations if
they previously created children of the base instantiation's results; they too
will be listed in the initial formation output.
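For example, once a chunk has been chosen for discussion, the following
displays its base instantiation(s):

    explain formation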
Browsing the instantiation graph one rule at a time is probably one of the
most common things you will do while using the explainer.
Tip: Use i, which is an alias for explain instantiation, to quickly view an
instantiation, for example:
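A minimal illustration (the instantiation ID 3 is arbitrary; use an ID that
appears in the explainer's output):

    explain i 3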
Users typically spend most of their time browsing the explanation trace.
This is where chunking learns most of the subtle relationships that you are
likely to be debugging. But users will also need to examine the working memory
trace to see the specific values matched.
To switch between traces, you can use the explain e and explain w commands.
Tip: Use et and wt, which are aliases for the above two commands, to quickly
switch between traces.
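For example:

    et
    wt

The first command switches the output to the explanation trace; the second
switches it to the working memory trace.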
This feature explains any constraints on the value of variables in the chunk
that were required by the problem-solving that occurred in the substate. If
these constraints were not met, the problem-solving would not have occurred.
Explanation-based chunking tracks constraints as they apply to identity sets
rather than how they apply to specific variables or identifiers. This means that
sometimes constraints that appear in a chunk may have been a result of
conditions that tested substate working memory elements. Such conditions don't
result in actual conditions in the chunk, but they can provide constraints.
explain constraints allows users to see where such constraints came from.
This feature is not yet implemented. You can use explain stats to see if any
transitive constraints were added to a particular chunk.
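For example, to check whether any transitive constraints were added to the
chunk under discussion:

    explain stats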
explain identity will show the mappings from variable identities to identity
sets. If available, the variable in a chunk that an identity set maps to will
also be displayed. (This requires a debug build because of the efficiency cost.)
Variable identities are the ID values that are displayed when explaining an
individual chunk or instantiation. An identity set is a set of variable
identities that were unified and that map to a particular variable. The null identity
set indicates identities that should not be generalized, i.e. they retain their
matched literal value even if the explanation trace indicates that the original
rule had a variable in that element.
By default, only identity sets that appear in the chunk will be displayed in the
identity analysis. To see the mappings for all other identity sets, change the
only-chunk-identities setting to off.
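A minimal sketch (assuming the setting is toggled through the explain command
itself; check your Soar version's help for the exact syntax):

    explain only-chunk-identities off
    explain identity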
The explainer has an option to create text files that contain statistics about
the rules learned by an agent during a particular run. When enabled, the
explainer will write out a file with the statistics when either Soar exits or a
soar init is executed. This option is still considered experimental.
While explanation-based chunking makes it easier to incorporate learning into
agents, the complexity of the analysis it performs makes it far more difficult
to understand how the learned rules were formed. The
explainer is a new module that has been developed to help ameliorate this
problem. The explainer allows you to interactively explore how rules were
learned.
When requested, the explainer will make a very detailed record of everything
that happened during a learning episode. Once a user specifies a recorded chunk
to "discuss", they can browse all of the rule firings that contributed to the
learned rule, one at a time. The explainer will present each of these rules with
detailed information about the identity of the variables, whether it tested
knowledge relevant to the superstate, and how it is connected to other rule
firings in the substate. Rule firings are assigned IDs so that users can
quickly choose a new rule to examine.
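For example, a session might look like the following (the chunk name is
hypothetical, and the chunk subcommand syntax may vary across Soar versions):

    explain chunk chunk*apply*move
    explain formation
    explain i 2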
The explainer can also present several different screens that show more verbose
analyses of how the chunk was created. Specifically, the user can ask for a
description of (1) the chunk's initial formation, (2) the identities of
variables and how they map to identity sets, (3) the constraints that the
problem-solving placed on values that a particular identity can have, and (4)
specific statistics about that chunk, such as whether correctness issues were
detected or whether it required repair to make it fully operational.
Finally, the explainer will also create the data necessary to visualize all of
the processing described in an image using the new 'visualize' command. These
visualizations are the easiest way to quickly understand how a rule was formed.
Note that, despite recording so much information, a lot of effort has been put
into minimizing the cost of the explainer. When debugging, we often let it
record all chunks and justifications formed because it is efficient enough to do
so.
Soar's visualize command allows you to create images that represent processing
that the explainer recorded. There are two types of explainer-related
visualizations.
(1) The visualizer can create an image that shows the entire instantiation graph
at once and how it contributed to the learned rule. The graph includes arrows
that show the dependencies between actions in one rule and conditions in others.
This image is one of the most effective ways to understand how a chunk was
formed, especially for particularly complex chunks. To use this feature, first
choose a chunk for discussion. You can then issue the visualize command with
the appropriate settings, as in the sketch after item (2) below.
(2) The visualizer can also create an image that shows how identities were
joined during identity analysis. This can be useful in determining why two
elements were assigned the same variable.
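A minimal sketch of both visualizations (the ebc_analysis and identity_graph
argument names are assumptions; verify them against your Soar version's
visualize documentation):

    explain chunk chunk*apply*move
    visualize ebc_analysis
    visualize identity_graph

Both assume a chunk is already under discussion.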