===============================================================
              STATIC LEARNING OF DEFAULT RULES
===============================================================

This file is a summary of the main steps involved in the use of the
static (a.k.a. "one-shot") algorithm for learning default rules. We
will need to revise these steps after developing a dynamic version of
the learning algorithm.

-= Input and output =-

The learning algorithm inputs the current STP world model, old STP
world models (if available), the weight of each old model, and a set
of default rules. Note that the weight of the current STP model is
always 1.0, and the input does not include this weight. Also note
that the input set of default rules may include both complete rules,
which do not require learning, and rules with "unknown" effects.

The learning algorithm outputs a set of new learned rules, which are
not part of the input rule set. If some rules in the input set
include both known and "unknown" effects, the learning algorithm
removes the known effects, replaces the "unknown" effects with the
learned data, and outputs the resulting rules.

For example, suppose that the input includes the following rule:

  Priority: 2
  Applicability:
    Room types: Classroom
  Defaults:
    Projectors: 1
    Mikes: Unknown

Suppose further that the learning algorithm has determined that the
usual number of microphones in classrooms is 0. Then, it removes the
previously known effect and outputs the following rule:

  Priority: 2
  Applicability:
    Room types: Classroom
  Defaults:
    Mikes: 0

-= Main steps =-

The invocation of the learning algorithm involves the following
steps.

() If the system already contains a file with previously learned
   rules, move this file to a backup.

() Remove all inherited and default values from the current STP
   world model, stored in memory; that is, keep only the values with
   the priority of 1001.
() Re-run the inheritor and re-apply the input default rules, thus
   adding default values to the STP world model; note that this step
   does not use old learned rules.

() Run the one-shot learning of default rules, using the current STP
   world model, stored in memory, as well as old STP world models
   (if available). Note that we do not apply any default rules to
   the old STP world models before using them in learning.

() Output the new learned rules into a file, which replaces the old
   file with learned rules.

() Apply the learned rules to add default values to the current STP
   world model, stored in memory, in addition to the previously
   added default values.

-= Learning situations =-

We invoke the one-shot algorithm for learning default rules in the
following situations.

() When opening the Space-Time Module at the beginning of the war
   games, before the start of the interactive war games with the
   user.

() After the end of the war games, when the Space-Time Module
   receives the "End of War Games" signal.

() During the "batch learning" in the middle of war games; see the
   "Batch learning" file for additional information about this
   situation.

Note that we do not invoke the one-shot learning algorithm during
the actual tests; in particular, we do not invoke it when opening
the Space-Time Module before the actual tests.
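As a concrete illustration of the output described in the "Input and
output" section, the classroom example can be sketched in Python. The
dictionary representation and the name finalize_rule are assumptions
made for illustration only; they are not the actual STP data
structures.

```python
# Hypothetical sketch of finalizing a partially learned rule:
# known effects are removed, and "unknown" effects are replaced
# with the learned data, as in the classroom example above.

UNKNOWN = "Unknown"

def finalize_rule(rule, learned_values):
    """Build the output rule from an input rule with mixed effects."""
    defaults = {}
    for attribute, value in rule["Defaults"].items():
        if value == UNKNOWN and attribute in learned_values:
            # Replace the "unknown" effect with the learned value.
            defaults[attribute] = learned_values[attribute]
        # Known effects (e.g. Projectors: 1) are dropped, so they
        # do not appear in the output rule.
    return {
        "Priority": rule["Priority"],
        "Applicability": rule["Applicability"],
        "Defaults": defaults,
    }

rule = {
    "Priority": 2,
    "Applicability": {"Room types": "Classroom"},
    "Defaults": {"Projectors": 1, "Mikes": UNKNOWN},
}
learned = finalize_rule(rule, {"Mikes": 0})
# learned["Defaults"] is now {"Mikes": 0}; Projectors was dropped.
```

The output rule keeps the original priority and applicability
conditions, since learning changes only the default effects.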
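The main invocation steps above can be sketched as a single driver
function. This is a minimal sketch under several assumptions: the
world model is represented as a dictionary mapping attributes to
(value, priority) pairs, the file name is invented, and the inheritor,
rule application, learning, and file-writing operations are passed in
as stand-ins because they are not described in this file.

```python
import os
import shutil

# Assumed file name; the actual STP file name may differ.
LEARNED_RULES_FILE = "learned-rules.txt"

def invoke_one_shot_learning(world_model, old_models, old_weights,
                             input_rules, learn, apply_rules,
                             run_inheritor, write_rules,
                             rules_file=LEARNED_RULES_FILE):
    """Sketch of the invocation steps; helper callables stand in
    for STP operations not shown in this file."""
    # () Move the file with previously learned rules to a backup.
    if os.path.exists(rules_file):
        shutil.move(rules_file, rules_file + ".bak")
    # () Keep only the values with the priority of 1001, thus
    #    removing all inherited and default values.
    for key in [k for k, (_, priority) in world_model.items()
                if priority != 1001]:
        del world_model[key]
    # () Re-run the inheritor and re-apply the input default rules;
    #    this step does not use old learned rules.
    run_inheritor(world_model)
    apply_rules(world_model, input_rules)
    # () One-shot learning from the current and old models; no
    #    default rules are applied to the old models first.
    new_rules = learn(world_model, old_models, old_weights)
    # () Output the new learned rules, replacing the old file.
    write_rules(rules_file, new_rules)
    # () Add the learned defaults on top of the previously added
    #    default values.
    apply_rules(world_model, new_rules)
    return new_rules
```

Passing the helper operations as arguments keeps the sketch
self-contained; in the actual system they would be the inheritor,
the rule applier, and the learner themselves.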