Reviews & Opinions
Independent and trusted. Read before you buy the Bosch WFF 1101!

Bosch WFF 1101

Manual


Download (English)
Bosch WFF 1101 Washing Machine, size: 3.4 MB

About

About Bosch WFF 1101
Here you can find everything about the Bosch WFF 1101, such as the manual and other information. For example: reviews.

Bosch WFF 1101 manual (user guide) is ready to download for free.

At the bottom of the page users can write a review. If you own a Bosch WFF 1101, please write about it to help other people.

 


User reviews and opinions


Comments to date: 2.
georgiecooke 9:13pm on Wednesday, June 30th, 2010 
Time for a new one! The BOSCH 1401 takes up to a 5Kg load, and uses remarkably little water.
mikey 5:03am on Wednesday, May 5th, 2010 
I am very disappointed with my Bosch WFF 1401. I bought it because it was voted Which Magazine's best washing machine.

Comments posted on www.ps2netdrivers.net are solely the views and opinions of the people posting them and do not necessarily reflect our views or opinions.

 

Documents

doc0

Evaluation of EPILOG: a Reasoner for Episodic Logic
Fabrizio Morbini and Lenhart Schubert

University of Rochester

Abstract
It can be quite hard to objectively evaluate a reasoner geared towards commonsense problems and natural language applications if it uses a nonstandard logical language for which there exist no publicly available datasets. We describe here the evaluation of our recent improvements of the EPILOG system, a reasoner for Episodic Logic, a superset of first-order logic geared towards natural language applications. We used both a sample of interesting commonsense questions obtained from the ResearchCyc knowledge base and the standard TPTP library to provide an evaluation that tests the unique features of Episodic Logic and also puts the performance of EPILOG into perspective with respect to the state of the art in first-order logic theorem provers. The results show the extent of recent improvements to EPILOG, and that very expressive commonsense reasoners need not be grossly inefficient.
[Figure 1: The high-level structure of EPILOG1 — the EPILOG core connected through the specialist interface to specialists (Episode, Set, Hier, Time, Type, Color, Number, Part, String, Equality, Other) and to the response generator.]

Introduction

We present here the evaluation of the progress made in the development of the EPILOG system ((Schubert et al. 1993) and (Schaeffer et al. 1993)), motivated by the recent effort towards building a self-aware agent (Morbini and Schubert 2008). EPILOG is an inference engine for Episodic Logic (EL) ((Schubert and Hwang 2000) and (Hwang and Schubert 1993)) that has been under development since 1990 ((Schubert et al. 1993) and (Schaeffer et al. 1993)). The EPILOG system and EL are designed with natural language (NL) understanding in mind. The natural way to test its capabilities (both on the reasoning front and on the representation front) is by using a publicly available set of commonsense problems. Among several collections that are available, we opted for the set of problems contained in the ResearchCyc knowledge base. They comprise more than 1600 problems that provide both the English formulation of a question and its translation into CycL¹. In addition to the abundance of interesting and challenging questions, another advantage of using this dataset is that it allows the comparison between our and Cyc's interpretation of each question. The last point highlights the problem of comparison for systems that use for their evaluation a dataset based on English. Because a question expressed in English can be formalized in many ways and at various levels of detail, it is very difficult to use the results obtained to compare different systems. This lack of a dataset expressed in logic to facilitate comparisons is not easily solved, given the lack of agreement on a single logical language well-suited for NL; and even if such a language existed, each English sentence can still be interpreted in many ways and at different levels of detail. Therefore, to give a more complete picture of the performance of the EPILOG system and to facilitate comparisons with other systems, we decided to evaluate it as well against the widely used TPTP dataset for FOL theorem provers. This puts the basic performance of the reasoner in perspective with respect to the state of the art in FOL theorem provers. The evaluation on Cyc's commonsense test cases instead tests the features that distinguish EPILOG from a traditional FOL theorem prover. In the paper, if we need to distinguish between the legacy EPILOG system and the new version, we will refer to the former as EPILOG1 and to the latter as EPILOG2. This paper is organized as follows: first we briefly describe the high-level structure of the EPILOG system, and then highlight the major improvements made to EPILOG in the EPILOG2 system. Then we describe in detail the evaluation of the system and state our conclusions.

EPILOG

In this section we briefly describe EPILOG and EL. Figure 1 represents the building blocks of the EPILOG1 system and how they are connected together. EPILOG1's core contains the inference routines, the parser, the normalizer and the storage and access schemas to retrieve and add knowledge from/to the knowledge base. A set of specialists, connected to the core inference engine through an interface module, help the general inference routines to carry out special inferences quickly (e.g., type inference to conclude whether [Car1 Artifact] is true given that Car1 is a coupe and coupes are a type of car, cars are vehicles, and vehicles are artifacts). The specialist interface consists of a series of flags associated with some key predicates/functions that automatically activate predefined functions in a particular specialist. EPILOG is a reasoner for EL. EL is a highly expressive natural logic with unique features, including modifiers, reifiers, substitutional and generalized quantifiers and episodic operators, making EL particularly suited for NL applications. Briefly, the major differences with respect to FOL are the following. To represent events and their relations, three episodic operators are introduced: *, ** and @. These operators take a well-formed formula (wff) and a term (an event) as arguments. For example, the EL formula [[D1 lose-control-of V1] ** e1] expresses that e1 is the event characterized by D1 losing control of V1. (Note that predicates are preceded by their subject argument in wffs.) Substitutional quantification over predicative expressions, wffs, and other syntactic entities is required to express meaning postulates and introspective knowledge. It is also important for interfacing the general inference engine with specialists (as described later). EL modifiers correspond to the modifiers used in NL, e.g., very, almost or by sheer luck, and reification operators are used to represent generics and attitudes. Quantifiers allow for the use of a restrictor. For example, in the sentence Most dogs are friendly, dogs is the restrictor of the quantifier most. For the quantifiers ∀ and ∃ the restrictor can be incorporated into the remainder of the quantified sentence, but for many generalized quantifiers this is not possible.

¹ http://www.cyc.com/cycdoc/ref/cycl-syntax.html
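To make the type-inference example above (deciding [Car1 Artifact]) concrete, here is a minimal Python sketch of that kind of transitive type lookup; the class, method names and stored facts are invented for illustration and are not EPILOG's actual specialist interface.

# Minimal sketch of the kind of inference a type specialist handles:
# deciding [Car1 Artifact] from a chain of instance/subtype facts.
from collections import defaultdict

class TypeSpecialist:
    def __init__(self):
        self.instance_of = {}                 # individual -> immediate type
        self.subtype_of = defaultdict(set)    # type -> immediate supertypes

    def add_instance(self, individual, typ):
        self.instance_of[individual] = typ

    def add_subtype(self, sub, sup):
        self.subtype_of[sub].add(sup)

    def holds(self, individual, typ):
        """True if `individual` has type `typ`, following the subtype
        hierarchy transitively."""
        start = self.instance_of.get(individual)
        if start is None:
            return False
        seen, frontier = set(), [start]
        while frontier:
            t = frontier.pop()
            if t == typ:
                return True
            if t in seen:
                continue
            seen.add(t)
            frontier.extend(self.subtype_of[t])
        return False

ts = TypeSpecialist()
ts.add_instance("Car1", "coupe")
ts.add_subtype("coupe", "car")
ts.add_subtype("car", "vehicle")
ts.add_subtype("vehicle", "artifact")
print(ts.holds("Car1", "artifact"))   # True, i.e. [Car1 Artifact]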
EPILOG2

In this section we mention the major changes made to EPILOG1. The interface to knowledge bases (KB) has been redesigned to facilitate 1) temporary modifications to a KB (introduced for example by the assumption-making used during inference) and 2) the development and testing of new access schemas (i.e., mechanisms to retrieve knowledge from a KB). The result is a KB system based on inheritance of KBs (similar to what Cyc uses for inheritance of microtheories) in which each KB is associated with a particular access schema that can be easily changed. The parser was changed from an if-then based mechanism to a system based on a standard chart parser. This allows for easy debugging and modifications to the ever-evolving EL grammar. The interface to specialists is now based on explicit metaknowledge stored like any other knowledge. This knowledge specifies under what conditions a particular specialist functionality can be called. For example the formula (∀wff w [w without-free-vars] [[(apply apply-fn-knownbyme? w) = yes] [(that w) knownbyme]]) describes when the introspective specialist can be called to answer whether EPILOG knows a particular formula w. The interface is based on this Apply function, which is known to the inference engine as having a special meaning. An automatic system to extract type information has been added to EPILOG. Currently this system is used 1) to build type hierarchies, 2) to keep track of the return type of functions based on the type of the arguments, and 3) to build a hierarchy for the arguments of transitive predicates (transitive predicates are also detected automatically, by looking for formulas like (∀x (∀y (∀z [[[x P y] ∧ [y P z]] → [x P z]]))), expressing transitivity). The question-answering (QA) framework has been totally redesigned to allow for QA inside QA (used in introspection and called recursive QA). In addition, subgoals are now selected using a hierarchical agenda that sorts the subgoals based on 1) the size of the formula associated with subgoal g relative to the size of the biggest formula among the siblings of g; 2) the percentage of times a descendant of g or g itself was selected for inference but no improvement was obtained²; 3) the percentage of g that is solved (this is greater than 0 only for a subgoal that at some point can be split, e.g., a conjunction); 4) the percentage difference between the size of g's formula and the size of the smallest formula among the descendants of g whose solution would imply a solution of g; for a conjunction of subgoals, their average size is considered.

² An improvement is measured either by a decrease in size of the resulting subgoal, or solution of the subgoal.

Evaluation

To evaluate the progress of our effort to build a self-aware agent based on EPILOG2, we used two methods: 1) testing on a selected small set of examples from the commonsense test cases contained in ResearchCyc; 2) the scalability test included in the TPTP library of problems for theorem provers; this scalability test was constructed from the OpenCyc knowledge base. With the first type of evaluation we are testing the adequacy of EL for directly expressing English questions and background knowledge, and the reasoning capabilities of EPILOG2. With the second type of evaluation we are testing how EPILOG2 fares in relation to the state of the art of FOL theorem provers. First we will describe the set of questions used to test EPILOG2's commonsense reasoning capabilities. Most of the questions have been manually encoded in EL because the general-purpose English-to-EL translator is not yet robust enough to handle these questions. However, care has been taken not to simplify the EL form of those questions to make the job of the reasoner easier; instead we made an effort to produce EL versions that would likely be produced by an automatic, compositional English-to-EL translator. This is why some questions may appear more complex than one might expect, based on traditional intuited formalizations of English sentences. In the formulas used in the following examples, we use Epi2Me as the internal constant that refers to the system itself.

Question 1 is How old are you?, which in EL becomes:
(whterm x (term y [x rounds-down y] ( z [y expresses z (K (plur year))] ( e [e at-about Now] [[z age-of Epi2Me] ** e]))))
( e [e at-about now0] [(wh z [[z name] [Epi2Me have z]] ( y [y thing] [y (BE (L x (x = z)))])) ** e])
Some of the key knowledge used to answer this question is the following:
The event now0 is during the event e2: [now0 during e2] The event e2 is characterized by E PILOG having the name epilogname: [[Epi2Me have epilog-name] ** e2] If one event is characterized by something possessing something else, then that will also be true for any event during the rst event: ( x ( y ( z [[x have y] ** z] ( zz [zz during z] [[x have y] @ zz]))))
K is a reication operator that maps a predicate (here, (plur year), a predicate true of any collection of years) to a kind (here, the kind whose realizations are collections of years). We have assumed that the representation of the question would be expanded pragmatically to include conventional restrictions on the form of the answer expected, i.e., an answer in rounded-down years rather than, say, seconds. These pragmatic constraints depend on the question itself; for example they would be different for a question like How old is this bagel/star/rock/etc.?. In the future we would like to automatically include such constraints by means of cooperative conversation axioms. We might have an axiom saying something like: If X informs Y about a quantitative attribute F (such as weight, age, temperature, etc.) of some entity Z, then X is conversationally obligated to express F(Z) in units that are conventional for entities of the type(s) instantiated by Z. In addition we would need various axioms about the conventional units for expressing weight, age, etc., of various types of entities. These axioms would then be used to rene the raw logical form of a question to include the pragmatic constraints. However, here we just focused on solving the question, manually adding the necessary pragmatic constraints. Some of the key knowledge used to answer this question is the following:
This axiom denes the age of an entity during a particular event, when the entitys birth date is known: ( y ( x [x (be (birth-date-of y))] ( e [[(time-elapsed-between (date-of e) x) age-of y] @ e]))) Axiom dening the relation between the ** and @ operators: (wf f w ( e [[w @ e] ( e1 [e1 same-time e] [w ** e1])])) Axiom that describes which specialist function to call to express the function time-elapsed-between in a particular type of unit: ( x [x is-date] ( y [y is-date] (pred type [type el-time-pred] ( r [r = (Apply diff-in-dates? x y type)] [r expresses (time-elapsed-between x y) (K (plur type))]))))
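The last axiom above delegates the actual date arithmetic to a specialist function (invoked through Apply as diff-in-dates?). The following Python sketch shows the kind of computation such a function performs, a floor-of-years difference between two dates; the function name, the example dates and the calendar handling are illustrative assumptions, not EPILOG's code.

# Rough analogue of a date-difference specialist returning an age in
# rounded-down years; names and dates are illustrative only.
from datetime import date

def diff_in_years(later: date, earlier: date) -> int:
    """Whole years elapsed between `earlier` and `later` (floored)."""
    years = later.year - earlier.year
    # Subtract one if the anniversary has not yet occurred this year.
    if (later.month, later.day) < (earlier.month, earlier.day):
        years -= 1
    return years

birth_date = date(1990, 6, 1)    # hypothetical "birth date" of the system
now = date(2008, 9, 15)
print(diff_in_years(now, birth_date))   # 18, cf. (amt 18 (K (plur year)))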
Of interest here is the last axiom because it ascribes inward persistence (homogeneity) to predicate have, a property it shares with other atelic predicates. The two other formulas are hand-additions to the current knowledge base, but they should be automatically inserted, the rst by the English to EL generator, the second by a self-awareness demon that is in charge of maintaining basic information about the agent, for instance, its name, its state (e.g. sleeping, awake, etc.) and its state of health (e.g., cpu consumption, free memory, garbage collection status, etc.). To correctly answer this question the reasoner also uses lexical knowledge that states which predicates are atemporal and therefore can be moved out of the scope of the ** operator. This knowledge is expressed in EL and it is used by the normalizer. An example is (thing EL-type-pred), stating that thing is a type predicate and therefore atemporal. Question 3 shows how E PILOG could answer questions about its own knowledge. The question is What do you know about the appearance of pigs?, which in EL we expressed as:

(wh x [x appearance-fact-about (K (plur pig))])
Some of the relevant knowledge involved in this example is:
Pigs are thick-bodied: [(K (plur pig)) thick-bodied] The predicate thick-bodied is an appearance predicate: [thick-bodied appearance-pred] Every wff that uses an appearance predicate is a fact about the appearance of its subject: (pred p [p appearance-pred] ( x [x p] [(that [x p]) appearance-fact-about x]))
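In effect, the last schema turns the question into a filter over stored facts: collect those whose predicate is classified as an appearance predicate. A small Python sketch of that filtering step follows; the fact encoding and the extra example facts are invented for illustration, not EL syntax.

# Sketch: collect stored facts whose predicate is marked as an
# appearance predicate, mirroring the appearance-fact-about schema.
facts = [
    ("(K (plur pig))", "thick-bodied"),
    ("(K (plur pig))", "omnivorous"),     # invented example fact
    ("(K (plur pig))", "curly-tailed"),   # invented example fact
]
appearance_preds = {"thick-bodied", "curly-tailed"}

def appearance_facts_about(subject):
    return [(s, p) for (s, p) in facts
            if s == subject and p in appearance_preds]

for fact in appearance_facts_about("(K (plur pig))"):
    print(fact)   # the stored facts about the appearance of pigs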
The most interesting part of this example is the use of a set of axioms based on the Apply function to make the reasoning system aware of a set of procedures useful in computing mathematical operations and in doing type conversions. In this way E PILOG2 is able to return the answer to the question expressed as an integer that is the oor of the amount of time in years that has elapsed between the date of birth of E PILOG and now (the moment of speech). In EL the unier found for the variable x of the initial question is: (amt 18 (K (plur year))). Question 2 is Whats your name?, which expressed in EL is:
One could construct much more complex formulas pertaining to the appearance of something, e.g., that the appearance of a persons hair say, color and style constitutes appearance information about the person. The remaining questions are taken from the ResearchCyc 1.0 collection of commonsense test cases. About 81% of these test cases have been axiomatized to become solvable
by Cyc; among those presented here, the last two have a solution in Cyc. An important difference between our and Cyc's approach to these problems is in the style of formalization: Cyc's representations are in a simplified form that 1) is geared towards the CycL style (e.g., using many concatenated names for complex expressions instead of compositionally combining the parts), which is far from NL-based representations; and 2) omits important details (e.g., temporal relations) and pragmatic constraints. Question 4 is Can gasoline be used to put out a fire?. In Cyc this is the test case named #$CST-CanYouUseGasToPutOutAFire, and the question is expressed as: ((TypeCapableFn behaviorCapable) GasolineFuel ExtinguishingAFire instrument-Generic). (TypeCapableFn behaviorCapable) returns a predicate that describes the capacity for a certain behavior of a certain type of thing in a certain role position. In effect the question becomes, Is gasoline-fuel behaviorally-capable of being a generic-instrument in fire-extinguishing? We also interpret the question generically, but we adhere more closely to a possible English phrasing, asking whether there could be an instance where a person uses gasoline to put out a fire:

(∃ e [e during (extended-present-rel-to Now)] (∃ x [x person] (∃ y [y ((nn gasoline) fuel)] (∃ z [z fire] [[x (able-to ((in-order-to (put-out z)) (use y)))] @ e]))))
In this question, have-as is a so-called subject-adding operator that takes a unary predicate as argument and returns a binary predicate. In this case ((attr biological) father) is the monadic predicate true for all individuals that are biological fathers. (have-as ((attr biological) father)) is the binary predicate that is true for all pairs of individuals in which the object of the predicate is the father of its subject. The relevant knowledge for this example is:
E PILOG is an artifact: [Epi2Me artifact] No artifact is a natural object: ( x [x artifact] (not [x natural-obj])) A creature is a natural object: ( x [x creature] [x natural-obj]) All creatures have a biological father: ( x [[x creature] ( y ( e [[x (have-as ((attr biological) father)) y] ** e]))])
Some of the knowledge relevant to this question is:
If some person is able to use some stuff to put out a fire, then s/he must be at the same location as the fire, must have at hand that stuff, and that stuff must be flame-suppressant: (∀ e [e during (extended-present-rel-to Now)] (∀ x [x person] (∀ y [y stuff] (∀ z [z fire] ([[x (able-to ((in-order-to (put-out z)) (use y)))] @ e] → [[[x has-at-hand y] @ e] ∧ [[x loc-at z] @ e] ∧ [y flame-suppressant]]))))) Gasoline is flammable stuff: (∀ x [x ((nn gasoline) fuel)] [[x flammable] ∧ [x stuff]]) Flammable things are not flame-suppressant: (∀ x [x flammable] (not [x flame-suppressant]))
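Stripped of the event machinery, the negative answer rests on a short chain: gasoline is flammable, flammable things are not flame-suppressant, and putting out a fire with some stuff requires that the stuff be flame-suppressant. A toy Python rendering of that chain, with simplified predicate names (EPILOG of course reasons over the EL forms above, not over code like this):

# Toy version of the refutation behind the gasoline/fire question.
def flammable(stuff):
    return stuff in {"gasoline"}

def flame_suppressant(stuff):
    # Flammable things are not flame-suppressant.
    return not flammable(stuff) and stuff in {"water", "foam"}

def can_put_out_fire_with(stuff):
    # One necessary condition taken from the first axiom above.
    return flame_suppressant(stuff)

print(can_put_out_fire_with("gasoline"))   # False: answered negatively
print(can_put_out_fire_with("water"))      # True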
The question is answered negatively by using the knowledge that E PILOG is an articial thing and therefore not a natural object. Further it is known that only creatures can have a biological father and that creatures are a subtype of natural objects. Question 6 corresponds to Cycs question named #$CST-AnimalsDontHaveFruitAsAnatomicalParts-HypothesizedQueryTest In Cyc the question is expressed as (implies (isa ?ANIMAL Animal) (not (relationInstanceExists anatomicalParts ?ANIMAL Fruit))). In EL we express the question (more naturally, we claim) as:
( e [e during (extended-present-rel-to Now)] (No x [x animal] [[x (have-as anatomical-part) (K fruit)] ** e]))
The function extended-present-rel-to applied to an event e returns the event that started long ago and continues long past the end of the event e. The extent of the event returned should be context-dependent. However, for this question this is irrelevant, given that the knowledge used is presumed true for any event. The relevant knowledge for this example is:

Plant stuff is not animal stuff: ( x [x plant-stuff] (not [x animal-stuff])) Fruits are made of plant stuff: [(K fruit) made-of (K plant-stuff)] Animals are made of animal stuff: [(K animal) made-of (K animal-stuff)] If an individual x is made of (kind of stuff) p and if (kind of stuff) q is a subtype of p then x is made of q: ( x (pred p [x made-of (k p)] (pred q ( y [y p] [y q]) [x made-of (k q)]))) If an individual x is made of (kind of stuff) p and if (kind of stuff) q is disjoint from p then x is not made of q:
The question is answered negatively by using the knowledge that to be able to put out a fire one must use a flame-suppressant material, and gasoline is not a flame-suppressant material. Question 5 is Cyc's question named #$CST-DoesCycHaveABiologicalFather, which in English is Do you (Cyc) have a biological father?. In Cyc the question is represented as (thereExists ?F (biologicalFather Cyc ?F)). We expressed the question in EL as follows:
( e [e at-about Now] ( y [[Epi2Me (have-as ((attr biological) father)) y] ** e]))
( x (pred p [x made-of (k p)] (pred q ( y [y p] (not [y q])) (not [x made-of (k q)])))) If a type p is made of (kind of stuff) q then all individuals of type p are made of q: (pred p (pred q [[(k p) made-of (k q)] ( y [y p] [y made-of (k q)])])) Every part is made of the material of the whole: ( w ( e ( p [[[w (have-as anatomical-part) p] ** e] ( wm [w made-of wm] [p made-of wm])])))
( x [x golf-club] ( y [y person] ( z [z person] ( e [[y ((adv-a (with-instr x)) (attack z))] ** e]))))
The knowledge relevant to this question is:
If an object can be swung by hand, and is solid, and weighs at least two pounds, it can be used as a striking weapon: ( x [x phys-obj] [[( e [[x (pasv ((adv-a (by (k hand))) swing))] ** e]) [x solid] ( w [[x weighs w] [w (k ((num 2) pound))]])] ( e [[x (pasv (use-as ((nn striking) weapon)))] ** e])]) A golf club can be swung by hand, is solid, and weighs at least two pounds: ( x [x golf-club] [(some e [[x (pasv ((adv-a (by (k hand))) swing))] ** e]) [x solid] [x phys-obj] ( w [[x weighs w] [w (k ((num 2) pound))]])]) For any striking weapon, one person can attack another with the weapon, by striking him or her with it: ( x [x ((nn striking) weapon)] ( y [y person] ( z [z person] ( e [[y ((adv-a (by-means (Ka ((adv-a (with-instr x)) (strike z))))) ((adv-a (with-instr x)) (attack z)))] ** e])))) There is a golf-club: ( x [x golf-club]) (by-means modication is monotone) If an agent does some action by means of another action, then he does the rst action: (pred p ( x ( y ( e [[x ((adv-a (by-means y)) p)] ** e] [[x p] ** e]))))
We decided to answer the question by saying that all parts are made of the same substance of which the whole is made. However the case of articial parts/organs is not captured by this knowledge. One could improve on it by saying that organic parts must be made of biologically compatible materials, while any articial parts must be made of durable inert materials that are compatible with the organic parts they are in contact with. Question 7 corresponds to Cycs question named #$CST-DoAgentsBelieveWhatTheyKnow. The English version of the question reads If you know that something is the case, do you believe that it is the case?. In Cyc the question is represented as: (implies (knows ?AGT ?PROP) (beliefs ?AGT ?PROP)). In EL we provide the following representation as a direct reection of English surface form3 :

( e0 [e0 at-about Now] ( x [x thing] [[[Epi2Me know (that ( e1 [e1 at-about e0] [[x (be the-case)] ** e1])) ] ** e0] ( e2 [[e2 at-about Now] [e0 same-time e2]] [[Epi2Me (believe (that ( e3 [e3 at-about e2] [[x (be the-case)] ** e3]))) ] ** e2])]))
The key knowledge to answer this question is the following axiom:
If an event is characterized by some agent knowing something then it is also characterized by the agent believing it: ( e ( x (all p [[[x know p] ** e] [[x believe p] ** e]])))
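Because p here is a substitutional (schematic) variable over propositions, the axiom acts like a rewrite schema over stored facts. A small Python sketch of one such instantiation step follows; the tuple encoding of formulas is an illustrative stand-in for EL, not EPILOG's internal representation.

# Sketch of using the know->believe schema as a rewrite on stored facts.
def apply_know_believe(fact):
    """If `fact` characterizes an event as x knowing p, derive the
    corresponding belief fact for the same event."""
    # Expected shape: ((x, 'know', p), '**', e)
    if (isinstance(fact, tuple) and len(fact) == 3 and fact[1] == '**'
            and isinstance(fact[0], tuple) and len(fact[0]) == 3
            and fact[0][1] == 'know'):
        x, _, p = fact[0]
        e = fact[2]
        return ((x, 'believe', p), '**', e)
    return None

kb = [(('Epi2Me', 'know', '(that [Sky blue])'), '**', 'e7')]
for f in kb:
    derived = apply_know_believe(f)
    if derived:
        print(derived)   # the matching belief fact, characterizing e7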
Question 8 (our last example) corresponds to Cycs commonsense test case named #$CST-CanYouAttackSomeoneWithAGolfClub. In English the question is Can you attack someone with a golf club?. Cyc expresses it in the same way as question 4: ((TypeCapableFn behaviorCapable) GolfClub PhysicallyAttackingAnAgent deviceUsedAsWeapon). In EL we represent the question as:4
³ Apart from the events and event relations introduced by the temporal deindexing that follows logical form computation (Schubert and Hwang 2000).
⁴ EPILOG also answers the case in which you is interpreted literally to mean EPILOG itself. In this case, the question is answered negatively using introspection, a closure axiom that asserts that EPILOG's knowledge with respect to major abilities is complete, the fact that physical actions are a major ability, and that attacking somebody requires the ability to perform physical actions.
This question is answered positively by using the knowledge that golf clubs are heavy and solid and can be swung by a person, and that objects with those properties can be used to attack another person.

FOL scalability tests: the second part of the evaluation puts the performance of the reasoner into perspective with respect to standard FOL theorem provers on the classic TPTP⁵ dataset. In particular we used the CSR⁶ problems derived from the conversion into FOL of the OpenCyc ontology (Ramachandran, Reagan, and Goolsbey 2005). We used the subset of CSR problems that was designed to test the scalability of a theorem prover; in particular, the problems used were those designated as CSR025 through CSR074 in segments 1 to 5. Even though the access schema of EPILOG2 is a simple exhaustive one and therefore not scalable, the results will provide a good bottom-line comparison with future improvements of EPILOG. Table 1 summarizes the results. The systems compared are EPILOG1⁷, EPILOG2, and Vampire 9, which is representative of state-of-the-art FOL theorem provers⁸.

⁵ See http://www.cs.miami.edu/~tptp/
⁶ See http://www.opencyc.org/doc/tptp challenge problem set, in particular the section The Scaling Challenge Problem Set.
⁷ In particular it is the version of June 22nd, 2005.

Segment   Size (min/avg/max)   EPILOG1 FI   EPILOG1 no FI   EPILOG2   Avg depth   Vampire
1         (22/59/163)          -            -               -         5.9         -
2         (-/1101/-)           -            -               -         5.6         -
3         (-/7294/-)           -            -               -         4.5         -
4         (-/42981/-)          -            -               -         4.3         -
5         (-/534435/-)         -            -               -         1.3         -

Table 1: Summary of the tests carried out between EPILOG1, EPILOG2 and the Vampire theorem prover, version 9. The first column contains the segment number (1-5) of the segments comprising the scalability subset of the CSR dataset (with 50 problems in each segment). Column 2 lists min, average and max number of formulas contained in the problems in that specific segment (if all problems contain the same number of formulas, only the average is shown). Columns 3, 4, and 5 show the percentage of problems for which a solution was found, respectively by EPILOG1 with forward inference enabled, EPILOG1 without forward inference, and EPILOG2 (which by default has no forward inference enabled). Column 6 shows the average depth of the answer found by EPILOG2. Column 7 shows the percentage of problems solved by Vampire. All systems have been limited to a timeout of 120 seconds.

All systems were run under the same conditions and were subjected to a 2 minute limit per problem.
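The protocol behind Table 1 is straightforward: each problem is given a fixed 2-minute limit and the percentage of problems solved is recorded per segment. A hypothetical harness along these lines is sketched below; the prover command line, exit-code convention and file layout are assumptions, not the setup actually used in the paper.

# Hypothetical harness for the kind of run summarized in Table 1.
import glob
import subprocess

TIMEOUT_S = 120

def run_segment(problem_dir, prover_cmd):
    """Run every problem in a segment and return the percentage solved."""
    problems = sorted(glob.glob(f"{problem_dir}/*.p"))
    solved = 0
    for path in problems:
        try:
            result = subprocess.run(prover_cmd + [path],
                                    capture_output=True, timeout=TIMEOUT_S)
            if result.returncode == 0:   # assumed convention: 0 = proof found
                solved += 1
        except subprocess.TimeoutExpired:
            pass
    return 100.0 * solved / len(problems) if problems else 0.0

# Example (hypothetical paths and prover):
# print(run_segment("csr/segment1", ["./prover", "--quiet"]))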

Acknowledgements

This work was supported by NSF grant IIS-0535105 and by a 2007-2008 gift from Bosch Research and Technology Center (Palo Alto); the content has benefited significantly from the very useful comments of the anonymous referees.
Conclusion and Further Work
In this paper we described how we evaluated the work on the development of the latest version of the EPILOG system in a way that we think tests the particular features that characterize EPILOG and that also may allow for comparison with other commonsense reasoners independently of which logical language they use.⁹ The evaluation was divided into 2 parts. In the first we selected 8 examples, five of which were from ResearchCyc. These examples were selected to test the features of EL and of EPILOG such as introspective question answering, quotation and substitutional quantification, interfacing to specialists, etc. The second part was based on a subset of the TPTP dataset used to test the scalability of a theorem prover. This part, in addition to providing a baseline for assessing future enhancements of EPILOG, demonstrates significant performance gains achieved here over EPILOG1, and will facilitate further comparisons with other theorem provers. Moreover, the results show that a reasoner for a highly expressive logic doesn't have to be impractically inefficient compared to a less expressive one¹⁰. It should be kept in mind that in addition to not lagging far behind state-of-the-art performance in FOL theorem provers in their domain of competence, EPILOG is capable of additional modes of reasoning and metareasoning, as shown by the first evaluation. In future work we plan to close the remaining gap between EPILOG and FOL theorem provers, implement a more efficient access schema for knowledge retrieval, implement probabilistic reasoning, provide for uniform handling of generalized quantifiers, and extend the new approach to specialist deployment to all specialists.

⁸ Download available at http://www.cs.miami.edu/~tptp/CASC/J4/Systems.tgz
⁹ Allowing longer times had minimal effect on both systems.
¹⁰ Contrary to the alleged expressivity/tractability tradeoff.

References

Hwang, C., and Schubert, L. 1993. Episodic logic: A situational logic for natural language processing. In Aczel, P.; Israel, D.; Katagiri, Y.; and Peters, S., eds., Situation Theory and its Applications, volume 3. Stanford, CA: Center for the Study of Language and Information. 303-338.
Morbini, F., and Schubert, L. K. 2008. Metareasoning as an integral part of commonsense and autocognitive reasoning. In Metareasoning 08, 155-162.
Ramachandran, D.; Reagan, P.; and Goolsbey, K. 2005. First-Orderized ResearchCyc: Expressivity and Efficiency in a Common-Sense Ontology.
Schaeffer, S.; Hwang, C.; de Haan, J.; and Schubert, L. 1993. EPILOG, the computational system for episodic logic: User's guide. Technical report, Dept. of Computing Science, Univ. of Alberta.
Schubert, L., and Hwang, C. 2000. Episodic Logic meets Little Red Riding Hood: A comprehensive, natural representation for language understanding. In Iwanska, L., and Shapiro, S., eds., Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language. Menlo Park, CA: MIT/AAAI Press. 111-174.
Schubert, L. K.; Schaeffer, S.; Hwang, C. H.; and de Haan, J. 1993. EPILOG: The Computational System for Episodic Logic. User Guide.

doc1

Computational Infrastructure for a Self-Aware Agent

by Fabrizio Morbini

Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy
Supervised by Lenhart K. Schubert Department of Computer Science Arts, Sciences and Engineering School of Arts and Sciences University of Rochester Rochester, New York October 8, 2009

Curriculum Vitae

Fabrizio Morbini received a Laurea in Electronic Engineering from Università degli Studi di Brescia in 2002, with a thesis on extensions to the DISCOPLAN system supervised by Prof. Alfonso Gerevini. He began graduate studies at the University of Rochester in the fall of 2003. He pursued his research on the EPILOG reasoner, revising and extending it to support his work towards a self-aware agent, under the guidance of Prof. Lenhart Schubert. He received his MS from the University of Rochester in 2005.

Acknowledgments

Thanks to Aaron Kaplan for his effort in attempting to clean the original version of Epilog and bringing it under a revision system for the first time. Thanks to Alexandre Riazanov for his help with Vampire and the JJ parser. Thanks to Larry Lefkowitz for his support on Cyc. Thanks to the doctoral committee (Professors James Allen, David Braun, Dan Gildea and Lenhart Schubert) and the chair of the defense (Professor Michael Tanenhaus) for their helpful comments. This material is based upon work supported by grant #IIS-053510 from the National Science Foundation and a gift from Bosch Research and Technology Center (Palo Alto) from 2007-2008. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the above named organizations.

Abstract

Self-awareness is an aspect of consciousness that is highly developed in humans in comparison with other animals. A human being unaware of his or her personal characteristics, of what he or she knows and doesn't know, can do and cannot do, wants and doesn't want, has experienced and is experiencing, etc., would surely be difficult to communicate with naturally. Therefore we believe that consciousness plays a crucial role in building artificial dialog agents with human-level abilities. In this work we describe our effort in building the extension of the COMA/Epilog system, called Epi2Me/Epilog2, aimed at explicit self-awareness. A system is explicitly self-aware if it has a complex self-model, is transparent in its internal workings and explicitly displays self-awareness through its interaction with the users. The new system achieves a more than 100% improvement over the previous version of Epilog in reasoning speed on traditional first-order logic problems, providing at the same time more extensive support for the requirements of explicit self-awareness, including many of the introspective and metareasoning capabilities needed for explicit self-awareness. In addition its design will provide a solid base on which to build future developments.

Table of Contents

Curriculum Vitae
Acknowledgments
Abstract
List of Tables
List of Figures
1 Introduction
2 Concepts of Consciousness and Self-Awareness
   2.1 Definitions
   2.2 Human consciousness
   2.3 Machine consciousness
3 Previous Work
   3.1 SHRDLU
   3.2 Homer
   3.3 CASSIE
   3.4 COMA, Episodic Logic and Epilog1
   3.5 Comparative Evaluation
4 Epi2Me
   4.1 Parser
   4.2 Normalization
   4.3 Interface to Knowledge Bases
   4.4 Unifier
   4.5 Specialist interface
   4.6 Inference
   4.7 Type extraction
   4.8 Storage Framework
   4.9 Question-answering (QA) Framework
   4.10 Summary
5 Evaluation
   5.1
   5.2
   5.3

Blindsight is the pathology in which the patient has blind spots in her field of view because of damage to the visual cortex. But if simple stimuli are presented in these blind spots and the patient is asked questions about properties of the stimuli, she is able to answer with surprisingly high precision.
going on in front of it but it doesnt experience anything (this is debatable in the same way as Searles Chinese room thought experiment). A case of phenomenal consciousness without access consciousness is for example one in which we are experiencing a sound but we are not aware of it. For example, how many times we have heard of a person from out of town coming to visit a friend living in a busy and noisy city and asking the friend how she sleeps with all that noise. The most common answer we get is: Oh, I dont even hear it. But that is actually: Oh, Im phenomenally conscious of it but not access conscious of it. Monitoring consciousness: can be found in three dierent variants: 1. Phenomenal consciousness of ones state (in other words, perception of ones state). 2. Internal scanning. This is a simple functionalist view, an operating system doing its monitoring work falls in this category. 3. Metacognitive denition: conscious state in which one is thinking about the eects of being in that state. (Block, 2002) proposes to unify these three dierent denitions into one: S is a monitoring state if and only if S is phenomenally present in a thought about S (in other words S is a monitoring state if it is conscious (i.e. has a representation) of being phenomenally conscious of S). Given this denition it seems that phenomenal consciousness could be seen as a type of monitoring consciousness because it seems counterintuitive to have a conscious state (in this case, a phenomenal conscious state) without having a representation of it (and having a representation of it makes it fall into the denition of monitoring consciousness). Ned Block then uses an example to shows that it is still conceivable to distinguish phenomenal consciousness and monitoring consciousness (i.e. to be in a phenomenally conscious state without
having a representation of it): imagine two dogs, where Dog1 has only a perceptual state while Dog2 has a perceptual state and a representation of it. Then: It is conceivable for Dog2 to be conscious and Dog1 to be not conscious, this shows that it may be the case that being conscious requires having a representation of the state (i.e. phenomenal consciousness is a subtype of monitoring consciousness). However, even if being conscious of something implies being phenomenally conscious of it, it is plausible that one can be phenomenally conscious without being conscious. Therefore we can say that it is plausible for Dog1 to be phenomenally conscious but not conscious. So, its possible that phenomenal consciousness is distinct from monitoring consciousness. Ned Block concludes by mentioning that the presence of these multiple concepts of consciousness derive from the ambiguity of the concept itself. For some, consciousness is access-consciousness for others its phenomenal consciousness. It could be that consciousness is a combinations of all those 3 types but its useful to avoid confusing the concept of consciousness with any of the single types described above, and vice-versa we should not assume that any single type completely describes consciousness. Mental event: a mental event is any event inside our mind, like having a particular thought. For a materialist mental events are in a direct correspondence with physical events. Physical events are basically neuron rings. Introspection: is the ability to reason about ones own perceptions, knowledge, reasoning and in general any conscious mental event. For example, answering the question Do you know if Clinton is sitting or standing? re-

called neurofeedback. An interesting introduction to neurofeedback is given in (Evans and Abarbanel, 1999). Neurofeedback is used to cure various neurological pathologies like phobias, attention decit, dissociation (also called multiple personality disorder), addiction (for example to alcohol), mood disorder (depression, mania, associated with unbalanced activation of the right and left frontal areas) and coma and other problems caused by open and closed head injuries. The principle of neurofeedback (also called biofeedback) was rst introduced by Joe Kamiya in 1962 and basically consists in training patients to consciously modify particular characteristics of their own electroencephalogram (EEG) that are related to the pathology they are suering. The experiment reported by Kamiya in 1962 was on a patient suering from thanatophobia6. Kamiya cured him using a device that produced an auditory feedback proportional to the quality of the alpha waves7 of the patient EEG. This auditory feedback was used by patients to learn to consciously control their own level of alpha waves. The signicance of neuro6 7
Thanatophobia is the phobia of death or dying.
According to frequency, brain waves (read using EEG) are classified in 6 bands:
Delta: waves with frequency ranging from 0.5 to 4 Hz. They are related to restoring/repairing functions of the brain (for example, they are present during sleep, coma and other brain injuries and pathologies).
Theta: waves with frequency ranging from 4 to 8 Hz. They are associated with pathological conditions of disconnection from reality, but in normal individuals they are related to improvisation and creativity (they are an indicator of activity in the subconscious part of the brain).
Alpha: waves with frequency ranging from 8 to 12 Hz. They are associated with calm states of mind (a lower than normal power of the alpha frequency band is associated with stress, pain, fear, etc.).
Sensory motor rhythm (SMR): waves with frequency ranging from 12 to 14 Hz in the sensory motor cortex. They are associated with muscular activity (their power is inversely proportional to the amount of muscular activity) and mind activity (thinking, learning, etc.). Energy in this band is low, for example, in epileptic patients.
Beta1: waves with frequency ranging from 12 to 16 Hz. They are associated with high-level cognitive functions like thinking, learning, planning, problem-solving, etc.
Beta2: waves with frequency ranging from 16 to 24 Hz. They are normally associated with stress conditions.

(FORALL (X)
  (IMPLIES (AND (#THESIS X)
                (OR (#LONG X)
                    (EXISTS (Y) (AND (#PERSUASIVE Y)
                                     (#ARGUMENT Y)
                                     (#CONTAINS X Y)))))
           (#ACCEPTABLE X)))
The theorem prover, given the goal (#ACCEPTABLE :SAM-THESIS), would try all its knowledge in search of a way to obtain that formula (or, if it uses resolution, it will assert the negation and search for a way to find a contradiction). Note also that there is no straightforward way of naming this formula with something like EVALUATION-OF-THESIS. Winograd instead expresses the evaluation of a thesis for its acceptability as a procedure (executed by the planning module) in the following way:
(DEFTHEOREM EVALUATE
  (THCONSE (X Y) (#ACCEPTABLE $?X)
    (THGOAL (#THESIS $?X))
    (THOR (THGOAL (#LONG $?X) (THUSE CONTENTS-CHECK COUNTPAGES))
          (THAND (THGOAL (#CONTAINS $?X $?Y))
                 (THGOAL (#ARGUMENT $?Y))
                 (THGOAL (#PERSUASIVE $?Y) (THTBF THTRUE))))))
where THOR, THAND are the respective versions of OR and AND that allow for failure driven backtracking. THGOAL identies a goal for the planner to achieve. The theorem says that to evaluate if X is #ACCEPTABLE it should rst decide if X is #LONG and to do so it should try to use (THUSE) two other theorems: CONTENTS-CHECK or (if the previous fails) COUNTPAGES. If that fails it should try to prove that X contains a #PERSUASIVE argument. (THTBF THTRUE) species that in that case the planner should work like a theorem prover in nding if Y is #PERSUASIVE, i.e., it should try everything it can (no THUSEs are given). Another advantage of this structure is that it supports changes to the state of the world by allowing the removal and addition of facts to it. As in STRIPS or PDDL there are positive and negative eects where positive eects are added facts and negative eects are deleted facts; SHRDLUs language can represent this by using the special commands THASSERT and THERASE. Whenever some assertion is made (using THASSERT) or something is removed (using THERASE) other theorems are called to add the consequences of these changes (these theorems are called ANTECEDENT in the case of assertions and ERASING in the other case).
To sum up, SHRDLU has various interesting features like a strong connection between syntax, semantics and inference (planning). Its organization is probably not easily extensible. Our knowledge of it doesnt allow us to give a denitive answer on this topic, but presumably Winograd or others would have undertaken major extensions in the decades that followed, if these were straightforward. There have been various attempts to try to resuscitate SHRDLU but none has been completely successful. In later writing, Winograd takes a quite negative view of the feasibility of human-like understanding by machines (see (Winograd, 1990)). Looking at SHRDLU dialog examples, the most impressive characteristic is the ecacy of the parsing and semantic interpretation. But it is hard to judge whether this quality could be maintained with larger grammars and lexicons. The problem of scalability is a common one, as will be seen in the case of Homer (section 3.2) as well. In addition, its approach to English generation is quite simple and based on patterns and simple transformations of these patterns. On the self-awareness side, the most striking feature of SHRDLU is its ability to give explanations for its actions. But, as said in the comments on the reported demo dialog, this is achieved by a mechanical process of unwinding the goal state, and by reference to a very simple episodic memory, instead of by inference. We will have more to say in section 3.5 about the extent to which SHRDLU, as well as the other three systems to be discussed here, meet the requirements for selfawareness.

currently being made in a dierent project, see for example (Liu and Schubert, 2009). So, architecturally there is nothing particular about COMA itself; the features it has are the features that Epilog itself has. However, Epilog provides a exible and extensible architecture that we believe can be used to meet many of the requirements for explicit self-awareness. In particular, in this extended demo we stressed the interaction with the specialists and the topicalized access to memory. On the consciousness side Epilog has the advantage of using Episodic Logic, which allows for probabilities and temporally related episodes, and this seems necessary for emulating the abilities of human episodic memory. Both these features are very important for pushing Epilogs inferences to a more human level. But much work has to be done to improve COMA/Epilog, we list here some of the more pressing problems we found: A fundamental problem lies in the low level structure of Epilog. It was not designed with modularity and evolution in mind, therefore any change to it is very hard and would likely make it even more convoluted. There is a widespread use of global variables as a mean to communicate information across functions in dierent modules. The code lacks documentation and the functions are not written in a way to make the job of a programmer easier. The extensive documentation in the form of manuals that comes with the code is in several places out of date. Related to the previous point is a general lack of structure of the inference module. We detected bugs both in the unication module (e.g., lack of even basic occurs check) and in the techniques used to approach some inferences, unfortunately the code in its current form is very hard to x. Parsing of EL formulas is accomplished with a combination of if-then rules and global variables that make every modication (e.g., extensions to how
substitutional quantiers and quasi-quotation are currently handled to be able to realize some of the features of explicit self-awareness) to the grammar very cumbersome and prone to errors. The interface between Epilog and its specialists needs revision to allow a higher level of transparency as dictated by explicit self-awareness. In particular, how can we make Epilog aware of the capabilities of its specialists? For example, to answer the question: How many developers developed you? we added specic knowledge to access the right function of the set specialist that computes the cardinality of a set. But what we would like is that Epilog infers that it has to use that particular function of that particular specialist because that is what solves its goal of answering the user question. The approach suggested in (Schubert, 2005) involves the use of the metasyntax already illustrated in section 2.3, to codify knowledge about what conclusions can be drawn from the values produced by certain executable functions. (In the example in (Schubert, 2005) the function was add but it could equally well have been nd-cardinality or nd-super-concepts.) A similar problem concerns the type hierarchy, of which Epilog has no direct knowledge. In fact, even if the type hierarchy contains the knowledge that a ower is a plant, Epilog will not be able to answer a general question like, What do you know about owers? with, Flowers are plants. An additional concern of both type hierarchies and specialist interface is the manual work they require to be setup. In fact neither of them are automatically constructed from explicit knowledge stored in the main knowledge base. This is also one of the reasons for the lack of transparency. Probabilistic inference in unreliable in some cases and will need revision. The mechanism to handle the storage of temporary knowledge, used in particular during assumption making, can be heavy and therefore inappropriate

for a module used often during inference. Introspection, that is at the base of any self-aware agent, is currently partially handled using a specialist written by Aaron Kaplan. The author of this specialist had been forced to select this suboptimal strategy because of the intricacy of the question answering (QA) framework heavily based on global variables that didnt allow for additional questions to be asked within another question (we will refer to this in the coming sections as recursive QA). The ecient retrieval of knowledge is based on the presence of a type hierarchy and is currently geared toward eective forward inference more than toward goal chaining. Therefore when approaching a new problem, time has to be allocated to building the appropriate type hierarchy and in some cases a few inferences necessary to solve a particular question may not be executed. The topical organization of the memory sometime fails because of the particular form of the input (for example in the case of certain disjunctions) or because the user has forgotten to specify the types of the variables in the formula. The variable types are important for topical access to the memory because they are used to decide under which access keys a formula will be indexed. Currently Epilog1 doesnt automatically assign a root-entity-like type to untyped variables. Another problem that can explain some of the previous points is the lack of a extensive set of problems to be used both to periodically check the correctness of the routines and to provide a performance measure to be used in comparison with future revisions of Epilog and with other theorem provers.

Comparative Evaluation

In this section we will compare the four systems just described. After comparing their characteristics we will use the criterion of explicit self-awareness dened in section 2.3 to classify them. SHRDLU and Homer have in common their planning capability. Homers planner is also able to re-plan whenever it realizes that a previously computed plan fails because, for example, the world changed from what the agent knew. The re-planning capability is fundamental for acting in a dynamic environment. (One of the simplifying assumptions in SHRDLU was that it had complete knowledge of the state of the world and that SHRDLU was the only actor in that world.) CASSIE has a basic ability to plan but we dont have information on how scalable or eective it is. Currently, COMA has no planning capability but its ability to express goals and actions sets the stage for planning. Adding a planning capability as a specialist or as an external system or directly as part of the inference engine is in our plans for future extensions that will be merged in the Epi2Me agent (see chapter 4). Homer and COMA have an explicit ability to represent episodes and their time relations. SHRDLU instead has a very limited ability both on the representation side (i.e., in representing time relations) and on the access side (i.e., the accessibility of episodic knowledge to reasoning). Also CASSIEs ability to handle time relations is limited; currently CASSIE doesnt handle any reference to a future time. Episodic memory as pointed out in section 3.2 is an important part of our memory system and it is intimately related to consciousness. SHRDLU is the only system able to explain its own actions. This ability, as said, is more a product of an ad-hoc mechanism than a product of understanding and reasoning. In Homer the capability to explain its own actions is limited by the implementation choice of keeping the planners episodic memory separated from

COMA: currently it doesnt belong to any of the given classes. Since it doesnt plan, it is not fully goal-oriented (even if its answering ability could be considered as partially goal oriented) and it is not able to explain its answers.19 But it has certain features (like inference infrastructure and episodic memory) necessary for explicit self-awareness, which we set out to exploit in building COMA extensions. Considering the knowledge requirements listed in (Schubert, 2005) and (McCarthy, 1995), we make the following observations. Logic: All four systems are based on logical representation languages. Events: Homer and COMA, as already noted, have good support for expressing time relations and episodic memory. Attitudes and autoepistemic reasoning: SHRDLU does not explicitly reason about beliefs, perceptions, etc., but it has thorough knowledge of its closed world. Homer currently shows basic abilities in assessing what it knows. COMA includes a belief specialist based on (Kaplan, 2000) but it is not fully functional. For example, there are problems with positive and negative introspection. We dont have any basis for assessing the ability of CASSIE to reason about what it knows, but in principle it should be able to do it, at least at a basic level. Generic knowledge: all four systems have very limited knowledge bases highly optimized for the task/demo for which they were built. But COMA is certainly the one that makes the job of adding generic knowledge easiest because of the use of Episodic Logic as representation language. Metasyntactic devices: None of the systems is currently able to reason about the syntax an answer should have, although EL does have a syntax for
It does, however, record what premises led to the conclusions it reaches.
quotation and substitutional quantication that is used to encode axiom schemas. Knowledge categorization: among the four, only COMA has topicalized access to its knowledge. Summarization: only Homer includes a primitive capacity for summarization as one of its reexive processes. Actions explanation: only SHRDLU can explain its actions, but only because of an ad-hoc process and not because of its general inference abilities. Utility function: none has a utility-driven behavior. Knowledge of capabilities and limitations: all four systems are very poor in this respect. In CASSIE this feature has not been investigated. The other three have capabilities of which their reasoning system is unaware: SHRDLU and Homer dont know about their planning capabilities and COMA is not explicitly aware of its specialists. System transparency: CASSIE seems to be very transparent: one reason could be that currently it is simpler than the other three systems. Each of the other three systems performs steps that are not accessible to reasoning: the planners episodic memory in Homer, SHRDLUs action representation and COMAs specialists are a few examples. Table 3.6 is a summary of the comparison described in this section. A green color means that the feature is considered a step in the direction of explicit selfawareness. Orange means that singnicant extensions/changes must be done.

Specialist interface

As mentioned earlier, Epilog1 required that predefined flags be attached to those predicates and functions that are connected to specialists. Specialists are modules that carry out inferences on a restricted domain much faster than if the same inferences were carried out by the general inference engine. See Figure 3.7 for a representation and listing of the specialists present in Epilog1. This type of interface poses some problems with respect to transparency. One
aspect, from the point of view of the programmer/user, is that it is hard to follow how and when a particular specialist is called. However, the transparency problem is particularly evident with respect to making the agent self-aware. In fact, the inference engine has no control over which specialist to call or when to call it, as specialist execution is automatic, unconscious. In Epilog2, we redesigned the interface to specialists with the goal of making it more transparent. The basic idea was introduced in (Schubert, 2005) and is based on the use of a single special function, called Apply, known to the inference engine as having a special meaning. If a key contains the Apply function, the inference engine evaluates it. The evaluation consists in executing the first argument of the Apply function, which must be a valid Lisp function, applied to the remainder of the arguments. To make the inference engine aware of a specialist functionality, represented by a particular Lisp function, it is sufficient to assert attachment axioms that specify when it is appropriate to use that specialist function. This knowledge will be used, if required, by the inference engine, like any other knowledge, to solve the given questions. For example, the following is the attachment axiom that connects the introspection specialist with the general inference engine: (∀wff w [w without-free-vars] [[(Apply apply-fn-knownbyme? w) = yes] [Epi2Me know (that w)]]) Note the condition [w without-free-vars] that specifies when the specialist can be called. Basically, the formula says that to evaluate whether Epi2Me knows w, if w contains no free variables, the Lisp function apply-fn-knownbyme? should be used on the argument w.
Currently we do no extra processing of the attachment axioms to make the calls to specialists more efficient. One possibility for increasing efficiency without decreasing transparency is to automatically attach special information to these axioms, so as to limit the keys usable for goal chaining to those in the consequent of the implication (e.g., [Epi2Me know (that w)]) and to explicitly indicate the preconditions for the execution of the specialist (e.g., [w without-free-vars]).
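As a rough illustration of the second idea, the precondition could be stored as an executable test and checked before the specialist is invoked. This is only a hypothetical sketch of the suggested preprocessing; the variable convention (symbols beginning with ?) and the function names are assumptions, not Epilog2 code.

(defun contains-free-vars-p (expr)
  "Toy test: treat symbols whose names start with ? as free variables."
  (cond ((symbolp expr) (char= (char (symbol-name expr) 0) #\?))
        ((consp expr) (some #'contains-free-vars-p expr))
        (t nil)))

(defun maybe-call-specialist (specialist-fn wff)
  "Invoke SPECIALIST-FN only when WFF satisfies the without-free-vars
precondition; otherwise return :UNKNOWN so general inference takes over."
  (if (contains-free-vars-p wff)
      :unknown
      (funcall specialist-fn wff)))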
The inference module implements the rules of inference designed for EL and described in (Schubert and Hwang, 2000). We made the following extensions to the rules given in that paper:
If a formula contains equivalences, before it can be used in inference the equivalences are oriented (i.e., transformed into normal implications) according to the polarity of the key to be matched in the other formula. For example, given the two formulas (∀ x [[x Foo] ⇔ [x Bar]]) and [C1 Bar] to be used for goal chaining, the first formula would be changed into (∀ x [[x Foo] ⇒ [x Bar]]) to generate the new goal [C1 Foo]. If instead the second formula is (not [C1 Bar]), the first would be changed into (∀ x [[x Bar] ⇒ [x Foo]]) to generate the goal (not [C1 Foo]).
The soundness conditions originally given were sufficient to guarantee that the result didn't contain any free variables, but they are not necessary conditions. Now an inference is avoided only if it would actually produce free variables.
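The orientation step can be pictured with the following sketch, which assumes a simple prefix representation (<=> A B) for equivalences rather than Epilog2's actual internal form:

(defun orient-equivalence (equiv goal-polarity)
  "EQUIV is (<=> A B).  Return the implication to use for goal chaining
on the B side: (=> A B) for a positive goal key, (=> B A) for a negated one."
  (destructuring-bind (op a b) equiv
    (assert (eq op '<=>))
    (ecase goal-polarity
      (:positive (list '=> a b))     ; goal B becomes subgoal A
      (:negative (list '=> b a)))))  ; goal (not B) becomes subgoal (not A)

;; (orient-equivalence '(<=> (Foo x) (Bar x)) :positive) ; => (=> (FOO X) (BAR X))
;; (orient-equivalence '(<=> (Foo x) (Bar x)) :negative) ; => (=> (BAR X) (FOO X))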

Figure 4.8: The dashed subgoal is not added if f1maj = f2maj, key1maj = key2maj, and key3sg was part of f1maj. This avoids the creation of an inference loop that produces a cascade of ever-growing subgoals.
2. Loop avoidance: subgoals that would create loops in the inference graph should not be added, since they can impair the performance of the theorem prover. Two types of loop creation are currently avoided:
A subgoal is not added if it would duplicate an ancestor. Note that, because of assumption making, one subgoal is considered equivalent to another only if their formulas as well as their associated KBs are the same (a sketch of this check is given below).
Cascades of ever-growing subgoals are avoided by blocking the inferences that would create a subgoal g by goal chaining between: 1) the same retrieved wff and key used to produce g's parent, gp, and 2) a key in gp that comes from the retrieved wff used to create gp. Figure 4.8 contains a representation of this case.
Agendas: two agendas are used in the QA framework, one to order the subgoals pertaining to a question and one to order the retrieval actions within a subgoal (for each subgoal, there is a retrieval action for each key contained in the subgoal's formula). The agenda system defines an interface (like the knowledge module) and currently provides three agenda schemas: one list-based, one AVL-tree-based, and a third that is hierarchy-based.
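Before turning to the details of the agendas, the first loop-avoidance test above can be pictured with a minimal sketch; the subgoal record and the use of EQUAL to compare formulas and knowledge bases are assumptions made only for illustration:

(defstruct subgoal formula kb parent)

(defun duplicates-ancestor-p (formula kb parent)
  "Return T if a prospective subgoal with FORMULA and KB would duplicate
an ancestor, i.e., some subgoal on the chain starting at PARENT has the
same formula and the same associated knowledge base."
  (loop for g = parent then (subgoal-parent g)
        while g
        thereis (and (equal (subgoal-formula g) formula)
                     (equal (subgoal-kb g) kb))))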
Each agenda (i.e., an instantiation of one of the agenda schemas provided) has a particular evaluation function associated with it that estimates the value/priority of each element in the agenda. This makes it possible to order the agenda's content. Some evaluation measures may require an update of the value given to previously inserted elements (and therefore a re-sorting) every time a new element is inserted, which can be inefficient. The hierarchical agenda overcomes this problem by associating a hierarchical structure with its elements. To find the best element, one traverses this structure, deciding at each element whether to stop or to proceed along one of its children, based on the evaluation measure associated with the current node and its children. When an element is added to the agenda, it is added as a child of a preexisting element and, if necessary, a change in the evaluation measure is propagated to its ancestors. The hierarchical agenda structure is also used in selecting which subgoal to use for the next inference; in that case the hierarchical structure is already present: it is the inference graph. The heuristic used to evaluate the importance of each subgoal, g, is critical to the performance of the theorem prover; it is currently based on two counteracting quantities:
Costs: 1) the size of the formula associated with g relative to the size of the biggest formula among the siblings of g; 2) the percentage of times that g or a descendant of g was selected for inference but no improvement was obtained.
Gains: 1) the percentage of g that is solved (this is greater than 0 only for a subgoal that at some point can be split, e.g., a conjunction); 2) the percentage difference between the size of g's formula and the size of the smallest formula among the descendants of g whose solution would imply a solution of g; for a conjunction of subgoals, their average size is considered.
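The traversal of the hierarchical agenda can be pictured with the following sketch; the node structure and the exact stopping rule (stop when the current node scores at least as well as all of its children) are illustrative assumptions rather than Epilog2's implementation:

(defstruct agenda-node item children)

(defun best-agenda-node (node score-fn)
  "Descend from NODE along the highest-scoring child until the current
node scores no worse than any of its children, and return that node."
  (let ((best-child
          (and (agenda-node-children node)
               (reduce (lambda (a b)
                         (if (>= (funcall score-fn a) (funcall score-fn b)) a b))
                       (agenda-node-children node)))))
    (if (and best-child
             (> (funcall score-fn best-child) (funcall score-fn node)))
        (best-agenda-node best-child score-fn)
        node)))

With this scheme an insertion only needs to update scores along one root-to-node path, rather than re-sorting the whole agenda.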

a performance evaluation based both on a publicly available set of commonsense questions (in particular, a set focused on self-awareness) and on a more standardized set of problems for FOL theorem provers. This evaluation is described in detail in chapter 5.

Evaluation

The natural way to test COMA's capabilities, both on the reasoning front and on the representation front, is to use a publicly available set of commonsense problems (selecting a subset of questions oriented towards self-awareness). Among the several collections that are available, we opted for the set of problems contained in the ResearchCyc knowledge base. This is a set of more than 1600 problems that provides both the English formulation of each question and its translation into CycL. In addition to the abundance of interesting and challenging questions, another advantage of using this dataset is that it allows comparison between our interpretation of each question and Cyc's. The last point highlights the problem of comparing systems that use a dataset based on English for their evaluation. Because a question expressed in English can be formalized in many ways and at various levels of detail, it is very difficult to use the results obtained to compare different systems. This lack of a dataset expressed in logic to facilitate comparisons is not easily solved, given the lack of agreement on a single logical language well-suited for NL; and even if such a language existed, each English sentence could still be interpreted in many ways
This dataset also contains most of the relevant questions in Vaughan Pratt's list, available at http://boole.stanford.edu/pub/cyc.report. The syntax of CycL is documented at http://www.cyc.com/cycdoc/ref/cycl-syntax.html.
and at different levels of detail. Therefore, to give a more complete picture of the performance of the Epilog system and to facilitate comparisons with other systems, we decided to evaluate it as well against the widely used TPTP dataset for FOL theorem provers. This puts the basic performance of the reasoner in perspective with respect to the state of the art in FOL theorem provers. The evaluation on Cyc's commonsense test cases instead tests the features that distinguish Epilog from a traditional FOL theorem prover. The chapter is organized as follows: first we describe the evaluation using a small selection of questions that are either hand-built or selected from the Cyc dataset mentioned above; the evaluation using the TPTP dataset follows in section 5.2. We conclude with a summary of the results.

The question is answered negatively by using the knowledge that to be able to put out a fire one must use a flame-suppressant material, and gasoline is not a flame-suppressant material. The seventh question is Cyc's question named #$CST-DoesCycHaveABiologicalFather, which in English is "Do you (Cyc) have a biological father?". In Cyc the question is represented as (thereExists ?F (biologicalFather Cyc ?F)). We expressed the question in EL as follows:
(∃ e0 [e0 at-about Now] (∃ y [[Epi2Me (have-as ((attr biological) father)) y] ** e0]))
In this question, have-as is a subject-adding operator that takes an n-ary predicate as argument and returns an (n+1)-ary predicate. In this case ((attr biological) father) is the monadic predicate true of all individuals that are biological fathers, and (have-as ((attr biological) father)) is the binary predicate that is true of all pairs of individuals in which the object of the predicate is the father of its subject. The relevant knowledge for this example is:
Epilog is an artifact.
[Epi2Me artifact]
No artifact is a natural object.
(∀ x [x artifact] (not [x natural-obj]))
A creature is a natural object.
(∀ x [x creature] [x natural-obj])
All creatures, and only creatures, have a biological father.
(∀ x ([x creature] ⇔ (∃ y (∃ e [[x (have-as ((attr biological) father)) y] ** e]))))
The question is answered negatively by using the knowledge that Epilog is an artificial thing and therefore not a natural object. Further, it is known that only creatures can have a biological father and that creatures are a subtype of natural objects. The eighth question corresponds to the Cyc question named #$CST-AnimalsDontHaveFruitAsAnatomicalParts-HypothesizedQueryTest. In Cyc the question is expressed as (implies (isa ?ANIMAL Animal) (not (relationInstanceExists anatomicalParts ?ANIMAL Fruit))). In EL we expressed the question (more naturally, we claim) as:
(∀ e0 (No x [x animal] [[x (have-as anatomical-part) (K fruit)] ** e0]))
The relevant knowledge for this example is:
Plant stuff is not animal stuff.
(∀ x [x plant-stuff] (not [x animal-stuff]))
Fruits are made of plant stuff.
[(K fruit) made-of (K plant-stuff)]
Animals are made of animal stuff.
[(K animal) made-of (K animal-stuff)]
If an individual x is made of (kind of stuff) p, and (kind of stuff) q is a supertype of p, then x is made of q.
(∀ x (∀pred p [x made-of (K p)] (∀pred q (∀ y [y p] [y q]) [x made-of (K q)])))
If an individual x is made of (kind of stuff) p, and (kind of stuff) q is disjoint from p, then x is not made of q.
(∀ x (∀pred p [x made-of (K p)] (∀pred q (∀ y [y p] (not [y q])) (not [x made-of (K q)]))))
If a type p is made of (kind of stuff) q, then all individuals of type p are made of q.
(∀pred p (∀pred q [[(K p) made-of (K q)] ⇒ (∀ y [y p] [y made-of (K q)])]))
Every part is made of the material of the whole.
(∀ w (∀ e (∀ p ([[w (have-as anatomical-part) p] ** e] ⇒ (∀ wm [w made-of wm] [p made-of wm])))))
We decided to answer the question by saying that all parts are made of the same substance of which the whole is made. However, the case of artificial parts/organs is not captured by this knowledge. One could improve it by saying that parts must be made of materials compatible with the material of which the whole is made. This would work for parts made of titanium or other durable, inert materials. However

(∀ x [x golf-club]
  [(some e [[x (pasv ((adv-a (by (K hand))) swing))] ** e]) ∧ [x solid] ∧ [x phys-obj] ∧ (∃ w [[x weighs w] ∧ [w (K ((num 2) pound))]])])
For any striking weapon, one person can attack another with the weapon, by striking him or her with it.
(∀ x [x ((nn striking) weapon)] (∀ y [y person] (∀ z [z person] (∃ e [[y ((adv-a (by-means (Ka ((adv-a (with-instr x)) (strike z))))) ((adv-a (with-instr x)) (attack z)))] ** e]))))
There is a golf-club.
(∃ x [x golf-club])
(by-means modification is monotone) If an agent does some action by means of another action, then he does the first action.
(∀pred p (∀ x (∀ y (∀ e [[x ((adv-a (by-means y)) p)] ** e] [[x p] ** e]))))
This question is answered positively by using the knowledge that golf clubs are heavy and solid and can be swung by a person, and that objects with those properties can be used to attack another person. In Cyc, this test case is solved by using the following knowledge:
(implies (and (genls ?SPEC ?TYPE) ((TypeCapableFn behaviorCapable) ?TYPE ?ACT-TYPE ?ROLE)) ((TypeCapableFn behaviorCapable) ?SPEC ?ACT-TYPE ?ROLE))
((TypeCapableFn behaviorCapable) SportsPoundingDrivingImplement PhysicallyAttackingAnAgent deviceUsedAsWeapon)
(genls GolfClub SportsPoundingDrivingImplement)
Note that the Cyc axioms used are difficult to express in English, say nothing about the type of agent involved, and collapse complex English terms into logically primitive concepts. A variant of this last question is obtained when "you" is interpreted literally to mean Epi2Me itself. In this case the question becomes:
(∃ e0 [e0 at-about Now] (∃ x [x person] (∃ y [y golf-club] [[Epi2Me (able (Ka ((adv-a (with-instr y)) (attack x))))] ** e0])))
The knowledge relevant to this question is:
To be able to attack somebody, one has to be able to do a physical activity.
(∀ e (∀ x (∀ y (∀ z [[[x able (Ka ((adv-a (with-instr y)) (attack z)))] ** e] ⇒ [[x able (Ka physical-activity)] @ e]]))))
If Epi2Me is able to do what Epi2Me considers a major activity, then Epi2Me will know that it is able to do it.
(∀pred x [(Ka x) (major activity)] (∀ e [[[Epi2Me able (Ka x)] @ e] ⇒ [(that [[Epi2Me able (Ka x)] @ e]) knownbyme]]))
Physical activities are a major kind of activity.
[(Ka physical-activity) (major activity)]
In this case the question is answered negatively by using introspection and the closure axiom (the second row in the previous table). The axiom states that Epi2Me's knowledge of what Epi2Me is able to do is complete with respect to major abilities. Furthermore, a physical activity is a major kind of activity. But the introspective QA will not be able to confirm that Epi2Me can do a physical activity, and that means that it is not able to do a physical activity, thus ruling out its being able to attack somebody.
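The introspective step can be illustrated with the following toy sketch, in which the list of known abilities, the function names, and the yes/no/unknown return convention are assumptions made only to show the closed-world flavor of the closure axiom:

(defparameter *known-abilities* '((Ka read) (Ka reason))
  "Toy list of action kinds Epi2Me knows itself to be able to do.")

(defun knownbyme-able-p (action-kind)
  "Introspective lookup: is ACTION-KIND among the recorded abilities?"
  (and (member action-kind *known-abilities* :test #'equal) t))

(defun able-p (action-kind majorp)
  "For a major ability, failure of the introspective lookup licenses a
negative answer (closure axiom); otherwise the answer stays UNKNOWN."
  (cond ((knownbyme-able-p action-kind) 'yes)
        (majorp 'no)
        (t 'unknown)))

;; (able-p '(Ka physical-activity) t) ; => NO, as in the answer above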

Evaluation using TPTP

Veloso, Manuela M. and Subbarao Kambhampati, editors. 2005. Proceedings, The Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference, July 9-13, 2005, Pittsburgh, Pennsylvania, USA. AAAI Press / The MIT Press.
Vere, Steven. 1983. Planning in time: Windows and durations for activities and goals. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(3):246-267.
Vere, Steven and Timothy Bickmore. 1990. A basic agent. Computational Intelligence, 6(1):41-60.
Winograd, Terry. 1972. Understanding natural language. Cognitive Psychology, 3(1):1-191.
Winograd, Terry. 1990. Thinking machines: Can there be? Are we? In Derek Partridge and Yorick Wilks, editors, The Foundations of Artificial Intelligence: A Sourcebook, pages 167-189. Cambridge University Press.
