L1. Introduction of AI course


1 L1. Introduction of AI course
About this course. How do you take this course? What is AI? Take 10 minutes to look up the words and terms in this file in a dictionary, and write down their translations on a sheet of A4 paper.

2 Objective of this course
Artificial intelligence (AI) is the study of ideas that enable computers to be intelligent. The goals of the field of artificial intelligence are to make computers more useful and to understand the principles that make intelligence possible. This course will cover the following topics: introduction to AI and intelligent agents, problem solving and search methods, knowledge representation, planning, reasoning, reasoning under uncertainty, and learning mechanisms. Upon successful completion of this course, the student should be able to (1) understand the basic ideas of artificial intelligence, (2) master essential techniques of artificial intelligence, and (3) apply these techniques to simple problem solving or intelligent control.

3 Each Lecture (schedule)
Taking attendance / dictionary work (10 minutes), lecturing, and quiz time (sometimes).
Evaluation Policy: Attendance: % (15x2). Final Exam: %.
Requirements: Come to the lecture on time (do not be late). Do not talk while I am lecturing (no chatting during class). Bring a sheet of A4 paper for answering quizzes.

4 Text books
[1] "Artificial Intelligence – A Modern Approach", Stuart Russell and Peter Norvig, Prentice Hall, ISBN (English version).
[2] "エージェントアプローチ人工知能", Stuart Russell and Peter Norvig, translation supervised by Koichi Furukawa, Kyoritsu Shuppan, ISBN (Japanese version).
References
[1] "Constructing Intelligent Agents with Java", Joseph P. Bigus and Jennifer Bigus, Wiley Computer Publishing.
[2] "Artificial Intelligence", 3rd Edition, Patrick Henry Winston, Addison Wesley, ISBN

5 Suggested ways of study
Pre-study (preparation): 1. Read the lecture notes and the textbook "エージェントアプローチ人工知能". 2. Consult a dictionary.

6 What is AI (Artificial Intelligence)?

Understand: The field of AI attempts to understand intelligent entities. Apply: The study of AI aims to apply constructed intelligent entities to computers or machines. Impact: Computers with human-level intelligence would have a huge impact on our everyday life.

7 Definition of AI "The exciting new effort to make computers think … machines with minds, …" (Haugeland, 1985) "The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985) "Activities that we associate with human thinking, activities such as decision-making, problem solving, learning …" (Bellman, 1978) "The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992) "The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990) "A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990) "The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991) "The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993) In conclusion, they fall into four categories: systems that think like humans, act like humans, think rationally, or act rationally. What is your definition of AI?

8 AI Foundations
AI inherited many ideas, viewpoints and techniques from other disciplines:
Philosophy and Psychology - investigating the human mind; theories of reasoning and learning.
Linguistics - the meaning and structure of language.
Mathematics - theories of logic, probability, decision making and computation.
Computer Science - makes AI a reality.

9 History
- Study of intelligence: For over 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, and should, be done.
- Appearance of computers: Computer projects were started in the 1940s. Early examples: chess programs, simulation of 40 neurons, … Great expectations: "Electronic Super-Brain", "Faster Than Einstein", …
- Study of AI: It was formally initiated in 1956. (A two-month workshop organized by McCarthy brought together US researchers, and all agreed on McCarthy's proposed name for the field: AI.) Lisp, proposed by McCarthy, became the dominant AI programming language. Early complete AI systems include the Logic Theorist, the General Problem Solver, and the Geometry Theorem Prover.

10 Software intelligent agents
Collaborative agents, interface agents, mobile agents, information agents, reactive agents, smart agents, multi-agents, hybrid agents, heterogeneous agents, ……

11 Some development environments and tools
Agent Building Environment (ABE) -- a developer's toolkit product (alpha). JATLite (Java Agent Template, Lite) -- a package of programs written in the Java language that allows users to quickly create new software "agents" that communicate robustly over the Internet. Jess (Java Expert System Shell) -- a rule engine and scripting environment written entirely in Sun's Java language.

12 AI became an industry in the 1980s
* A commercial expert system, R1, began operation at Digital Equipment Corporation in 1982.
* Japan proposed the 5th Generation Computer (intelligent computer) project, a 10-year plan, in 1986.
News: The 8 participating companies are Hitachi, NEC, Fujitsu, Toshiba, Sharp, Oki Electric, Mitsubishi Electric and Matsushita Electric Industrial. Mitsubishi Electric in May became the first firm to commercialize a product developed under the 5th Generation Computer Project, which aims at developing AI and software technology. Japanese efforts to develop artificial intelligence -- the use of computers to draw inferences and make judgments -- are gaining momentum, as 19 major companies have recently established the Artificial Intelligence Joint Research Society and the Artificial Intelligence Center.

13 Robots: Sony says the next decade is the age of the robot
Robots -- working for Japan's future? That is one goal of the Japanese government's $37.7 million Humanoid Robotics Project (HRP), which aims to market within a few years robots that can operate power shovels, assist construction workers and care for the elderly. In the process, a new multibillion-dollar Japanese industry could be born. Robot competitions/exhibitions around the world: K'NEX K-bot World Championships; "ROBODEX2002", Pacifico Yokohama Exhibition Hall, Minatomirai, Nishi-ku, Yokohama-city, Kanagawa, Japan; Asimo in New York as a stockbroker.

17 IEEE Spectrum Latest News:
Autonomous Vehicle on the Road, Ultrasensitive Artificial Skin, and more


21 Quiz: Why and how can a robot play soccer?
Why and how can a robot help sick kids? Why and how can a driverless vehicle drive from Parma to Shanghai? (Give me the answer to one of the above questions, in Japanese.)

22 L2. Problem Solving Using Search
The PAGE description for designing an agent. Search strategies.

23 PAGE: Percepts, Actions, Goals, Environment
Fig. 1. A generic agent diagram: the environment provides percepts through sensors (eyes, cameras, …); the agent chooses actions carried out by effectors (hands, motors, …).
Example: the taxi driver agent and its PAGE description.
Percepts: cameras, speedometer, GPS, sonar.
Actions: steer, accelerate, brake.
Goals: arrive safely at the destination.
Environment: traffic lights, other traffic, pedestrians; in Japan.
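The percept-to-action loop in Fig. 1 can be sketched as a tiny Java interface. This is only an illustrative sketch: the names Agent, act, and the string-based percepts and actions are assumptions for this example, not part of any course library.

```java
// Minimal sketch of the generic agent loop from Fig. 1.
// The Agent interface and the taxi rule below are illustrative assumptions.
interface Agent<P, A> {
    A act(P percept);  // map the latest percept to an action
}

public class AgentDemo {
    // A trivial taxi-like agent: brake on a red light, otherwise keep steering.
    static final Agent<String, String> taxi =
        percept -> percept.equals("red light") ? "brake" : "steer";

    public static void main(String[] args) {
        System.out.println(taxi.act("red light"));   // brake
        System.out.println(taxi.act("clear road"));  // steer
    }
}
```

A real taxi agent would of course map much richer percepts (camera frames, GPS) to actions, but the loop shape is the same.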

24 More examples: the vacuum-cleaning agent

Percepts: touch sensor, photo sensor, infrared sensor.
Actions: go forward, turn right by 90°, turn left by 90°, suck up dirt, and turn off.
Goals: clean up the room.
Environment: a grid of squares that may contain obstacles (wall or furniture), dirt, or open space (nothing).
Fig. 2. The vacuum-cleaning agent and its PAGE description.
Notes: The touch sensor checks whether the machine has bumped into something. The photo sensor checks whether there is dirt there. The infrared sensor checks whether the agent is in its home location.

25 Quiz: If you design an agent for a washing machine, what are its P, A, G, E descriptions?

26 Problem types
Deterministic, accessible => single-state problem: the agent's sensors give it enough information to tell exactly which state it is in. Technique: search.
Deterministic, inaccessible => multiple-state problem: the agent knows all the effects of its actions but has limited access to the world state, so it must reason about the sets of states that it might get to. Technique: search + reasoning.
Nondeterministic, inaccessible => contingency problem: there is no fixed action sequence that guarantees a solution to the problem; the agent must use its sensors during execution. The solution is a tree or policy, not a single sequence of actions. Technique: interleave search (dynamically acquiring information) + execution.
Unknown state space => exploration problem: the agent must experiment, gradually discovering what its actions do and what sorts of states exist. Technique: experiment + discovery.
(From top to bottom, the agent starts with progressively less knowledge.)

27 Define a problem
A problem is defined by four items:
initial state: the initial state of the world.
operators: a set of possible actions.
goal test: checks whether a state is a goal state.
path-cost function: the sum of the costs of the individual actions along the path from one state to another.
Notes: The state space is the set of all states reachable from the initial state by any sequence of actions. A solution is a path (a sequence of operators) from the initial state to a state that satisfies the goal test. How do we find a solution? Search.
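The four items above can be captured as a small Java interface. This is a sketch under assumed names (Problem, successors, goalTest), not code from the course materials; a trivial counter problem shows how a concrete problem plugs in.

```java
import java.util.List;

// Sketch of the four-item problem definition. All names are assumptions.
interface Problem<S> {
    S initialState();
    List<S> successors(S state);      // all states one operator away
    boolean goalTest(S state);
    default int stepCost(S from, S to) { return 1; }  // uniform path cost
}

// A toy instance: states are integers, operators are +1 and -1, goal is 3.
public class CounterProblem implements Problem<Integer> {
    public Integer initialState() { return 0; }
    public List<Integer> successors(Integer s) { return List.of(s - 1, s + 1); }
    public boolean goalTest(Integer s) { return s == 3; }

    public static void main(String[] args) {
        Problem<Integer> p = new CounterProblem();
        System.out.println(p.goalTest(3));  // true
    }
}
```

The search algorithms on the following slides only need this interface: they repeatedly expand successors and apply the goal test.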

28 Some examples - e.g. 1: The 8-puzzle
[Figure: a start state and the goal state of the 8-puzzle on a 3x3 board.]
state: the location of each of the eight tiles in one of the nine squares.
operators: the blank tile moves left, right, up, or down.
goal test: the state matches the goal state.
path-cost function: each step costs 1, so the total cost is the length of the path.
Can you write a Java program that can find a solution, i.e. a path from the initial state to the goal state? Using a tree structure, can you write down the state space?

29 Some examples - e.g. 2: The vacuum world with 2 squares
[Figure: the eight possible states, numbered 1-8, combining the agent's position with the dirt in each square.]
state: one of the eight states above.
operators: move left, move right, suck.
goal test: no dirt left in any square.
path-cost function: each action costs 1.
Can you write a Java program that can find a solution, i.e. a path from any initial state to a goal state?
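One possible answer to the quiz above can be sketched with breadth-first search over the eight states. The state encoding (position plus two dirt flags) and all names are assumptions of this sketch, not the course's own solution.

```java
import java.util.*;

// Breadth-first search in the 2-square vacuum world.
// State = (agent position: 0 = left, 1 = right; dirt in left; dirt in right).
public class VacuumSearch {

    record State(int pos, boolean dirtL, boolean dirtR) {}

    static State apply(State s, String action) {
        switch (action) {
            case "Suck":  return s.pos() == 0 ? new State(0, false, s.dirtR())
                                              : new State(1, s.dirtL(), false);
            case "Left":  return new State(0, s.dirtL(), s.dirtR());
            case "Right": return new State(1, s.dirtL(), s.dirtR());
            default: throw new IllegalArgumentException(action);
        }
    }

    // Returns a shortest action sequence that removes all dirt.
    static List<String> solve(State start) {
        Map<State, List<String>> paths = new HashMap<>();  // state -> path to it
        Deque<State> queue = new ArrayDeque<>();
        paths.put(start, List.of());
        queue.add(start);
        while (!queue.isEmpty()) {
            State s = queue.remove();
            if (!s.dirtL() && !s.dirtR()) return paths.get(s);  // goal test
            for (String a : List.of("Suck", "Left", "Right")) {
                State next = apply(s, a);
                if (!paths.containsKey(next)) {    // visit each state only once
                    List<String> p = new ArrayList<>(paths.get(s));
                    p.add(a);
                    paths.put(next, p);
                    queue.add(next);
                }
            }
        }
        return null;  // no solution (cannot happen in this world)
    }

    public static void main(String[] args) {
        // Both squares dirty, agent on the left: Suck, Right, Suck.
        System.out.println(solve(new State(0, true, true)));
    }
}
```

Because BFS explores states level by level, the first goal state reached is always via a shortest action sequence.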

30 Breadth-first search
The breadth-first search algorithm searches a state space by constructing a hierarchical tree structure consisting of a set of nodes and links. The algorithm defines a way to move through the tree structure, examining the values at the nodes. From the start, it looks at each node one edge away. Then it moves out from those nodes to all nodes two edges away from the start. This continues until either the goal node is found or the entire tree is searched. (In other words: examine every node at one level, and when that level is finished, repeat one level deeper.)
The algorithm follows:
Create a queue and add the first node to it.
Loop:
If the queue is empty, quit.
Remove the first node from the queue.
If the node contains the goal state, then exit with the node as the solution.
For each child of the current node: add the new state to the back of the queue.

31 An example:

32 An example of breadth-first search: Rochester to Wausau
[Map of cities: InternationalFalls, GrandForks, Bemidji, Duluth, Fargo, GreenBay, Minneapolis, St.Cloud, Wausau, LaCrosse, Madison, Milwaukee, Rochester, Sioux Falls, Dubuque, Rockford, Chicago]
[Rochester] -> [Sioux Falls, Minneapolis, LaCrosse, Dubuque] -> [Minneapolis, LaCrosse, Dubuque, Fargo, Rochester] -> [LaCrosse, Dubuque, Fargo, Rochester, St.Cloud, Duluth, Wausau, LaCrosse, Rochester] -> …… (after 5 goal tests and expansions) -> [Wausau, LaCrosse, Rochester, Minneapolis, GreenBay, ……]
Finally, we get to Wausau and the goal test succeeds.

33 A demo: source code for your reference
public SearchNode breadthFirstSearch(SearchNode initialNode, Object goalState) {
    Vector queue = new Vector();
    queue.addElement(initialNode);
    initialNode.setTested(true);  // test each node once
    while (queue.size() > 0) {
        SearchNode testNode = (SearchNode) queue.firstElement();
        queue.removeElementAt(0);
        testNode.trace();  // display trace information
        if (testNode.state.equals(goalState)) return testNode;  // found it
        if (!testNode.expanded) {
            testNode.expand(queue, SearchNode.BACK);  // children go to the back of the queue
        }
    }
    return null;  // queue exhausted: no solution found
}

34 Depth-first search
The depth-first algorithm follows a single branch of the tree down as many levels as possible until we either reach a solution or a dead end. It searches from the start or root node all the way down to a leaf node. If it does not find the goal node, it backtracks up the tree and searches down the next untested path until it reaches the next leaf. (Root node: the node at the top of the tree structure. Leaf node: a node at the bottom of the tree structure. In other words: go deep; at a dead end, backtrack and go deep again, aiming for the goal.)
The algorithm follows:
Create a queue and add the first node to it.
Loop:
If the queue is empty, quit.
Remove the first node from the queue.
If the node contains the goal state, then exit with the node as the solution.
For each child of the current node: add the new state to the front of the queue.

35 An example:

36 An example of depth-first search: Rochester to Wausau
[Map: the same cities as in the breadth-first example.]
[Rochester] -> [Dubuque, LaCrosse, Minneapolis, Sioux Falls] -> [Rockford, LaCrosse, Rochester, LaCrosse, Minneapolis, Sioux Falls] -> …… (after 3 goal tests and expansions) -> [GreenBay, Madison, Chicago, Rockford, Chicago, Madison, Dubuque, LaCrosse, Rochester, LaCrosse, Minneapolis, Sioux Falls]
We remove GreenBay and add Milwaukee, LaCrosse, and Wausau to the queue in that order. Finally, we get to Wausau at the front of the queue and the goal test succeeds.

37 A demo: source code for your reference
public SearchNode depthFirstSearch(SearchNode initialNode, Object goalState) {
    Vector queue = new Vector();
    queue.addElement(initialNode);
    initialNode.setTested(true);  // test each node once
    while (queue.size() > 0) {
        SearchNode testNode = (SearchNode) queue.firstElement();
        queue.removeElementAt(0);
        testNode.trace();  // display trace information
        if (testNode.state.equals(goalState)) return testNode;  // found it
        if (!testNode.expanded) {
            testNode.expand(queue, SearchNode.FRONT);  // children go to the front of the queue
        }
    }
    return null;  // queue exhausted: no solution found
}

38 Decisions in games: the minimax algorithm, the α-β algorithm, and the Tic-Tac-Toe game

39 Quiz: Please briefly describe the breadth-first algorithm and the depth-first algorithm in Japanese. (You may search for materials on the Internet.)

40 Games as search problems
Game playing is one of the oldest areas of endeavor in AI. What makes games really different is that they are usually much too hard to solve within a limited time. For chess: there is an average branching factor of about 35, and games often go to 50 moves by each player, so the search tree has about 35^100 nodes (although there are only about 10^40 different legal positions). The result is that the complexity of games introduces a completely new kind of uncertainty, which arises not because there is missing information, but because one does not have time to calculate the exact consequences of any move. In this respect, games are much more like the real world than the standard search problems. But we have to begin by analyzing how to find the theoretically best move in a game. Take Tic-Tac-Toe as an example.

41 A perfect two-player game
The two players are called MAX and MIN. MAX moves first, and then they take turns moving until the game is over. The problem is defined by the following components:
initial state: the board position and an indication of whose move it is.
a set of operators: the legal moves.
a terminal test: determines the states where the game has ended.
a utility function: gives a numeric value for the outcome, such as +1, -1, or 0.
So MAX must find a strategy, including the correct move for each possible move by MIN, that leads to a terminal state that is a win, and then go ahead and make the first move in that sequence. Note: the utility function is the critical component that determines which move is best.

42 Minimax
A game proceeds by alternating moves: you play the move most favorable to yourself (max), then your opponent plays the move least favorable to you (min), and these alternate until the game ends.
It includes five steps:
Generate the whole game tree.
Apply the utility function to each terminal state to get its value.
Use the utilities of the terminal states to determine the utility of the nodes one level higher up in the search tree.
Continue backing up the values from the leaf nodes toward the root.
MAX chooses the move that leads to the highest value.
[Figure: a three-level example tree with root moves A1, A2, A3, MIN nodes below them, and leaf utilities; level L1 takes the maximum over L2, and each L2 node takes the minimum over its L3 leaves.]
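The five backing-up steps can be sketched in Java on a fixed two-level tree. The leaf utilities below (3/12/8, 2/4/6, 14/5/2) are an illustrative assumption for this sketch; they are the standard textbook example tree, where each MIN row backs up 3, 2, 2 and MAX picks 3.

```java
// Minimax over a fixed-depth example tree: MAX at the root,
// MIN one level down, leaf utilities at the bottom.
public class Minimax {

    // Illustrative leaf utilities, one row per MIN node (A1, A2, A3).
    static final int[][] LEAVES = { {3, 12, 8}, {2, 4, 6}, {14, 5, 2} };

    static int minimaxValue(int[][] leaves) {
        int best = Integer.MIN_VALUE;       // MAX takes the largest ...
        for (int[] row : leaves) {
            int worst = Integer.MAX_VALUE;  // ... of the backed-up MIN values
            for (int v : row) worst = Math.min(worst, v);
            best = Math.max(best, worst);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(minimaxValue(LEAVES));  // 3
    }
}
```

In a real game the rows would be generated recursively from the legal moves instead of being a fixed array, as in the pseudocode on the next slide.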

43 Minimax pseudocode
MinMax(GamePosition game) {
  return MaxMove(game);
}

MaxMove(GamePosition game) {
  if (GameEnded(game)) {
    return EvalGameState(game);
  } else {
    best_move <- {};
    moves <- GenerateMoves(game);
    ForEach moves {
      move <- MinMove(ApplyMove(game));
      if (Value(move) > Value(best_move)) {
        best_move <- move;
      }
    }
    return best_move;
  }
}

MinMove(GamePosition game) {
  best_move <- {};
  moves <- GenerateMoves(game);
  ForEach moves {
    move <- MaxMove(ApplyMove(game));
    if (Value(move) < Value(best_move)) {
      best_move <- move;
    }
  }
  return best_move;
}

44 To sum up: The MAX player will try to select the move with the highest value in the end. But the MIN player also has something to say about it, and he will try to select the moves that are better for him, thus minimizing MAX's outcome.
<Minimax method> A strategy that makes decisions so as to minimize the maximum possible loss; one of the algorithms for making a computer play games such as shogi and chess.
Example run 1

45 α-β pruning (the alpha-beta method)
Example run 2
[Figure: the minimax example tree annotated with α and β values (initial values -999 and 999). After the first MIN node is evaluated to 3, the root has α = 3; in the next MIN node, as soon as a leaf gives β = 2 with β <= α, the remaining branches of that node are pruned.]
Pruning branches of the tree cuts down the time complexity of the search and so speeds up minimax search.

46 Alpha-beta pseudocode
MaxMove(GamePosition game, Integer alpha, Integer beta) {
  if (GameEnded(game) || DepthLimitReached()) {
    return EvalGameState(game, MAX);
  } else {
    best_move <- {};
    moves <- GenerateMoves(game);
    ForEach moves {
      move <- MinMove(ApplyMove(game), alpha, beta);
      if (Value(move) > Value(best_move)) {
        best_move <- move;
        alpha <- Value(move);
      }
      if (alpha >= beta) return best_move;  // ignore remaining moves
    }
    return best_move;
  }
}

MinMove(GamePosition game, Integer alpha, Integer beta) {
  if (GameEnded(game) || DepthLimitReached()) {
    return EvalGameState(game, MIN);
  } else {
    best_move <- {};
    moves <- GenerateMoves(game);
    ForEach moves {
      move <- MaxMove(ApplyMove(game), alpha, beta);
      if (Value(move) < Value(best_move)) {
        best_move <- move;
        beta <- Value(move);
      }
      if (beta <= alpha) return best_move;  // ignore remaining moves
    }
    return best_move;
  }
}

47 Summary
A game proceeds by alternating moves: you play the move most favorable to yourself (max), then your opponent plays the move least favorable to you (min), and these alternate.
<α-β method (pruning)> An improvement of minimax: by pruning branches, it evaluates fewer nodes than minimax does.
<Difference between the minimax algorithm and the α-β algorithm> Minimax searches everything and then selects the best move, whereas the α-β method does not search below moves that minimax would never select, cutting the time wasted on useless search. The α-β method produces the same result as minimax. Because pruning lets the search finish earlier than minimax, the α-β method is more efficient.
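The α-β idea can be sketched in Java on the same illustrative example tree used for minimax above (the leaf values are again an assumption of this sketch). It returns the same value as plain minimax while counting how many leaves it actually had to look at.

```java
// Alpha-beta pruning over the same fixed-depth example tree as minimax:
// a MAX root whose children are MIN nodes over rows of leaf utilities.
public class AlphaBeta {

    static final int[][] LEAVES = { {3, 12, 8}, {2, 4, 6}, {14, 5, 2} };
    static int leavesVisited = 0;  // measures how much work pruning saves

    static int search(int[][] leaves) {
        leavesVisited = 0;
        int alpha = Integer.MIN_VALUE;         // best value MAX has so far
        for (int[] row : leaves) {             // each row is a MIN node
            int v = Integer.MAX_VALUE;
            for (int leaf : row) {
                leavesVisited++;
                v = Math.min(v, leaf);
                if (v <= alpha) break;         // prune: MAX will never pick this row
            }
            alpha = Math.max(alpha, v);
        }
        return alpha;
    }

    public static void main(String[] args) {
        System.out.println(search(LEAVES));    // 3, the same as minimax
        System.out.println(leavesVisited);     // 7 of the 9 leaves examined
    }
}
```

Here the second MIN node is abandoned after its first leaf (value 2 <= α = 3), so its remaining leaves are never evaluated, yet the final answer is unchanged.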

48 Tic Tac Toe game applet

49 Tic Tac Toe game: in SymTic.java

// Evaluate a position.
public int evaluate(int depth, int level, int refValue) {
    int e = evaluateMyself();
    if ((depth == 0) || (e == 99) || (e == -99) || ((e == 0) && (Judge.finished(this)))) {
        return e;
    } else if (level == MAX) {
        int maxValue = -999;
        Vector v_child = this.children(usingChar);
        for (int i = 0; i < v_child.size(); i++) {
            SymTic st = (SymTic) v_child.elementAt(i);  // st is a move
            int value = st.evaluate(depth, MIN, maxValue);
            if (value > maxValue) {
                maxChild = st;
                maxValue = value;  // maxValue = alpha
            }
            if (value >= refValue) {  // refValue = beta; alpha >= beta, prune
                return value;
            }
        }
        return maxValue;
    } else {
        int minValue = 999;
        Vector v_child = this.children('o');
        for (int i = 0; i < v_child.size(); i++) {
            SymTic st = (SymTic) v_child.elementAt(i);
            int value = st.evaluate(depth - 1, MAX, minValue);
            if (value < minValue) {
                minValue = value;  // minValue = beta
            }
            if (value <= refValue) {  // refValue = alpha; beta <= alpha, prune
                return value;
            }
        }
        return minValue;
    }
}

private int evaluateMyself() {
    char c = Judge.winner(this);
    if (c == usingChar) {  // win the game
        return 99;
    } else if (c != ' ') {  // lose the game
        return -99;
    } else if (Judge.finished(this)) {  // draw the game
        return 0;
    }
    return 0;  // game still in progress
}

50 Quiz: Take a look at the two example runs on the web (Example run 1, Example run 2) and make your own example tree.

51 L4. Reasoning Logically
Knowledge representation, propositional logic, and the Wumpus world.

52 Words to look up: knowledge, logic, propositional logic, mean, representation, infer, inference, syntax, semantics, assuming, entail, breeze, pit, smelly, wumpus, adjacent, cave, enumeration, fact, imply, equivalent

53 Logic
What is a logic? Two elements constitute what we call a logic:
a formal language in which knowledge can be expressed.
a means of carrying out reasoning in such a language.
e.g. a logic in natural language: if the signal is red, then stop the car. A logic in logical sentences: A1,1 ∧ EastA ∧ W2,1 ⇒ ¬Forward.
Knowledge base: a set of representations of facts about the world.
Sentence: each individual representation.
Knowledge representation language: the language used for expressing sentences.

54 Wumpus world
[Figure: a 4x4 grid; the agent A is at START in [1,1] (lower left); squares contain pits (p), breezes (b), smells (s), the wumpus (w), and the gold (g).]
The wumpus world is a grid of squares surrounded by walls, where each square can contain agents and objects. The agent always starts in the lower left corner, a square that we will label [1,1]. The agent's task is to find the gold, return to [1,1] and climb out of the cave.
Legend: A = Agent, b = Breeze, g = Gold, p = Pit, s = Smelly, w = Wumpus.

55 Wumpus World: Please write down a description of the wumpus world in Japanese. Write down its PAGE description.

56 How an agent should act and reason
At the knowledge level:
From the fact that the agent detects neither stench nor breeze in [1,1], the agent can infer that [1,2] and [2,1] are free of dangers. They are marked OK to indicate this. A cautious agent will only move into a square that it knows is OK. So the agent can move forward to [2,1], or turn left 90° and move forward to [1,2].
Assuming the agent first moves forward to [2,1]: from the fact that it detects a breeze in [2,1], the agent can infer that there must be a pit in a neighboring square, either [2,2] or [3,1]. So the agent turns around (turn left 90°, turn left 90°) and moves back to [1,1].
The agent then moves to [1,2]. From the fact that it detects a stench in [1,2], the agent can infer that there must be a wumpus nearby, and that it cannot be in [2,2] (or the agent would have detected a stench when it was in [2,1]). So the agent can infer that the wumpus is in [1,3]; it is marked with W. The agent can also infer that there is no pit in [2,2] (or it would detect a breeze in [1,2]). So the agent can infer that the pit must be in [3,1].
After this sequence of deductions, the agent knows [2,2] is an unvisited safe square, so it moves to [2,2]. What is the next move? Moving to [2,3] or [3,2]?
Assuming the agent moves to [2,3]: from the fact that it detects a glitter in [2,3], the agent can infer that the gold is in [2,3]. So the agent grabs the gold and goes back to the start square along the squares that are marked OK.

57 How to represent beliefs that can make inferences
initial facts -> initial beliefs (knowledge); inferences -> actions; new facts -> new beliefs.
How do we represent a belief (knowledge)? Knowledge representation.
The object of knowledge representation is to express knowledge in computer-tractable form, such that it can be used to help agents perform well. A knowledge representation language is defined by two aspects:
Syntax - describes the possible configurations that can constitute sentences; it defines the sentences in the language.
Semantics - determines the facts in the world to which the sentences refer; it defines the "meaning" of sentences.
Examples: "x = y*z + 10;" is a sentence of the Java language, but "x =yz10" is not. "x + 2 ≥ y" is a sentence of the arithmetic language, but "x2 + y >" is not. "I am a student." is a sentence in English, but "I a student am." is not.

58 Logic and inference
Logic: A language is called a logic provided the syntax and semantics of the language are defined precisely.
Inference: From the syntax and semantics, an inference mechanism for an agent that uses the language can be derived.
Facts are parts of the world, whereas their representations must be encoded in some way within an agent. All reasoning mechanisms must operate on representations of facts, rather than on the facts themselves. The connection between sentences and facts is provided by the semantics of the language. The property of one fact following from other facts is mirrored by the property of one sentence being entailed by other sentences. Logical inference generates new sentences that are entailed by existing sentences. (entail: to involve necessarily)

59 Entailment New sentences generated are necessarily true, given that the old sentences are true. This relation between sentences is called entailment: KB |= α. Knowledge base KB entails sentence α if and only if α is true in all worlds where KB is true. Here, KB is a set of sentences in a formal language. For example: x > 0 and y > 0 |= x + y > 0.

60 Propositional logic: Syntax
Symbols represent whole propositions (facts). The symbols of propositional logic include the logic constants true and false.
Logic connectives: ¬ (not), ∧ (and), ∨ (or), ⇒ (implies), and ⇔ (equivalent).
If S is a sentence, ¬S is a sentence.
If S1 and S2 are sentences, S1 ∧ S2 is a sentence.
If S1 and S2 are sentences, S1 ∨ S2 is a sentence.
If S1 and S2 are sentences, S1 ⇒ S2 is a sentence.
If S1 and S2 are sentences, S1 ⇔ S2 is a sentence.
The order of precedence in propositional logic is ¬, ∧, ∨, ⇒, ⇔ (from highest to lowest).

61 Propositional logic: Semantics
¬S is true iff S is false.
S1 ∧ S2 is true iff S1 is true and S2 is true.
S1 ∨ S2 is true iff S1 is true or S2 is true.
S1 ⇒ S2 is true iff S1 is false or S2 is true; i.e., it is false iff S1 is true and S2 is false. (If S1 is true, then S2 must be true; if S1 is false, S2 may be either true or false.)
S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true.
[Truth tables for ¬S1, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, and S1 ⇔ S2.]

62 Propositional inference: Enumeration method
Let  = A  B and KB =(A  C)  (B  C) Is it the case that KB |= ?   はKBの論理的帰結 Check all possible models -  must be true whenever KB is true. A B C A  C B  C KB False True 62 62

63 Properties of propositional logic:
Double negation: ¬(¬P) = P
Commutativity: P∧Q = Q∧P; P∨Q = Q∨P
Associativity: (P∧Q)∧R = P∧(Q∧R); (P∨Q)∨R = P∨(Q∨R)
Distributivity: P∧(Q∨R) = (P∧Q)∨(P∧R); P∨(Q∧R) = (P∨Q)∧(P∨R)
De Morgan's laws: ¬(P∧Q) = (¬P)∨(¬Q); ¬(P∨Q) = (¬P)∧(¬Q)
From here we look at formulas with special properties relative to interpretations. When two formulas P and Q have the same truth value under every interpretation, they are said to be equivalent, written P = Q. The slide shows well-known equivalences; all are common sense, but note in particular that in the second distributive law, or distributes over and. (If you think of or as addition and and as multiplication, this law may look strange.) If you did not know De Morgan's laws, memorize them now. Each equivalence can be confirmed by checking that the left and right sides agree under every interpretation (which takes considerable labor).

64 Example: computing truth values
[Figure: formulas and their evaluations under two given interpretations.]
This is an example of computing the meaning (truth value) of logical formulas.
65 Example: valid formulas
A formula that is true under every interpretation is said to be valid (a tautology). Of the two examples on this slide, it is immediately clear that the first is always true. That the second is valid is not obvious, so we must confirm that the formula is true under all four interpretations; the slide shows just one of them.

66 The knowledge base Percept sentences:
there is no smell in the square [1,1]: ¬S1,1
there is no breeze in the square [1,1]: ¬B1,1
there is no smell in the square [2,1]: ¬S2,1
there is breeze in the square [2,1]: B2,1
there is smell in the square [1,2]: S1,2
there is no breeze in the square [1,2]: ¬B1,2

67 The knowledge base knowledge sentences:
If a square has no smell, then neither the square nor any of its adjacent squares can house a wumpus.
R1: ¬S1,1 ⇒ ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,1
R2: ¬S2,1 ⇒ ¬W1,1 ∧ ¬W2,1 ∧ ¬W2,2 ∧ ¬W3,1
If there is smell in [1,2], then there must be a wumpus in [1,2] or in one or more of the neighboring squares.
R3: S1,2 ⇒ W1,3 ∨ W1,2 ∨ W2,2 ∨ W1,1
If a square has no breeze, then neither the square nor any of its adjacent squares can have a pit.
R4: ¬B1,1 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,1
R5: ¬B1,2 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,2 ∧ ¬P1,3
If there is breeze in [2,1], then there must be a pit in [3,1] or in one or more of the neighboring squares.
R6: B2,1 ⇒ P3,1 ∨ P2,1 ∨ P2,2 ∨ P1,1

68 Homework: Complete the following truth table according to the propositional semantics.
S1 | S2 | ¬S1 | S1 ∧ S2 | S1 ∨ S2 | S1 ⇒ S2 | S1 ⇔ S2
(one row for each of the four combinations of False and True)

69 Where is the Wumpus?
Propositional logic (cont.); reasoning about where the wumpus is.

70 Words to look up: elimination, introduction, negation, resolution, complex, atomic, conjunction, disjunction, time-dependent

71 Propositional logic: semantics
Conjunction: A∧B (A and B). Disjunction: A∨B (A or B). Negation: ¬A (not A). Implication: A⇒B (A implies B). Equivalence: A⇔B ((A implies B) and (B implies A)).
For S1 ⇒ S2: if S1 is true, then S2 is true; if S1 is false, then S2 is either true or false. For S1 ⇔ S2: if S1 is true, then S2 is true; if S1 is false, then S2 is false.

72 Entailment and implication
The relation A |= B between sentences is called entailment. The relation A ⇒ B is called implication. "If A then B" (A ⇒ B) is true when A is true only if B is also true; when A is false, it is true regardless of whether B is true or false. That is: if A is true, then B is true; if A is false, then B is either true or false.

73 Exercise: computing truth values
When p = T, q = F, r = T: (p ∧ q) ⇒ r
When p = T, q = F, r = F: (p ∧ q) ⇒ r
When p = F, q = F, r = T: (p ∧ q) ⇒ r
When p = F, q = F, r = F: (p ∧ q) ⇒ r
This is an exercise in computing the meaning (truth value) of a logical formula under each interpretation.

74 Seven inference rules for propositional Logic
Modus Ponens: from α ⇒ β and α, infer β.
And-Elimination: from α1 ∧ α2 ∧ … ∧ αn, infer αi.
And-Introduction: from α1, α2, …, αn, infer α1 ∧ α2 ∧ … ∧ αn.
Or-Introduction: from αi, infer α1 ∨ α2 ∨ … ∨ αn.
Double-Negation Elimination: from ¬¬α, infer α.
Unit Resolution: from α ∨ β and ¬β, infer α. (α or β, and not β, therefore α.)
Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ. (α or β, and not β or γ, therefore α or γ.)

75 Wumpus world
[Figure: the 4x4 wumpus world grid; A = agent at START in [1,1], b = breeze, g = gold, p = pit, s = smelly, w = wumpus.]
The wumpus world is a grid of squares surrounded by walls, where each square can contain agents and objects. The agent always starts in the lower left corner, a square that we will label [1,1]. The agent's task is to find the gold, return to [1,1] and climb out of the cave.

76 The knowledge base: percept sentences
there is no smell in the square [1,1]: ¬S1,1
there is no breeze in the square [1,1]: ¬B1,1
there is no smell in the square [2,1]: ¬S2,1
there is breeze in the square [2,1]: B2,1
there is smell in the square [1,2]: S1,2
there is no breeze in the square [1,2]: ¬B1,2

77 The knowledge base: knowledge sentences
If a square has no smell, then neither the square nor any of its adjacent squares can house a wumpus.
R1: ¬S1,1 ⇒ ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,1
R2: ¬S2,1 ⇒ ¬W1,1 ∧ ¬W2,1 ∧ ¬W2,2 ∧ ¬W3,1
If there is smell in [1,2], then there must be a wumpus in [1,2] or in one or more of the neighboring squares.
R3: S1,2 ⇒ W1,3 ∨ W1,2 ∨ W2,2 ∨ W1,1
If a square has no breeze, then neither the square nor any of its adjacent squares can have a pit.
R4: ¬B1,1 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,1
R5: ¬B1,2 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,2 ∧ ¬P1,3
If there is breeze in [2,1], then there must be a pit in [2,1] or in one or more of the neighboring squares.
R6: B2,1 ⇒ P3,1 ∨ P2,1 ∨ P2,2 ∨ P1,1

78 Inferring knowledge using propositional logic
Concerning the 6 squares [1,1], [2,1], [1,2], [3,1], [2,2], [1,3], there are 12 symbols: S1,1, S2,1, S1,2, B1,1, B2,1, B1,2, W1,1, W1,2, W2,1, W2,2, W3,1, W1,3.
The process of finding a wumpus in [1,3] is as follows:
1. Apply R1 to ¬S1,1; we obtain ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,1
2. Apply And-Elimination; we obtain ¬W1,1, ¬W1,2, ¬W2,1
3. Apply R2 and And-Elimination to ¬S2,1; we obtain ¬W1,1, ¬W2,2, ¬W2,1, ¬W3,1
4. Apply R3 and unit resolution to S1,2 (α is W1,3 ∨ W1,2 ∨ W2,2 and β is W1,1); we obtain W1,3 ∨ W1,2 ∨ W2,2
5. Apply unit resolution again (α is W1,3 ∨ W1,2 and β is W2,2); we obtain W1,3 ∨ W1,2
6. Apply unit resolution again (α is W1,3 and β is W1,2); we obtain W1,3
Here is the answer: the wumpus is in [1,3], that is, W1,3 is true.
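Steps 4 through 6 above are just repeated unit resolutions on one disjunction. A minimal sketch, using plain lists of symbol names:

```python
# Repeated unit resolution, as in steps 4-6 of the wumpus derivation:
# start from R3's consequent and remove each location already ruled out.

clause = ["W13", "W12", "W22", "W11"]   # from S12 => W13 v W12 v W22 v W11
ruled_out = ["W11", "W22", "W12"]       # negations known from steps 1-3

for lit in ruled_out:                   # each known negation enables one unit resolution
    clause.remove(lit)

print(clause)  # ['W13']  -> the wumpus is in [1,3]
```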

79 Problem with propositional logic
Too many propositions ⇒ too many rules to define a competent agent.
The world is changing, and propositions change with time ⇒ we do not know how many time-dependent propositions we will need, and we would have to go back and rewrite a time-dependent version of each rule.
The problem with propositional logic is that it has only one representational device: the proposition.
The solution to this problem is to introduce another logic, first-order logic, which can represent objects and relations between objects in addition to propositions.

80 First-order Logic Propositional logic (cont…)
一階述語論理 (first-order predicate logic)
Propositional logic (cont…) 命題論理
Syntax & semantics of first-order logic 構文論と意味論
Deducing hidden properties
Describing actions

81 existential deduce complex hidden atomic property referent resolution
instantiation reflex causal diagnostic diachronic calculus axiom deduce hidden property resolution identity distinguish refer to predicate variable substitute quantify 81

82 Seven inference rules for propositional Logic
Modus Ponens: from α ⇒ β and α, infer β.
And-Elimination: from α1 ∧ α2 ∧ … ∧ αn, infer αi.
And-Introduction: from α1, α2, …, αn, infer α1 ∧ α2 ∧ … ∧ αn.
Or-Introduction: from αi, infer α1 ∨ α2 ∨ … ∨ αn.
Double-Negation Elimination: from ¬¬α, infer α.
Unit Resolution: from α ∨ β and ¬β, infer α.
Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ.
Logic connectives: ¬, ∧, ∨, ⇒, ⇔

83 The knowledge base
Percept sentences:
there is no smell in the square [1,1]: ¬S1,1
there is no breeze in the square [1,1]: ¬B1,1
there is no smell in the square [2,1]: ¬S2,1
there is breeze in the square [2,1]: B2,1
there is smell in the square [1,2]: S1,2
there is no breeze in the square [1,2]: ¬B1,2

84 The knowledge base
Knowledge sentences:
If a square has no smell, then neither the square nor any of its adjacent squares can house a wumpus.
R1: ¬S1,1 ⇒ ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,1
R2: ¬S2,1 ⇒ ¬W1,1 ∧ ¬W2,1 ∧ ¬W2,2 ∧ ¬W3,1
If there is smell in [1,2], then there must be a wumpus in [1,2] or in one or more of the neighboring squares.
R3: S1,2 ⇒ W1,3 ∨ W1,2 ∨ W2,2 ∨ W1,1
If a square has no breeze, then neither the square nor any of its adjacent squares can have a pit.
R4: ¬B1,1 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,1
R5: ¬B1,2 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,2 ∧ ¬P1,3
If there is breeze in [2,1], then there must be a pit in [2,1] or in one or more of the neighboring squares.
R6: B2,1 ⇒ P3,1 ∨ P2,1 ∨ P2,2 ∨ P1,1

85 Inferring knowledge using propositional logic
Concerning the 6 squares [1,1], [2,1], [1,2], [3,1], [2,2], [1,3], there are 12 symbols: S1,1, S2,1, S1,2, B1,1, B2,1, B1,2, W1,1, W1,2, W2,1, W2,2, W3,1, W1,3.
The process of finding a wumpus in [1,3] is as follows:
1. Apply R1 to ¬S1,1; we obtain ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,1
2. Apply And-Elimination; we obtain ¬W1,1, ¬W1,2, ¬W2,1
3. Apply R2 and And-Elimination to ¬S2,1; we obtain ¬W1,1, ¬W2,2, ¬W2,1, ¬W3,1
4. Apply R3 and unit resolution to S1,2 (α is W1,3 ∨ W1,2 ∨ W2,2 and β is W1,1); we obtain W1,3 ∨ W1,2 ∨ W2,2
5. Apply unit resolution again (α is W1,3 ∨ W1,2 and β is W2,2); we obtain W1,3 ∨ W1,2
6. Apply unit resolution again (α is W1,3 and β is W1,2); we obtain W1,3
Here is the answer: the wumpus is in [1,3], that is, W1,3 is true.

86 Finding a pit using propositional logic
R6: B2,1 ⇒ P3,1 ∨ P2,1 ∨ P2,2 ∨ P1,1
1. Apply R4 to ¬B1,1; we obtain ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,1
2. Apply And-Elimination; we obtain ¬P1,1, ¬P1,2, ¬P2,1
3. Apply R5 to ¬B1,2; we obtain ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,2 ∧ ¬P1,3
4. Apply And-Elimination; we obtain ¬P1,1, ¬P1,2, ¬P2,2, ¬P1,3
5. Apply R6 to B2,1; we obtain P3,1 ∨ P2,1 ∨ P2,2 ∨ P1,1
6. Apply Unit Resolution (α is P3,1 ∨ P2,1 ∨ P2,2 and β is P1,1); we obtain P3,1 ∨ P2,1 ∨ P2,2
7. Apply Unit Resolution again (α is P3,1 ∨ P2,1 and β is P2,2); we obtain P3,1 ∨ P2,1
8. Apply Unit Resolution again (α is P3,1 and β is P2,1); we obtain P3,1
Here is the answer: the pit is in [3,1], that is, P3,1 is true.

87 Problem with propositional logic
Too many propositions ⇒ too many rules to define a competent agent.
The world is changing, and propositions change with time ⇒ we do not know how many time-dependent propositions we will need, and we would have to go back and rewrite a time-dependent version of each rule.
The problem with propositional logic is that it has only one representational device: the proposition.
The solution to this problem is to introduce another logic, first-order logic, which can represent objects and relations between objects in addition to propositions.

88 First-order Logic What is first-order logic?
First-order logic contains:
Objects: things with individual identities and properties. e.g., people, houses, computers, numbers, Mike Jason, color, …
Properties: used to distinguish an object from other objects. e.g., tall, western style, multimedia, prime, English, red, …
Relations: exist and hold among the objects. e.g., father of, bigger than, made after, equal, student of, …
Functions: relations in which there is only one "value" for a given "input". e.g., brother of, increment of, forward, one more than, …
Almost any fact can be thought of as referring to objects and properties or relations. For example:
One plus two equals three. Objects: one, two, three, one plus two; Relation: equals; Function: plus.
Squares neighboring the wumpus are smelly. Objects: squares, wumpus; Property: smelly; Relation: neighboring.

89 Syntax of FOL: basic element
Constant symbols: refer to the same object in the same interpretation. e.g., Mike Jason, 4, A, B, …
Predicate symbols: refer to a particular relation in the model. e.g., Brother, >, …
Function symbols: refer to particular objects without using their names. Some relations are functional, that is, any given object is related to exactly one other object by the relation (a one-one relation). e.g., Cosine, FatherOf, …
Variables: substitute for the name of an object. e.g., x, y, a, b, …
∀x Cat(x) ⇒ Mammal(x): if x is a cat, then x is a mammal.
Logic connectives: ¬ (not), ∧ (and), ∨ (or), ⇒ (implies), and ⇔ (equivalent)
Quantifiers: ∀ (universal quantification symbol), ∃ (existential quantification symbol)
∀x: for any x, … ∃x: there is an x, …
Equality: =, e.g. Father(John) = Henry

90 Sentences: atomic vs complex
Atomic sentences = predicate(term1, term2, …, termn) or term1 = term2
Term = function(term1, term2, …, termn) or constant or variable
E.g., Sister(Muxin, Yanbo)
>(height(Muxin), height(Yanbo))
∃x >(height(Yanbo), height(x))
Complex sentences: made from atomic sentences using the connectives ¬ (not), ∧ (and), ∨ (or), ⇒ (implies), and ⇔ (equivalent).
E.g., Sibling(Muxin, Yanbo) ⇒ Sibling(Yanbo, Muxin)
>(Age(Muxin), Age(Yanbo)) ∨ ¬>(Age(Muxin), Age(Yanbo))
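One common way to represent such terms and atomic sentences in code (an assumption for illustration, not the textbook's representation) is nested tuples of the form (symbol, arg1, arg2, …):

```python
# FOL terms and atomic sentences as nested tuples: (symbol, args...).

Muxin, Yanbo = "Muxin", "Yanbo"                 # constant symbols
height = lambda person: ("height", person)      # function symbol applied to a term
sister = ("Sister", Muxin, Yanbo)               # atomic sentence: predicate + terms
taller = (">", height(Muxin), height(Yanbo))    # terms may nest inside predicates

print(taller)  # ('>', ('height', 'Muxin'), ('height', 'Yanbo'))
```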

91 Truth in first-order logic
Sentences are true with respect to a model and an interpretation.
A model contains objects and relations among them. E.g., a model: a family. Objects in this model: Robert John, Rose John, Shelly John, David John. Relations: Husband(Robert, Rose), Wife(Rose, Robert), Father(Robert, David), …, Sister(Shelly, David), …
An interpretation specifies referents: constant symbols -> objects; predicate symbols -> relations; function symbols -> functional relations.
An atomic sentence predicate(term1, term2, …, termn) is true iff the objects referred to by term1, term2, …, termn are in the relation referred to by the predicate.
E.g., Father(Robert, David) is true iff Robert and David are two objects in the family model and Robert is David's father (the relation referred to by the predicate Father).

92 Universal quantification
∀<variables> <sentence>
∀x P is equivalent to the conjunction of instantiations of P.
E.g., every student at CIS is smart: ∀x At(x, CIS) ⇒ Smart(x)
(At(Suzuki, CIS) ⇒ Smart(Suzuki)) ∧ (At(Abe, CIS) ⇒ Smart(Abe)) ∧ (At(Honda, CIS) ⇒ Smart(Honda)) ∧ …
Typically, ⇒ is the main connective with ∀.
Common mistake: using ∧ as the main connective with ∀.
∀x At(x, CIS) ∧ Smart(x) means "Every student is at CIS and every student is smart."
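Over a finite domain, a universally quantified sentence is exactly a conjunction of its instantiations, which `all()` expresses directly. The three students and their properties below are made-up illustration data:

```python
# ∀x At(x, CIS) => Smart(x) over a finite domain, as a conjunction via all().

students = ["Suzuki", "Abe", "Honda"]
at_cis = {"Suzuki": True, "Abe": True, "Honda": False}
smart = {"Suzuki": True, "Abe": True, "Honda": False}

def implies(a, b):
    return (not a) or b

# correct: implication as the main connective (Honda is vacuously satisfied)
print(all(implies(at_cis[x], smart[x]) for x in students))  # True

# common mistake: conjunction as the main connective -- Honda,
# who is not at CIS, now falsifies the sentence
print(all(at_cis[x] and smart[x] for x in students))        # False
```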

93 Existential quantification
∃<variables> <sentence>
∃x P is equivalent to the disjunction of instantiations of P.
E.g., someone at CIS is smart: ∃x At(x, CIS) ∧ Smart(x)
(At(Suzuki, CIS) ∧ Smart(Suzuki)) ∨ (At(Abe, CIS) ∧ Smart(Abe)) ∨ (At(Honda, CIS) ∧ Smart(Honda)) ∨ …
Typically, ∧ is the main connective with ∃.
Common mistake: using ⇒ as the main connective with ∃.
∃x At(x, CIS) ⇒ Smart(x) is true if there is any student who is not at CIS.
The uniqueness quantifier ∃!: e.g., ∃!x King(x) means that there is only one King.

94 Property of quantifiers
x y is the same as y x  x  y is the same as  y  x  x y is not the same as  y x E.g.,  x y Knows(x, y) “There is a person who knows everyone in the world” y  x Knows(x, y) “Everyone in the world is known by at least one person” Quantifier duality: each can be expressed using the other. . x Likes(x, iceCream)   x  Likes(x, iceCream) “Everyone likes ice cream” means that “There is no one who does not like ice cream” x Likes(x, carrot)  x  Likes(x, carrot) “Someone likes carrot” means that “Not everyone does not like carrot” 94 94

95 Knowledge base for the Wumpus world
Perception: Stench (variable s), Breeze (variable b), Glitter (variable g), Wall (variable u), Scream (variable v)
∀b, g, u, v, t Percept([S, b, g, u, v], t) ⇒ Smelly(t)
∀s, g, u, v, t Percept([s, B, g, u, v], t) ⇒ Breeze(t)
∀s, b, u, v, t Percept([s, b, G, u, v], t) ⇒ AtGoldRoom(t)
Reflex: ∀t AtGoldRoom(t) ⇒ Action(Grab, t)
Reflex with internal state: do we have the gold already?
∀t AtGoldRoom(t) ∧ ¬Holding(Gold, t) ⇒ Action(Grab, t)

96 Deducing hidden properties
Properties of locations:
∀l, t At(Agent, l, t) ∧ Smell(t) ⇒ Smell(l)
∀l, t At(Agent, l, t) ∧ Breeze(t) ⇒ Breeze(l)
Diagnostic rule: infer cause from effect. E.g., squares are breezy near a pit:
∀y Breeze(y) ⇒ ∃x Pit(x) ∧ (x=y ∨ Adjacent(x, y))
Causal rule: infer effect from cause:
∀x, y Pit(x) ∧ (x=y ∨ Adjacent(x, y)) ⇒ Breeze(y)

97 Keeping track of world change
Situation calculus is one way to represent change in FOL.
- Facts hold in situations rather than eternally. E.g., Holding(Gold, Now) rather than just Holding(Gold); here Now is an extra situation argument.
- Situations are connected by the Result function: Result(a, s).
E.g., At(Agent, [1,1], S0) and At(Agent, [1,2], S1), where Result(Forward, S0) = S1.

98 Describing actions Effect axioms – describe changes due to action
∀s AtGoldRoom(s) ⇒ Holding(Gold, Result(Grab, s))
∀x, s ¬Holding(x, Result(Release, s))
Frame axioms – describe non-changes due to action:
∀a, x, s Holding(x, s) ∧ (a ≠ Release) ⇒ Holding(x, Result(a, s))
Successor-state axioms – combine the effect axioms and the frame axioms:
P true afterwards ⇔ [an action made P true ∨ P true already and no action made P false]
∀a, x, s Holding(x, Result(a, s)) ⇔ [(Present(x, s) ∧ a = Grab) ∨ (Holding(x, s) ∧ a ≠ Release)]
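The successor-state axiom for Holding can be sketched as a state-transition function: the property holds afterwards iff the action just made it true, or it already held and no action made it false.

```python
# Sketch of the successor-state axiom for Holding(x, Result(a, s)).

def holding_after(action, holding, present):
    """True iff Holding holds in the resulting situation, per the axiom."""
    made_true = present and action == "Grab"        # an action made it true
    not_made_false = holding and action != "Release"  # it held, and nothing undid it
    return made_true or not_made_false

print(holding_after("Grab", holding=False, present=True))    # True
print(holding_after("Release", holding=True, present=False)) # False
print(holding_after("Forward", holding=True, present=False)) # True
```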

99 Toward a goal
Once the gold is found, the aim is to return to the start as quickly as possible. Assuming the agent is now holding the gold and has the goal of being at location [1,1]:
∀s Holding(Gold, s) ⇒ GoalLocation([1,1], s)
The presence of an explicit goal allows the agent to work out a sequence of actions that will achieve the goal.

100 L7. Logic for knowledge representation
and inference (知識表現: knowledge representation; 推理: inference)
Propositional logic (continued …)
Propositional logic versus first-order logic (命題論理対一階述語論理)
Applying first-order logic: e.g., family relations; a proof

101 Sentence in propositional logic
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | proposition symbols like P, Q, R
ComplexSentence → (Sentence) | ¬Sentence | Sentence Connective Sentence
e.g. (P ∧ Q) ⇔ ¬(¬P ∨ ¬Q)
Connectives: ¬ (not), ∧ (and), ∨ (or), ⇒ (implies), and ⇔ (equivalent)
原子文 (atomic sentence), 複文 (complex sentence), 連結語 (connective), (必然的に)伴う (entail)

102 FOL: basic elements and sentences
Constant (定数) symbols: refer to the same object in the same interpretation.
Predicate (述語) symbols: refer to a particular relation in the model.
Function (関数) symbols: refer to particular objects without using their names.
Logic connectives: ¬ (not), ∧ (and), ∨ (or), ⇒ (implies), and ⇔ (equivalent)
Equality (対等): = e.g. Father(John) = Henry
Variables (変数): substitute (取り替える) for the name of an object. e.g., x, y, a, b, …
∀x Cat(x) ⇒ Mammal(x) //Mammal: 哺乳動物 (mammal)
Quantifiers (数量詞): ∀ (universal (一般) quantification symbol), ∃ (existential (存在に関する) quantification symbol)
∀x: for any x, … ∃x: there is an x, …
Atomic sentences = predicate(term1, term2, …, termn)
Term = function(term1, term2, …, termn) or constant or variable
Complex sentences: made from atomic sentences using connectives.

103 Seven inference rules for propositional Logic
R(1) Implication-Elimination: from α ⇒ β and α, infer β.
R(2) And-Elimination: from α1 ∧ α2 ∧ … ∧ αn, infer αi.
R(3) And-Introduction: from α1, α2, …, αn, infer α1 ∧ α2 ∧ … ∧ αn.
R(4) Or-Introduction: from αi, infer α1 ∨ α2 ∨ … ∨ αn.
R(5) Double-Negation Elimination: from ¬¬α, infer α.
R(6) Unit Resolution: from α ∨ β and ¬β, infer α.
R(7) Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ.
Logic connectives: ¬, ∧, ∨, ⇒, ⇔

104 The three new inference rules added to FOL
R(8) Universal-Elimination: for any sentence α, variable v, and ground term g (a ground term is a term that contains no variables):
∀v α ⊢ SUBST({v/g}, α)
e.g., from ∀x Likes(x, IceCream), we can use the substitution {x/Rose} and infer Likes(Rose, IceCream).
R(9) Existential-Elimination: for any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base:
∃v α ⊢ SUBST({v/k}, α)
e.g., from ∃x Kill(x, Victim), we can infer Kill(Murderer, Victim), as long as Murderer does not appear elsewhere in the knowledge base.
R(10) Existential-Introduction: for any sentence α, variable v that does not occur in α, and ground term g that does occur in α:
α ⊢ ∃v SUBST({g/v}, α)
e.g., from Likes(Rose, IceCream) we can infer ∃x Likes(x, IceCream).
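The SUBST operation used by all three rules can be sketched over the nested-tuple sentence representation: replace every occurrence of a variable by its binding. Representing variables as plain lowercase strings is an assumption for illustration.

```python
# Sketch of SUBST(theta, sentence) over tuple-encoded sentences.

def subst(theta, sentence):
    if isinstance(sentence, tuple):                 # compound: recurse on each part
        return tuple(subst(theta, part) for part in sentence)
    return theta.get(sentence, sentence)            # variable bound in theta, else unchanged

likes = ("Likes", "x", "IceCream")
print(subst({"x": "Rose"}, likes))  # ('Likes', 'Rose', 'IceCream')
```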

105 Representing Simple Rules
Variables: x, P, C, …
Generalised facts: female(P), parent(P, C), …
Conjunctions: parent(P, C) ∧ female(P)
Rules: ∀P, C parent(P, C) ∧ female(P) ⇒ mother(P, C)

106 Knowledge Bases: Rules
Family relations (∀GP, P, C):
parent(P, C) ∧ female(P) ⇒ mother(P, C)
parent(P, C) ∧ male(P) ⇒ father(P, C)
parent(GP, P) ∧ parent(P, C) ⇒ grandparent(GP, C)
parent(GP, P) ∧ female(GP) ∧ parent(P, C) ⇒ grandmother(GP, C)
grandparent(GP, C) ∧ male(GP) ⇒ grandfather(GP, C)
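Applied to ground facts, these rules amount to set comprehensions over the parent relation. The people below are illustration data, not from the slides:

```python
# The family rules applied to ground facts (illustration data).

parent = {("Robert", "David"), ("Rose", "David"), ("Mary", "Robert")}
female = {"Rose", "Mary"}
male = {"Robert", "David"}

mother = {(p, c) for (p, c) in parent if p in female}
father = {(p, c) for (p, c) in parent if p in male}
grandparent = {(gp, c) for (gp, p) in parent for (p2, c) in parent if p == p2}
grandmother = {(gp, c) for (gp, c) in grandparent if gp in female}

print(sorted(father))       # [('Robert', 'David')]
print(sorted(grandmother))  # [('Mary', 'David')]
```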

107 Knowledge Base Architecture
Updates KNOWLEDGE BASE (KB) facts and rules Query Answer INFERENCE MECHANISM 107

108 Example of proof in first-order logic
Bob is a buffalo | 1. Buffalo(Bob) --f1
Pat is a pig | 2. Pig(Pat) --f2
Buffaloes run faster than pigs | 3. ∀x, y Buffalo(x) ∧ Pig(y) ⇒ Faster(x, y) --r1
To prove: Bob runs faster than Pat.
Apply R(3) to f1 and f2 | 4. Buffalo(Bob) ∧ Pig(Pat) --f3 (And-Introduction)
Apply R(8) to r1 with {x/Bob, y/Pat} | 5. Buffalo(Bob) ∧ Pig(Pat) ⇒ Faster(Bob, Pat) --f4 (Universal-Elimination)
Apply R(1) to f3 and f4 | 6. Faster(Bob, Pat) --f5 (Implication-Elimination)

109 Search with primitive inference rules
Operators are inference rules. States are sets of sentences. The goal test checks a state to see if it contains the query sentence. R(1) to R(10) are common inference patterns.
Example search: from sentences 1, 2, 3, apply R(3) to 1 & 2, apply R(8) to 3, then apply R(1) to 4 & 5.

110 A Reasoning System Rule Base Working Memory Interaction with
Inference Engine Matching Fire a Rule Select a Rule Acting 110

111 Finding gold
The agent has been in [1,1] -> [2,1] -> [1,1] -> [1,2] and knows there is a wumpus in [1,3] and a pit in [3,1]. It then goes to [2,2] and continues to [3,2] (forward, since it perceives nothing special).
Now the percept sentences gain a new one: there is a breeze in the square [3,2]: B3,2.
Since this knowledge is not enough to judge whether [4,2] and [3,3] are safe or not, the agent goes back to [2,2], turns left and goes to [2,3].
Now the percept sentences gain a new one: there is a glitter in the square [2,3]: G2,3.
The agent has been in [1,1] -> [2,1] -> [1,1] -> [1,2] -> [2,2] -> [3,2] -> [2,2] -> [2,3].
Here is the answer: the gold is in [2,3].

112 Finding a way back to the start position
Go back along the path (5 steps): [1,1] -> [2,1] -> [1,1] -> [1,2] -> [2,2] -> [3,2] -> [2,2] -> [2,3]. (A doubly linked list would be a good data structure for recording the agent's history path.)
Using safe squares to build a tree, search a shortest path from [2,3] to [1,1] (3 steps).
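The shortest-path search over safe squares can be sketched with a breadth-first search. The set of safe squares below is the six squares the agent has established as OK:

```python
# BFS over safe squares, from the gold square [2,3] back to [1,1].
from collections import deque

def shortest_path(start, goal, safe):
    frontier = deque([[start]])     # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in safe and nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

safe = {(1, 1), (2, 1), (1, 2), (2, 2), (3, 2), (2, 3)}
print(shortest_path((2, 3), (1, 1), safe))
# [(2, 3), (2, 2), (1, 2), (1, 1)] -- 3 steps, as the slide states
```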

113 Finding a way back to the start position
Go back along the path (5 steps): [1,1] -> [2,1] -> [1,1] -> [1,2] -> [2,2] -> [3,2] -> [2,2] -> [2,3]. (A doubly linked list would be a good data structure for recording the agent's history path.)
Using safe squares to build a tree, search a shortest path from [2,3] to [1,1] (3 steps).
Quiz: Draw a tree from the square where the gold is ([2,3]) to the square [1,1]. (Safe squares: [2,3], [3,2], [1,2], [2,2], [1,1], [2,1].)

114 Appendix (追加)
Apply first-order logic to making inferences in the Wumpus world. (If you have time, please take a look at this appendix.)
A gold-finding agent in the Wumpus world

115 Predicates and rules
At(p, l, s): agent p is at the location l in the situation s.
Percept([Smell, b, g, u, v], l, s, t): the agent perceives smell at time t at the location l in the situation s.
Perception rules
R1: ∀b, g, u, v, l, s, t Percept([Smell, b, g, u, v], s, t) ⇒ Smell(t)
R2: ∀sm, g, u, v, l, s, t Percept([sm, Breeze, g, u, v], s, t) ⇒ Breeze(t)
R3: ∀sm, b, u, v, l, s, t Percept([sm, b, Glitter, u, v], s, t) ⇒ AtGoldRoom(t)
R4: ∀sm, b, u, v, l, s, t Percept([sm, b, g, Wall, v], s, t) ⇒ Wall(t)
R6: ∀sm, b, g, u, v, l, s, t Percept([sm, b, g, u, v], s, t) ⇒ ¬Smell(t) ∧ ¬Breeze(t) ∧ ¬AtGoldRoom(t) ∧ ¬Wall(t)
Action rules
R7: ∀i, j Result(Forward, li) = lj, i ≠ j; ∀m, n Result(Turn, Sm) = Sn, m ≠ n
Orientation rules
R8: ∀s (Orientation(p, s) = 0 ⇔ Face(p, east)) ∧ (Orientation(p, s) = 90 ⇔ Face(p, north)) ∧ (Orientation(p, s) = 180 ⇔ Face(p, west)) ∧ (Orientation(p, s) = 270 ⇔ Face(p, south))
R9: ∀a, d, p, s Orientation(p, Result(a, s)) = d ⇔ [(a = Turn(Right) ∧ d = Mod(Orientation(p, s) - 90, 360)) ∨ (a = Turn(Left) ∧ d = Mod(Orientation(p, s) + 90, 360)) ∨ (Orientation(p, s) = d ∧ ¬(a = Turn(Right) ∨ a = Turn(Left)))]

116 R20: l1, l2, s At(Pit, l1)  Adjacent(,l1, l2)  Breeze(l2)
Location rules R10:  l, l   x, y [x, y] R11: (a)  x, y LocationToward([x, y], 0) = [x+1, y] (b)  x, y LocationToward([x, y], 90) = [x, y+1] R12: (a)  x, y LocationToward([x, y], 180) = [x-1, y] (b)  x, y LocationToward([x, y], 270) = [x+1, y-1] R13:  p, l, s At(p, l, s)  LocationAhead(p,S) = LocationToward(l, Orientation(p,s)) R14:  l1, l2 Adjacent(l1, l2)   d l1=LocationToward(l2, d) R15:  x, y Wall([x, y])  (x=0  x=5  y=0  y=5) R16:  a, d, p, s At(p, l, Result(a, s))  [(a=Forward  l = LocationAhead(p,s)  Wall(l))  (At(p,l,s)  a  Forward)] Hidden properties (rules) R17: l, s At(Agent, l, s)  Smell(s)  Smell(l) R18: l, s At(Agent, l, s)  Breeze(s)  Breeze(l) R19: l1, s Breeze(l1)   l2 At(Pit, l2,s)  (l2, = l1  Adjacent(,l1, l2)) R20: l1, l2, s At(Pit, l1)  Adjacent(,l1, l2)  Breeze(l2) R21:  l1, s Breeze(l1)   l2 At(Pit, l2, s )  (l2, = l1  Adjacent(,l1, l2)) 116 116

117 R23: l1, l2, s At(Wumpus, l1)  Adjacent(,l1, l2)  Smell(l2)
Hidden properties (rules) R22: l1, s Smell(l1)   l2 At(Wumpus, l2,s)  (l2, = l1  Adjacent(,l1, l2)) R23: l1, l2, s At(Wumpus, l1)  Adjacent(,l1, l2)  Smell(l2) R24:  l1, s Smell(l1)   l2 At(Wumpus, l2, s )  (l2, = l1  Adjacent(,l1, l2)) R25: x, y, g, u, v, s, t Percept([None, None, g, u, v], t)  At(Agent, x, s)  Adjacent(,x, y)  OK(y) R26: x, t ( At(Wumpus, x, t)  At(Pit, x, t))  OK(x) Obtaining Gold R27:  x, s AtGoldRoom( x, s)  Present(Gold, x, s) R28:  x, s Present( x, s)  Portable(x) Holding(x, Result(Grab, s)) R29:  x, s Holding(x, Result(Release, s)) R30:  a, x, s Holding(x, s)  (aRelease)  Holding(x, Result(a, s)) R31:  a, x, s Holding(x, s)  (aGrab   (Present(x, s)  Portable(x))  Holding(x, Result(a, s)) P true afterwards  [an action made P true  P true already and no action made P false] R32:  a, x, s Holding(x, Result(a, s))  [(a=Grab  (Present(x, s)  Portable(x))  (Holding(x, s)  aRelease) ] 117 117

118 Since the world is not changing, our inferences below drop the time term.
Initial state: At(Agent, [1,1]).
Apply the And-Elimination rule to R6; we obtain
C1: ¬Smell([1,1]), ¬Breeze([1,1]), ¬AtGoldRoom([1,1]), ¬Wall([1,1])
Apply the And-Elimination rule to R21 and apply R15; we obtain
C2: ¬At(Pit, [1,1]), ¬At(Pit, [2,1]), ¬At(Pit, [1,2]), Wall([1,0]), Wall([0,1])
Apply the And-Elimination rule to R24; we obtain
C3: ¬At(Wumpus, [1,1]), ¬At(Wumpus, [2,1]), ¬At(Wumpus, [1,2])
Apply R26 to C2 and C3; we obtain
C4: OK([1,1]), OK([2,1]), OK([1,2])
Apply R11(a) and R16; we obtain At(Agent, [2,1]).
Apply the And-Elimination rule to R2; we obtain
C5: ¬Smell([2,1]), Breeze([2,1]), ¬AtGoldRoom([2,1]), ¬Wall([2,1])
Apply R19 to Breeze([2,1]); we obtain
C6: At(Pit, [1,1]) ∨ At(Pit, [2,1]) ∨ At(Pit, [2,2]) ∨ At(Pit, [3,1])

119 Apply unit resolution twice to C6; we obtain
C7: At(Pit, [2,2]) ∨ At(Pit, [3,1])
Use C4 and C7 (a conservative agent would not go where it is not sure), and apply R9, R12(a) and R16; we obtain At(Agent, [1,1]), Orientation(Agent, [1,1]) = 180.
Use C4 and apply R9, R11(b) and R16; we obtain At(Agent, [1,2]), Orientation(Agent, [1,2]) = 90.
Apply R22 to Smell([1,2]); we obtain
C8: At(Wumpus, [1,1]) ∨ At(Wumpus, [1,3]) ∨ At(Wumpus, [1,2]) ∨ At(Wumpus, [2,2])
Apply unit resolution twice to C8; we obtain
C9: At(Wumpus, [2,2]) ∨ At(Wumpus, [1,3])
Apply R24 and the And-Elimination rule to ¬Smell([2,1]); we obtain
C10: ¬At(Wumpus, [1,1]), ¬At(Wumpus, [1,2]), ¬At(Wumpus, [2,2]), ¬At(Wumpus, [2,1])
Apply unit resolution to C9; we obtain
C11: At(Wumpus, [1,3])
Apply R21 and the And-Elimination rule to ¬Breeze([1,2]); we obtain
C12: ¬At(Pit, [1,1]), ¬At(Pit, [2,1]), ¬At(Pit, [2,2]), ¬At(Pit, [3,1])

120 Apply R26 to C10 and C12; we obtain
C13: OK([2,2])
Use C13 and apply R9, R11(a) and R16; we obtain At(Agent, [2,2]), Orientation(Agent, [2,2]) = 0.
Apply R21 and the And-Elimination rule to ¬Breeze([2,2]), and apply R24 and the And-Elimination rule to ¬Smell([2,2]); we obtain
C14: ¬At(Pit, [2,1]), ¬At(Pit, [1,2]), ¬At(Pit, [2,2]), ¬At(Pit, [2,3]), ¬At(Pit, [3,2])
C15: ¬At(Wumpus, [2,1]), ¬At(Wumpus, [1,2]), ¬At(Wumpus, [2,2]), ¬At(Wumpus, [2,3]), ¬At(Wumpus, [3,2])
Apply R26 to C14 and C15; we obtain
C16: OK([2,3]), OK([3,2])
Apply R11(a) and R16; we obtain At(Agent, [3,2]).
Apply R19 to Breeze([3,2]); we obtain
C17: At(Pit, [2,2]) ∨ At(Pit, [2,3]) ∨ At(Pit, [3,2]) ∨ At(Pit, [3,1]) ∨ At(Pit, [4,2])
Apply unit resolution twice to C17; we obtain
C18: At(Pit, [3,2]) ∨ At(Pit, [3,1]) ∨ At(Pit, [4,2])

121 For safety, the agent goes back to [2,2].
Use C13 and apply R9, R12(a) and R16; we obtain At(Agent, [2,2]), Orientation(Agent, [2,2]) = 180.
Use C16 and apply R9, R11(b) and R16; we obtain At(Agent, [2,3]), Orientation(Agent, [2,3]) = 90.
Apply R3; we obtain
C19: AtGoldRoom([2,3])
Apply R32; we obtain
C20: Holding(Gold, Result(Grab, s))
The rest of the question is how to go back safely.

122 Inference in First-order Logic
Unification (統一) Forward Chaining (前向き推論) Backward Chaining (後ろ向き推論) 122

123 premise inference Inference engine substitution match unify interaction attempt prove conclusion consequence

124 Seven inference rules for propositional Logic
R(1) Modus Ponens: from α ⇒ β and α, infer β.
R(2) And-Elimination: from α1 ∧ α2 ∧ … ∧ αn, infer αi.
R(3) And-Introduction: from α1, α2, …, αn, infer α1 ∧ α2 ∧ … ∧ αn.
R(4) Or-Introduction: from αi, infer α1 ∨ α2 ∨ … ∨ αn.
R(5) Double-Negation Elimination: from ¬¬α, infer α.
R(6) Unit Resolution: from α ∨ β and ¬β, infer α.
R(7) Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ.
Logic connectives: ¬, ∧, ∨, ⇒, ⇔

125 The three new inference rules
R(8) Universal Elimination: for any sentence α, variable v, and ground term g (a ground term is a term that contains no variables):
∀v α ⊢ SUBST({v/g}, α)
e.g., from ∀x Likes(x, IceCream), we can use the substitution {x/Rose} and infer Likes(Rose, IceCream).
R(9) Existential Elimination: for any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base:
∃v α ⊢ SUBST({v/k}, α)
e.g., from ∃x Kill(x, Victim), we can infer Kill(Murderer, Victim), as long as Murderer does not appear elsewhere in the knowledge base.
R(10) Existential Introduction: for any sentence α, variable v that does not occur in α, and ground term g that does occur in α:
α ⊢ ∃v SUBST({g/v}, α)
e.g., from Likes(Rose, IceCream) we can infer ∃x Likes(x, IceCream).

126 Example of proof (証明)
Bob is a buffalo | 1. Buffalo(Bob) --f1
Pat is a pig | 2. Pig(Pat) --f2
Buffaloes run faster than pigs | 3. ∀x, y Buffalo(x) ∧ Pig(y) ⇒ Faster(x, y) --r1
To prove: Bob runs faster than Pat.
Apply R(3) to f1 and f2 | 4. Buffalo(Bob) ∧ Pig(Pat) --f3 (And-Introduction)
Apply R(8) to r1 with {x/Bob, y/Pat} | 5. Buffalo(Bob) ∧ Pig(Pat) ⇒ Faster(Bob, Pat) --f4 (Universal-Elimination)
Apply R(1) to f3 and f4 | 6. Faster(Bob, Pat) --f5 (Implication-Elimination)

127 A Reasoning System Rule Base Working Memory Interaction with
A new conclusion A new fact, p Interaction with Inference Engine Matching A query, q An answer, yes/no Fire a Rule Select a Rule Acting 127

128 Unify
The unification function, Unify, takes two atomic sentences p and q and returns a substitution that would make p and q look the same. A substitution θ unifies atomic sentences p and q if pθ = qθ.
For example:
p: Knows(John, x), q: Knows(John, Jane), θ: {x/Jane}
p: Knows(John, x), q: Knows(y, OJ), θ: {y/John, x/OJ}
p: Knows(John, x), q: Knows(y, Mother(y)), θ: {y/John, x/Mother(John)}
Premise 前提
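A minimal unification sketch over the tuple term representation. Variables are written as strings starting with `?` (an assumption, echoing the `?x` notation of the later string-matching slide); the occurs check is omitted to keep it short.

```python
# Minimal unification of tuple-encoded atomic sentences.

def unify(p, q, theta=None):
    if theta is None:
        theta = {}
    if theta is False:
        return False
    if p == q:
        return theta
    if isinstance(p, str) and p.startswith("?"):
        return unify_var(p, q, theta)
    if isinstance(q, str) and q.startswith("?"):
        return unify_var(q, p, theta)
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for a, b in zip(p, q):            # unify argument lists element by element
            theta = unify(a, b, theta)
            if theta is False:
                return False
        return theta
    return False

def unify_var(var, x, theta):
    if var in theta:                      # variable already bound: unify its value
        return unify(theta[var], x, theta)
    return {**theta, var: x}

print(unify(("Knows", "John", "?x"), ("Knows", "?y", "OJ")))
# {'?y': 'John', '?x': 'OJ'}
```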

129 String matching: string1 = string2
e.g. “rose” = “rose”: string1.equals(string2)
“I am Rose” = “I am Rose”
“I am ?x” = “I am Rose”: ?x = Rose
“I am ?x” = “?y am Rose”: I = ?y, am = am, ?x = Rose
e.g. “?x is ?y and ?x” = “Rose is rose and ?y”: ?x = Rose, ?y = rose, ?x = ?y
e.g. husband(father(?x), Mike) = husband(father(John), Mike): ?x = John

130 Forward chaining
If we start with the sentences in the knowledge base and generate new conclusions that in turn can allow more inferences to be made, this is called forward chaining.
TELL: when a new fact p is added (told) to the KB,
for each rule such that p unifies with a premise,
if the other premises are known, then add the conclusion to the KB and continue chaining.
- When a new fact is observed, find the inferences that best fit the fact.
- Start from the facts and obtain conclusions through the rules.
- A newly obtained conclusion can be used in the next inference just like a fact.
- From the fact "A is B" and the rule "if B then C", this style of inference derives the conclusion "A is C".
Forward chaining is usually used when a new fact is added to the database and we want to generate its consequences. It is data driven.
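The TELL loop described above can be sketched as follows, over ground facts (no unification, to stay short); when a fact arrives, every rule whose premises are all known fires, and the new conclusions are chained on in turn:

```python
# Sketch of forward chaining to a fixpoint over ground facts.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion), premises a set of ground facts."""
    known = set(facts)
    agenda = list(facts)
    while agenda:
        agenda.pop()                       # process one pending fact
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)      # fire the rule
                agenda.append(conclusion)  # and chain on the new conclusion
    return known

rules = [({"A is B"}, "A is C"), ({"A is C"}, "A is D")]
print(forward_chain(["A is B"], rules))
```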

131 Forward chaining example
Let us add facts r1, r2, r3, f1, f2, f3 in turn into the KB (∀x, y, z):
r1. Buffalo(x) ∧ Pig(y) ⇒ Faster(x, y)
r2. Pig(y) ∧ Slug(z) ⇒ Faster(y, z)
r3. Faster(x, y) ∧ Faster(y, z) ⇒ Faster(x, z)
f1. Buffalo(Bob) [r1-c1, Bob/x, yes]
f2. Pig(Pat) [r1-c2, Pat/y, yes] ⇒ f4. Faster(Bob, Pat)
f3. Slug(Steve) [r2-c2, Steve/z, yes]
[r2, f2, f3, Pat/y, Steve/z, yes] ⇒ f5. Faster(Pat, Steve)
[r3, f4, f5, Bob/x, Pat/y, Steve/z, yes] ⇒ f6. Faster(Bob, Steve)

132 Backward chaining
It is to start with something we want to prove, find implication sentences that would allow us to conclude it, and then attempt to establish their premises in turn. This is called backward chaining.
ASK: when a query q is asked,
if a matching fact q' is known, return the unifier;
for each rule whose consequent q' matches q, attempt to prove each premise of the rule by backward chaining.
- Inference that verifies whether a given hypothesis holds in the current set of assertions.
- Start from the goal; if the goal is in the set of facts, the inference succeeds.
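The ASK procedure described above can be sketched recursively, again over ground facts only (no unification): a goal succeeds if it is a known fact, or if some rule concludes it and all of that rule's premises can be proved in turn.

```python
# Sketch of backward chaining over ground facts.

def backward_chain(goal, facts, rules):
    if goal in facts:                      # goal is a known fact: success
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(p, facts, rules) for p in premises
        ):
            return True                    # some rule proves the goal
    return False

rules = [({"A is B"}, "A is C"), ({"A is C"}, "A is D")]
print(backward_chain("A is D", {"A is B"}, rules))  # True
```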

133 Backward chaining example
Bob is a buffalo | 1. Buffalo(Bob) --f1
Pat is a pig | 2. Pig(Pat) --f2
Buffaloes run faster than pigs | 3. ∀x, y Buffalo(x) ∧ Pig(y) ⇒ Faster(x, y) --r1
Goal: to prove Faster(Bob, Pat).
Faster(x, y) <- r1 <- Buffalo(x) ∧ Pig(y)
R(2) And-Elimination: subgoals Buffalo(x) and Pig(y)
R(8) Universal Elimination with {x/Bob}: Buffalo(Bob); with {y/Pat}: Pig(Pat)
Both subgoals match known facts, so the goal succeeds with remaining substitution {}.

134 Rules defined in the rule base file: CarShop
rule "CarRule1" if "?x is inexpensive" then "?x is made in Japan"
rule "CarRule2" if "?x is small"
rule "CarRule3" if "?x is expensive" then "?x is a foreign car"
rule "CarRule4" if "?x is big" "?x needs a lot of gas" then "?x is a foreign car"
rule "CarRule5" if "?x is made in Japan" "?x has Toyota's logo" then "?x is a Toyota"
rule "CarRule6" if "?x is made in Japan" "?x is a popular car" then "?x is a Toyota"
rule "CarRule7" if "?x is made in Japan" "?x has Honda's logo" then "?x is a Honda"
rule "CarRule8" "?x has a VTEC engine"
rule "CarRule9" if "?x is a Toyota" "?x has several seats" "?x is a wagon" then "?x is a Carolla Wagon"
rule "CarRule10" "?x is a hybrid car" then "?x is a Prius"
rule "CarRule11" if "?x is a Honda" "?x is stylish" "?x has several color models" then "?x is an Accord Wagon"
rule "CarRule12" if "?x is a Honda" "?x has an aluminium body" "?x has only 2 seats" then "?x is a NSX"
rule "CarRule13" if "?x is a foreign car" "?x is a sports car" "?x is stylish" "?x has several color models" "?x has a big engine" then "?x is a Lamborghini Countach"
rule "CarRule14" "?x is red" then "?x is a Ferrari F50"
rule "CarRule15" "?x is a good face" then "?x is a Jaguar XJ8"

135 Forward Chaining: another example (他の例)
(diagram of rules)

136 Backward Chaining: another example (他の例)
(diagram of rules)

137 Forward/backward chaining demo
Demo applet: agentSoft/ciagent/part1/rule/RuleApplet.html
Vehicles Rule Base:
bicycle: IF vehicleType=cycle AND num_wheels=2 AND motor=no THEN vehicle=Bicycle
tricycle: IF vehicleType=cycle AND num_wheels=3 THEN vehicle=Tricycle
motorcycle: IF vehicleType=cycle AND motor=yes THEN vehicle=Motorcycle
sportsCar: IF vehicleType=automobile AND size=small AND num_doors=2 THEN vehicle=Sports_Car
sedan: IF vehicleType=automobile AND size=medium AND num_doors=4 THEN vehicle=Sedan
miniVan: IF vehicleType=automobile AND num_doors=3 THEN vehicle=MiniVan
SUV: IF vehicleType=automobile AND size=large THEN vehicle=Sports_Utility_Vehicle
Cycle: IF num_wheels<4 THEN vehicleType=cycle
Automobile: IF num_wheels=4 AND motor=yes THEN vehicleType=automobile
Working memory (initially all null): vehicleType, size, num_wheels, num_doors, motor, vehicle
Inputs: size set to medium, num_wheels set to 4, motor set to yes, num_doors set to 3

138 Forward chaining trace log
--- Setting all Vehicles Rule Base variables to null
--- Starting Inferencing Cycle ---
vehicleType value = null, size value = medium, num_wheels value = 4, num_doors value = 3, motor value = yes, vehicle value = null
Testing rule bicycle
Testing rule tricycle
Testing rule motorcycle
Testing rule sportsCar
Testing rule sedan
Testing rule miniVan
Testing rule SUV
Testing rule Cycle
Testing rule Automobile
-- Rules in conflict set: Automobile(2),
Firing rule Automobile
Testing rule bicycle
Testing rule tricycle
Testing rule motorcycle
Testing rule sportsCar
Testing rule sedan
Testing rule miniVan
Testing rule SUV
Testing rule Cycle
Testing rule Automobile
-- Rules in conflict set: miniVan(3),
Firing rule miniVan
-- Rules in conflict set:
vehicleType value = automobile, size value = medium, num_wheels value = 4, num_doors value = 3, motor value = yes, vehicle value = MiniVan
--- Ending Inferencing Cycle ---

139 Backward chaining trace log
--- Starting Demo BackwardChain ---
vehicleType value = null
size value = medium
num_wheels value = 4
num_doors value = 3
motor value = yes
vehicle value = null
The goal is set as MiniVan
Evaluating rule bicycle
Evaluating rule Cycle
Rule Cycle is false, can't set vehicleType
Evaluating rule Automobile
Rule Automobile is true, setting vehicleType: = automobile
Rule bicycle is false, can't set vehicle
Evaluating rule tricycle
Rule tricycle is false, can't set vehicle
Evaluating rule motorcycle
Rule motorcycle is false, can't set vehicle
Evaluating rule sportsCar
Rule sportsCar is false, can't set vehicle
Evaluating rule sedan
Rule sedan is false, can't set vehicle
Evaluating rule miniVan
Rule miniVan is true, setting vehicle: = MiniVan
+++ Found Solution for goal: vehicle
--- Stopping Demo BackwardChain! ---
vehicleType value = automobile
size value = medium
num_wheels value = 4
num_doors value = 3
motor value = yes
vehicle value = MiniVan

140 Understand the forward chaining algorithm and think of an example.
Quiz: Understand the forward chaining algorithm and think of an example. Homework: Understand the backward chaining algorithm and think of an example.

141 10. A rule-based system: a forward-chaining rule-based system

142 Reasoning with Rules
Knowledge represented as if-then rules is:
- Natural and easy to understand
- Each rule is a standalone piece of knowledge, so the knowledge base is easy to update
Forward chaining: produces new facts (production rules)
Backward chaining: deduces whether statements are true or not
Expert system (knowledge-based system): a rule-based processing system

143 Major Elements
- A rule base: the set of if-then rules
- A working memory: known facts and/or derived facts
- An inference engine: contains the reasoning logic used to process the rules and data

144 A Rule-based System
[Diagram: the inference engine sits between the rule base and the working memory. A new fact p or a query q comes in; the engine matches rules against the working memory, selects a rule, fires it, and acts, producing a new conclusion or an answer (yes/no).]

145 Forward chaining
If we start with the sentences in the knowledge base and generate new conclusions that can in turn allow more inferences to be made, this is called forward chaining.

TELL: when a new fact p is added (told) to the KB,
  for each rule such that p unifies with a premise,
    if the other premises are known,
      then add the conclusion to the KB and continue chaining.

- When a new fact is observed, find the inferences that best fit it.
- Start from the facts and derive conclusions by applying the rules.
- A newly derived conclusion can be used in further inference, just like a fact.
- Example: from the fact "A is B" and the rule "if B then C", derive the conclusion "A is C".

Forward chaining is usually used when a new fact is added to the database and we want to generate its consequences. It is data driven.

146 Forward chaining example
Let us add rules r1, r2, r3 and facts f1, f2, f3 in turn into the KB. The rules are implicitly universally quantified (∀x, y, z).

r1. Buffalo(x) ∧ Pig(y) ⇒ Faster(x,y)
r2. Pig(y) ∧ Slug(z) ⇒ Faster(y,z)
r3. Faster(x,y) ∧ Faster(y,z) ⇒ Faster(x,z)

f1. Buffalo(Bob)  [unifies with r1's first premise: Bob/x]
f2. Pig(Pat)      [unifies with r1's second premise: Pat/y] ⇒ f4. Faster(Bob, Pat)
f3. Slug(Steve)   [unifies with r2's second premise: Steve/z; with f2, r2 fires with Pat/y, Steve/z] ⇒ f5. Faster(Pat, Steve)
[r3 with f4 and f5: Bob/x, Pat/y, Steve/z] ⇒ f6. Faster(Bob, Steve)
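The derivation above can be reproduced with a small Java sketch (our own illustration, not the course's code): facts are stored as strings, and the three rules are applied over and over until no new fact appears, which is exactly the forward-chaining loop described on the previous slide.

```java
import java.util.*;

public class ForwardChainDemo {
    // Exhaustively apply r1-r3 to the initial facts until a fixpoint is reached.
    static Set<String> derive() {
        Set<String> facts = new LinkedHashSet<>(
                List.of("Buffalo(Bob)", "Pig(Pat)", "Slug(Steve)"));
        boolean changed = true;
        while (changed) {                      // repeat until no new fact is derived
            changed = false;
            List<String> snap = new ArrayList<>(facts);
            for (String a : snap)
                for (String b : snap) {
                    // r1: Buffalo(x) ∧ Pig(y) ⇒ Faster(x,y)
                    if (a.startsWith("Buffalo(") && b.startsWith("Pig("))
                        changed |= facts.add("Faster(" + args(a)[0] + "," + args(b)[0] + ")");
                    // r2: Pig(y) ∧ Slug(z) ⇒ Faster(y,z)
                    if (a.startsWith("Pig(") && b.startsWith("Slug("))
                        changed |= facts.add("Faster(" + args(a)[0] + "," + args(b)[0] + ")");
                    // r3: Faster(x,y) ∧ Faster(y,z) ⇒ Faster(x,z)
                    if (a.startsWith("Faster(") && b.startsWith("Faster(")
                            && args(a)[1].equals(args(b)[0]))
                        changed |= facts.add("Faster(" + args(a)[0] + "," + args(b)[1] + ")");
                }
        }
        return facts;
    }

    // Extract the argument list of a ground literal such as "Faster(Bob,Pat)".
    static String[] args(String f) {
        return f.substring(f.indexOf('(') + 1, f.length() - 1).split(",");
    }

    public static void main(String[] a) {
        System.out.println(ForwardChainDemo.derive());
    }
}
```

Running it derives f4, f5 and then f6, Faster(Bob,Steve), just as in the hand trace above.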

147 Rules defined in the rule base file: CarShop
rule "CarRule1"
  if "?x is inexpensive"
  then "?x is made in Japan"
rule "CarRule2"
  if "?x is small"
  then "?x is made in Japan"
rule "CarRule3"
  if "?x is expensive"
  then "?x is a foreign car"
rule "CarRule4"
  if "?x is big" "?x needs a lot of gas"
  then "?x is a foreign car"
rule "CarRule5"
  if "?x is made in Japan" "?x has Toyota's logo"
  then "?x is a Toyota"
rule "CarRule6"
  if "?x is made in Japan" "?x is a popular car"
  then "?x is a Toyota"
rule "CarRule7"
  if "?x is made in Japan" "?x has Honda's logo"
  then "?x is a Honda"
rule "CarRule8"
  if "?x is made in Japan" "?x has a VTEC engine"
  then "?x is a Honda"
rule "CarRule9"
  if "?x is a Toyota" "?x has several seats" "?x is a wagon"
  then "?x is a Carolla Wagon"
rule "CarRule10"
  if "?x is a Toyota" "?x is a hybrid car"
  then "?x is a Prius"
rule "CarRule11"
  if "?x is a Honda" "?x is stylish" "?x has several color models"
  then "?x is an Accord Wagon"
rule "CarRule12"
  if "?x is a Honda" "?x has an aluminium body" "?x has only 2 seats"
  then "?x is a NSX"
rule "CarRule13"
  if "?x is a foreign car" "?x is a sports car" "?x is stylish" "?x has several color models" "?x has a big engine"
  then "?x is a Lamborghini Countach"
rule "CarRule14"
  if "?x is a foreign car" "?x is a sports car" "?x is red"
  then "?x is a Ferrari F50"
rule "CarRule15"
  if "?x is a foreign car" "?x is a good face"
  then "?x is a Jaguar XJ8"

148 Forward Chaining: assume the working memory initially contains the following facts
my-car is inexpensive
my-car has a VTEC engine
my-car is stylish
my-car has several color models
my-car has several seats
my-car is a wagon

149 Forward Chaining

150 An example run of Forward Chaining

Rules (summary):
CarRule1 [?x is inexpensive] -> ?x is made in Japan
・・・
CarRule15 [?x is a foreign car, ?x is a good face] -> ?x is a Jaguar XJ8

Trace:
apply rule:CarRule1
Success: my-car is made in Japan
ADD:my-car is made in Japan
apply rule:CarRule2
apply rule:CarRule8
Success: my-car is a Honda
ADD:my-car is a Honda
apply rule:CarRule9
apply rule:CarRule10
apply rule:CarRule11
Success: my-car is an Accord Wagon
ADD:my-car is an Accord Wagon
apply rule:CarRule12
apply rule:CarRule15
Working Memory[my-car is inexpensive, my-car has a VTEC engine, my-car is stylish, my-car has several color models, my-car has several seats, my-car is a wagon, my-car is made in Japan, my-car is a Honda, my-car is an Accord Wagon]
No rule produces a new assertion

151 A rule base

152 The Forward Chaining program

RuleBase class - forwardChain()

// Method that performs forward inference
public void forwardChain() {
    boolean newAssertionCreated;
    // Continue until no new assertion is created
    do {
        newAssertionCreated = false;
        for (int i = 0; i < rules.size(); i++) {
            Rule aRule = (Rule) rules.elementAt(i);
            System.out.println("apply rule:" + aRule.getName());
            Vector antecedents = aRule.getAntecedents();
            String consequent = aRule.getConsequent();
            Vector bindings = wm.matchingAssertions(antecedents);
            if (bindings != null) {
                for (int j = 0; j < bindings.size(); j++) {
                    // Instantiate the consequent
                    String newAssertion =
                        instantiate(consequent, (Hashtable) bindings.elementAt(j));
                    // Success if it is not already in working memory
                    if (!wm.contains(newAssertion)) {
                        System.out.println("Success: " + newAssertion);
                        wm.addAssertion(newAssertion);
                        newAssertionCreated = true;
                    }
                }
            }
        }
        System.out.println("Working Memory" + wm);
    } while (newAssertionCreated);
    System.out.println("No rule produces a new assertion");
}

Take rules out of the rule base one at a time and match them against the facts in working memory. If matching succeeds and produces variable bindings, instantiate the variables.

153 In a Rule-based System

public class RuleBaseSystem {
    static RuleBase rb;
    public static void main(String args[]) {
        rb = new RuleBase();
        rb.forwardChain();
    }
}

154 Rule-base System Examples
Rule representation (name, antecedents, consequent):

rule "CarRule1"
  if "?x is inexpensive"
  then "?x is made in Japan"
rule "CarRule4"
  if "?x is big" "?x needs a lot of gas"
  then "?x is a foreign car"

class WorkingMemory {
    Vector assertions;
    WorkingMemory() {
        assertions = new Vector();
    }
    .......
}

class Rule {
    String name;
    Vector antecedents;
    String consequent;
    Rule(String theName, Vector theAntecedents, String theConsequent) {
        this.name = theName;
        this.antecedents = theAntecedents;
        this.consequent = theConsequent;
    }
}

class RuleBase {
    String fileName;
    WorkingMemory wm;
    Vector rules;
    RuleBase() {
        fileName = "CarShop.data";
        wm = new WorkingMemory();
        wm.addAssertion("my-car is inexpensive");
        wm.addAssertion("my-car is a wagon");
        rules = new Vector();
        loadRules(fileName);
    }
}

155 Rule-base System Examples
public class RuleBaseSystem {
    static RuleBase rb;
    public static void main(String args[]) {
        rb = new RuleBase();
        rb.forwardChain();
    }
}

In class WorkingMemory:

public Vector matchingAssertions(Vector theAntecedents) {
    Vector bindings = new Vector();
    return matchable(theAntecedents, 0, bindings);
}

private Vector matchable(Vector theAntecedents, int n, Vector bindings) {
    // recursive method
}

public void addAssertion(String theAssertion) {
    System.out.println("ADD:" + theAssertion);
    assertions.addElement(theAssertion);
}

156 RuleBaseSystem and forwardChain()

public class RuleBaseSystem {
    static RuleBase rb;
    public static void main(String args[]) {
        rb = new RuleBase();
        rb.forwardChain();
    }
}

public void forwardChain() {
    boolean newAssertionCreated;
    // Continue until no new assertion is created
    do {
        newAssertionCreated = false;
        for (int i = 0; i < rules.size(); i++) {
            Rule aRule = (Rule) rules.elementAt(i);
            System.out.println("apply rule:" + aRule.getName());
            Vector antecedents = aRule.getAntecedents();
            String consequent = aRule.getConsequent();
            Vector bindings = wm.matchingAssertions(antecedents);
            if (bindings != null) {
                for (int j = 0; j < bindings.size(); j++) {
                    // Instantiate the consequent
                    String newAssertion =
                        instantiate(consequent, (Hashtable) bindings.elementAt(j));
                    // Success if it is not already in working memory
                    if (!wm.contains(newAssertion)) {
                        System.out.println("Success: " + newAssertion);
                        wm.addAssertion(newAssertion);
                        newAssertionCreated = true;
                    }
                }
            }
        }
        System.out.println("Working Memory" + wm);
    } while (newAssertionCreated);
    System.out.println("No rule produces a new assertion");
}

157 matchable() Method

public Vector matchingAssertions(Vector theAntecedents) {
    Vector bindings = new Vector();
    return matchable(theAntecedents, 0, bindings);
}

private Vector matchable(Vector theAntecedents, int n, Vector bindings) {
    if (n == theAntecedents.size()) {
        return bindings;
    } else if (n == 0) {
        boolean success = false;
        for (int i = 0; i < assertions.size(); i++) {
            Hashtable binding = new Hashtable();
            if ((new Matcher()).matching(
                    (String) theAntecedents.elementAt(n),
                    (String) assertions.elementAt(i),
                    binding)) {
                bindings.addElement(binding);
                success = true;
            }
        }
        if (success) {
            return matchable(theAntecedents, n + 1, bindings);
        } else {
            return null;
        }
    } else {
        boolean success = false;
        Vector newBindings = new Vector();
        for (int i = 0; i < bindings.size(); i++) {
            for (int j = 0; j < assertions.size(); j++) {
                if ((new Matcher()).matching(
                        (String) theAntecedents.elementAt(n),
                        (String) assertions.elementAt(j),
                        (Hashtable) bindings.elementAt(i))) {
                    newBindings.addElement(bindings.elementAt(i));
                    success = true;
                }
            }
        }
        if (success) {
            return matchable(theAntecedents, n + 1, newBindings);
        } else {
            return null;
        }
    }
}

158 Quiz: When the rule representation is changed as below, how should the program RuleBaseSystem.java be changed?

Current representation (name, antecedents, consequent):
  rule "CarRule4"
    if "?x is big" "?x needs a lot of gas"
    then "?x is a foreign car"

New representation (id, antecedents joined by AND, consequent):
  id r4
    if "?x is big" AND "?x needs a lot of gas"
    then "?x is a foreign car"

159 STRIPS examples / Exercises

160 STRIPS: describing goals and state
STRIPS: Stanford Research Institute Problem Solver
Basic approach, as in GPS (the General Problem Solver):
1. Find a "difference" (something in G that is not provable in S0).
2. Find a relevant operator f for reducing the difference.
3. Achieve the precondition of f; apply f; from the resultant state, achieve G.

161 STRIPS planning

STRIPS uses logical formulas to represent the states S0, G, P, etc., and a description of each operator f.
[The formulas on this slide were images and are not in the transcript.]
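The missing formulas can be reconstructed in standard STRIPS notation (a hedged sketch; the slide's original symbols may differ). A state S is a set (conjunction) of literals, and an operator f carries a precondition list and add/delete lists:

```latex
f = \langle \mathrm{pre}(f),\; \mathrm{add}(f),\; \mathrm{del}(f) \rangle
\qquad
f \text{ is applicable in } S \iff \mathrm{pre}(f) \subseteq S
\qquad
\mathrm{result}(S, f) = \bigl(S \setminus \mathrm{del}(f)\bigr) \cup \mathrm{add}(f)
```

That is, applying f removes the delete-list literals from the state and then inserts the add-list literals, which is exactly how the move, go, pickup, and putdown operators on the following slides behave.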

162 Example: The move operator

163 Example 1: The move operator

f: move(x, y, z)
P (precondition): On(x,y) ∧ Clear(x) ∧ Clear(z)
add: On(x,z), Clear(y)
del: On(x,y), Clear(z)
f(P): On(x,z) ∧ Clear(x) ∧ Clear(y)

With the substitution x/B, y/A, z/Fl: S0 → P (the precondition holds in S0) and f(P) → G (applying f achieves G).
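The move operator can be encoded directly as a state transformer (an illustrative Java sketch with our own class and method names; a state is simply a set of literal strings):

```java
import java.util.*;

public class MoveOperator {
    // Apply move(x, y, z): requires On(x,y), Clear(x), Clear(z);
    // adds On(x,z), Clear(y); deletes On(x,y), Clear(z).
    static Set<String> move(Set<String> s, String x, String y, String z) {
        List<String> pre = List.of("On(" + x + "," + y + ")",
                                   "Clear(" + x + ")", "Clear(" + z + ")");
        if (!s.containsAll(pre))
            throw new IllegalStateException("precondition fails for move");
        Set<String> next = new HashSet<>(s);
        next.remove("On(" + x + "," + y + ")");   // delete list
        next.remove("Clear(" + z + ")");
        next.add("On(" + x + "," + z + ")");      // add list
        next.add("Clear(" + y + ")");
        return next;
    }

    public static void main(String[] a) {
        // Example 1: B is on A; apply move with x/B, y/A, z/Fl.
        Set<String> s0 = new HashSet<>(List.of("On(B,A)", "Clear(B)", "Clear(Fl)"));
        Set<String> s1 = move(s0, "B", "A", "Fl");
        System.out.println(s1);  // now contains On(B,Fl) and Clear(A)
    }
}
```

Note that the operator deletes Clear(Fl) here because the slide's delete list is applied literally; treating the floor as always clear is a refinement the lecture discusses separately.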

164 Example 2: robot operators (Add object’s location sentence)
go(x, y): "go from x to y"
  pre: AtR(x)
  del: AtR(x)
  add: AtR(y)
  eff: ¬AtR(x) ∧ AtR(y)

pickup(x): "pick up x at y"
  pre: At(x, y), AtR(y), Empty-H
  del: At(x, y), Empty-H
  add: Holding(x)
  eff: Holding(x)
  (At(x, y) means the object x is at the room y.)

putdown(x): "put down x at y"
  pre: Holding(x), AtR(y)
  del: Holding(x)
  add: At(x, y)
  eff: ¬Holding(x) ∧ At(x, y)

165 Example 2: Get the slipper
S0: AtR(A) ∧ Empty-H ∧ At(slipper1, B)
G0: AtR(A) ∧ Holding(slipper1)

An operator is usually written in the schema form.

Difference: Holding(slipper1) is in G0 but not in S0.
Relevant operator instance: pickup(slipper1) (schema pickup(x), x/slipper1)
  Pre: At(slipper1, y), AtR(y), Empty-H
  With y/B → Pre': At(slipper1, B), AtR(B), Empty-H
But AtR(B) is not in S0! To reduce the difference between AtR(A) and AtR(B):
Relevant operator instance: go(x, B) with x/A → go(A, B)
  Pre: AtR(A)
  Del: AtR(A)
  Add: AtR(B)

166 Example 3: Tree representation
S0  G1  S1  G0 S0: AtR(A) Empty-H At(slipper1, B) S0->G0 S1: AtR(B) Holding(slipper1) S0->G1 AtR(y) Empty-H At(slipper1, y) G1->S1 step2:pickup(slipper1) S1->G0 y/B x/B Pick up slipper1 at B S3->G4 AtR(B) G4->S4 Step3:go(B,A) S4->G0 y/B x/A G0: AtR(A) Holding(slipper1) S0->G2 AtR(A) G2->S2 step1:go(A,B) S2->G1 G1: AtR(B) Empty-H At(slipper1, B) 166

167 Exercise: The move operator
[Figure: start state S0 (A on C; B on the floor) and goal state G0 (the stack A on B on C).]

168 Exercise: The move operator
S0: Clear(Fl) ∧ Clear(A) ∧ On(A, C) ∧ On(C, Fl) ∧ Clear(B) ∧ On(B, Fl)   (A on C; B on the floor)
G: Clear(Fl) ∧ Clear(A) ∧ On(A, B) ∧ On(B, C) ∧ On(C, Fl)   (A on B on C)

Working backward from G:

1. Difference On(A, B): f: Move(A, y2, B) with y2/Fl
   pre: Clear(A), Clear(B), On(A, Fl)
   eff: add On(A, B); del Clear(B), On(A, Fl)
2. Difference On(B, C): f: Move(B, y1, C) with y1/Fl
   pre: Clear(B), On(B, Fl), Clear(C)
   eff: add On(B, C); del On(B, Fl), Clear(C)
3. Difference Clear(C): f: Move(x1, C, z1) with x1/A, z1/Fl
   pre: Clear(A), Clear(Fl), On(A, C)
   eff: add Clear(C), On(A, Fl); del On(A, C)

Forward execution order: Move(A, C, Fl), Move(B, Fl, C), Move(A, Fl, B).

169 Example 3: Block world
[Figure: three floor positions F1, F2, F3. Start state S0: the stack A on B on C at F1. Goal state G0: the same stack A on B on C moved to F3.]

170 Example 3: Block world

S0: On(C, F1) ∧ On(B, C) ∧ On(A, B) ∧ Clear(A) ∧ Clear(F2) ∧ Clear(F3)
G0: On(C, F3) ∧ On(B, C) ∧ On(A, B) ∧ Clear(A) ∧ Clear(F1) ∧ Clear(F2)

Operator: Move(x, y, z) "Move x from y to z", with a precondition, a delete list, and an add list.

Search chain: S0 → G2 → G1 → G'0 → G''0 → G0

171 S0---------->G’0 S0---------->G1 G1---->S1 S1--> G’0
Sub-goal: to achieve one of the conjuncts, G’0 S0: On(C, Fl)   On(B, C)   On(A, B)  Clear(A)  Clear(F2)  Clear(F3) S >G’0 diff: On(C, F3) G’0: On(C, F3) On(B, A) On(A, F2) Clear(B) Clear(C) Clear(F1) G0: On(C, F3) On(B, C) On(A, B) Clear(A) Clear(F2) Clear(F1) G’0: On(C, F3) y/F1 S >G1 pre: On(C, F1), Clear(F3), Clear(C) G1---->S1 f: Move(C, y, F3) S1--> G’0 f3: Move(C, F1, F3) x1/B, z1/A S >G2 pre: On(B, C), Clear(A), Clear(B) G2---->S2 f: Move(x1, C, z1) S2-->G1 f2: Move(B, C, A) x2/A, z2/F2 S >G3 pre: On(A, B), Clear(A), Clear(F2) G3---->S3 f: Move(x2, B, z2) S3-->G2 171 f1: Move(A, B, F2)

172 S0---------->G’’0 G1---->S2 S0---------->G1 S2-->G0
Sub-goal: to achieve one of the conjuncts, G’’0 S0=G’0: On(C, F3)   On(B, A)   On(A, F2)   Clear(B)   Clear(C)   Clear(F1) G’’0: On(C, F3)   On(B, C)   On(A, F2)   Clear(B)   Clear(A)   Clear(F1) G0: On(C, F3) On(B, C) On(A, B) Clear(A) Clear(F2) Clear(F1) S >G’’0 diff: On(B, C) G’’0: On(B, C) y/A G1---->S2 f: Move(B, y, C) S >G1 pre: On(B, A), Clear(C), Clear(B) S2-->G0 f5: Move(B, A, C) Sub-goal: to achieve one of the conjuncts, G’’’0 S0=G’’0: On(C, F3)   On(B, C)   On(A, F2)   Clear(B)   Clear(A)   Clear(F1) G’’’0: On(C, F3) On(B, C) On(A, B) Clear(A) Clear(F1) Clear(F2) G0: On(C, F3) On(B, C) On(A, B) Clear(A) Clear(F1) Clear(F2) S >G’’’0 diff: On(A, B) G’’’0: On(A, B) = y/F2 S >G1 pre: On(A, F2), Clear(A), Clear(B) G1---->S2 f: Move(A, y, B) S2-->G0 172 f4: Move(A, F2, B)

173 A sequence plan structure
[Diagram: a sequence plan structure from start (T) to finish (NIL). Nodes 1-3 are Move(A,B,F2), Move(B,C,A), Move(C,F1,F3); nodes 4-5 are Move(B,A,C), Move(A,F2,B); the arcs carry the preconditions and effects On(...) and Clear(...).]

Plan sequence: Move(A,B,F2), Move(B,C,A), Move(C,F1,F3), Move(B,A,C), Move(A,F2,B)
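The five-move plan sequence can be validated mechanically (an illustrative Java sketch, not the lecture's code): apply Move's delete and add lists step by step from S0 and compare the result with G0.

```java
import java.util.*;

public class PlanCheck {
    // Move(x, y, z): pre On(x,y), Clear(x), Clear(z);
    // del On(x,y), Clear(z); add On(x,z), Clear(y).
    static Set<String> move(Set<String> s, String x, String y, String z) {
        if (!s.containsAll(List.of("On(" + x + "," + y + ")",
                                   "Clear(" + x + ")", "Clear(" + z + ")")))
            throw new IllegalStateException("precondition fails for Move(" + x + "," + y + "," + z + ")");
        Set<String> n = new HashSet<>(s);
        n.removeAll(List.of("On(" + x + "," + y + ")", "Clear(" + z + ")"));  // delete list
        n.addAll(List.of("On(" + x + "," + z + ")", "Clear(" + y + ")"));     // add list
        return n;
    }

    // Execute the plan sequence from S0 of Example 3.
    static Set<String> run() {
        Set<String> s = new HashSet<>(List.of(
                "On(C,F1)", "On(B,C)", "On(A,B)", "Clear(A)", "Clear(F2)", "Clear(F3)"));
        s = move(s, "A", "B", "F2");
        s = move(s, "B", "C", "A");
        s = move(s, "C", "F1", "F3");
        s = move(s, "B", "A", "C");
        s = move(s, "A", "F2", "B");
        return s;
    }

    public static void main(String[] a) {
        Set<String> goal = Set.of(
                "On(C,F3)", "On(B,C)", "On(A,B)", "Clear(A)", "Clear(F1)", "Clear(F2)");
        System.out.println(run().equals(goal));  // prints true
    }
}
```

Every precondition holds at the moment its operator is applied, and the final state equals G0, confirming the plan derived in Examples 3 above.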

