Last active: September 7, 2024 19:00
OMSCS KBAI Notes
Ebook: https://gatech.instructure.com/courses/193244/files/folder/KBAI-Ebook
GDrive notes #1: https://drive.google.com/file/d/1T2TNnNQbfkjR1kyiwC3hgLEjTFiMgbC5/view
GDrive notes #2: https://drive.google.com/drive/folders/1f-udSsv9oMgJ8zzP_YuJo2Em0zQgrDuC?usp=sharing
---
Lesson 3: Semantic Networks
1. Knowledge representation, and reasoning using that representation, are the key to problem-solving.
2. Semantic networks are one of many ways to represent knowledge.
3. Pathways along spreading-activation networks could help with memorizing and recalling solutions to recurring problems instead of re-solving them every time.
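A semantic network can be sketched as a set of labeled links between nodes, with spreading activation as a bounded traversal outward from a node. This is a minimal illustration; the node and relation names are made up, not from the course materials.

```python
# Semantic network as (subject, relation, object) links, with a naive
# spreading-activation pass implemented as a bounded breadth-first search.
from collections import deque

links = [
    ("Cat", "is-a", "Mammal"),
    ("Mammal", "is-a", "Animal"),
    ("Cat", "has", "Fur"),
    ("Mammal", "has", "Spine"),
]

def neighbors(node):
    """Nodes directly connected to `node` by any relation, in either direction."""
    out = set()
    for s, _, o in links:
        if s == node:
            out.add(o)
        if o == node:
            out.add(s)
    return out

def spread_activation(start, max_hops=2):
    """Return all nodes reachable from `start` within `max_hops` links."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

print(spread_activation("Cat"))  # within 2 hops: Mammal, Fur, Animal, Spine
```

Activation reaching "Animal" from "Cat" without re-deriving the path each time is the flavor of recall the notes describe.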
---
Lesson 4: Generate and Test
This lesson covers the following topics:
1. Generate and Test is a very commonly used problem-solving method, used by humans and in nature by biological evolution (similar to genetic algorithms).
2. We need knowledge representation and problem-solving methods together to reason about and solve problems.
3. Smart generators and smart testers help prune the multitude of states produced by the combinatorial explosion of successor states, thereby solving otherwise intractable problems efficiently with limited computational resources and limited knowledge of the world, compared to dumb generators and dumb testers.
---
Lesson 5: Means-Ends Analysis and Problem Reduction
This lesson covers the following topics:
1. The knowledge representation of semantic networks works well with Generate and Test, Means-Ends Analysis, and Problem Reduction. These are examples of universal AI methods.
2. Means-ends analysis uses a heuristic to guide the search from the initial state to the goal state. Convergence is not guaranteed, optimality is not guaranteed, and computational efficiency is not guaranteed.
3. Problem reduction is used along with means-ends analysis to help overcome the problems with means-ends analysis.
4. The universal methods have weak coupling between the method and the knowledge representation, and hence are called weak methods, since they make little use of knowledge. Strong AI methods are knowledge-intensive and use knowledge of the world to come up with good solutions efficiently.
Means-Ends Analysis
Approach: For each operator:
* Apply it to the current state
* Calculate the difference between the new state and the goal state
* Select/prefer the move that minimizes the distance between the new state and the goal
Pros/cons:
* Universal AI method
* Costly, and no guarantee of success or efficiency
* Doesn't necessarily bring us closer to the goal
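The loop above can be sketched on a toy number-line domain (invented for illustration): operators move the state, and each step greedily picks the successor closest to the goal. The sketch also shows the weakness the notes mention, since greedy difference-reduction can stall short of the goal.

```python
# Means-ends analysis sketch: at each step, apply every operator and
# keep the successor that most reduces the difference to the goal.
# Greedy hill-climbing like this carries no guarantee of success.
def mea(start, goal, operators, max_steps=50):
    state = start
    path = [state]
    for _ in range(max_steps):
        if state == goal:
            break
        successors = [op(state) for op in operators]
        best = min(successors, key=lambda s: abs(goal - s))
        if abs(goal - best) >= abs(goal - state):
            break  # impasse: no operator reduces the difference
        state = best
        path.append(state)
    return path

ops = [lambda x: x + 3, lambda x: x - 2, lambda x: x * 2]
print(mea(0, 12, ops))  # reaches the goal
print(mea(0, 11, ops))  # stalls short of the goal: no guarantee of success
```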
Problem reduction
* Given a big problem, decompose it into smaller subproblems that are easier to solve, then compose their solutions into a final solution
In general, universal methods are weak
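Problem reduction can be sketched with the classic Tower of Hanoi (a standard textbook example, not taken from these notes): moving n disks reduces to two smaller subproblems plus one primitive move, and the sub-solutions compose into the full solution.

```python
# Problem-reduction sketch: decompose moving n disks into moving n-1
# disks aside, moving the largest disk, and moving n-1 disks back; the
# recursive sub-solutions compose into the final move list.
def hanoi(n, src, dst, spare):
    """Return the move list that transfers n disks from src to dst."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, spare, dst)   # subproblem 1
            + [(src, dst)]                  # primitive move
            + hanoi(n - 1, spare, dst, src))  # subproblem 2

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 2**3 - 1 = 7 moves
```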
---
Lesson 6: Production Systems
Production systems help map percepts in the world into actions. When a production system reaches an impasse, it uses chunking to learn a new rule to overcome that impasse.
Architectures/Layers for KBAI systems
Three layers: Knowledge/Task Level, Algorithm Level, Hardware Level
Example: Watson
* Hardware level: the physical computer
* Algorithm level: searching and decision-making for answers
* Task level: answering the clue based on its knowledge, searching and answering
Architecture + Content = Behavior
Function: Percepts -> Actions
In this model the architecture doesn't change; similar to a computer running programs, the architecture stays fixed while the content varies
E.g. the SOAR architecture
https://en.wikipedia.org/wiki/Soar_(cognitive_architecture)
Production rules: captured in the procedural knowledge in SOAR's memory. Seven example rules for the pitcher problem
The system can reach an impasse where no single course of action is determined
"Chunking" is a learning technique for learning rules that break an impasse (between two rules), using memory
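A minimal sketch of the match-fire-chunk cycle, with invented rule contents (not SOAR's actual rule language): productions map a working-memory state to an action, and when no production fires (an impasse) the agent falls back to slower deliberation and caches the result as a new rule.

```python
# Tiny production-system sketch. `rules` maps a working-memory state to
# an action; an unmatched state is an impasse, resolved by a slow
# problem solver whose answer is "chunked" into a new rule.
rules = {
    frozenset({"light=red"}): "stop",
    frozenset({"light=green"}): "go",
}

def slow_solve(state):
    """Stand-in for deliberate reasoning, used only at an impasse."""
    return "stop" if "light=flashing" in state else "wait"

def act(state):
    key = frozenset(state)
    if key in rules:
        return rules[key]        # a production fires
    action = slow_solve(state)   # impasse: no production matched
    rules[key] = action          # chunking: learn a rule for next time
    return action

print(act({"light=red"}))       # existing rule fires
print(act({"light=flashing"}))  # impasse, then chunked
print(act({"light=flashing"}))  # now answered directly by the learned rule
```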
---
Lesson 7: Frames
Frames represent stereotypes of a certain concept (e.g. a situation, event, etc.) and are composed of slots and fillers. Frames provide default values for the slots and can inherit from one another. Frames are representationally equivalent to semantic nets. In other words, a frame can capture a large amount of information in an organized manner as a packet. Frames enable us to construct a theory of cognitive processing that is both bottom-up and top-down. The data in the frames generates expectations of the world in a cognitively efficient manner.
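The slot/filler structure with defaults and inheritance can be sketched directly; the frame and slot names below are illustrative only.

```python
# Frame sketch: slots with default fillers, plus inheritance from a
# parent frame. A local filler overrides an inherited default.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots  # slot -> filler

    def get(self, slot):
        """Look up a filler locally, else climb the inheritance chain."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

animal = Frame("Animal", legs=4, alive=True)
bird = Frame("Bird", parent=animal, legs=2, flies=True)
penguin = Frame("Penguin", parent=bird, flies=False)  # override a default

print(penguin.get("legs"))   # 2, inherited from Bird
print(penguin.get("flies"))  # False, overridden locally
print(penguin.get("alive"))  # True, inherited from Animal
```

The unfilled slots answered by defaults are what generate the "expectations of the world" the notes mention.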
---
Lesson 8: Learning by Recording Cases
1. Memory is as important as learning/reasoning: we can fetch the answer to similar cases encountered in the past and avoid redoing the non-trivial work of learning and reasoning, thereby saving effort.
2. A case is an encapsulation of a past experience that can be applied to a large number of similar situations in the future. The similarity metric can be as simple as Euclidean distance or a complex metric involving higher dimensions.
3. The k-nearest-neighbor (kNN) method is one way to find the most similar case in memory for a new problem.
4. In some cases, we need to adapt the cases from our memory to fit the requirements of the new problem. We may also need to store cases with qualitative labels along with numeric labels to make the comparison applicable to particular situations.
5. Learning by storing cases in memory has a very strong connection to cognition, since human cognition works in a similar manner: recording cases and applying them to new problems in the real world by exploiting the patterns of regularity in them.
* Given a new problem 'A'
* Retrieve the most similar problem from memory ('B')
* Apply the solution of 'B' to 'A'
Examples:
* Always starting with a main method for a Java project
* Debugging errors using previous cases
* A doctor using similar cases when determining a diagnosis
More objectively: calculate the Euclidean distance between the feature values (e.g. x/y coordinates) and choose the nearest neighbor
We also need methods to adapt past cases to fit the new problem (this is called case-based reasoning, next lesson)
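Nearest-neighbor retrieval over recorded cases can be sketched as follows; the case data (feature coordinates and route labels) is made up for illustration.

```python
# Nearest-neighbor case retrieval: cases are (features, solution) pairs;
# return the solution of the case closest in Euclidean distance.
import math

cases = [
    ((1.0, 2.0), "route-A"),
    ((4.0, 4.0), "route-B"),
    ((7.0, 1.0), "route-C"),
]

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def retrieve(problem):
    """Return the stored solution of the most similar case."""
    features, solution = min(cases, key=lambda c: euclidean(c[0], problem))
    return solution

print(retrieve((3.5, 3.0)))  # closest to (4.0, 4.0) -> "route-B"
```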
---
Lesson 9: Case-Based Reasoning
1. In case-based reasoning, the cognitive agent addresses new problems by adapting or tweaking previously encountered solutions to new, similar but not identical problems.
2. Case-based reasoning has 4 phases: 1) case retrieval, 2) case adaptation, 3) case evaluation, and 4) case storage.
3. One method of case retrieval is the kNN method.
4. Three common ways of case adaptation are: 1) the model-based method, 2) the recursive case-based method, and 3) the rule-based method.
5. Designers often use heuristics for case adaptation. A heuristic is a rule of thumb that works often, but NOT always.
6. Case evaluation can be performed through simulation or, if the cost is not high, through actual execution. It can also be done by building a prototype and testing it, or through careful review of the design by experts.
7. Case storage has 2 kinds of mechanisms to organize information for efficient retrieval: 1) the indexing/tabular method (linear time complexity) and 2) the discrimination tree (logarithmic).
8. Incremental learning allows the addition of a new case, which enables new knowledge structures to be learnt.
9. If the evaluation of a retrieved case fails, it can be adapted and retried; if the failure continues, we need to abandon the case. Sometimes storing failed cases helps us anticipate future problems. Case adaptation is done using a model of the world, using rules, or using recursion.
10. We do not need to store all successful cases, but we do need to store noteworthy and representative cases, so that we get enough utility from the stored cases while keeping the retrieval process tractable. This is also known as the utility problem.
11. Case-based reasoning has a very strong connection with human cognition. It shifts the balance of importance from reasoning to both learning and memory. Case-based reasoning unifies all 3 concepts: learning (to acquire experiences), memory (to store and retrieve experiences), and reasoning (to adapt experiences to similar new problems).
Unlike recording cases, in case-based reasoning the new problem is similar but not identical to a previous case
Two components:
* Case-based: extract something from memory and re-use it
* Reasoning: adapt the solution from memory to fit the new problem
CBR steps: 1) retrieval, 2) adaptation, 3) evaluation (determine how well the solution fits the new problem), 4) storage of the new solution as a case
Assumptions:
* Patterns exist in the world
* Similar problems have similar solutions
Use heuristics: rules that work often but not always (rules of thumb)
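The four CBR steps can be sketched end to end on a toy domain (pricing a pizza by diameter, invented purely for illustration): retrieve the most similar case, adapt its solution with a heuristic, evaluate the result, and store the new case.

```python
# Case-based reasoning sketch: retrieval -> adaptation -> evaluation
# -> storage, on an invented pricing domain.
cases = [(10, 8.0), (14, 12.0)]  # (diameter_inches, price)

def retrieve(diameter):
    """1) Retrieval: nearest case by diameter."""
    return min(cases, key=lambda c: abs(c[0] - diameter))

def adapt(case, diameter):
    """2) Adaptation heuristic: scale the old price by the size ratio."""
    old_d, old_price = case
    return old_price * (diameter / old_d)

def evaluate(price):
    """3) Evaluation: a sanity check standing in for real testing."""
    return 0 < price < 100

def solve(diameter):
    case = retrieve(diameter)
    price = adapt(case, diameter)
    if evaluate(price):
        cases.append((diameter, price))  # 4) Storage
    return price

print(solve(13))  # adapts the 14-inch case to a 13-inch estimate
```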
---
Lesson 10: Incremental Concept Learning
Incremental concept learning is intimately connected with human cognition: instead of being given a large number of examples at once, the agent is given one example at a time and gradually and incrementally learns concepts from those examples.
In ICL, instead of getting a large number of examples, the agent is given one example at a time and gradually learns from these examples (positive and negative)
* Generalize: the concept is expanded to include a positive example/feature
* Specialize: the concept is limited to exclude a negative example/feature
Example: a child learning about animals: the concept of a cat - black cat, orange cat, dog, etc.
Also: the structure of a "foo" in a blocks world: the AI learns about support blocks, which sides can be touching, etc.
Heuristics:
require-link, forbid-link, drop-link, enlarge-set, climb-tree, close-interval
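A minimal sketch of generalize/specialize over one-example-at-a-time input, with the concept held as required and forbidden feature sets. The "cat" features are invented; the drop-link/forbid-link labels in the comments are loose analogies to the heuristics above, not their exact definitions.

```python
# Incremental concept learning sketch. A positive example generalizes
# (drop requirements it lacks, roughly drop-link); a negative example
# specializes (forbid a distinguishing feature, roughly forbid-link).
required = None   # not yet initialized by a first positive example
forbidden = set()

def learn(features, positive):
    global required
    if positive:
        if required is None:
            required = set(features)      # first example defines the concept
        else:
            required &= set(features)     # generalize: drop missing features
    else:
        extra = set(features) - required  # features unique to the negative
        if extra:
            forbidden.add(sorted(extra)[0])  # specialize: forbid one of them

def matches(features):
    f = set(features)
    return required <= f and not (forbidden & f)

learn({"fur", "four-legs", "whiskers", "black"}, positive=True)
learn({"fur", "four-legs", "whiskers", "orange"}, positive=True)   # drops color
learn({"fur", "four-legs", "whiskers", "barks"}, positive=False)   # forbids "barks"
print(matches({"fur", "four-legs", "whiskers", "white"}))  # True
print(matches({"fur", "four-legs", "whiskers", "barks"}))  # False
```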
---
Lesson 11: Classification
Classification is ubiquitous, and humans continuously and constantly perform classification in day-to-day life
It can be top-down or bottom-up.
Top-down (establish and refine): start with a general concept (e.g. animal) and go deeper in the hierarchy
Bottom-up (controlled processing/search): e.g. DJIA price prediction
Equivalence classes reduce the 2^n possible percept combinations into a more manageable number of concepts for the AI agent
Concept hierarchies: e.g. Animal -> Reptile/Mammal/Marsupial, etc.
Axiomatic concepts, prototype concepts, exemplar concepts:
1. Axiomatic: a formal set of necessary and sufficient conditions (like a circle)
2. Prototype: base properties that can sometimes be overridden - e.g. a stool and a folding chair are both chairs
3. Exemplar: defined by implicit abstractions of certain examples. Example: beauty could be a flower, a sunset, a painting. Very hard to define and use in AI
---
Lesson 12: Logic
Logic provides the framework of a formal notation/language for reasoning and inference. In other words, logic provides a formal and precise way of reasoning. It also allows agents to reason formally about initial and goal states, which helps in planning. Deduction is the term used for reasoning from causes to effects; abduction is the term used for reasoning from effects to causes; and induction is generating a general rule, given a cause and its effect.
Why do we need logic?
We have a knowledge base and rules. We might want the agent to prove its answer, or to present only valid solutions.
Soundness: only valid conclusions can be proven
Completeness: all valid conclusions can be proven
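Proving conclusions from a knowledge base can be sketched with forward chaining over propositional Horn rules, repeatedly applying modus ponens until nothing new can be derived. The rules and facts are illustrative; for this restricted rule form the procedure is both sound and complete.

```python
# Forward-chaining sketch: each rule is (set of antecedents, consequent).
# Repeatedly apply modus ponens until the set of facts stops growing.
rules = [
    ({"rain"}, "wet-ground"),
    ({"wet-ground", "cold"}, "ice"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)  # modus ponens
                changed = True
    return facts

print(forward_chain({"rain", "cold"}))  # derives wet-ground, then ice
```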
---