UC Berkeley CS 188 Artificial Intelligence

Assignment code for UC Berkeley CS 188 Artificial Intelligence.

Based on the open course material on edX: BerkeleyX: CS188.1x Artificial Intelligence.

Projects

  1. Project 0: Python Refresher
    • addition.py
    • buyLotsOfFruit.py
    • shopSmart.py
  2. Project 1: Search (Python 3 Version); a minimal graph-search sketch follows this list
    • search.py
    • searchAgents.py
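
The sketch below illustrates the kind of graph search that Project 1's search.py asks for. It assumes the CS 188 SearchProblem interface (getStartState, isGoalState, and getSuccessors returning (successor, action, stepCost) triples); it is an illustrative outline under those assumptions, not the official solution code.

```python
from collections import deque

def breadth_first_graph_search(problem):
    """Return a list of actions reaching a goal state, or [] if none is reachable.

    Sketch only: `problem` is assumed to expose the CS 188 SearchProblem
    interface (getStartState, isGoalState, getSuccessors).
    """
    start = problem.getStartState()
    frontier = deque([(start, [])])   # FIFO frontier of (state, actions-so-far)
    explored = set()                  # states already expanded
    while frontier:
        state, actions = frontier.popleft()
        if problem.isGoalState(state):
            return actions            # goal test at expansion time
        if state in explored:
            continue                  # skip states reached again via a later path
        explored.add(state)
        for successor, action, _step_cost in problem.getSuccessors(state):
            if successor not in explored:
                frontier.append((successor, actions + [action]))
    return []                         # no goal reachable from the start state
```

Swapping the FIFO deque for a stack gives depth-first search, and a cost-ordered priority queue gives uniform cost search or A*; that shared pattern is what the project's search.py builds on.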

Quizzes

  1. Lecture 2: Uninformed Search
    • Quiz 1: Planning Agents vs. Reflex Agents
    • Quiz 2: Safe Passage
    • Quiz 3: State Space Graphs and Search Trees
    • Quiz 4: Depth-First Tree Search
    • Quiz 5: Depth-First Tree Search: Space and Time Complexity
    • Quiz 6: Breadth-First Tree Search
    • Quiz 7: Breadth-First Tree Search: Space and Time Complexity
    • Quiz 9: Which Search Algorithm?
    • Quiz 10: Which Search Algorithm?
    • Quiz 11: Uniform Cost Search
    • Quiz 12: Which Search Algorithm?
    • Quiz 13: Which Search Algorithm?
  2. Lecture 3: Informed Search
    • Quiz 1: Search Execution
    • Quiz 2: Greedy Search
    • Quiz 3: A* Tree Search
    • Quiz 4: A* Tree Search
    • Quiz 5: Which Search Algorithm?
    • Quiz 6: Which Search Algorithm?
    • Quiz 7: Which Search Algorithm?
    • Quiz 8: Which Search Algorithm?
    • Quiz 9: Which Search Algorithm?
    • Quiz 10: Admissible Heuristics
    • Quiz 11: Combining Heuristics
    • Quiz 12: Consistency
  3. Lecture 4: CSPs
    • Quiz 1: Constraints
    • Quiz 2: Constraint Graphs
    • Quiz 3: Constraints
    • Quiz 4: Backtracking Search
    • Quiz 5: Forward Checking
    • Quiz 6: Arc Consistency
    • Quiz 7: Arc Consistency
    • Quiz 8: Least Constraining Value
  4. Lecture 5: CSPs II
    • Quiz 1: Tree-Structured CSPs
    • Quiz 2: Smallest Cutset
    • Quiz 3: Min-Conflicts
    • Quiz 4: Hill Climbing
  5. Lecture 6: Adversarial Search
    • Quiz 1: Minimax
    • Quiz 2: Evaluation and Collaboration
    • Quiz 3: Alpha-Beta Pruning
  6. Lecture 7: Uncertainty and Utilities
    • Quiz 1: Expectimax
    • Quiz 3: Other Games
    • Quiz 4: Monotonic Transformations
    • Quiz 5: Rationality
    • Quiz 6: Certainty Equivalent Values
  7. Lecture 8: Markov Decision Processes
    • Quiz 1: MDP Notation
    • Quiz 2: Discounting
    • Quiz 3: Solving MDPs
    • Quiz 4: Value Iteration
  8. Lecture 9: Markov Decision Processes II
    • Quiz 1: The Bellman Equation
    • Quiz 2: Policy Evaluation
    • Quiz 3: Policy Iteration
  9. Lecture 10: Reinforcement Learning
    • Quiz 1: Reinforcement Learning
    • Quiz 2: Model-Based Learning
    • Quiz 3: Passive Reinforcement Learning
    • Quiz 4: TD Learning
    • Quiz 5: Q-Learning
  10. Lecture 11: Reinforcement Learning II
    • Quiz 1: Exploration vs. Exploitation
    • Quiz 2: Feature-Based Representations

Homeworks

  1. Homework 1 - Search
    • Question 1: Search Trees
    • Question 2: Depth-First Graph Search
    • Question 3: Breadth-First Graph Search
    • Question 4: A* Graph Search
    • Question 5: Hive Minds: Lonely Bug
    • Question 6: Hive Minds: Swarm Movement
    • Question 7: Hive Minds: Migrating Birds
    • Question 8: Hive Minds: Jumping Bug
    • Question 9: Hive Minds: Lost at Night
    • Question 10: Early Goal Checking Graph Search
    • Question 11: Lookahead Graph Search
    • Question 12: Memory Efficient Graph Search
    • Question 13: A*-CSCS
  2. Homework 1 - Search (Practice)
    • Question 1: Search Trees
    • Question 2: Depth-First Graph Search
    • Question 3: Breadth-First Graph Search
    • Question 4: A* Graph Search
    • Question 5: Hive Minds: Lonely Bug
    • Question 6: Hive Minds: Swarm Movement
    • Question 7: Hive Minds: Migrating Birds
    • Question 8: Hive Minds: Jumping Bug
    • Question 9: Hive Minds: Lost at Night
    • Question 10: Early Goal Checking Graph Search
    • Question 11: Lookahead Graph Search
    • Question 12: Memory Efficient Graph Search
    • Question 13: A*-CSCS
  3. Homework 2 - CSPs
    • Question 1: Campus Layout
    • Question 2: CSP Properties
    • Question 3: 4-Queens
    • Question 4: Tree-Structured CSPs
    • Question 5: Solving Tree-Structured CSPs
    • Question 6: Arc Consistency
    • Question 7: Arc Consistency Properties
    • Question 8: Backtracking Arc Consistency
  4. Homework 2 - CSPs (Practice)
    • Question 1: Campus Layout
    • Question 2: CSP Properties
    • Question 3: 4-Queens
    • Question 4: Tree-Structured CSPs
    • Question 5: Solving Tree-Structured CSPs
    • Question 6: Arc Consistency
    • Question 7: Arc Consistency Properties
    • Question 8: Backtracking Arc Consistency
  5. Homework 3 - Games
    • Question 1: Minimax
    • Question 2: Expectiminimax
    • Question 3: Unknown Leaf Value
    • Question 4: Alpha-Beta Pruning
    • Question 5.1: Non-Zero-Sum Games
    • Question 5.2: Properties of Non-Zero-Sum Games
    • Question 6: Possible Pruning
    • Question 7: Suboptimal Strategies
    • Question 8: Shallow Search
    • Question 9: Rationality of Utilities
    • Question 10: Certainty Equivalent Values
    • Question 11: Preferences and Utilities
  6. Homework 3 - Games (Practice)
    • Question 1: Minimax
    • Question 2: Expectiminimax
    • Question 3: Unknown Leaf Value
    • Question 4: Alpha-Beta Pruning
    • Question 5.1: Non-Zero-Sum Games
    • Question 5.2: Properties of Non-Zero-Sum Games
    • Question 6: Possible Pruning
    • Question 7: Suboptimal Strategies
    • Question 8: Shallow Search
    • Question 9: Rationality of Utilities
    • Question 10: Certainty Equivalent Values
    • Question 11: Preferences and Utilities
  7. Homework 4 - MDPs
    • Question 1: Solving MDPs
    • Question 2: Value Iteration Convergence Values
    • Question 3: Value Iteration: Cycle
    • Question 4: Value Iteration: Properties
    • Question 5: Value Iteration: Convergence
    • Question 6: Policy Evaluation
    • Question 7: Policy Iteration
    • Question 8: Policy Iteration: Cycle
    • Question 9: Wrong Discount Factor
    • Question 10: MDP Properties
    • Question 11: Policies
  8. Homework 4 - MDPs (Practice)
    • Question 1: Solving MDPs
    • Question 2: Value Iteration Convergence Values
    • Question 3: Value Iteration: Cycle
    • Question 4: Value Iteration: Properties
    • Question 5: Value Iteration: Convergence
    • Question 6: Policy Evaluation
    • Question 7: Policy Iteration
    • Question 8: Policy Iteration: Cycle
    • Question 9: Wrong Discount Factor
    • Question 10: MDP Properties
    • Question 11: Policies
  9. Homework 5 - Reinforcement Learning
    • Question 1: Model-Based RL: Grid
    • Question 2: Model-Based RL: Cycle
    • Question 3: Direct Evaluation
    • Question 4: Temporal Difference Learning
    • Question 5: Model-Free RL: Cycle
    • Question 6: Q-Learning Properties
    • Question 7: Exploration and Exploitation
    • Question 8: Feature-Based Representation: Actions
    • Question 9: Feature-Based Representation: Update
  10. Homework 5 - Reinforcement Learning (Practice)
    • Question 1: Model-Based RL: Grid
    • Question 2: Model-Based RL: Cycle
    • Question 3: Direct Evaluation
    • Question 4: Temporal Difference Learning
    • Question 5: Model-Free RL: Cycle
    • Question 6: Q-Learning Properties
    • Question 7: Exploration and Exploitation
    • Question 8: Feature-Based Representation: Actions
    • Question 9: Feature-Based Representation: Update

Midterm Exam

  1. Practice I
    • Question 1: Search
    • Question 2: Hive Minds
    • Question 3: CSPs: Time Management
    • Question 4: Surrealist Pacman
    • Question 5: MDPs: Grid-World Water Park
    • Question 6: Short Answer: Search
    • Question 7: Short Answer: Iterative Deepening
    • Question 8: Short Answer: Dominance
    • Question 9: Short Answer: Heuristics
    • Question 10: Short Answer: CSP
    • Question 11: Short Answer: Games
  2. Practice I (Practice)
    • Question 1: Search
    • Question 2: Hive Minds
    • Question 3: CSPs: Time Management
    • Question 4: Surrealist Pacman
    • Question 5: MDPs: Grid-World Water Park
    • Question 6: Short Answer: Search
    • Question 7: Short Answer: Iterative Deepening
    • Question 8: Short Answer: Dominance
    • Question 9: Short Answer: Heuristics
    • Question 10: Short Answer: CSP
    • Question 11: Short Answer: Games
  3. Practice II
    • Question 1: Search
    • Question 2: Search: Heuristic Function Properties
    • Question 3: Search: Slugs
    • Question 4: Value Functions
    • Question 5: CSPs: CS188x Offices
    • Question 6: CSP Properties
    • Question 7: Games: Alpha-Beta Pruning
    • Question 8: Utilities: Low/High
    • Question 9: MDPs and Reinforcement Learning: Mini-Grids
  4. Practice II (Practice)
    • Question 1: Search
    • Question 2: Search: Heuristic Function Properties
    • Question 3: Search: Slugs
    • Question 4: Value Functions
    • Question 5: CSPs: CS188x Offices
    • Question 6: CSP Properties
    • Question 7: Games: Alpha-Beta Pruning
    • Question 8: Utilities: Low/High
    • Question 9: MDPs and Reinforcement Learning: Mini-Grids
  5. Practice III
    • Question 1: CSPs: Final Exam Staff Assignments
    • Question 2: Solving Search Problems with MDPs
    • Question 3: X-Values
    • Question 4: Games with Magic
    • Question 5: Pruning and Child Expansion Ordering
    • Question 6: A* Search: Batch Node Expansion
  6. Practice III (Practice)
    • Question 1: CSPs: Final Exam Staff Assignments
    • Question 2: Solving Search Problems with MDPs
    • Question 3: X-Values
    • Question 4: Games with Magic
    • Question 5: Pruning and Child Expansion Ordering
    • Question 6: A* Search: Batch Node Expansion
  7. Exam
    • Question 1: Pacman's Tour of San Francisco
    • Question 2: Missing Heuristic Values
    • Question 3: PAC-CORP Assignments
    • Question 4: k-CSPs
    • Question 5: One Wish Pacman
    • Question 6: AlphaBetaExpinimax
    • Question 7: Lotteries in Ghost Kingdom
    • Question 8: Indecisive Pacman
    • Question 9: Reinforcement Learning
    • Question 10: Potpourri