# Week 2 Lab Plan: Setting Up AIMA Python Environment and Agent Testing

## Overview
This lab session will guide students through setting up the AIMA Python environment and working with intelligent agents using the provided `agents.ipynb` notebook. Students will learn to run agent simulations and make modifications to understand how agents interact with their environments.

## Learning Objectives
By the end of this lab, students will be able to:
- Set up the AIMA Python environment using conda
- Run Jupyter notebooks for AI agent simulations
- Understand basic agent-environment interactions
- Modify agent behaviors and environment parameters
- Experiment with 2D grid world environments

## Prerequisites
- Basic Python knowledge
- Understanding of AI agent concepts from lectures
- Laptop with internet connection

## Lab Setup Instructions

### 1. Environment Setup (15 minutes)

#### Step 1: Install Conda/Miniconda
If students don't have conda installed:
- Download Miniconda from https://docs.conda.io/en/latest/miniconda.html
- Follow the installation instructions for their operating system
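
A quick way to confirm the install worked is to check the conda version from a fresh terminal (any recent version is fine; the exact number will vary):
```bash
conda --version
```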

#### Step 2: Clone the Repository
```bash
git clone https://github.com/your-repo/aima-python-eecs118-fall-25.git
cd aima-python-eecs118-fall-25
```

#### Step 3: Create and Activate Conda Environment
```bash
conda env create -f environment.yml
conda activate aima-python
```

#### Step 4: Launch Jupyter Notebook
```bash
jupyter notebook
```
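
Once Jupyter opens in the browser, open `agents.ipynb` from the repository's file listing. As an optional sanity check, the cell below should run without errors; it assumes the repository provides an `agents` module alongside the notebook, as in the upstream aima-python project:
```python
# Optional sanity check: these names should import cleanly if the environment
# and repository are set up correctly (assumes an aima-python style agents module).
from agents import Agent, Environment, Thing
print("agents module loaded:", Agent.__name__, Environment.__name__, Thing.__name__)
```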

### 2. Exploring the Agents Notebook

#### Understanding the Basic Components
Students will work through the `agents.ipynb` notebook to understand the following (a small runnable sketch of these pieces appears after the list):

1. **Agent Class Structure**
   - Review the `Agent` base class
   - Understand agent properties: `alive`, `bump`, `holding`, `performance`, `program`

2. **Environment Class Structure**
   - Review the `Environment` base class
   - Understand key methods: `percept()`, `execute_action()`, `is_done()`

3. **Simple Agent Example - BlindDog**
   - Run the BlindDog simulation in a 1D park
   - Observe how the agent moves and interacts with food/water
   - Understand the agent program logic

4. **2D Environment - Park2D**
   - Run the EnergeticBlindDog simulation
   - Observe 2D movement and visual representation
   - Understand direction handling and boundary detection
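
The sketch below shows how these pieces fit together in a stripped-down 1D world, in the spirit of the BlindDog example. It is only a sketch: it assumes the aima-python style `Agent`, `Environment`, and `Thing` classes (with `add_thing`, `list_things_at`, `delete_thing`, and `run` behaving as in the upstream project), and the `Snack`, `TinyPark`, and `program` names are illustrative, not taken from the notebook.
```python
from agents import Agent, Environment, Thing

class Snack(Thing):
    """Something edible placed in the park (illustrative, not from the notebook)."""
    pass

class TinyPark(Environment):
    def percept(self, agent):
        # The agent perceives whatever things share its current location.
        return self.list_things_at(agent.location)

    def execute_action(self, agent, action):
        # Translate the agent's chosen action into a change in the world.
        if action == "move down":
            agent.location += 1
        elif action == "eat":
            snacks = self.list_things_at(agent.location, tclass=Snack)
            if snacks:
                print("Agent ate a snack at location", agent.location)
                self.delete_thing(snacks[0])

    def is_done(self):
        # Stop when every snack has been eaten or no agent is alive.
        no_snacks = not any(isinstance(t, Snack) for t in self.things)
        no_agents = not any(a.is_alive() for a in self.agents)
        return no_snacks or no_agents

def program(percepts):
    # Simple reflex behavior: eat if a snack is here, otherwise keep moving down.
    if any(isinstance(p, Snack) for p in percepts):
        return "eat"
    return "move down"

park = TinyPark()
park.add_thing(Agent(program), 0)   # agent starts at location 0
park.add_thing(Snack(), 3)          # snack placed further down the park
park.run(10)
```
Working through this loop before opening the notebook makes the BlindDog and EnergeticBlindDog examples easier to read: the environment owns the world state, and the agent only sees percepts and returns action names.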

### 3. Hands-On Exercises

#### Exercise 1: Modify Grid World Size
**Objective**: Change the park dimensions and observe behavior

**Task**: In the Park2D example, modify the park size from (5,5) to (8,8)
```python
# Find this line in the notebook:
park = Park2D(5,5, color={'EnergeticBlindDog': (200,0,0), 'Water': (0, 200, 200), 'Food': (230, 115, 40)})

# Change to:
park = Park2D(8,8, color={'EnergeticBlindDog': (200,0,0), 'Water': (0, 200, 200), 'Food': (230, 115, 40)})
```
After changing the size, re-run the cell(s) that recreate the park and add the dog, food, and water before running the simulation again.

**Questions for Students**:
- How does the larger environment affect the dog's ability to find food and water?
- Does the random movement strategy become less efficient?
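
To make the efficiency question concrete, students can count how many steps the dog needs before the environment reports it is done, rather than just watching the output. This is a sketch: it assumes `park.step()` and `park.is_done()` behave as in the base `Environment` class, and the 200-step cap is an arbitrary safety limit for unlucky random walks.
```python
# Count simulation steps until the park reports it is done, with a cap so an
# unlucky random walk still terminates. Re-run this after each grid-size change.
steps = 0
while not park.is_done() and steps < 200:
    park.step()
    steps += 1
print("Finished after", steps, "steps")
```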

#### Exercise 2: Change Initial Agent Position
**Objective**: Experiment with different starting positions

**Task**: Modify the dog's starting position
```python
# Find this line:
park.add_thing(dog, [0,0])

# Try different starting positions (use one at a time):
park.add_thing(dog, [4,4])  # near the center of the 8x8 grid
# or
park.add_thing(dog, [7,7])  # corner position
```

**Questions for Students**:
- How does starting position affect the agent's performance?
- Which starting position seems most efficient for finding resources?
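
Because the dog moves randomly, a single run per starting position is noisy. One option is to average the step count over several trials, as sketched below. Note that `rebuild_park()` is a hypothetical helper the student would write by wrapping the notebook cells that create the park, dog, food, and water; it is not part of the notebook.
```python
def trial(start, max_steps=200):
    # rebuild_park() is a hypothetical helper: it should return a fresh park
    # (with food and water already placed) and a fresh dog not yet added to it.
    park, dog = rebuild_park()
    park.add_thing(dog, start)
    steps = 0
    while not park.is_done() and steps < max_steps:
        park.step()
        steps += 1
    return steps

# Compare a corner start, a near-center start, and the opposite corner.
for start in ([0, 0], [4, 4], [7, 7]):
    runs = [trial(start) for _ in range(10)]
    print(start, "average steps:", sum(runs) / len(runs))
```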

#### Exercise 3: Implement Barriers/Walls
**Objective**: Add obstacles to make the environment more challenging

**Task**: Create a new `Wall` class, tell the park how to draw it, and add barriers to the environment
```python
class Wall(Thing):
    pass

# Create the park with 'Wall' added to the color dictionary first,
# then add the walls to that park (creating the park afterwards would discard them)
park = Park2D(8,8, color={
    'EnergeticBlindDog': (200,0,0),
    'Water': (0, 200, 200),
    'Food': (230, 115, 40),
    'Wall': (100, 100, 100)  # Gray color for walls
})

# Add walls to the environment
wall1 = Wall()
wall2 = Wall()
park.add_thing(wall1, [2,2])
park.add_thing(wall2, [3,3])
```
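
Adding `Wall` things only makes the walls visible; nothing yet stops the dog from walking through them. One way to enforce collisions is to subclass the park and refuse moves into wall cells, as sketched below. This assumes `Park2D.execute_action` handles a `'moveforward'` action and that the dog keeps its heading in a `Direction` object at `dog.direction` (as the EnergeticBlindDog does); `WalledPark2D` is a hypothetical name, not from the notebook.
```python
class WalledPark2D(Park2D):
    def execute_action(self, agent, action):
        if action == 'moveforward':
            # Work out the cell the agent is trying to enter and block the
            # move if a Wall occupies it; set bump so the agent could react.
            target = list(agent.direction.move_forward(agent.location))
            blocked = any(isinstance(t, Wall) and list(t.location) == target
                          for t in self.things)
            if blocked:
                agent.bump = True
                return
        # Otherwise fall back to the normal Park2D behavior.
        super().execute_action(agent, action)
```
Students who try this should check whether the dog's program ever looks at `bump`; if not, a blocked dog simply wastes a turn, which is itself a useful behavior to discuss.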