# Quickstart: Using the PostgreSQL Extension

Gemini CLI includes a pre-built extension for connecting to any PostgreSQL database, allowing you to query and manage your database using natural language.

## Prerequisites

### Set up the database

1. Create or select a PostgreSQL instance.

    * [Install PostgreSQL locally](https://www.postgresql.org/download/)
    * [Install AlloyDB Omni](https://cloud.google.com/alloydb/omni/current/docs/quickstart)

1. Create or reuse [a database user](https://cloud.google.com/alloydb/omni/current/docs/database-users/manage-users) and have the username and password ready.

### Installation

1. Install the latest [Gemini CLI](https://github.com/google-gemini/gemini-cli):

    ```bash
    npm install -g @google/gemini-cli@latest
    ```

1. Install the extension:

    ```bash
    gemini extensions install github.com/gemini-cli-extensions/postgres.git
    ```

## Configuration

After activating the extension, configure it by setting the following environment variables:

- `POSTGRES_HOST`: The hostname or IP address of the PostgreSQL server.
- `POSTGRES_PORT`: The port number for the PostgreSQL server.
- `POSTGRES_DATABASE`: The name of the database to connect to.
- `POSTGRES_USER`: The database username.
- `POSTGRES_PASSWORD`: The password for the database user.
- `POSTGRES_QUERY_PARAMS`: (Optional) Raw query parameters to append to the database connection string.
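For a local server, the variables might be set like this (the values below are placeholders, not defaults shipped with the extension):

```shell
# Placeholder values for a local PostgreSQL server; substitute your own.
export POSTGRES_HOST="127.0.0.1"
export POSTGRES_PORT="5432"
export POSTGRES_DATABASE="mydb"
export POSTGRES_USER="appuser"
export POSTGRES_PASSWORD="secret"
# Optional raw query parameters appended to the connection string.
export POSTGRES_QUERY_PARAMS="sslmode=disable"
```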

### Permissions

Ensure the configured database user has the necessary database-level permissions (e.g., `SELECT`, `INSERT`) to execute the desired queries.
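As a sketch, a minimal grant for a read/write application user might look like the following. The role name `appuser` and the `public` schema are assumptions; run the statement as a privileged user, for example via `psql`:

```shell
# Hypothetical grant for an application role named "appuser"; adjust to your roles.
GRANT_SQL="GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO appuser;"
echo "$GRANT_SQL"
```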

## Supported Tools

This extension provides the following tools. Note that these can be used in combination with core Gemini CLI tools (like `write_file` and `run_shell_command`).

- `execute_sql`: Executes a SQL query.
- `list_tables`: Lists tables in the database.
- `list_autovacuum_configurations`: Lists autovacuum configurations in the database.
- `list_memory_configurations`: Lists memory-related configurations in the database.
- `list_top_bloated_tables`: Lists the top bloated tables in the database.
- `list_replication_slots`: Lists replication slots in the database.
- `list_invalid_indexes`: Lists invalid indexes in the database.
- `get_query_plan`: Generates the execution plan of a statement.

## Usage Examples

### A Developer's Story

This section follows a developer, Alex, as they use the PostgreSQL extension to diagnose a slow user profile page in their application.

#### Step 1: Exploring the Schema

Alex's first step is to understand the database structure related to user profiles.

> "List all tables in the database."

*Tool Used*: `list_tables`

#### Step 2: Analyzing the Slow Query

After identifying the `users` and `user_profiles` tables, Alex suspects the query joining them is inefficient. They decide to check the execution plan.

> "What is the execution plan for the query 'SELECT * FROM users u JOIN user_profiles up ON u.id = up.user_id WHERE u.id = 123'?"

*Tool Used*: `get_query_plan`

#### Step 3: Discovering an Indexing Problem

The query plan reveals a slow sequential scan. Alex wonders if there's an issue with the indexes on the tables.

> "Show me all invalid indexes."

*Tool Used*: `list_invalid_indexes`

#### Step 4: Checking Overall Database Health

Finding an invalid index prompts Alex to run a broader health check, looking for other common performance issues like table bloat.

> "Show me the top 10 most bloated tables."

*Tool Used*: `list_top_bloated_tables`

### Conclusion

In just a few minutes, using natural language, Alex has gone from a vague performance complaint to a concrete action plan: fix the invalid index and address table bloat. This demonstrates how the extension can accelerate development and debugging workflows.

### Data Analyst Journey: Extracting Data for a Report

Maria, a data analyst, needs to pull data for a sales report.

#### Step 1: Find Relevant Tables

Maria starts by looking for tables related to sales and customers.

> "List all tables that have 'sales' or 'customers' in their names."

*Tool Used*: `list_tables`

#### Step 2: Explore Table Schemas

To understand how to join them, she inspects the schemas.

> "Show me the schema for the 'sales' and 'customers' tables."

*Tool Used*: `execute_sql`

#### Step 3: Query the Data

Maria writes a query to get the total sales for each customer.

> "Write a SQL query to get the total sales amount for each customer by joining the 'sales' and 'customers' tables on the customer ID."

*Tool Used*: `execute_sql`

#### Step 4: Save the Results

She saves the data to a CSV file for her report.

> "Save the results of the previous query to a file named 'sales_by_customer.csv'."

*Tool Used*: `write_file`

### Database Administrator (DBA) Journey: Routine Maintenance

David, a DBA, performs his daily checks to ensure the database is healthy.

#### Step 1: Check for Long-Running Queries

David looks for queries that might be slowing down the system.

> "Are there any queries that have been running for more than 10 minutes?"

*Tool Used*: `execute_sql` (querying `pg_stat_activity`)
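Under the hood, a prompt like this typically resolves to a query against the `pg_stat_activity` system view. One plausible form is sketched below; the SQL the model actually generates may differ:

```shell
# One way to find queries active for more than 10 minutes (generated SQL may vary).
LONG_RUNNING_SQL="SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '10 minutes'
ORDER BY runtime DESC;"
echo "$LONG_RUNNING_SQL"
```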

#### Step 2: Review Autovacuum Configurations

He reviews the autovacuum settings to ensure they are optimized for the workload.

> "Show me the autovacuum configurations."

*Tool Used*: `list_autovacuum_configurations`

#### Step 3: Check Memory Configurations

David checks the memory-related configurations to prevent performance bottlenecks.

> "List all memory-related configurations."

*Tool Used*: `list_memory_configurations`

#### Step 4: Monitor Database Size

He checks the overall size of the database to track its growth.

> "What's the total size of the database?"

*Tool Used*: `execute_sql` (using `pg_database_size()`)
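A plausible query behind this prompt (again, the generated SQL may differ):

```shell
# pg_size_pretty renders the byte count returned by pg_database_size.
SIZE_SQL="SELECT pg_size_pretty(pg_database_size(current_database()));"
echo "$SIZE_SQL"
```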

#### Step 5: Verify Replication Status

David ensures that the read replicas are in sync.

> "Show me the current replication status and lag."

*Tool Used*: `list_replication_slots`

#### Step 6: Initiate a Backup

He kicks off a manual backup of a critical database.

> "Create a backup of the 'production' database and store it in the '/backups' directory."

*Tool Used*: `run_shell_command` (using `pg_dump`)
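The shell command the CLI proposes might resemble the following sketch. Host, user, and paths are placeholders; `-Fc` selects the custom archive format that `pg_restore` can read:

```shell
# Illustrative pg_dump invocation; built as a string, not executed here.
# Host, user, and output path are placeholders for your environment.
BACKUP_CMD='pg_dump -h "$POSTGRES_HOST" -U "$POSTGRES_USER" -Fc -f /backups/production.dump production'
echo "$BACKUP_CMD"
```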

### New Developer Onboarding Journey: Learning the Ropes

Sarah, a new developer, is getting acquainted with the project's database.

#### Step 1: Get an Overview

Her first step is to see all the tables in the database.

> "List all the tables in the database."

*Tool Used*: `list_tables`

#### Step 2: Examine a Table

She decides to dive deeper into the `products` table.

> "Show me the columns and their data types for the 'products' table."

*Tool Used*: `execute_sql`

#### Step 3: Preview Table Data

Sarah wants to see some sample data to understand the content.

> "Show me the first 5 rows from the 'products' table."

*Tool Used*: `execute_sql`

#### Step 4: Understand Table Relationships

She wants to know how products relate to other tables.

> "What are the foreign key relationships for the 'products' table?"

*Tool Used*: `execute_sql`
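A prompt like this might be answered with a query over the `pg_constraint` system catalog; one plausible form (the generated SQL may vary) is:

```shell
# Foreign-key constraints on "products": contype = 'f' filters to foreign keys.
FK_SQL="SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'products'::regclass AND contype = 'f';"
echo "$FK_SQL"
```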

### Test Data Generation Journey: Populating a New Table

A developer, Sam, needs to create some test data for a new `products` table.

#### Step 1: Create the Table

Sam first needs to create the `products` table.

> "Create a table named 'products' with columns for 'id' (integer, primary key), 'name' (varchar), and 'price' (decimal)."

*Tool Used*: `execute_sql`

#### Step 2: Insert a Single Row

Sam inserts a single product to verify the table structure.

> "Insert a product with id 1, name 'Laptop', and price 1200.00 into the 'products' table."

*Tool Used*: `execute_sql`

#### Step 3: Insert Multiple Rows

Now, Sam wants to add a few more products in a single command.

> "Insert the following products into the 'products' table: (2, 'Keyboard', 75.00), (3, 'Mouse', 25.00), (4, 'Monitor', 300.00)."

*Tool Used*: `execute_sql`

#### Step 4: Verify the Data

Finally, Sam checks that the data was inserted correctly.

> "Show me all the data from the 'products' table."

*Tool Used*: `execute_sql`
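Taken together, Sam's prompts correspond to SQL along these lines (the model's exact statements may differ):

```shell
# SQL equivalent to Sam's four prompts; exact generated statements may vary.
SESSION_SQL="CREATE TABLE products (id integer PRIMARY KEY, name varchar, price decimal);
INSERT INTO products (id, name, price) VALUES (1, 'Laptop', 1200.00);
INSERT INTO products (id, name, price) VALUES
  (2, 'Keyboard', 75.00), (3, 'Mouse', 25.00), (4, 'Monitor', 300.00);
SELECT * FROM products;"
echo "$SESSION_SQL"
```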