This repository provides examples of many popular Python AI agent frameworks using LLMs from GitHub Models. Those models are free to use for anyone with a GitHub account, up to a daily rate limit.
You have a few options for getting started with this repository. The quickest is GitHub Codespaces, since it sets everything up for you, but you can also set up the project locally.
You can run this repository virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

- Open the repository (this may take several minutes).
- Open a terminal window.
- Continue with the steps to run the examples.
A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

- Start Docker Desktop (install it if not already installed).
- Open the project.
- In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.
- Continue with the steps to run the examples.
To set up the project locally instead:

- Make sure the following tools are installed:

  - Python 3.10+
  - Git

- Clone the repository:

  ```shell
  git clone https://github.com/Azure-Samples/python-ai-agent-frameworks-demos
  cd python-ai-agent-frameworks-demos
  ```

- Set up a virtual environment:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install the requirements:

  ```shell
  pip install -r requirements.txt
  ```
You can run the examples in this repository by executing the scripts in the `examples` directory. Each script demonstrates a different AI agent pattern or framework.
| Example | Description |
| --- | --- |
| `autogen_basic.py` | Uses AutoGen to build a single agent. |
| `autogen_tools.py` | Uses AutoGen to build a single agent with tools. |
| `autogen_magenticone.py` | Uses AutoGen with the MagenticOne orchestrator agent for travel planning. |
| `autogen_swarm.py` | Uses AutoGen with the Swarm orchestrator agent for flight refunding requests. |
| `langgraph.py` | Uses LangGraph to build an agent with a StateGraph to play songs. |
| `llamaindex.py` | Uses LlamaIndex to build a ReAct agent for RAG on multiple indexes. |
| `openai_agents_basic.py` | Uses the OpenAI Agents framework to build a single agent. |
| `openai_agents.py` | Uses the OpenAI Agents framework to hand off between several agents with tools. |
| `openai_functioncalling.py` | Uses OpenAI function calling to call functions based on LLM output. |
| `pydanticai.py` | Uses PydanticAI to build a two-agent sequential workflow for flight planning. |
| `semantickernel.py` | Uses Semantic Kernel to build a writer/editor two-agent workflow. |
| `smolagents_codeagent.py` | Uses SmolAgents to build a question-answering agent that can search the web and run code. |
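For example, once your `GITHUB_TOKEN` is configured (described below), you can run the basic AutoGen example from the repository root with `python examples/autogen_basic.py`.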
If you open this repository in GitHub Codespaces, you can run the scripts for free using GitHub Models without any additional steps, as your `GITHUB_TOKEN` is already configured in the Codespaces environment.
If you want to run the scripts locally, you need to set the `GITHUB_TOKEN` environment variable to a GitHub personal access token (PAT). You can create a PAT by following these steps:
- Go to your GitHub account settings.
- Click on "Developer settings" in the left sidebar.
- Click on "Personal access tokens" in the left sidebar.
- Click on "Tokens (classic)" or "Fine-grained tokens", depending on your preference.
- Click on "Generate new token".
- Give your token a name and select the scopes you want to grant. For this project, you don't need any specific scopes.
- Click on "Generate token".
- Copy the generated token.
- Set the `GITHUB_TOKEN` environment variable in your terminal or IDE:

  ```shell
  export GITHUB_TOKEN=your_personal_access_token
  ```
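  If you are using PowerShell on Windows, set it with `$env:GITHUB_TOKEN = "your_personal_access_token"` instead.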
- Optionally, you can use a model other than "gpt-4o" by setting the `GITHUB_MODEL` environment variable. Use a model that supports function calling, such as: `gpt-4o`, `gpt-4o-mini`, `o3-mini`, `AI21-Jamba-1.5-Large`, `AI21-Jamba-1.5-Mini`, `Codestral-2501`, `Cohere-command-r`, `Ministral-3B`, `Mistral-Large-2411`, `Mistral-Nemo`, `Mistral-small`.
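To confirm that your token and model settings work before running the framework examples, you can make a direct call to GitHub Models. The following is a minimal sketch, assuming the `openai` package from `requirements.txt` and the OpenAI-compatible GitHub Models inference endpoint `https://models.inference.ai.azure.com` (the endpoint and prompt here are illustrative assumptions, not something defined by this repository):

```python
import os

from openai import OpenAI

# Sketch only: calls GitHub Models through an OpenAI-compatible endpoint
# (assumed to be https://models.inference.ai.azure.com), authenticating
# with the GITHUB_TOKEN environment variable.
client = OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    model=os.getenv("GITHUB_MODEL", "gpt-4o"),  # falls back to the default model, gpt-4o
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this call succeeds, the example scripts should be able to authenticate the same way.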
You can run all examples in this repository using GitHub Models. If you want to run the examples using models from Azure OpenAI instead, you need to provision the Azure AI resources, which will incur costs.
This project includes infrastructure as code (IaC) to provision Azure OpenAI deployments of "gpt-4o" and "text-embedding-3-large". The IaC is defined in the `infra` directory and uses the Azure Developer CLI to provision the resources.
- Make sure the Azure Developer CLI (azd) is installed.

- Log in to Azure:

  ```shell
  azd auth login
  ```

  For GitHub Codespaces users, if the previous command fails, try:

  ```shell
  azd auth login --use-device-code
  ```

- Provision the OpenAI account:

  ```shell
  azd provision
  ```

  It will prompt you to provide an `azd` environment name (like "agents-demos"), select a subscription from your Azure account, and select a location. Then it will provision the resources in your account.

- Once the resources are provisioned, you should now see a local `.env` file with all the environment variables needed to run the scripts.

- To delete the resources, run:

  ```shell
  azd down
  ```