From 6166ba91b64d275e7e1707f7e1f9bf111c101ced Mon Sep 17 00:00:00 2001
From: Piyali Mukherjee <6440362+peelscoded@users.noreply.github.com>
Date: Wed, 27 Aug 2025 12:09:17 -0400
Subject: [PATCH] Created using Colab
---
.../anthropic/00_Tutorial_How-To.ipynb | 408 ++++++++++--------
1 file changed, 227 insertions(+), 181 deletions(-)
mode change 100755 => 100644 AmazonBedrock/anthropic/00_Tutorial_How-To.ipynb
diff --git a/AmazonBedrock/anthropic/00_Tutorial_How-To.ipynb b/AmazonBedrock/anthropic/00_Tutorial_How-To.ipynb
old mode 100755
new mode 100644
index cf56243..afbc768
--- a/AmazonBedrock/anthropic/00_Tutorial_How-To.ipynb
+++ b/AmazonBedrock/anthropic/00_Tutorial_How-To.ipynb
@@ -1,185 +1,231 @@
{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Tutorial How-To\n",
- "\n",
-    "This tutorial requires you to run this initial notebook first so that the required packages are installed and the environment variables are stored for all notebooks in the workshop."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## How to get started\n",
- "\n",
- "1. Clone this repository to your local machine.\n",
- "\n",
- "2. Install the required dependencies by running the following command:\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {},
- "outputs": [
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+        ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QXOtWrH3SPjA"
+ },
+ "source": [
+ "# Tutorial How-To\n",
+ "\n",
+        "This tutorial requires you to run this initial notebook first so that the required packages are installed and the environment variables are stored for all notebooks in the workshop."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "U7tdwAsUSPjB"
+ },
+ "source": [
+ "## How to get started\n",
+ "\n",
+ "1. Clone this repository to your local machine.\n",
+ "\n",
+ "2. Install the required dependencies by running the following command:\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "rBW4KJ06SPjB",
+ "outputId": "e282fe10-9f84-4672-a308-670755201140",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.8/1.8 MB\u001b[0m \u001b[31m8.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h\u001b[31mERROR: Could not open requirements file: [Errno 2] No such file or directory: '../requirements.txt'\u001b[0m\u001b[31m\n",
+ "\u001b[0m"
+ ]
+ }
+ ],
+ "source": [
+ "%pip install -qU pip\n",
+ "%pip install -qr ../requirements.txt"
+ ]
+ },
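+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "*Note (added for clarity, not part of the original workshop):* the output above shows that `../requirements.txt` is not reachable from this Colab runtime. A minimal fallback is to install just the packages this notebook imports; the package names below are an assumption based on the imports used later (`anthropic` and `boto3`)."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "# Fallback when ../requirements.txt is unavailable (e.g. in Colab).\n",
+        "# Assumption: this notebook only needs the anthropic and boto3 packages.\n",
+        "%pip install -qU anthropic boto3"
+      ]
+    },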
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rsxEnUefSPjB"
+ },
+ "source": [
+ "3. Restart the kernel after installing dependencies"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "dA38lQ2kSPjB"
+ },
+ "outputs": [],
+ "source": [
+ "# restart kernel\n",
+ "from IPython.core.display import HTML\n",
+        "HTML(\"<script>Jupyter.notebook.kernel.restart()</script>\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XK3LxHYQSPjB"
+ },
+ "source": [
+ "---\n",
+ "\n",
+ "## Usage Notes & Tips 💡\n",
+ "\n",
+ "- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
+ "\n",
+ "- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
+ "\n",
+ "- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
+ "\n",
+ "### The Anthropic SDK & the Messages API\n",
+        "We will be using the [Anthropic Python SDK](https://docs.anthropic.com/claude/reference/claude-on-amazon-bedrock) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial.\n",
+ "\n",
+ "Below is an example of what running a prompt will look like in this tutorial."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "C--DE3_2SPjB"
+ },
+ "source": [
+ "First, we set and store the model name and region."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "OeMxTCxZSPjC"
+ },
+ "outputs": [],
+ "source": [
+ "import boto3\n",
+ "session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n",
+ "AWS_REGION = session.region_name\n",
+ "print(\"AWS Region:\", AWS_REGION)\n",
+ "MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n",
+ "\n",
+ "%store MODEL_NAME\n",
+ "%store AWS_REGION"
+ ]
+ },
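+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "*Note (added for clarity, not part of the original workshop):* `boto3.Session().region_name` returns `None` when no default region is configured, which is common outside of an AWS environment (e.g. in Colab). A minimal sketch of a fallback, assuming `us-west-2` is a region where Claude 3 Haiku is enabled for your account:"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "# Sketch of a fallback if boto3 found no default region (assumed region shown).\n",
+        "if AWS_REGION is None:\n",
+        "    AWS_REGION = \"us-west-2\"  # assumption: any region with Claude 3 Haiku access works\n",
+        "    print(\"No default region configured; falling back to\", AWS_REGION)\n",
+        "%store AWS_REGION"
+      ]
+    },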
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8Q9rQ17kSPjC"
+ },
+ "source": [
+ "Then, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "yAUJ9KWlSPjC"
+ },
+ "outputs": [],
+ "source": [
+ "from anthropic import AnthropicBedrock\n",
+ "\n",
+ "client = AnthropicBedrock(aws_region=AWS_REGION)\n",
+ "\n",
+ "def get_completion(prompt, system=''):\n",
+ " message = client.messages.create(\n",
+ " model=MODEL_NAME,\n",
+ " max_tokens=2000,\n",
+ " temperature=0.0,\n",
+ " messages=[\n",
+ " {\"role\": \"user\", \"content\": prompt}\n",
+ " ],\n",
+ " system=system\n",
+ " )\n",
+ " return message.content[0].text"
+ ]
+ },
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Note: you may need to restart the kernel to use updated packages.\n",
- "Note: you may need to restart the kernel to use updated packages.\n"
- ]
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7gcSedJQSPjC"
+ },
+ "source": [
+ "Now we will write out an example prompt for Claude and print Claude's output by running our `get_completion` helper function. Running the cell below will print out a response from Claude beneath it.\n",
+ "\n",
+ "Feel free to play around with the prompt string to elicit different responses from Claude."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "OjQKC4yJSPjC"
+ },
+ "outputs": [],
+ "source": [
+ "# Prompt\n",
+ "prompt = \"Hello, Claude!\"\n",
+ "\n",
+ "# Get Claude's response\n",
+ "print(get_completion(prompt))"
+ ]
+ },
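+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "*Note (added for clarity, not part of the original workshop):* `get_completion` also accepts an optional `system` argument, which is passed through to the API as the system prompt. A short illustrative example; the system text itself is just an arbitrary choice:"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "# Illustrative use of the optional system parameter of get_completion.\n",
+        "system_prompt = \"You are a concise assistant. Answer in a single sentence.\"\n",
+        "print(get_completion(\"Hello, Claude!\", system=system_prompt))"
+      ]
+    },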
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6_WyzJM-SPjC"
+ },
+ "source": [
+ "The `MODEL_NAME` and `AWS_REGION` variables defined earlier will be used throughout the tutorial. Just make sure to run the cells for each tutorial page from top to bottom."
+ ]
+    },
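+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "*Note (added for clarity, not part of the original workshop):* the later notebooks read these values back with IPython's storemagic. As a rough illustration of that mechanism, `%store -r` restores a stored variable into a fresh kernel:"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "# Illustration only: reload the stored values after a kernel restart.\n",
+        "%store -r MODEL_NAME\n",
+        "%store -r AWS_REGION\n",
+        "print(MODEL_NAME, AWS_REGION)"
+      ]
+    }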
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "py310",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.5"
+ },
+ "colab": {
+ "provenance": [],
+ "toc_visible": true,
+ "include_colab_link": true
}
- ],
- "source": [
- "%pip install -qU pip\n",
- "%pip install -qr ../requirements.txt"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "3. Restart the kernel after installing dependencies"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# restart kernel\n",
- "from IPython.core.display import HTML\n",
-    "HTML(\"<script>Jupyter.notebook.kernel.restart()</script>\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "---\n",
- "\n",
- "## Usage Notes & Tips 💡\n",
- "\n",
- "- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
- "\n",
- "- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
- "\n",
- "- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
- "\n",
- "### The Anthropic SDK & the Messages API\n",
-    "We will be using the [Anthropic Python SDK](https://docs.anthropic.com/claude/reference/claude-on-amazon-bedrock) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial.\n",
- "\n",
- "Below is an example of what running a prompt will look like in this tutorial."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "First, we set and store the model name and region."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import boto3\n",
- "session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n",
- "AWS_REGION = session.region_name\n",
- "print(\"AWS Region:\", AWS_REGION)\n",
- "MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n",
- "\n",
- "%store MODEL_NAME\n",
- "%store AWS_REGION"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Then, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from anthropic import AnthropicBedrock\n",
- "\n",
- "client = AnthropicBedrock(aws_region=AWS_REGION)\n",
- "\n",
- "def get_completion(prompt, system=''):\n",
- " message = client.messages.create(\n",
- " model=MODEL_NAME,\n",
- " max_tokens=2000,\n",
- " temperature=0.0,\n",
- " messages=[\n",
- " {\"role\": \"user\", \"content\": prompt}\n",
- " ],\n",
- " system=system\n",
- " )\n",
- " return message.content[0].text"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now we will write out an example prompt for Claude and print Claude's output by running our `get_completion` helper function. Running the cell below will print out a response from Claude beneath it.\n",
- "\n",
- "Feel free to play around with the prompt string to elicit different responses from Claude."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Prompt\n",
- "prompt = \"Hello, Claude!\"\n",
- "\n",
- "# Get Claude's response\n",
- "print(get_completion(prompt))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The `MODEL_NAME` and `AWS_REGION` variables defined earlier will be used throughout the tutorial. Just make sure to run the cells for each tutorial page from top to bottom."
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "py310",
- "language": "python",
- "name": "python3"
},
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.5"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
\ No newline at end of file