Commit a9964bf

Chris Kim (Hyunggun) and claude committed
Replace deprecated model IDs with -latest aliases
The hardcoded Claude 3 model IDs (claude-3-haiku-20240307, claude-3-sonnet-20240229) are deprecated and broken. Replace them with -latest aliases (e.g., claude-haiku-4-5-latest) so the tutorials automatically use the newest model version without code changes.

Changes:
- Update MODEL_NAME in all 3 variants (Anthropic 1P, Bedrock/anthropic, Bedrock/boto3)
- Update tool use notebooks from Sonnet 3 to claude-sonnet-4-5-latest
- Update eval notebooks from Haiku 3 to claude-haiku-4-5-latest
- Update markdown prose to reference "Claude Haiku (latest)" instead of "Claude 3 Haiku"
- Unpin SDK versions in AmazonBedrock/requirements.txt for alias support

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
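As a rough sketch of what the alias switch buys (the helper functions below are illustrative, not part of this repo): an ID ending in -latest resolves server-side to the newest snapshot of that model, and the Bedrock notebooks wrap the same base alias in the provider-prefixed ID pattern visible in this diff.

```python
# Illustrative helpers (assumed, not from this commit): contrast the pinned
# snapshot IDs this commit removes with the auto-updating aliases it adds.

PINNED_ID = "claude-3-haiku-20240307"   # frozen snapshot; breaks once deprecated
ALIAS_ID = "claude-haiku-4-5-latest"    # tracks the newest Haiku release

def is_latest_alias(model_id: str) -> bool:
    """True when the ID auto-updates to the newest model version."""
    return model_id.endswith("-latest")

def bedrock_model_id(base_alias: str) -> str:
    """Build the Bedrock-style ID used in the AmazonBedrock notebooks."""
    return f"anthropic.{base_alias}-v1:0"

print(is_latest_alias(ALIAS_ID))    # True
print(is_latest_alias(PINNED_ID))   # False
print(bedrock_model_id(ALIAS_ID))   # anthropic.claude-haiku-4-5-latest-v1:0
```

The same base alias thus yields claude-haiku-4-5-latest for the Anthropic 1P notebooks and anthropic.claude-haiku-4-5-latest-v1:0 for the Bedrock variants, which is the pattern the diffs apply.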
1 parent 0d27754 commit a9964bf

9 files changed with 23 additions and 204 deletions

AmazonBedrock/anthropic/00_Tutorial_How-To.ipynb

Lines changed: 3 additions & 27 deletions
@@ -61,22 +61,7 @@
 {
 "cell_type": "markdown",
 "metadata": {},
-"source": [
-"---\n",
-"\n",
-"## Usage Notes & Tips 💡\n",
-"\n",
-"- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
-"\n",
-"- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
-"\n",
-"- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
-"\n",
-"### The Anthropic SDK & the Messages API\n",
-"We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/claude-on-amazon-bedrock) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial.\n",
-"\n",
-"Below is an example of what running a prompt will look like in this tutorial."
-]
+"source": "- This course uses Claude Haiku (latest) with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to other Claude models as well.\n\n- You can use `Shift + Enter` to execute the cell and move to the next one.\n\n- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n\n### The Anthropic SDK & the Messages API\nWe will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/claude-on-amazon-bedrock) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial.\n\nBelow is an example of what running a prompt will look like in this tutorial."
 },
 {
 "cell_type": "markdown",
@@ -90,16 +75,7 @@
 "execution_count": null,
 "metadata": {},
 "outputs": [],
-"source": [
-"import boto3\n",
-"session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n",
-"AWS_REGION = session.region_name\n",
-"print(\"AWS Region:\", AWS_REGION)\n",
-"MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n",
-"\n",
-"%store MODEL_NAME\n",
-"%store AWS_REGION"
-]
+"source": "import boto3\nsession = boto3.Session() # create a boto3 session to dynamically get and set the region name\nAWS_REGION = session.region_name\nprint(\"AWS Region:\", AWS_REGION)\n# \"latest\" alias auto-updates when new models release; pin a specific version if needed\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\n\n%store MODEL_NAME\n%store AWS_REGION"
 },
 {
 "cell_type": "markdown",
@@ -182,4 +158,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}

AmazonBedrock/anthropic/10_2_Appendix_Tool_Use.ipynb

Lines changed: 2 additions & 33 deletions
@@ -20,38 +20,7 @@
 "execution_count": null,
 "metadata": {},
 "outputs": [],
-"source": [
-"%pip install anthropic --quiet\n",
-"\n",
-"# Import the hints module from the utils package\n",
-"import os\n",
-"import sys\n",
-"module_path = \"..\"\n",
-"sys.path.append(os.path.abspath(module_path))\n",
-"from utils import hints\n",
-"\n",
-"# Import python's built-in regular expression library\n",
-"import re\n",
-"from anthropic import AnthropicBedrock\n",
-"\n",
-"# Override the MODEL_NAME variable in the IPython store to use Sonnet instead of the Haiku model\n",
-"MODEL_NAME='anthropic.claude-3-sonnet-20240229-v1:0'\n",
-"%store -r AWS_REGION\n",
-"\n",
-"client = AnthropicBedrock(aws_region=AWS_REGION)\n",
-"\n",
-"# Rewrittten to call Claude 3 Sonnet, which is generally better at tool use, and include stop_sequences\n",
-"def get_completion(messages, system_prompt=\"\", prefill=\"\",stop_sequences=None):\n",
-" message = client.messages.create(\n",
-" model=MODEL_NAME,\n",
-" max_tokens=2000,\n",
-" temperature=0.0,\n",
-" messages=messages,\n",
-" system=system_prompt,\n",
-" stop_sequences=stop_sequences\n",
-" )\n",
-" return message.content[0].text"
-]
+"source": "%pip install anthropic --quiet\n\n# Import the hints module from the utils package\nimport os\nimport sys\nmodule_path = \"..\"\nsys.path.append(os.path.abspath(module_path))\nfrom utils import hints\n\n# Import python's built-in regular expression library\nimport re\nfrom anthropic import AnthropicBedrock\n\n# Override the MODEL_NAME variable in the IPython store to use Sonnet instead of the Haiku model\nMODEL_NAME='anthropic.claude-sonnet-4-5-latest-v1:0'\n%store -r AWS_REGION\n\nclient = AnthropicBedrock(aws_region=AWS_REGION)\n\n# Rewritten to call Claude Sonnet, which is generally better at tool use, and include stop_sequences\ndef get_completion(messages, system_prompt=\"\", prefill=\"\",stop_sequences=None):\n    message = client.messages.create(\n        model=MODEL_NAME,\n        max_tokens=2000,\n        temperature=0.0,\n        messages=messages,\n        system=system_prompt,\n        stop_sequences=stop_sequences\n    )\n    return message.content[0].text"
 },
 {
 "cell_type": "markdown",
@@ -797,4 +766,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}

AmazonBedrock/anthropic/10_3_Appendix_Empirical_Performance_Evaluations.ipynb

Lines changed: 2 additions & 9 deletions
@@ -31,14 +31,7 @@
 "tags": []
 },
 "outputs": [],
-"source": [
-"# Store the model name and AWS region for later use\n",
-"MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n",
-"AWS_REGION = \"us-west-2\"\n",
-"\n",
-"%store MODEL_NAME\n",
-"%store AWS_REGION"
-]
+"source": "# Store the model name and AWS region for later use\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\nAWS_REGION = \"us-west-2\"\n\n%store MODEL_NAME\n%store AWS_REGION"
 },
 {
 "cell_type": "code",
@@ -350,4 +343,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}

AmazonBedrock/boto3/00_Tutorial_How-To.ipynb

Lines changed: 3 additions & 27 deletions
@@ -66,22 +66,7 @@
 {
 "cell_type": "markdown",
 "metadata": {},
-"source": [
-"---\n",
-"\n",
-"## Usage Notes & Tips 💡\n",
-"\n",
-"- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
-"\n",
-"- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
-"\n",
-"- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
-"\n",
-"### The Anthropic SDK & the Messages API\n",
-"We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n",
-"\n",
-"Below is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
-]
+"source": "- This course uses Claude Haiku (latest) with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to other Claude models as well.\n\n- You can use `Shift + Enter` to execute the cell and move to the next one.\n\n- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n\n### The Anthropic SDK & the Messages API\nWe will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n\nBelow is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
 },
 {
 "cell_type": "markdown",
@@ -95,16 +80,7 @@
 "execution_count": null,
 "metadata": {},
 "outputs": [],
-"source": [
-"import boto3\n",
-"session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n",
-"AWS_REGION = session.region_name\n",
-"print(\"AWS Region:\", AWS_REGION)\n",
-"MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n",
-"\n",
-"%store MODEL_NAME\n",
-"%store AWS_REGION"
-]
+"source": "import boto3\nsession = boto3.Session() # create a boto3 session to dynamically get and set the region name\nAWS_REGION = session.region_name\nprint(\"AWS Region:\", AWS_REGION)\n# \"latest\" alias auto-updates when new models release; pin a specific version if needed\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\n\n%store MODEL_NAME\n%store AWS_REGION"
 },
 {
 "cell_type": "markdown",
@@ -192,4 +168,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}

AmazonBedrock/boto3/10_2_Appendix_Tool_Use.ipynb

Lines changed: 2 additions & 38 deletions
@@ -20,43 +20,7 @@
 "execution_count": null,
 "metadata": {},
 "outputs": [],
-"source": [
-"# Rewrittten to call Claude 3 Sonnet, which is generally better at tool use, and include stop_sequences\n",
-"# Import python's built-in regular expression library\n",
-"import re\n",
-"import boto3\n",
-"import json\n",
-"\n",
-"# Import the hints module from the utils package\n",
-"import os\n",
-"import sys\n",
-"module_path = \"..\"\n",
-"sys.path.append(os.path.abspath(module_path))\n",
-"from utils import hints\n",
-"\n",
-"# Override the MODEL_NAME variable in the IPython store to use Sonnet instead of the Haiku model\n",
-"MODEL_NAME='anthropic.claude-3-sonnet-20240229-v1:0'\n",
-"%store -r AWS_REGION\n",
-"\n",
-"client = boto3.client('bedrock-runtime',region_name=AWS_REGION)\n",
-"\n",
-"def get_completion(messages, system_prompt=\"\", prefill=\"\", stop_sequences=None):\n",
-"    body = json.dumps(\n",
-"        {\n",
-"            \"anthropic_version\": '',\n",
-"            \"max_tokens\": 2000,\n",
-"            \"temperature\": 0.0,\n",
-"            \"top_p\": 1,\n",
-"            \"messages\":messages,\n",
-"            \"system\": system_prompt,\n",
-"            \"stop_sequences\": stop_sequences\n",
-"        }\n",
-"    )\n",
-"    response = client.invoke_model(body=body, modelId=MODEL_NAME)\n",
-"    response_body = json.loads(response.get('body').read())\n",
-"\n",
-"    return response_body.get('content')[0].get('text')"
-]
+"source": "# Rewritten to call Claude Sonnet, which is generally better at tool use, and include stop_sequences\n# Import python's built-in regular expression library\nimport re\nimport boto3\nimport json\n\n# Import the hints module from the utils package\nimport os\nimport sys\nmodule_path = \"..\"\nsys.path.append(os.path.abspath(module_path))\nfrom utils import hints\n\n# Override the MODEL_NAME variable in the IPython store to use Sonnet instead of the Haiku model\nMODEL_NAME='anthropic.claude-sonnet-4-5-latest-v1:0'\n%store -r AWS_REGION\n\nclient = boto3.client('bedrock-runtime',region_name=AWS_REGION)\n\ndef get_completion(messages, system_prompt=\"\", prefill=\"\", stop_sequences=None):\n    body = json.dumps(\n        {\n            \"anthropic_version\": '',\n            \"max_tokens\": 2000,\n            \"temperature\": 0.0,\n            \"top_p\": 1,\n            \"messages\":messages,\n            \"system\": system_prompt,\n            \"stop_sequences\": stop_sequences\n        }\n    )\n    response = client.invoke_model(body=body, modelId=MODEL_NAME)\n    response_body = json.loads(response.get('body').read())\n\n    return response_body.get('content')[0].get('text')"
 },
 {
 "cell_type": "markdown",
@@ -788,4 +752,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}

AmazonBedrock/boto3/10_3_Appendix_Empirical_Performance_Eval.ipynb

Lines changed: 2 additions & 16 deletions
@@ -31,21 +31,7 @@
 "tags": []
 },
 "outputs": [],
-"source": [
-"# Import python's built-in regular expression library\n",
-"import re\n",
-"\n",
-"# Import boto3 and json\n",
-"import boto3\n",
-"import json\n",
-"\n",
-"# Store the model name and AWS region for later use\n",
-"MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n",
-"AWS_REGION = \"us-west-2\"\n",
-"\n",
-"%store MODEL_NAME\n",
-"%store AWS_REGION"
-]
+"source": "# Import python's built-in regular expression library\nimport re\n\n# Import boto3 and json\nimport boto3\nimport json\n\n# Store the model name and AWS region for later use\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\nAWS_REGION = \"us-west-2\"\n\n%store MODEL_NAME\n%store AWS_REGION"
 },
 {
 "cell_type": "code",
@@ -339,4 +325,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}

AmazonBedrock/requirements.txt

Lines changed: 4 additions & 4 deletions
@@ -1,5 +1,5 @@
-awscli==1.32.74
-boto3==1.34.74
-botocore==1.34.74
-anthropic==0.21.3
+awscli
+boto3
+botocore
+anthropic>=0.39.0
 pickleshare==0.7.5

Anthropic 1P/00_Tutorial_How-To.ipynb

Lines changed: 3 additions & 25 deletions
@@ -42,14 +42,7 @@
 "execution_count": null,
 "metadata": {},
 "outputs": [],
-"source": [
-"API_KEY = \"your_api_key_here\"\n",
-"MODEL_NAME = \"claude-3-haiku-20240307\"\n",
-"\n",
-"# Stores the API_KEY & MODEL_NAME variables for use across notebooks within the IPython store\n",
-"%store API_KEY\n",
-"%store MODEL_NAME"
-]
+"source": "API_KEY = \"your_api_key_here\"\n# \"latest\" alias auto-updates when new models release; pin a specific version if needed\nMODEL_NAME = \"claude-haiku-4-5-latest\"\n\n# Stores the API_KEY & MODEL_NAME variables for use across notebooks within the IPython store\n%store API_KEY\n%store MODEL_NAME"
 },
 {
 "cell_type": "markdown",
@@ -61,22 +54,7 @@
 {
 "cell_type": "markdown",
 "metadata": {},
-"source": [
-"---\n",
-"\n",
-"## Usage Notes & Tips 💡\n",
-"\n",
-"- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
-"\n",
-"- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
-"\n",
-"- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
-"\n",
-"### The Anthropic SDK & the Messages API\n",
-"We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n",
-"\n",
-"Below is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
-]
+"source": "- This course uses Claude Haiku (latest) with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to other Claude models as well.\n\n- You can use `Shift + Enter` to execute the cell and move to the next one.\n\n- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n\n### The Anthropic SDK & the Messages API\nWe will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n\nBelow is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
 },
 {
 "cell_type": "code",
@@ -151,4 +129,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}

Anthropic 1P/10.2_Appendix_Tool Use.ipynb

Lines changed: 2 additions & 25 deletions
@@ -20,30 +20,7 @@
 "execution_count": null,
 "metadata": {},
 "outputs": [],
-"source": [
-"!pip install anthropic\n",
-"\n",
-"# Import python's built-in regular expression library\n",
-"import re\n",
-"import anthropic\n",
-"\n",
-"# Retrieve the API_KEY variable from the IPython store\n",
-"%store -r API_KEY\n",
-"\n",
-"client = anthropic.Anthropic(api_key=API_KEY)\n",
-"\n",
-"# Rewrittten to call Claude 3 Sonnet, which is generally better at tool use, and include stop_sequences\n",
-"def get_completion(messages, system_prompt=\"\", prefill=\"\",stop_sequences=None):\n",
-"    message = client.messages.create(\n",
-"        model=\"claude-3-sonnet-20240229\",\n",
-"        max_tokens=2000,\n",
-"        temperature=0.0,\n",
-"        system=system_prompt,\n",
-"        messages=messages,\n",
-"        stop_sequences=stop_sequences\n",
-"    )\n",
-"    return message.content[0].text"
-]
+"source": "!pip install anthropic\n\n# Import python's built-in regular expression library\nimport re\nimport anthropic\n\n# Retrieve the API_KEY variable from the IPython store\n%store -r API_KEY\n\nclient = anthropic.Anthropic(api_key=API_KEY)\n\n# Rewritten to call Claude Sonnet, which is generally better at tool use, and include stop_sequences\ndef get_completion(messages, system_prompt=\"\", prefill=\"\",stop_sequences=None):\n    message = client.messages.create(\n        model=\"claude-sonnet-4-5-latest\",\n        max_tokens=2000,\n        temperature=0.0,\n        system=system_prompt,\n        messages=messages,\n        stop_sequences=stop_sequences\n    )\n    return message.content[0].text"
 },
 {
 "cell_type": "markdown",
@@ -775,4 +752,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}
