The hardcoded Claude 3 model IDs (claude-3-haiku-20240307, claude-3-sonnet-20240229)
are deprecated and broken. Replace them with -latest aliases (e.g., claude-haiku-4-5-latest)
so the tutorials automatically use the newest model version without code changes.
Changes:
- Update MODEL_NAME in all 3 variants (Anthropic 1P, Bedrock/anthropic, Bedrock/boto3)
- Update tool use notebooks from Sonnet 3 to claude-sonnet-4-5-latest
- Update eval notebooks from Haiku 3 to claude-haiku-4-5-latest
- Update markdown prose to reference "Claude Haiku (latest)" instead of "Claude 3 Haiku"
- Unpin SDK versions in AmazonBedrock/requirements.txt for alias support
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
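The pinned-ID-to-alias swap described above can be sketched as a simple lookup; the alias strings come from this change, but the mapping helper itself is only an illustration, not an official API.

```python
# Hypothetical helper illustrating the swap from deprecated pinned IDs to
# "-latest" aliases. The ID and alias strings are taken from this change;
# the function is a sketch for illustration only.
ALIAS_FOR_DEPRECATED = {
    "claude-3-haiku-20240307": "claude-haiku-4-5-latest",
    "claude-3-sonnet-20240229": "claude-sonnet-4-5-latest",
}

def modernize_model_id(model_id: str) -> str:
    """Return the '-latest' alias for a deprecated pinned ID, else pass through."""
    return ALIAS_FOR_DEPRECATED.get(model_id, model_id)

print(modernize_model_id("claude-3-haiku-20240307"))  # claude-haiku-4-5-latest
```

Because an alias resolves server-side to the newest model version, notebooks using it keep working across model releases without further code changes.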
AmazonBedrock/anthropic/00_Tutorial_How-To.ipynb (3 additions, 27 deletions)
```diff
@@ -61,22 +61,7 @@
   {
    "cell_type": "markdown",
    "metadata": {},
-   "source": [
-    "---\n",
-    "\n",
-    "## Usage Notes & Tips 💡\n",
-    "\n",
-    "- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
-    "\n",
-    "- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
-    "\n",
-    "- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
-    "\n",
-    "### The Anthropic SDK & the Messages API\n",
-    "We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/claude-on-amazon-bedrock) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial.\n",
-    "\n",
-    "Below is an example of what running a prompt will look like in this tutorial."
-   ]
+   "source": "- This course uses Claude Haiku (latest) with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to other Claude models as well.\n\n- You can use `Shift + Enter` to execute the cell and move to the next one.\n\n- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n\n### The Anthropic SDK & the Messages API\nWe will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/claude-on-amazon-bedrock) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial.\n\nBelow is an example of what running a prompt will look like in this tutorial."
   },
   {
    "cell_type": "markdown",
```
```diff
@@ -90,16 +75,7 @@
    "execution_count": null,
    "metadata": {},
    "outputs": [],
-   "source": [
-    "import boto3\n",
-    "session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n",
+   "source": "import boto3\nsession = boto3.Session() # create a boto3 session to dynamically get and set the region name\nAWS_REGION = session.region_name\nprint(\"AWS Region:\", AWS_REGION)\n# \"latest\" alias auto-updates when new models release; pin a specific version if needed\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\n\n%store MODEL_NAME\n%store AWS_REGION"
+   "source": "%pip install anthropic --quiet\n\n# Import the hints module from the utils package\nimport os\nimport sys\nmodule_path = \"..\"\nsys.path.append(os.path.abspath(module_path))\nfrom utils import hints\n\n# Import python's built-in regular expression library\nimport re\nfrom anthropic import AnthropicBedrock\n\n# Override the MODEL_NAME variable in the IPython store to use Sonnet instead of the Haiku model\nMODEL_NAME='anthropic.claude-sonnet-4-5-latest-v1:0'\n%store -r AWS_REGION\n\nclient = AnthropicBedrock(aws_region=AWS_REGION)\n\n# Rewritten to call Claude Sonnet, which is generally better at tool use, and include stop_sequences\ndef get_completion(messages, system_prompt=\"\", prefill=\"\",stop_sequences=None):\n    message = client.messages.create(\n        model=MODEL_NAME,\n        max_tokens=2000,\n        temperature=0.0,\n        messages=messages,\n        system=system_prompt,\n        stop_sequences=stop_sequences\n    )\n    return message.content[0].text"
+   "source": "# Store the model name and AWS region for later use\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\nAWS_REGION = \"us-west-2\"\n\n%store MODEL_NAME\n%store AWS_REGION"
```
AmazonBedrock/boto3/00_Tutorial_How-To.ipynb (3 additions, 27 deletions)
```diff
@@ -66,22 +66,7 @@
   {
    "cell_type": "markdown",
    "metadata": {},
-   "source": [
-    "---\n",
-    "\n",
-    "## Usage Notes & Tips 💡\n",
-    "\n",
-    "- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
-    "\n",
-    "- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
-    "\n",
-    "- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
-    "\n",
-    "### The Anthropic SDK & the Messages API\n",
-    "We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n",
-    "\n",
-    "Below is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
-   ]
+   "source": "- This course uses Claude Haiku (latest) with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to other Claude models as well.\n\n- You can use `Shift + Enter` to execute the cell and move to the next one.\n\n- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n\n### The Anthropic SDK & the Messages API\nWe will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n\nBelow is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
   },
   {
    "cell_type": "markdown",
```
```diff
@@ -95,16 +80,7 @@
    "execution_count": null,
    "metadata": {},
    "outputs": [],
-   "source": [
-    "import boto3\n",
-    "session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n",
+   "source": "import boto3\nsession = boto3.Session() # create a boto3 session to dynamically get and set the region name\nAWS_REGION = session.region_name\nprint(\"AWS Region:\", AWS_REGION)\n# \"latest\" alias auto-updates when new models release; pin a specific version if needed\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\n\n%store MODEL_NAME\n%store AWS_REGION"
+   "source": "# Rewritten to call Claude Sonnet, which is generally better at tool use, and include stop_sequences\n# Import python's built-in regular expression library\nimport re\nimport boto3\nimport json\n\n# Import the hints module from the utils package\nimport os\nimport sys\nmodule_path = \"..\"\nsys.path.append(os.path.abspath(module_path))\nfrom utils import hints\n\n# Override the MODEL_NAME variable in the IPython store to use Sonnet instead of the Haiku model\nMODEL_NAME='anthropic.claude-sonnet-4-5-latest-v1:0'\n%store -r AWS_REGION\n\nclient = boto3.client('bedrock-runtime',region_name=AWS_REGION)\n\ndef get_completion(messages, system_prompt=\"\", prefill=\"\", stop_sequences=None):\n    body = json.dumps(\n        {\n            \"anthropic_version\": '',\n            \"max_tokens\": 2000,\n            \"temperature\": 0.0,\n            \"top_p\": 1,\n            \"messages\":messages,\n            \"system\": system_prompt,\n            \"stop_sequences\": stop_sequences\n        }\n    )\n    response = client.invoke_model(body=body, modelId=MODEL_NAME)\n    response_body = json.loads(response.get('body').read())\n\n    return response_body.get('content')[0].get('text')"
+   "source": "# Import python's built-in regular expression library\nimport re\n\n# Import boto3 and json\nimport boto3\nimport json\n\n# Store the model name and AWS region for later use\nMODEL_NAME = \"anthropic.claude-haiku-4-5-latest-v1:0\"\nAWS_REGION = \"us-west-2\"\n\n%store MODEL_NAME\n%store AWS_REGION"
```
Anthropic 1P/00_Tutorial_How-To.ipynb (3 additions, 25 deletions)
```diff
@@ -42,14 +42,7 @@
    "execution_count": null,
    "metadata": {},
    "outputs": [],
-   "source": [
-    "API_KEY = \"your_api_key_here\"\n",
-    "MODEL_NAME = \"claude-3-haiku-20240307\"\n",
-    "\n",
-    "# Stores the API_KEY & MODEL_NAME variables for use across notebooks within the IPython store\n",
-    "%store API_KEY\n",
-    "%store MODEL_NAME"
-   ]
+   "source": "API_KEY = \"your_api_key_here\"\n# \"latest\" alias auto-updates when new models release; pin a specific version if needed\nMODEL_NAME = \"claude-haiku-4-5-latest\"\n\n# Stores the API_KEY & MODEL_NAME variables for use across notebooks within the IPython store\n%store API_KEY\n%store MODEL_NAME"
   },
   {
    "cell_type": "markdown",
```
```diff
@@ -61,22 +54,7 @@
   {
    "cell_type": "markdown",
    "metadata": {},
-   "source": [
-    "---\n",
-    "\n",
-    "## Usage Notes & Tips 💡\n",
-    "\n",
-    "- This course uses Claude 3 Haiku with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to previous generation legacy Claude models such as Claude 2 and Claude Instant 1.2.\n",
-    "\n",
-    "- You can use `Shift + Enter` to execute the cell and move to the next one.\n",
-    "\n",
-    "- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
-    "\n",
-    "### The Anthropic SDK & the Messages API\n",
-    "We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n",
-    "\n",
-    "Below is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
-   ]
+   "source": "- This course uses Claude Haiku (latest) with temperature 0. We will talk more about temperature later in the course. For now, it's enough to understand that these settings yield more deterministic results. All prompt engineering techniques in this course also apply to other Claude models as well.\n\n- You can use `Shift + Enter` to execute the cell and move to the next one.\n\n- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n\n### The Anthropic SDK & the Messages API\nWe will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n\nBelow is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
+   "source": "!pip install anthropic\n\n# Import python's built-in regular expression library\nimport re\nimport anthropic\n\n# Retrieve the API_KEY variable from the IPython store\n%store -r API_KEY\n\nclient = anthropic.Anthropic(api_key=API_KEY)\n\n# Rewritten to call Claude Sonnet, which is generally better at tool use, and include stop_sequences\ndef get_completion(messages, system_prompt=\"\", prefill=\"\",stop_sequences=None):\n    message = client.messages.create(\n        model=\"claude-sonnet-4-5-latest\",\n        max_tokens=2000,\n        temperature=0.0,\n        system=system_prompt,\n        messages=messages,\n        stop_sequences=stop_sequences\n    )\n    return message.content[0].text"
```