# AWS examples for a makeathon
🚀 Check out the Quick Start Guide for:
- Login
- Roles & Permissions
- Access Keys
- SageMaker AI Platform & Jupyter Notebooks
- S3 Storage
All TypeScript examples (Bedrock, S3, S3 Vectors, LangChain, RAG) are in the `typescript/` folder with their own setup, docs, and README.
Quick overview of what's inside:
| Script | File | What it does |
|---|---|---|
| `npm run verify` | `src/verify.ts` | Check your credentials work |
| `npm run bedrock` | `src/bedrock.ts` | Invoke any Bedrock model (simple + streaming) |
| `npm run s3` | `src/s3.ts` | Upload / download / list S3 objects |
| `npm run rag` | `src/rag.ts` | Full RAG pipeline with S3 Vectors (raw SDK) |
| `npm run langchain` | `src/langchain-rag.ts` | RAG with LangChain + Bedrock |
Check out the `S3_Example.ipynb` notebook. → You can run this on AWS SageMaker Notebooks
Check out the `Bedrock_Example.ipynb` notebook. → You can run this on AWS SageMaker Notebooks
Make sure you never store access keys in a public location!
- Create a virtual Python environment: `python3 -m venv .venv`
- Activate the virtual environment: `source .venv/bin/activate`
- Install the required libraries: `pip install -r requirements.txt`
Source: https://docs.python.org/3/library/venv.html
- Create an AWS Access Key
- Create a copy of the `.env.example` file and name it `.env`
- Store the `Key ID` and the `Key Secret` in the `.env` file

WARNING: Make sure you NEVER add these keys to a public repository!
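For orientation, here is a minimal sketch of what a `.env` loader does (the repo may simply use the `python-dotenv` package instead; the variable names shown are the standard ones boto3 reads from the environment, but the exact keys in `.env.example` are an assumption):

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file into os.environ.

    A stripped-down stand-in for python-dotenv's load_dotenv().
    """
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
            os.environ.setdefault(key.strip(), value.strip())
    return values

if os.path.exists(".env"):
    env = load_env(".env")
    # boto3 picks these up automatically once they are in the environment:
    # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
```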
ATTENTION! When selecting a model in Bedrock, you are now required to use its inference profile ID instead of the raw model ID.

Example:

- Before: `amazon.nova-pro-v1:0`
- Now: `eu.amazon.nova-pro-v1:0`

Find all available inference profile IDs here: Supported Regions and models for inference profiles (always choose EU when available, otherwise global). When generating code with AI you might need to change this manually, as it's a recent change in AWS ;)
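Since AI-generated code tends to emit raw model IDs, a tiny helper (hypothetical, not part of the repo) can patch them into inference profile IDs; the set of regional prefixes shown is an assumption based on the EU/global note above:

```python
def to_inference_profile(model_id: str, region_prefix: str = "eu") -> str:
    """Prefix a raw Bedrock model ID with a regional inference profile prefix.

    e.g. "amazon.nova-pro-v1:0" -> "eu.amazon.nova-pro-v1:0".
    Leaves the ID untouched if it already carries a regional prefix.
    """
    known_prefixes = ("eu.", "us.", "apac.", "global.")  # assumed prefix set
    if model_id.startswith(known_prefixes):
        return model_id
    return f"{region_prefix}.{model_id}"

print(to_inference_profile("amazon.nova-pro-v1:0"))     # eu.amazon.nova-pro-v1:0
print(to_inference_profile("eu.amazon.nova-pro-v1:0"))  # eu.amazon.nova-pro-v1:0
```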
- Make sure the environment is activated: `source .venv/bin/activate`
- Execute: `python bedrock_local.py`
The output should look like this:

```shell
(.venv) user@host$ python bedrock_local.py
>>> Model Response: A 'hello world' program demonstrates the basic syntax of a programming language by outputting a simple greeting.
```
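As a hedged sketch of what `bedrock_local.py` presumably does under the hood: it builds a Bedrock Converse request like the one below (the model ID, region, and prompt here are assumptions; the message shape is the standard Converse API structure):

```python
# EU inference profile ID, per the ATTENTION note above.
model_id = "eu.amazon.nova-pro-v1:0"

# Converse API message shape: a list of turns, each with a role
# and a list of content blocks.
messages = [
    {"role": "user",
     "content": [{"text": "Explain a 'hello world' program in one sentence."}]},
]

# The actual call needs the credentials from your .env (region is an assumption):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="eu-central-1")
#   response = client.converse(modelId=model_id, messages=messages)
#   print(">>> Model Response:",
#         response["output"]["message"]["content"][0]["text"])
```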
- Make sure the environment is activated: `source .venv/bin/activate`
- Change the group name in the file `s3_local.py` to a valid group name
- Execute: `python s3_local.py`
The output should look like this:

```shell
(.venv) user@host$ python s3_local.py
Downloaded 's3://makeathontest/data/dummy_data.csv' and stored file in 'dummy_data.csv'
(.venv) user@host$ ls
dummy_data.csv
```
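The output above prints an `s3://` URI; a small helper (hypothetical, not necessarily how `s3_local.py` is written) shows how such a URI splits into the bucket and key that boto3's `download_file(bucket, key, filename)` expects:

```python
def parse_s3_uri(uri: str) -> tuple[str, str]:
    """Split "s3://bucket/key/parts" into (bucket, key)."""
    if not uri.startswith("s3://"):
        raise ValueError(f"not an S3 URI: {uri!r}")
    bucket, _, key = uri[len("s3://"):].partition("/")
    return bucket, key

bucket, key = parse_s3_uri("s3://makeathontest/data/dummy_data.csv")
print(bucket, key)  # makeathontest data/dummy_data.csv

# With credentials in place, the download itself would be:
#   import boto3
#   boto3.client("s3").download_file(bucket, key, "dummy_data.csv")
```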
If you're using an AI coding assistant (Cursor, Windsurf, Claude Code, GitHub Copilot, etc.), you can give it direct access to the latest LangChain documentation via their MCP server. This means your assistant will give you accurate, up-to-date LangChain code instead of hallucinating outdated APIs.
MCP Server URL:
https://docs.langchain.com/mcp
Claude Code:
```shell
claude mcp add --transport http docs-langchain https://docs.langchain.com/mcp
```

Cursor / Windsurf — add to your MCP settings (`.cursor/mcp.json` or equivalent):

```json
{
  "mcpServers": {
    "langchain-docs": {
      "type": "http",
      "url": "https://docs.langchain.com/mcp"
    }
  }
}
```

Once connected, your assistant can search LangChain, LangGraph, and LangSmith docs in real time. More details: docs.langchain.com/use-these-docs
- Always use `eu.` inference profile IDs for Bedrock models to keep data in EU regions
- S3 bucket names must be lowercase — only letters, numbers, and hyphens, globally unique
- SSO credentials expire after ~1 hour — re-copy from the portal when you get `ExpiredTokenException`
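The bucket-naming rule above can be sanity-checked before you hit the API. A simplified sketch (the real S3 rules have more edge cases, e.g. no IP-address-like names):

```python
import re

# Lowercase letters, digits, and hyphens; must start/end alphanumeric;
# 3-63 characters total (simplified version of the S3 naming rules).
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return BUCKET_NAME_RE.fullmatch(name) is not None

print(is_valid_bucket_name("makeathontest"))   # True
print(is_valid_bucket_name("Makeathon_Test"))  # False (uppercase + underscore)
```

Note that validity is only half the story: bucket names are globally unique, so a valid name can still be taken by someone else.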