Build AI agents with LangGraph and deploy to PPIO Agent Runtime in minutes.
This example shows you how to quickly deploy an AI agent with streaming responses, multi-turn conversations, and tool integration to PPIO Agent Runtime.
简体中文 | English
- What This Example Includes
- Quick Start
- Project Structure
- Agent Capabilities
- Testing
- API Reference
- Troubleshooting
- Resources
## What This Example Includes

This agent example includes the following capabilities:
- ✅ Streaming responses - Real-time token streaming for better UX
- ✅ Multi-turn conversations - Automatic conversation history management
- ✅ Tool integration - DuckDuckGo search capability
- ✅ Complete test suite - Both local and production tests
## Quick Start

### Prerequisites

Before starting, install these requirements:
- Python 3.9+ and Node.js 20+
- PPIO API Key - Get it from console
### Run Locally

1. Clone the repository

```bash
git clone [email protected]:PPIO/agent-runtime-example.git
cd agent-runtime-example
```

2. Create a Python virtual environment

```bash
python -m venv .venv

# macOS/Linux:
source .venv/bin/activate

# Windows:
.venv\Scripts\activate
```

3. Install Python dependencies

```bash
pip install -r requirements.txt
```

4. Add your API keys to .env

Copy the example file and add your keys:

```bash
cp .env.example .env
```

Edit .env with these required values:
| Variable | Description | Where to Find It |
|---|---|---|
| `PPIO_API_KEY` | Your PPIO platform API key | PPIO Dashboard → Key Management |
| `PPIO_AGENT_API_KEY` | PPIO API key for LLM API calls in the agent | Same dashboard |
5. Start the agent locally

```bash
python app.py
```

The agent runs at http://localhost:8080. Test it:

```bash
bash tests/test_local_basic.sh
```

You should see a JSON response with the agent's answer.
### Deploy to PPIO Agent Runtime

1. Install the PPIO sandbox CLI (beta) locally

```bash
npm install ppio-sandbox-cli@beta
npx ppio-sandbox-cli --version
```

2. Configure your agent

Run the interactive configuration (first deployment only):

```bash
npx ppio-sandbox-cli agent configure
```

The CLI creates three files:

- `.ppio-agent.yaml` - Agent metadata and configuration
- `ppio.Dockerfile` - Sandbox template Dockerfile
- `.dockerignore` - Files to exclude from the Docker build

3. Deploy to PPIO cloud

```bash
npx ppio-sandbox-cli agent launch
```

After deployment succeeds, .ppio-agent.yaml contains your agent ID:
```yaml
status:
  phase: deployed
  agent_id: agent-xxxx  # ⭐ You need this ID to invoke the agent
  last_deployed: '2025-10-23T10:35:00Z'
```

4. Test with the CLI

Invoke your deployed agent:

```bash
npx ppio-sandbox-cli agent invoke "Hello, Agent!" --env PPIO_AGENT_API_KEY="<your-api-key>"
```

The CLI reads `agent_id` automatically from .ppio-agent.yaml.
5. Invoke the agent from your application with the SDK

Save the agent ID from .ppio-agent.yaml to your .env file:

```bash
PPIO_AGENT_ID=agent-xxxx  # Copy from .ppio-agent.yaml status.agent_id
```

Test SDK invocation:

```bash
# Non-streaming response test
python tests/test_sandbox_basic.py

# Streaming response test
python tests/test_sandbox_streaming.py

# Multi-turn conversation test
python tests/test_sandbox_multi_turn.py
```

## Project Structure

```
ppio-agent-example/
├── app.py                        # Agent program
├── tests/                        # All test files
│   ├── test_local_basic.sh       # Local basic test
│   ├── test_local_streaming.sh   # Local streaming response test
│   ├── test_local_multi_turn.sh  # Local multi-turn conversation test
│   ├── test_sandbox_basic.py     # Remote basic test
│   ├── test_sandbox_streaming.py # Remote streaming test
│   └── test_sandbox_multi_turn.py # Remote multi-turn test
├── app_logs/                     # Application logs (generated at runtime)
├── .env.example                  # Environment variable template
├── .gitignore
├── requirements.txt
├── pyproject.toml
├── README.md
├── README_zh.md
└── LICENSE
```
## Agent Capabilities

This example agent has three main features:

### Multi-turn conversations

The agent remembers conversation history automatically. Each sandbox instance maintains its own conversation context.
Example conversation:

```
Turn 1:
User:  "My name is Alice"
Agent: "Nice to meet you, Alice!"

Turn 2 (same session):
User:  "What's my name?"
Agent: "Your name is Alice."
```
To maintain the same session when using the SDK, pass the same `runtimeSessionId` value across requests.
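As a sketch of that pattern: `make_invocation` below is a hypothetical helper that only shows the shape of the call arguments, not a real PPIO SDK function.

```python
import json
import uuid

def make_invocation(prompt, session_id, streaming=False):
    # Hypothetical helper: bundle one turn's payload with the session ID.
    # Reusing the same runtimeSessionId routes every turn to the same
    # sandbox instance, so conversation history is preserved.
    return {
        "runtimeSessionId": session_id,
        "payload": json.dumps({"prompt": prompt, "streaming": streaming}),
    }

session_id = str(uuid.uuid4())  # one ID for the whole conversation
turn1 = make_invocation("My name is Alice", session_id)
turn2 = make_invocation("What's my name?", session_id)

# Both turns carry the same session ID, so they share one context.
assert turn1["runtimeSessionId"] == turn2["runtimeSessionId"]
```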
### Tool integration

The agent can search DuckDuckGo when it needs current information.
The LangGraph workflow handles this automatically:
- Agent detects when information is needed
- Agent calls the search tool
- Agent incorporates search results into the response
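The loop that LangGraph runs under the hood can be sketched without the framework. `fake_llm` and `fake_search` below are stand-ins for the real model and DuckDuckGo tool, not actual project code:

```python
def fake_llm(messages):
    # Stand-in for the real model: request a search on the first pass,
    # then answer once tool results are in the message history.
    if any(m["role"] == "tool" for m in messages):
        return {"content": "Answer based on search results.", "tool_call": None}
    return {"content": "", "tool_call": {"name": "search", "query": "latest AI news"}}

def fake_search(query):
    # Stand-in for the DuckDuckGo search tool.
    return f"results for: {query}"

def run_agent(prompt):
    # The agent loop: call the model, run any requested tool,
    # feed results back, and stop when a final answer is produced.
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_llm(messages)
        if reply["tool_call"] is None:
            return reply["content"]
        result = fake_search(reply["tool_call"]["query"])
        messages.append({"role": "tool", "content": result})

answer = run_agent("What's new in AI?")
```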
### Streaming responses

Each request can choose whether to return streaming data via the `streaming` parameter.
## Testing

### Local testing

Local tests run against app.py on localhost:8080.

Start the agent:

```bash
python app.py
```

Run the tests in another terminal:

```bash
# Basic test
bash tests/test_local_basic.sh

# Streaming response test
bash tests/test_local_streaming.sh

# Multi-turn conversation test
bash tests/test_local_multi_turn.sh
```

Windows users: use Git Bash or WSL to run the bash scripts.
### Production testing

Production tests invoke the deployed agent using the SDK.

Requirements:

- Agent deployed with the `agent launch` command
- `PPIO_AGENT_ID` added to the `.env` file

Run the tests:

```bash
# Non-streaming response
python tests/test_sandbox_basic.py

# Streaming response
python tests/test_sandbox_streaming.py

# Multi-turn conversation
python tests/test_sandbox_multi_turn.py
```

All tests should pass if the agent is configured correctly.
## API Reference

### GET /ping

Check whether the agent is running properly.

Response:

```json
{
  "status": "healthy",
  "service": "My Agent"
}
```

### POST /invocations

Send a request to the agent.

Request body parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `prompt` | string | ✅ Yes | - | User message or question |
| `streaming` | boolean | No | `false` | Enable streaming output |
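A client can assemble the request body from the parameters above; `build_invocation_body` is an illustrative helper name, not part of any PPIO SDK:

```python
import json

def build_invocation_body(prompt, streaming=False):
    # "prompt" is required; "streaming" defaults to false, matching the table.
    if not isinstance(prompt, str) or not prompt:
        raise ValueError("prompt is required and must be a non-empty string")
    return json.dumps({"prompt": prompt, "streaming": bool(streaming)})

body = build_invocation_body("Tell me about AI agents")
```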
Example request:

```json
{
  "prompt": "Tell me about AI agents",
  "streaming": false
}
```

Non-streaming response:

```json
{
  "result": "AI agents are autonomous systems that..."
}
```

Streaming response:

Server-Sent Events (SSE) format:

```
data: {"chunk": "AI ", "type": "content"}
data: {"chunk": "agents ", "type": "content"}
data: {"chunk": "are ", "type": "content"}
...
data: {"chunk": "", "type": "end"}
```

Each `data:` line contains a JSON object with the next token chunk.
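A client can reassemble the streamed chunks into the full answer. The sketch below hard-codes the SSE lines shown above rather than reading a live HTTP stream:

```python
import json

def reassemble(sse_lines):
    # Concatenate "content" chunks until the "end" event arrives.
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        event = json.loads(line[len("data: "):])
        if event["type"] == "end":
            break
        parts.append(event["chunk"])
    return "".join(parts)

lines = [
    'data: {"chunk": "AI ", "type": "content"}',
    'data: {"chunk": "agents ", "type": "content"}',
    'data: {"chunk": "", "type": "end"}',
]
assert reassemble(lines) == "AI agents "
```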
## Troubleshooting

### The agent forgets earlier turns in a conversation

Cause: Each sandbox restart creates a new conversation history.

Solution: Pass the same `runtimeSessionId` in SDK calls so every request reaches the same sandbox instance:

```python
response = await client.invoke_agent_runtime(
    agentId=agent_id,
    payload=payload,
    runtimeSessionId="unique-session-id",  # Same ID for multi-turn
    timeout=300
)
```

### No streaming output

Cause: The `streaming` parameter is missing or set to `false`.

Solution: Ensure your request includes `"streaming": true`:

```json
{
  "prompt": "Your question",
  "streaming": true
}
```

### The agent fails to start locally

Cause: Dependencies are not installed, or the wrong Python environment is active.

Solution:

1. Activate your virtual environment
2. Install dependencies: `pip install -r requirements.txt`
3. Verify the installation: `pip list | grep ppio-sandbox`
## License

MIT License - see the LICENSE file for details.

Need help? Open an issue or contact support at ppio.ai.