A Python framework for building journey-based conversational agents with LLMs. Design multi-step conversational experiences where agents guide users through structured processes with state management, tool execution, and flexible routing.
| Version | Date | Changes |
|---|---|---|
| 0.3.0 | 2025-01-23 | Renamed Executor to LLMExecutor for clarity (backwards compatible) |
| 0.2.0 | 2025-01-15 | Added memory context management and prompt templates |
| 0.1.0 | 2025-01-14 | Initial release - JourneyAgent, state management, Django adapter |
A journey-based agent guides users through a multi-step process (a "journey"), where:
- Each step has its own behavior, prompts, and available tools
- State is maintained throughout the conversation
- The agent transitions between steps based on user interactions
- Tools can modify state and trigger step transitions
Perfect for: Onboarding flows, data collection, quote generation, claim processing, multi-step forms, guided troubleshooting, and any structured conversational workflow.
The framework is built around several core abstractions:
```
┌───────────────────────────────────────────────────────────────┐
│                         JourneyAgent                          │
│                Orchestrates the conversation flow             │
└───────────────────────────────────────────────────────────────┘
                              │
          ┌───────────────────┼───────────────────┐
          ▼                   ▼                   ▼
  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐
  │    State     │    │    Tools     │    │   Prompts    │
  │   (Journey   │    │   (Actions   │    │  (Step-based │
  │   Progress)  │    │   & Logic)   │    │   Behavior)  │
  └──────────────┘    └──────────────┘    └──────────────┘
          │                   │                   │
          └───────────────────┼───────────────────┘
                              ▼
                      ┌──────────────┐
                      │ LLMExecutor  │
                      │  (LLM Loop)  │
                      └──────────────┘
```
- `JourneyAgent`: Base class for building step-driven conversational agents
- `BaseJourneyState`: Manages journey state with step tracking and serialization
- `BaseJourneyTools`: Tools that can modify state and trigger transitions
- `LLMExecutor`: Handles the LLM interaction and tool execution loop
- `IntentRouter`: Routes user intents to different journeys
- `MemoryStore`: Persistence layer for state and conversation history
- `PromptManager`: Step-based prompt management with templating
```bash
pip install agent_runtime_framework
```

```python
from enum import Enum
from dataclasses import dataclass

from agent_runtime_framework import (
    JourneyAgent, BaseJourneyState, BaseJourneyTools,
    AgentContext, ToolSchema, ToolSchemaBuilder
)

# 1. Define your journey steps
class OnboardingStep(str, Enum):
    WELCOME = "welcome"
    COLLECT_NAME = "collect_name"
    COLLECT_EMAIL = "collect_email"
    COMPLETE = "complete"

# 2. Define your state
@dataclass
class OnboardingState(BaseJourneyState[OnboardingStep]):
    step: OnboardingStep = OnboardingStep.WELCOME
    name: str = ""
    email: str = ""

    def is_complete(self) -> bool:
        return self.step == OnboardingStep.COMPLETE

    def to_dict(self) -> dict:
        return {
            "step": self.step.value,
            "name": self.name,
            "email": self.email,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "OnboardingState":
        return cls(
            step=OnboardingStep(data.get("step", "welcome")),
            name=data.get("name", ""),
            email=data.get("email", ""),
        )

# 3. Define your tools
class OnboardingTools(BaseJourneyTools[OnboardingState]):
    async def save_name(self, name: str) -> str:
        self.state.name = name
        self.state.step = OnboardingStep.COLLECT_EMAIL
        await self._notify_state_change()
        return f"Great, {name}! Now, what's your email?"

    async def save_email(self, email: str) -> str:
        self.state.email = email
        self.state.step = OnboardingStep.COMPLETE
        await self._notify_state_change()
        return f"Perfect! You're all set, {self.state.name}!"

# 4. Define your agent
class OnboardingAgent(JourneyAgent[OnboardingState, OnboardingTools, OnboardingStep]):
    @property
    def key(self) -> str:
        return "onboarding-agent"

    def get_initial_state(self) -> OnboardingState:
        return OnboardingState()

    def get_system_prompt(self, state: OnboardingState) -> str:
        prompts = {
            OnboardingStep.WELCOME: "Welcome! Ask for the user's name.",
            OnboardingStep.COLLECT_NAME: "Collect the user's name using the save_name tool.",
            OnboardingStep.COLLECT_EMAIL: "Collect the user's email using the save_email tool.",
            OnboardingStep.COMPLETE: "Thank the user for completing onboarding.",
        }
        return prompts[state.step]

    def get_tool_schemas(self, state: OnboardingState) -> list[ToolSchema]:
        if state.step == OnboardingStep.COLLECT_NAME:
            return [
                ToolSchemaBuilder("save_name")
                .description("Save the user's name")
                .param("name", "string", "The user's name", required=True)
                .build()
            ]
        elif state.step == OnboardingStep.COLLECT_EMAIL:
            return [
                ToolSchemaBuilder("save_email")
                .description("Save the user's email")
                .param("email", "string", "The user's email", required=True)
                .build()
            ]
        return []

    def create_tools(self, state: OnboardingState, ctx: AgentContext) -> OnboardingTools:
        return OnboardingTools(state=state)

    async def execute_tool(self, tools: OnboardingTools, name: str, arguments: dict) -> str:
        method = getattr(tools, name)
        return await method(**arguments)

# 5. Run your agent
agent = OnboardingAgent(llm_client=my_llm_client)
result = await agent.run(context)
```

State is the heart of your agent. It tracks:
- Current step in the journey
- Collected data from the user
- Progress indicators and flags
State must be serializable (to/from dict) for persistence across conversation turns.
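As a sketch of what that round trip looks like, the snippet below serializes a state object to JSON between turns and rebuilds it afterwards. It uses a standalone dataclass that mirrors the `BaseJourneyState` pattern rather than the framework class itself:

```python
import json
from dataclasses import dataclass
from enum import Enum

class Step(str, Enum):
    WELCOME = "welcome"
    COMPLETE = "complete"

@dataclass
class SurveyState:
    step: Step = Step.WELCOME
    answer: str = ""

    def to_dict(self) -> dict:
        return {"step": self.step.value, "answer": self.answer}

    @classmethod
    def from_dict(cls, data: dict) -> "SurveyState":
        return cls(
            step=Step(data.get("step", "welcome")),
            answer=data.get("answer", ""),
        )

# Persist between turns: dump to JSON, then rebuild the state object.
state = SurveyState(step=Step.COMPLETE, answer="42")
payload = json.dumps(state.to_dict())
restored = SurveyState.from_dict(json.loads(payload))
assert restored == state
```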
Tools are the actions your agent can take. They:
- Execute business logic
- Modify state
- Trigger step transitions
- Return responses to the LLM
Tools inherit from BaseJourneyTools and have access to the current state.
Each step in your journey can have:
- Different system prompts - Guide the LLM's behavior
- Different available tools - Control what actions are possible
- Different validation logic - Ensure data quality
This creates a structured, predictable conversation flow.
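For example, a tool can gate a step transition behind validation, so the journey only advances when the input is acceptable. This is a standalone sketch in the style of the tools shown in this README, not framework code; `save_email` and `SignupState` are illustrative names:

```python
import re
from enum import Enum

class Step(str, Enum):
    COLLECT_EMAIL = "collect_email"
    COMPLETE = "complete"

# Deliberately simple pattern for illustration only
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

class SignupState:
    def __init__(self):
        self.step = Step.COLLECT_EMAIL
        self.email = ""

def save_email(state: SignupState, email: str) -> str:
    """Only advance the journey when the input passes validation."""
    if not EMAIL_RE.match(email):
        return "That doesn't look like a valid email - please try again."
    state.email = email
    state.step = Step.COMPLETE
    return "Email saved!"
```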
The LLMExecutor handles the core LLM interaction loop:
- Send messages + available tools to LLM
- LLM responds with text or tool calls
- Execute tool calls and add results to messages
- Repeat until LLM returns text or max iterations reached
You typically don't use the LLMExecutor directly - JourneyAgent uses it internally.
Note: This is different from agent_runtime_core.steps.StepExecutor, which is for multi-step workflows with checkpointing.
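That loop can be sketched in a few lines of plain Python. This is an illustration of the pattern only, not the framework's internal code; `llm` is a stand-in client with a hypothetical `complete(messages, tools=...)` method returning an object with `text` and `tool_calls` attributes:

```python
async def run_loop(llm, tools, messages, tool_schemas, max_iterations=10):
    """Illustrative LLM + tool loop: call the model, execute any tool
    calls it requests, feed results back, and stop on a text reply."""
    for _ in range(max_iterations):
        reply = await llm.complete(messages, tools=tool_schemas)
        if not reply.tool_calls:  # plain text: we're done
            return reply.text
        for call in reply.tool_calls:  # execute each requested tool
            result = await getattr(tools, call.name)(**call.arguments)
            messages.append(
                {"role": "tool", "name": call.name, "content": result}
            )
    raise RuntimeError("max iterations reached without a final answer")
```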
The framework supports debug mode for development and production mode for deployment:
Debug Mode:
- Exceptions propagate immediately (no swallowing)
- Verbose logging (DEBUG level)
- Clear error messages and stack traces
- Perfect for development and troubleshooting
Production Mode:
- Exceptions are caught and returned as error messages
- Standard logging (INFO level)
- Graceful error handling
- Safe for production deployments
```python
from agent_runtime_framework import configure

# Enable debug mode (development)
configure(debug=True)

# Enable production mode
configure(debug=False)

# Or use an environment variable:
# AGENT_RUNTIME_DEBUG=1 python your_app.py
```

Example - Debug mode catches errors immediately:
```python
from agent_runtime_framework import configure, JourneyAgent

# In debug mode, exceptions propagate
configure(debug=True)

class MyTools(BaseJourneyTools[MyState]):
    async def process_data(self, data: str) -> str:
        # This will raise immediately in debug mode
        result = int(data)  # ValueError if data is not a number
        return f"Processed: {result}"

# When this tool is called with invalid data, you'll get:
# ValueError: invalid literal for int() with base 10: 'abc'
# With a full stack trace!
```

Example - Production mode handles errors gracefully:
```python
# In production mode, exceptions are caught
configure(debug=False)

# The same tool now returns:
# "Error executing process_data: invalid literal for int() with base 10: 'abc'"
# The agent continues running and can handle the error
```

Configuration options:
```python
from agent_runtime_framework import configure, FrameworkConfig

# Fine-grained control
configure(
    debug=True,                     # Enable debug mode
    swallow_tool_exceptions=False,  # Don't catch exceptions
    log_level="DEBUG",              # Verbose logging
)

# Or create a custom config
config = FrameworkConfig(
    debug=True,
    swallow_tool_exceptions=False,
    log_level="DEBUG",
)

from agent_runtime_framework import set_config
set_config(config)

# Environment variables:
# AGENT_RUNTIME_DEBUG=1          # Enable debug mode
# AGENT_RUNTIME_LOG_LEVEL=DEBUG  # Set log level
```

Route users to different journeys based on their intent:
```python
from agent_runtime_framework import IntentRouter, RouteDefinition

class Journey(str, Enum):
    QUOTE = "quote"
    CLAIM = "claim"
    SUPPORT = "support"

router = IntentRouter[Journey]()

router.add_route(RouteDefinition(
    journey=Journey.QUOTE,
    name="start_quote",
    description="Get a new insurance quote",
))
router.add_route(RouteDefinition(
    journey=Journey.CLAIM,
    name="file_claim",
    description="File or check on a claim",
))

# Get routing tools for the LLM
tools = router.get_tool_schemas()

# After the LLM calls a routing tool
journey = router.resolve_tool_call("start_quote")  # Journey.QUOTE
```

Persist state and conversation history:
```python
from agent_runtime_framework import (
    StateStore, ConversationStore, MemoryManager
)

# Set up stores
state_store = StateStore()
conversation_store = ConversationStore()
manager = MemoryManager(state_store, conversation_store)

# Load context
context = await manager.load_context(conversation_id, "my-agent")

# Save after a run
await manager.save_state(conversation_id, "my-agent", new_state)
await manager.save_messages(conversation_id, messages)
```

Organize prompts with templates and step mappings:
```python
from agent_runtime_framework import PromptTemplate, StepPromptMapping

# Simple mapping
prompts = StepPromptMapping[MyStep](
    prompts={
        MyStep.WELCOME: "Welcome! How can I help?",
        MyStep.COLLECTING: "Please provide your information.",
    },
    default="I'm here to assist you.",
)

# With templates
template = PromptTemplate(
    "Hello $name! You are at step ${step}.",
    defaults={"name": "there"},
)
prompts.add(MyStep.WELCOME, template)

# Render
prompt = prompts.get(MyStep.WELCOME, name="Alice", step="welcome")
```

Observe and log execution events:
```python
from agent_runtime_framework import LLMExecutor, ExecutorHooks, LoggingHooks

class MyHooks(ExecutorHooks):
    async def on_tool_start(self, name: str, arguments: dict) -> None:
        print(f"🔧 Calling tool: {name}")

    async def on_tool_end(self, name: str, result: str) -> None:
        print(f"✅ Tool completed: {name}")

executor = LLMExecutor(
    llm_client=my_llm,
    hooks=MyHooks(),
)
```

The framework is designed to work seamlessly with agent_runtime_core, a companion package that provides:
- LLM client abstractions - Unified interface for OpenAI, Anthropic, etc.
- Production utilities - Logging, monitoring, error handling
- Configuration management - Environment-based settings
When agent_runtime_core is installed, the framework can automatically use its LLM clients:
```python
from agent_runtime_core.llm import get_llm_client

# The framework automatically uses agent_runtime_core's LLM client
agent = MyAgent()  # No need to pass llm_client explicitly
```

The Django adapter also integrates with agent_runtime_core for production deployments.
Use the DjangoRuntimeAdapter to integrate with django_agent_runtime:
```python
from agent_runtime_framework.adapters import DjangoRuntimeAdapter

class MyDjangoAgent(DjangoRuntimeAdapter[MyState, MyTools, MyStep]):
    @property
    def key(self) -> str:
        return "my-agent"

    def get_initial_state(self) -> MyState:
        return MyState()

    def get_system_prompt(self, state: MyState) -> str:
        return PROMPTS[state.step]

    def get_tool_schemas(self, state: MyState) -> list[ToolSchema]:
        return TOOLS[state.step]

    def create_tools(self, state: MyState, ctx, backend_client) -> MyTools:
        return MyTools(state=state, backend_client=backend_client)

    async def execute_tool(self, tools: MyTools, name: str, args: dict) -> str:
        method = getattr(tools, name)
        return await method(**args)

# Register with Django
from django_agent_runtime.runtime.registry import register_runtime
register_runtime(MyDjangoAgent())
```

The adapter handles:
- Converting Django's `RunContext` to the framework's `AgentContext`
- Using Django's checkpoint system for state persistence
- Emitting events through Django's event bus
- Returning results in Django's `RunResult` format
The framework includes comprehensive test utilities:
```bash
# Install dev dependencies
pip install agent_runtime_framework[dev]

# Run tests
pytest

# Run with coverage
pytest --cov=agent_runtime_framework
```

Test fixtures are provided in tests/conftest.py for common testing scenarios.
Base class for journey-based agents.
Must implement:
- `key: str` - Unique agent identifier
- `get_initial_state() -> StateT` - Create initial state
- `get_system_prompt(state: StateT) -> str` - Get prompt for current step
- `get_tool_schemas(state: StateT) -> list[ToolSchema]` - Get available tools
- `create_tools(state: StateT, ctx: AgentContext) -> ToolsT` - Create tool instance
- `execute_tool(tools: ToolsT, name: str, args: dict) -> str` - Execute a tool
Optional overrides:
- `load_state(ctx: AgentContext) -> StateT | None` - Load persisted state
- `save_state(ctx: AgentContext, state: StateT) -> None` - Save state
- `is_terminal_state(state: StateT) -> bool` - Check if the journey is complete
Base class for journey state with step tracking.
Must implement:
- `step: StepT` - Current step (as a field)
- `is_complete() -> bool` - Check if the journey is complete
- `to_dict() -> dict` - Serialize to a dictionary
- `from_dict(data: dict) -> Self` - Deserialize from a dictionary
Base class for journey tools that operate on state.
Attributes:
- `state: StateT` - The journey state
- `backend_client: Any` - Optional backend client
- `on_state_change: Callable` - Callback for state changes
Methods:
- `_notify_state_change()` - Call after modifying state
Schema definition for LLM tools.
Create with `ToolSchemaBuilder`:

```python
schema = (
    ToolSchemaBuilder("my_tool")
    .description("What the tool does")
    .param("arg1", "string", "Description", required=True)
    .param("arg2", "number", "Description", required=False)
    .build()
)
```

Core execution loop for LLM + tool interactions.
```python
executor = LLMExecutor(
    llm_client=my_llm,
    tool_executor=MethodToolExecutor(tools),
    config=LLMExecutorConfig(max_iterations=10),
    hooks=MyHooks(),
)

result = await executor.run(
    messages=[{"role": "user", "content": "Hello"}],
    tools=[tool_schema],
    system_prompt="You are helpful.",
)
```

Routes user intents to different journeys.
```python
router = IntentRouter[MyJourney]()
router.add_route(RouteDefinition(
    journey=MyJourney.QUOTE,
    name="start_quote",
    description="Start a quote journey",
))

# Get tool schemas for the LLM
tools = router.get_tool_schemas()

# Resolve a tool call to a journey
journey = router.resolve_tool_call("start_quote")
```

Abstract interface for persistence.
Implementations:
- `InMemoryStore[T]` - In-memory storage (for testing)
- `StateStore` - Specialized for agent state
- `ConversationStore` - Specialized for message history
Manages step-based prompts with context enrichment.
```python
manager = PromptManager[MyStep](
    mapping=StepPromptMapping(...),
    context_enricher=lambda state: {"user_name": state.name},
)

prompt = manager.get_prompt(state)
```

Simple step-by-step flow (onboarding, data collection):
WELCOME → COLLECT_INFO → PROCESS → COMPLETE
Different paths based on user input (quote with options):
WELCOME → CHOOSE_TYPE → [OPTION_A → ...] or [OPTION_B → ...]
Repeat steps until condition met (multi-item cart):
START → ADD_ITEM → [MORE_ITEMS? → ADD_ITEM] → CHECKOUT
Handle errors and retry:
STEP → [ERROR → RETRY → STEP] → NEXT_STEP
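As a sketch, the loop pattern above boils down to a transition function that re-enters a step until a condition is met. This is standalone code, not tied to the framework API; `CartStep` and `next_step` are illustrative names:

```python
from enum import Enum

class CartStep(str, Enum):
    START = "start"
    ADD_ITEM = "add_item"
    CHECKOUT = "checkout"

def next_step(step: CartStep, wants_more: bool) -> CartStep:
    """Loop pattern: stay in ADD_ITEM while the user wants more items,
    then advance to CHECKOUT."""
    if step in (CartStep.START, CartStep.ADD_ITEM):
        return CartStep.ADD_ITEM if wants_more else CartStep.CHECKOUT
    return step
```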
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
MIT License - see LICENSE file for details.
- `agent_runtime_core` - Core utilities for production agent deployments
- `django_agent_runtime` - Django integration for agent runtimes
Check out the tests/ directory for complete working examples of:
- Basic journey agents
- State management
- Tool execution
- Memory persistence
- Intent routing
- Prompt management
For questions, issues, or feature requests, please open an issue on GitHub.
Built with ❤️ for creating amazing conversational experiences