A sophisticated LangGraph-based agent system for analyzing employee queries and routing them to specialized organizational agents. This system automatically categorizes employee requests, determines urgency levels, and routes them to the appropriate departmental agent (HR, Payroll, Benefits, IT Support, etc.).
- Intelligent Query Analysis: Uses LLM-powered analysis to understand employee queries
- Automatic Categorization: Classifies queries into 8 different organizational categories
- Urgency Assessment: Evaluates query urgency on a 1-5 scale
- Smart Routing: Routes queries to specialized organizational agents
- Approval Workflow: Identifies queries requiring managerial approval
- Extensible Architecture: Easy to add new agent types and capabilities
The system uses LangGraph to orchestrate a workflow built from three kinds of components: query categories, specialized organizational agents, and workflow processing nodes.
- HR Policies: Company policies, procedures, handbook questions
- Payroll: Salary, wages, tax questions, pay stubs
- Benefits: Health insurance, retirement, PTO, vacation
- Performance: Performance reviews, goals, feedback
- Training: Learning opportunities, certifications, skill development
- IT Support: Technical issues, software, hardware, access
- Facilities: Office space, equipment, maintenance
- General: General questions not fitting other categories
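The extension example later in this README shows the categories as a string-valued `Enum`; a minimal sketch of what that enum likely looks like (names assumed from the list above, the real module may differ):

```python
from enum import Enum

class QueryCategory(str, Enum):
    """The eight organizational categories a query can fall into."""
    HR_POLICIES = "hr_policies"
    PAYROLL = "payroll"
    BENEFITS = "benefits"
    PERFORMANCE = "performance"
    TRAINING = "training"
    IT_SUPPORT = "it_support"
    FACILITIES = "facilities"
    GENERAL = "general"

# Because the enum subclasses str, values compare directly against strings
print(QueryCategory.PAYROLL == "payroll")  # True
```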
- HR Assistant: Handles HR policies, performance, training, and general queries
- Payroll Specialist: Manages payroll-related questions and issues
- Benefits Coordinator: Assists with employee benefits and PTO
- IT Support: Provides technical support and system access help
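Each of these agents can share a common base class. The extension example below implies a base with `name`, `llm`, and `specialty` fields; here is a hypothetical sketch of that shape, with the `llm` reduced to any callable so the example runs without the real ChatOpenAI client:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OrganizationalAgent:
    """Hypothetical base: each specialized agent is an LLM plus a specialty."""
    name: str
    llm: Callable[[str], str]   # stand-in for the real chat model
    specialty: str

    def handle(self, query: str) -> str:
        # Prefix the query with the agent's specialty so the model answers in role
        prompt = f"You are {self.name}, specializing in {self.specialty}.\nQuery: {query}"
        return self.llm(prompt)

# Usage with a dummy model that just echoes the final prompt line:
def echo(prompt: str) -> str:
    return "[answered] " + prompt.splitlines()[-1]

hr = OrganizationalAgent("HR Assistant", echo, "HR policies and training")
print(hr.handle("What's our remote work policy?"))
```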
- Query Analysis: Analyzes and categorizes incoming queries
- Query Routing: Routes queries to appropriate specialized agents
- Query Processing: Processes queries with the assigned agent
- Approval Check: Determines if managerial approval is needed
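In the real system these four nodes are wired into a LangGraph graph; as a dependency-free illustration of the control flow only, the same pipeline can be sketched with plain functions (the analysis and agent logic here are stubs, and the urgency threshold of 4 is an assumption):

```python
URGENT_THRESHOLD = 4  # assumption: urgency 4-5 triggers the approval check

def analyze(state):
    # Node 1: stub analysis; the real node calls the LLM
    q = state["query"].lower()
    category = "payroll" if "paycheck" in q else "general"
    state["analysis"] = {"category": category, "urgency": 4 if "haven't" in q else 2}
    return state

def route(state):
    # Node 2: pick an agent from the category
    agents = {"payroll": "Payroll Specialist", "general": "HR Assistant"}
    state["assigned_agent"] = agents[state["analysis"]["category"]]
    return state

def process(state):
    # Node 3: the assigned agent produces a response
    state["response"] = f"{state['assigned_agent']} is handling: {state['query']}"
    return state

def approval_check(state):
    # Node 4: urgent queries are flagged for a manager
    state["requires_approval"] = state["analysis"]["urgency"] >= URGENT_THRESHOLD
    return state

def run(query):
    state = {"query": query}
    for node in (analyze, route, process, approval_check):
        state = node(state)
    return state

result = run("I haven't received my paycheck this month.")
print(result["assigned_agent"], result["requires_approval"])
```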
- Python 3.8+
- OpenAI API key
- Clone or download the project files

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up your OpenAI API key:

  Option A - Environment variable:

  ```bash
  export OPENAI_API_KEY="your-api-key-here"
  ```

  Option B - Create a .env file:

  ```bash
  cp .env.example .env
  # Edit .env and add your API key
  ```

  Option C - Pass directly in code:

  ```python
  agent = LangGraphOrgAgent(openai_api_key="your-api-key")
  ```

Basic usage:

```python
from langraph_org_agent import LangGraphOrgAgent

# Initialize the agent system
org_agent = LangGraphOrgAgent()

# Process an employee query
result = org_agent.process_employee_query(
    "I haven't received my paycheck this month. What should I do?"
)

print(f"Category: {result['analysis']['category']}")
print(f"Urgency: {result['analysis']['urgency']}/5")
print(f"Response: {result['response']}")
```

Run the interactive demo:

```bash
python example_usage.py
```

Run the sample queries demo:

```bash
python example_usage.py --demo
```

Example queries and their expected routing:

| Query | Expected Category | Assigned Agent |
|---|---|---|
| "I haven't received my paycheck" | payroll | Payroll Specialist |
| "What's our remote work policy?" | hr_policies | HR Assistant |
| "How do I enroll in health insurance?" | benefits | Benefits Coordinator |
| "My laptop won't connect to VPN" | it_support | IT Support |
| "I need time off next week" | benefits | Benefits Coordinator |
| "When is my performance review?" | performance | HR Assistant |
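The real system derives these routings with LLM-powered analysis, but a hypothetical keyword-based fallback makes the table's expected behavior concrete (all names and rules below are illustrative, not part of the actual module):

```python
# Hypothetical keyword fallback; shown only to make the table's routing concrete.
KEYWORD_RULES = [
    ("paycheck", "payroll"),
    ("policy", "hr_policies"),
    ("insurance", "benefits"),
    ("vpn", "it_support"),
    ("time off", "benefits"),
    ("performance review", "performance"),
]

AGENT_FOR_CATEGORY = {
    "payroll": "Payroll Specialist",
    "hr_policies": "HR Assistant",
    "benefits": "Benefits Coordinator",
    "it_support": "IT Support",
    "performance": "HR Assistant",  # HR Assistant also covers performance queries
}

def categorize(query: str) -> str:
    q = query.lower()
    for keyword, category in KEYWORD_RULES:
        if keyword in q:
            return category
    return "general"

for query in ["I haven't received my paycheck", "My laptop won't connect to VPN"]:
    cat = categorize(query)
    print(query, "->", cat, "/", AGENT_FOR_CATEGORY[cat])
```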
- Create a new agent class:

  ```python
  class NewAgent(OrganizationalAgent):
      def __init__(self, llm: ChatOpenAI):
          super().__init__(
              name="New Agent",
              llm=llm,
              specialty="Your specialty description"
          )
  ```

- Add to the QueryCategory enum:

  ```python
  class QueryCategory(str, Enum):
      # ... existing categories
      NEW_CATEGORY = "new_category"
  ```

- Update the agent mapping:

  ```python
  self.agents = {
      # ... existing mappings
      QueryCategory.NEW_CATEGORY: NewAgent(self.llm),
  }
  ```

Modify the QueryAnalyzer class to adjust categorization logic:
```python
class QueryAnalyzer:
    def __init__(self, llm: ChatOpenAI):
        self.llm = llm
        # Customize the analysis prompt here
        self.analysis_prompt = ChatPromptTemplate.from_messages([
            ("system", "Your custom system prompt..."),
            ("human", "Analyze this query: {query}")
        ])
```

The system includes comprehensive test queries covering all categories. Run tests using:

```bash
# Test with sample queries
python -c "
from langraph_org_agent import LangGraphOrgAgent
agent = LangGraphOrgAgent()
result = agent.process_employee_query('Test query here')
print(result)
"
```

Each query returns a structured analysis:
```python
{
    "query": "Original employee query",
    "analysis": {
        "category": "payroll",
        "urgency": 4,
        "entities": ["paycheck", "month"],
        "intent": "Report missing paycheck",
        "requires_approval": False
    },
    "assigned_agent": "payroll",
    "response": "Agent response...",
    "messages": ["Processing messages..."]
}
```

- API keys should be stored securely (environment variables or secure key management)
- Consider implementing authentication for production use
- Sensitive queries may require additional security measures
- Log queries appropriately while respecting privacy
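Callers can act on the structured result shown above; for example, sensitive or urgent results can be flagged for human review before the response is sent. The helper below is hypothetical (not part of the module), and the threshold of 4 is an assumption:

```python
def needs_escalation(result: dict, urgency_threshold: int = 4) -> bool:
    """Hypothetical helper: flag results a human should review first."""
    analysis = result["analysis"]
    return analysis["requires_approval"] or analysis["urgency"] >= urgency_threshold

sample = {
    "query": "I haven't received my paycheck this month.",
    "analysis": {"category": "payroll", "urgency": 4, "requires_approval": False},
    "assigned_agent": "payroll",
    "response": "Agent response...",
}
print(needs_escalation(sample))  # True: urgency 4 meets the threshold
```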
For production deployment, consider:
- Scalability: Use async processing for high query volumes
- Monitoring: Add logging and monitoring for query processing
- Database Integration: Store query history and responses
- User Authentication: Implement proper user authentication
- Rate Limiting: Add rate limiting for API calls
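Of the items above, rate limiting is straightforward to prototype without extra dependencies; a minimal token-bucket sketch for throttling outgoing LLM API calls (class name and parameters are illustrative):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for outgoing LLM API calls."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=3)
results = [bucket.allow() for _ in range(5)]  # burst of 3 allowed, then throttled
print(results)
```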
- langgraph==0.2.16: Graph-based workflow orchestration
- langchain==0.3.0: LLM framework and utilities
- langchain-openai==0.2.0: OpenAI integration
- pydantic==2.8.2: Data validation and settings management
- python-dotenv==1.0.0: Environment variable management
- Fork the repository
- Create a feature branch
- Add your improvements
- Test thoroughly
- Submit a pull request
This project is provided as-is for educational and development purposes.
For issues or questions:
- Check the error messages for configuration issues
- Ensure your OpenAI API key is properly set
- Verify all dependencies are installed correctly
- Review the example usage for proper implementation patterns
- Add support for multiple LLM providers
- Implement conversation memory/context
- Add integration with ticketing systems
- Support for file attachments and document analysis
- Multi-language support
- Advanced analytics and reporting
- Integration with company databases and systems