Conflicta: AI-Powered Conflict Resolution Training Platform
Inspiration
In our increasingly interconnected world, the ability to navigate and resolve conflicts effectively has never been more crucial. Whether it's workplace disagreements, personal relationships, or community disputes, most people lack the practical experience to understand how conversations can escalate or de-escalate based on subtle changes in communication style and emotional responses.
Traditional conflict resolution training often relies on static scenarios and theoretical frameworks, but real conflicts are dynamic, emotional, and unpredictable. Inspired by the track provided by Pixida, we created Conflicta - an interactive platform that lets users experience realistic AI-powered conflict simulations, explore different conversation paths, and learn through hands-on experimentation how small changes in communication can dramatically alter outcomes.
Our goal was to democratize access to conflict resolution education by creating an engaging, visual, and interactive learning environment where users can safely practice de-escalation techniques, understand emotional dynamics, and develop better communication skills.
What it does
Conflicta is a comprehensive full-stack application that simulates realistic conflicts between AI agents with distinct personalities and emotional responses. Here's what makes it unique:
🎭 Dynamic AI Personalities: Users create two AI agents with customizable names, personality traits, and behavioral instructions, choosing from models across multiple AI providers (OpenAI GPT-4o, Google Gemini, Mistral Magistral) with configurable temperature settings for realistic emotional responses.
🌳 Interactive Conversation Trees: Every conversation is visualized as a branching tree where users can click on any previous message and create alternate conversation paths, exploring "what if" scenarios to see how different responses lead to different outcomes.
📊 Real-time Mood Tracking: Each message includes sophisticated mood analysis with 7 emotional states (happy, excited, neutral, calm, sad, frustrated, angry) displayed through color-coded indicators and visual cues.
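As a rough sketch, the seven tracked states and their color-coded indicators could be modeled like this (the enum values mirror the list above; the hex colors are illustrative assumptions, not the app's actual palette):

```python
from enum import Enum

class Mood(str, Enum):
    """The seven emotional states tracked per message."""
    HAPPY = "happy"
    EXCITED = "excited"
    NEUTRAL = "neutral"
    CALM = "calm"
    SAD = "sad"
    FRUSTRATED = "frustrated"
    ANGRY = "angry"

# Illustrative UI color coding (hex values are assumptions).
MOOD_COLORS = {
    Mood.HAPPY: "#4caf50",
    Mood.EXCITED: "#ffc107",
    Mood.NEUTRAL: "#9e9e9e",
    Mood.CALM: "#2196f3",
    Mood.SAD: "#607d8b",
    Mood.FRUSTRATED: "#ff7043",
    Mood.ANGRY: "#f44336",
}
```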
🎮 User Intervention Tools:
- Custom Messages: Step in at any point to provide manual responses on behalf of either agent
- Escalation/De-escalation Controls: Apply interventions to see how conflicts can be intentionally escalated or calmed down
- Branch Navigation: Switch between different conversation paths to compare outcomes
🤖 3D Avatar Integration: Animated 3D avatars with emotional expressions and ElevenLabs text-to-speech integration with lip-sync using viseme technology bring conversations to life. Core technology used here is provided by the TalkingHead project: https://github.com/met4citizen/TalkingHead
📈 Intelligent Analysis: A third AI observer agent analyzes complete conversations, identifying escalation points, de-escalation opportunities, mood progressions, and providing actionable feedback for improvement.
How we built it
Phase 1: Core Backend Architecture
We built a FastAPI backend with a multi-provider AI architecture:
- Multi-Provider AI Support: Implemented factory pattern to support OpenAI, Google Gemini, and Mistral AI providers with unified interfaces
- Agent Management System: Created configurable AI agents with personality traits, behavioral instructions, and model-specific settings
- Conversation Tree Engine: Developed a complex tree data structure supporting branching conversations from any point
- Mood Analysis Integration: Built sophisticated emotion detection that analyzes each message for 7 distinct emotional states
- Intervention System: Implemented escalation/de-escalation controls that influence AI responses
- Analysis Service: Created a third-party AI observer that provides comprehensive conversation analysis
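The factory pattern with unified interfaces might be sketched roughly as follows. Class and method names are assumptions; the real implementation also handles mood analysis, streaming, and provider-specific options:

```python
from abc import ABC, abstractmethod

class AIProvider(ABC):
    """Unified interface every provider adapter must implement."""

    @abstractmethod
    def generate(self, system_prompt: str, messages: list, temperature: float) -> str:
        ...

class OpenAIProvider(AIProvider):
    def generate(self, system_prompt, messages, temperature):
        # Would call the OpenAI chat completions API here.
        raise NotImplementedError

class GeminiProvider(AIProvider):
    def generate(self, system_prompt, messages, temperature):
        # Would call the Gemini generateContent API here.
        raise NotImplementedError

class MistralProvider(AIProvider):
    def generate(self, system_prompt, messages, temperature):
        # Would call the Mistral chat API here.
        raise NotImplementedError

_PROVIDERS = {
    "openai": OpenAIProvider,
    "gemini": GeminiProvider,
    "mistral": MistralProvider,
}

def make_provider(name: str) -> AIProvider:
    """Factory: map a provider name to a concrete adapter instance."""
    try:
        return _PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"Unknown provider: {name}") from None
```

The payoff of this shape is that the conversation engine only ever sees `AIProvider`, so swapping models per agent is a one-line configuration change.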
Key Backend Endpoints:
- Agent creation and management
- Conversation initialization with scenario setup
- AI response generation with mood analysis
- User message insertion with override capabilities
- Tree navigation and branching operations
- Conversation analysis and reporting
Phase 2: Frontend Interface
We built a React/TypeScript frontend, prototyping initially with Lovable and then refining it with:
- Dual-Panel Interface: Split-screen design with tree visualization on the left and chat interface on the right
- Dynamic Tree Visualization: Interactive D3.js-powered conversation trees with mood-based color coding
- Real-time Chat Interface: Message bubbles with mood indicators, intervention badges, and timestamp tracking
- Context-Aware State Management: React Context API managing complex conversation state and tree navigation
- Visual Feedback Systems: Active path highlighting, node dimming for inactive branches, and smooth transitions
Phase 3: 3D Avatar Integration
The most challenging phase involved integrating animated 3D avatars:
- TalkingHead Library Integration: Implemented 3D avatar rendering with emotional expressions
- ElevenLabs TTS Integration: Connected text-to-speech with voice generation and lip-sync
- Performance Optimization: Built AvatarPerformanceManager and OptimizedDualAvatarManager to prevent system crashes
- Viseme Technology: Synchronized avatar lip movements with generated speech
- Dual Avatar Management: Coordinated two avatars representing different agents with distinct voices
Challenges we ran into
Challenge 1: Conversation Tree Navigation Complexity
Initially, implementing a branching conversation system where users could seamlessly navigate between different conversation paths proved incredibly complex. The challenge was maintaining conversation state while allowing users to jump to any node, branch from that point, and continue with either AI generation or manual input.
Solution: We developed a sophisticated tree data structure with path-based navigation, implemented context-aware state management in the frontend, and created visual indicators showing active conversation paths while dimming inactive branches.
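A minimal version of such a path-based branching tree might look like this (a sketch under our own naming assumptions; the production structure also stores moods, interventions, and timestamps):

```python
from dataclasses import dataclass, field

@dataclass
class MessageNode:
    """One message in the conversation tree; children are alternate continuations."""
    node_id: str
    speaker: str
    text: str
    parent: "MessageNode | None" = None
    children: list = field(default_factory=list)

    def branch(self, node_id: str, speaker: str, text: str) -> "MessageNode":
        """Create an alternate continuation from this point in the conversation."""
        child = MessageNode(node_id, speaker, text, parent=self)
        self.children.append(child)
        return child

    def path(self) -> list:
        """Root-to-here message history, i.e. the context to send to the model."""
        nodes, cur = [], self
        while cur is not None:
            nodes.append(cur)
            cur = cur.parent
        return list(reversed(nodes))
```

Because every node knows its parent, jumping to any earlier message and branching is just a `branch()` call, and the model context for the new path is recovered by walking `path()`.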
Challenge 2: 3D Avatar Integration and Performance
The biggest technical challenge was integrating TalkingHead animated avatars with ElevenLabs TTS. Initially, avatars would work for the first message but then exhibit severe issues:
- Avatars speaking in loops without finishing
- Lip movement without audio
- System lag and crashes
- Audio starting, pausing, restarting repeatedly
Solution: We built comprehensive performance management systems (AvatarPerformanceManager.tsx, OptimizedDualAvatarManager.tsx) with:
- Careful memory management to prevent system overload
- Audio queue management to prevent overlapping speech
- State synchronization between avatar animations and audio playback
- CPU usage optimization to maintain smooth performance
Challenge 3: Multi-Provider AI Coordination
Managing three different AI providers (OpenAI, Google, Mistral) with different API interfaces, response formats, and capabilities while maintaining consistent conversation quality and mood analysis was complex.
Solution: Implemented a factory pattern with abstract base classes, unified response parsing, and provider-specific adapters that normalize different AI responses into consistent formats.
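In practice, normalization can be as simple as mapping each provider's raw payload into one shared dataclass. The field names below are illustrative assumptions (and real mood values would come from a separate analysis step, not the provider payload):

```python
from dataclasses import dataclass

@dataclass
class AgentReply:
    """Provider-agnostic shape the rest of the app consumes."""
    text: str
    mood: str
    provider: str

def normalize_openai(raw: dict) -> AgentReply:
    # OpenAI-style chat completion payload.
    return AgentReply(
        text=raw["choices"][0]["message"]["content"],
        mood="neutral",  # placeholder; mood is classified downstream
        provider="openai",
    )

def normalize_gemini(raw: dict) -> AgentReply:
    # Gemini-style candidates/parts payload.
    return AgentReply(
        text=raw["candidates"][0]["content"]["parts"][0]["text"],
        mood="neutral",  # placeholder; mood is classified downstream
        provider="gemini",
    )
```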
What we learned
Technical Skills
- Advanced AI Prompt Engineering: Learned how to craft system prompts that generate consistent emotional responses and mood classifications across different AI providers
- 3D Animation Integration: Experimented with the TalkingHead library for creating emotionally expressive avatars and synchronizing them with text-to-speech
- Complex State Management: Developed knowledge in managing sophisticated conversation trees with branching paths and real-time updates
- Performance Optimization: Learned techniques for managing CPU-intensive 3D rendering and audio processing
AI Understanding
- Emotion Modeling: Discovered how to consistently extract emotional context from AI responses across different providers
- Personality Consistency: Learned how to maintain agent personality traits throughout long, branching conversations
- Intervention Effects: Understood how escalation and de-escalation prompts affect AI behavior and conversation dynamics
UX Design Insights
- Visual Communication: Learned how color coding, animations, and spatial relationships can effectively communicate complex emotional and conversational data
- Interactive Learning: Discovered the power of hands-on experimentation versus traditional educational approaches
- Accessibility in Complexity: Balanced sophisticated functionality with intuitive user experience
What's next for Conflicta
Advanced Learning Features
- Guided Scenarios: Pre-built conflict scenarios (workplace disputes, family disagreements, community conflicts) with learning objectives
- Performance Analytics: Personal progress tracking showing improvement in de-escalation skills over time
- Collaborative Learning: Multi-user scenarios where teams practice conflict resolution together
Enhanced AI Capabilities
- Emotional Intelligence Scoring: Quantitative assessment of emotional awareness and response effectiveness
- Cultural Context Integration: AI agents with different cultural backgrounds and communication styles
- Professional Specialization: Domain-specific conflict scenarios (healthcare, education, legal, customer service)
Extended Platform Features
- Voice Input: Real-time speech-to-text for more natural conversation flow
- Integration APIs: Connect with learning management systems and professional development platforms
- Community Sharing: Share successful de-escalation strategies and learn from others' approaches
Conflicta represents the future of interactive learning - where AI, 3D visualization, and educational psychology combine to create transformative learning experiences that can genuinely improve how people navigate conflicts in their personal and professional lives.
