An AI-augmented backend system with asynchronous LLM processing that enhances task management with intelligent summarization, classification, and tagging using LLM APIs. The backend is designed to handle core operations securely while integrating JWT Authentication for stateless user sessions. Data is persisted in PostgreSQL, with Redis used as a distributed cache for frequently accessed dashboard metrics (with auto-TTL of 10 minutes). File attachments are stored securely in Amazon S3 with a local storage fallback. AI processing is fully asynchronous using a bounded ThreadPoolTaskExecutor to ensure zero latency impact on user operations. The application is containerized using Docker with Docker Compose orchestrating PostgreSQL, Redis, and the application. A React frontend with a glassmorphism dark-mode UI provides an interview-ready demo experience.
This project is built as a fully decoupled frontend-backend architecture (service-oriented).
- Frontend Application (Live): https://taskflow-ui-two.vercel.app/
- Frontend Source Code: https://github.com/Kushan-shah/TaskFlow-UI
- Primary Backend API (AWS EC2): http://65.2.191.152:8080/swagger-ui/index.html#/
- Fallback Backend API (Render): https://task-manager-api-live.onrender.com/swagger-ui/index.html#/
- Backend Source Code: https://github.com/Kushan-shah/TaskFlow-AI
- JWT Authentication: Secure, stateless endpoint protection using JSON Web Tokens.
- Role-Based Access Control (RBAC): Granular authorization supporting `USER` and `ADMIN` roles via `@PreAuthorize`.
- Task CRUD Operations: Complete lifecycle management for task entities with soft deletion.
- Filtering & Pagination: Dynamic query execution using Spring Data JPA Specifications.
- Soft Delete: Logical record deletion to preserve data integrity and analytics.
- Redis Caching: Distributed caching layer (via `RedisCacheManager` with JSON serialization and a 10-minute TTL) to optimize the dashboard analytics endpoint. Falls back to an in-memory cache in the dev profile.
- AWS S3 Integration: Secure multipart file uploads for task attachments, with a local filesystem fallback.
- Automated Scheduler: Spring `@Scheduled` cron jobs to identify and log overdue tasks asynchronously.
- Global Exception Handling: Centralized `@RestControllerAdvice` to format error responses system-wide.
- AI Task Summarization: Automatic, intelligent summarization of tasks using the Google Gemini LLM API.
- AI Priority Prediction: LLM-driven classification of task priority (HIGH / MEDIUM / LOW).
- AI Tag Extraction: Auto-detection of relevant tags from task descriptions.
- Async AI Processing: Spring `@Async` with a bounded `ThreadPoolTaskExecutor`; AI calls never block the API response.
- React Frontend: Dark-mode glassmorphism UI with dashboard charts (Recharts), AI insight panels, shimmer loading states, and an RBAC-aware sidebar.
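The bounded-executor behaviour described above can be sketched with plain `java.util.concurrent` primitives. This is a minimal sketch, not the project's actual configuration: the real app uses Spring's `ThreadPoolTaskExecutor`, and the pool sizes here are illustrative (only the queue cap of 50 comes from the design notes below).

```java
import java.util.concurrent.*;

public class AiExecutorSketch {
    // Bounded pool analogous to a Spring ThreadPoolTaskExecutor with
    // corePoolSize=4, maxPoolSize=8, queueCapacity=50 (pool sizes illustrative).
    static ExecutorService buildAiExecutor() {
        return new ThreadPoolExecutor(
                4, 8,
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(50),               // bounded queue: backpressure under load
                new ThreadPoolExecutor.CallerRunsPolicy()); // degrade gracefully instead of dropping work
    }

    public static void main(String[] args) throws Exception {
        ExecutorService aiPool = buildAiExecutor();
        // The AI call runs off the request thread; the HTTP response returns immediately.
        Future<String> result = aiPool.submit(() -> "summary ready");
        System.out.println(result.get());
        aiPool.shutdown();
    }
}
```

The bounded queue plus `CallerRunsPolicy` is what prevents resource exhaustion: when the pool and queue are saturated, work is throttled rather than queued without limit.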
- Integrated with Google Gemini API (LLM inference via REST)
- Designed prompts to enforce strict JSON output, ensuring deterministic parsing and minimizing hallucination risks
- Implemented defensive JSON parsing with validation to prevent malformed AI responses
- Asynchronous processing using bounded ThreadPoolTaskExecutor
- AI processing is fully isolated from core transactional flow, preventing cascading failures or thread blocking under slow LLM responses
- Fail-fast timeout handling (10s) with graceful degradation
- Structured output parsing (summary, priority, tags)
- Retry mechanism via manual re-trigger endpoint
- AI endpoints can be rate-limited to prevent abuse and control external API cost spikes
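The defensive-parsing idea above can be illustrated with a small sketch. This is a simplified, hypothetical version (the class name and regex approach are for illustration; a real implementation would parse the full response with a JSON library such as Jackson), but it shows the key move: validate the model's output against an allow-list and fall back to a safe default instead of trusting it.

```java
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AiResponseParser {
    static final Set<String> PRIORITIES = Set.of("HIGH", "MEDIUM", "LOW");

    // Defensively extract "priority" from the raw LLM response; if the field
    // is missing, malformed, or not an allowed value, fall back to MEDIUM
    // rather than propagating a bad AI response into the task record.
    static String parsePriority(String rawResponse) {
        Matcher m = Pattern.compile("\"priority\"\\s*:\\s*\"([A-Z]+)\"").matcher(rawResponse);
        if (m.find() && PRIORITIES.contains(m.group(1))) {
            return m.group(1);
        }
        return "MEDIUM"; // graceful default for malformed or hallucinated output
    }
}
```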
- Stateless JWT authorization scales horizontally across multiple servers without sticky sessions
- AI processing failures do not impact core task creation flow
- Errors captured in the `aiErrorMessage` field for observability
- Timeout protection ensures API responsiveness under slow LLM responses
- Manual retry endpoint allows reprocessing failed tasks
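The fail-fast timeout with graceful degradation can be sketched using `CompletableFuture` from the JDK. This is an illustrative sketch (the class and method names are hypothetical; the 10-second limit comes from the description above), showing how a slow LLM call degrades to a placeholder instead of blocking the caller:

```java
import java.util.concurrent.*;

public class AiTimeoutSketch {
    // Give the LLM call 10 seconds; on timeout or failure, return a
    // placeholder so the core flow is never blocked (in the real app the
    // failure reason is recorded in the aiErrorMessage field).
    static String summarizeWithTimeout(Callable<String> llmCall, ExecutorService pool) {
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            try {
                return llmCall.call();
            } catch (Exception e) {
                throw new CompletionException(e);
            }
        }, pool);
        return future.orTimeout(10, TimeUnit.SECONDS)
                     .exceptionally(ex -> "AI summary unavailable") // graceful degradation
                     .join();
    }
}
```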
Input from User:
- Title: "Fix slow database queries"
- Description: "Dashboard API is taking 3 seconds due to unoptimized queries"
Output from Google Gemini (Structured JSON):
```json
{
  "summary": "Optimize database queries to improve dashboard performance",
  "priority": "HIGH",
  "tags": ["database", "performance", "optimization"]
}
```

The application adheres to a layered architecture pattern. This design enforces a strong separation of concerns, which keeps the codebase maintainable, testable, and naturally scalable:
- Controller Layer: Intercepts HTTP requests, validates incoming DTO payloads, and routes them to the business logic layer.
- Service Layer: Houses the core business logic, transaction management, and coordinates external service calls (e.g., AWS S3, Gemini AI).
- Repository Layer: Interfaces with PostgreSQL using Spring Data JPA for persistence and querying.
- Caching Layer: Redis-backed distributed cache for hot-path analytics (dashboard). Uses Spring Cache abstraction β swappable with zero code changes.
- Database Layer: The underlying persistent data store (PostgreSQL for production, H2 in-memory for development).
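The caching layer's TTL behaviour can be illustrated with a minimal in-memory sketch. The production path uses `RedisCacheManager` behind Spring's cache abstraction; this hypothetical class only mirrors the expiry semantics (entries become misses after the TTL, forcing a recompute):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlCacheSketch {
    record Entry(Object value, long expiresAt) {}

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCacheSketch(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(String key, Object value) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // A null return is a cache miss: the caller recomputes the dashboard
    // statistics and re-caches them, exactly as the Redis-backed path does
    // when a key's 10-minute TTL lapses.
    Object get(String key) {
        Entry e = store.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAt()) {
            store.remove(key);
            return null;
        }
        return e.value();
    }
}
```

Because the real code goes through Spring's cache abstraction, swapping this strategy for Redis (or back to in-memory in the dev profile) requires only configuration changes, not code changes.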
- Chose async processing over synchronous LLM calls to eliminate user-facing latency.
- Async design reduces unnecessary repeated LLM calls, optimizing API usage cost.
- Used bounded thread pool to prevent resource exhaustion under high load (Queue size capped at 50).
- Leveraged Redis caching for read-heavy dashboard endpoints with tenant-isolated eviction.
- Maintained stateless authentication for seamless horizontal scalability.
- Defaulted to `@Version` JPA optimistic locking to prevent lost updates in high-concurrency environments.
- Trade-off: AI insights are eventually consistent (not real-time) due to async processing.
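The `@Version` check that JPA performs can be sketched in plain Java. This is a hypothetical illustration (the class and method names are not from the project): an update succeeds only if the caller read the row at the currently stored version; otherwise it is rejected as a lost-update conflict, where JPA would throw `OptimisticLockException`.

```java
public class OptimisticLockSketch {
    static class Task {
        long version = 0;   // mirrors the @Version column
        String title;
    }

    // Conditional write: apply the change only if no one else has written
    // since the caller read the entity at readVersion.
    static boolean update(Task stored, long readVersion, String newTitle) {
        if (stored.version != readVersion) {
            return false;   // concurrent writer got there first: reject, don't overwrite
        }
        stored.title = newTitle;
        stored.version++;
        return true;
    }
}
```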
This API utilizes a stateless JWT scheme:
- Login: The client submits credentials to `/api/auth/login`. Upon successful authentication, the server generates and issues a signed JWT.
- Token Passing: Subsequent requests must include the JWT in the `Authorization: Bearer <token>` HTTP header.
- Validation: A custom Spring Security filter intercepts requests to protected endpoints, extracting and validating the token signature and expiration.
- Stateless Operation: No session data is stored on the server. This statelessness significantly improves horizontal scalability, as any server instance can validate requests independently without relying on a shared session store.
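The signature check at the heart of stateless validation can be sketched with the JDK's `javax.crypto` HMAC support. This assumes HS256 signing and is a simplified illustration: the real filter uses a JWT library, validates the `exp` claim, and should compare signatures in constant time (e.g. `MessageDigest.isEqual`).

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {
    // Sign "<base64url-header>.<base64url-payload>" with HMAC-SHA256,
    // producing the third segment of an HS256 JWT.
    static String sign(String headerAndPayload, String secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] sig = mac.doFinal(headerAndPayload.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Stateless validation: any server instance holding the shared secret can
    // recompute the signature and accept or reject the token independently.
    static boolean verify(String token, String secret) throws Exception {
        int i = token.lastIndexOf('.');
        return token.substring(i + 1).equals(sign(token.substring(0, i), secret));
    }
}
```

Because verification needs only the shared secret and the token itself, no session store is consulted, which is what makes horizontal scaling without sticky sessions possible.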
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/auth/register | Register a new user account |
| POST | /api/auth/login | Authenticate and retrieve JWT |
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/tasks | Create a new task |
| GET | /api/tasks | List tasks (supports filtering & pagination) |
| GET | /api/tasks/{id} | Retrieve task details by ID |
| PUT | /api/tasks/{id} | Update an existing task |
| DELETE | /api/tasks/{id} | Soft delete a task |
| GET | /api/tasks/search?keyword=value | Search tasks by keyword |
| GET | /api/tasks/dashboard | Retrieve cached task statistics |
| POST | /api/tasks/{id}/upload | Upload a file attachment to S3 |
| GET | /api/tasks/{id}/ai | Retrieve AI insights for a task |
| POST | /api/tasks/{id}/analyze | Manually trigger AI analysis |
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/admin/tasks | List all tasks across all users |
| GET | /api/admin/users | List all registered users |
| Parameter | Type | Default | Example |
|---|---|---|---|
| status | Enum | – | TODO, IN_PROGRESS, DONE |
| priority | Enum | – | LOW, MEDIUM, HIGH |
| page | int | 0 | 0 |
| size | int | 10 | 5 |
| sortBy | String | createdAt | dueDate |
| sortDir | String | desc | asc |
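Using these parameters, a client call might be assembled as follows. This is an illustrative sketch (the base URL and token value are placeholders, and `java.net.http` is used only for demonstration); it builds an authenticated, filtered list request against the `/api/tasks` endpoint from the table above:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ApiCallSketch {
    // Build a GET /api/tasks request filtered to HIGH-priority TODO items,
    // first page of 10, carrying the JWT in the Authorization header.
    static HttpRequest listTasks(String baseUrl, String jwt) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl
                        + "/api/tasks?status=TODO&priority=HIGH&page=0&size=10&sortBy=createdAt&sortDir=desc"))
                .header("Authorization", "Bearer " + jwt)
                .GET()
                .build();
    }
}
```

The request would then be sent with `HttpClient.send(...)`; it is only constructed here so the example stays self-contained.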
Automated load tests are run using k6 to validate API performance under concurrent user traffic.
| Parameter | Value |
|---|---|
| Tool | k6 v1.7.1 |
| Ramp-Up | 10 → 25 → 50 concurrent users |
| Duration | 70 seconds |
| Endpoints Tested | Register, Login, Create, List, Dashboard, Get, AI Insights, Update, Search, Delete |
| Metric | Value |
|---|---|
| Total HTTP Requests | 6,132 |
| Throughput | 86 req/s |
| Avg Response Time | 3.65 ms |
| p(95) Response Time | 8.12 ms ✅ |
| p(90) Response Time | 6.83 ms |
| Error Rate | 2.57% ✅ (below 10% threshold) |
| Tasks Created | 707 |
| Data Transferred | 47 MB received / 2.5 MB sent |
| Endpoint | Avg | p(95) | Max |
|---|---|---|---|
| Create Task | 3.50 ms | 8.34 ms | 24.87 ms |
| List Tasks (Paginated) | 3.75 ms | 6.34 ms | 36.62 ms |
| Dashboard (Cached) | 5.17 ms | 9.86 ms | 49.84 ms |
| AI Insights | 2.15 ms | 3.47 ms | 14.44 ms |
✅ http_req_duration p(95) < 2000ms → PASSED (actual: 8.12 ms)
✅ error_rate < 10% → PASSED (actual: 2.57%)
```shell
k6 run k6-load-test.js
```

```
src/main/java/com/taskmanager/
├── controller/   # REST API controllers (Auth, Task, Admin)
├── service/      # Business logic and transactions
├── repository/   # Data access layer (Spring Data JPA)
├── entity/       # JPA entities and enums
├── dto/          # Request and response mapping objects
├── security/     # JWT filters and authorization logic
├── config/       # Security, Redis cache, Swagger, CORS, async configuration
├── exception/    # Global exception handlers
├── scheduler/    # Scheduled cron jobs (overdue task detection)
└── util/         # Helper classes and mappers
```
Copy the sample environment file and insert your credentials:
```shell
cp .env.example .env
```

No PostgreSQL or Redis is needed for development:

```shell
mvn clean install
mvn spring-boot:run -Dspring-boot.run.profiles=dev
```

To spin up the API, PostgreSQL, and Redis together:

```shell
docker-compose up --build
```

For production:

```shell
docker-compose -f docker-compose.prod.yml up --build -d
```

To run the frontend:

```shell
cd ../task-manager-ui
npm install
npm run dev
```

The React app runs on http://localhost:5173 and connects to the Spring Boot API.