Inspiration

The wholesale food distribution industry (seafood, produce, and meats) moves billions of dollars of product every day, yet the underlying operations are shockingly manual. Distributors still receive orders via messy text messages, rushed voicemails, and fragmented emails (e.g., "Send me 2 cases of salmon, 10lbs of shrimp, and my usual order"). Customer service reps spend hours manually deciphering these messages and typing them into legacy ERPs.

We realized that legacy ERPs only record what happened; they don't take action. We were inspired to build an Enterprise Resource Automation (ERA) platform: an AI-native operating system that acts as a digital teammate, instantly turning unstructured natural language into confirmed, inventory-synced transactions.

What it does

Fresh Flow is an intelligent order automation pipeline for wholesale distributors. When a customer sends a raw text message, Fresh Flow:

- Parses the Intent: Understands complex requests, including quantities, specific SKUs, and contextual phrases like "the usual."
- Checks Inventory: Matches the requested items against the live product catalog and checks current stock levels using FIFO logic.
- Generates the Order: Creates a structured order, calculates totals, and identifies items needing manual review (e.g., out-of-stock items).
- Real-Time Operations: Broadcasts the updates instantly to a live dashboard for the warehouse team and triggers an automated confirmation back to the buyer.
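The FIFO stock check described above can be sketched as a small allocation routine. This is an illustrative assumption about how such logic might look, not Fresh Flow's actual code; the function and lot structure are hypothetical:

```python
def allocate_fifo(lots, qty_requested):
    """Allocate stock from the oldest lots first (FIFO).

    `lots` is a list of (lot_id, qty_on_hand) pairs, oldest first.
    Returns (allocations, shortfall); a shortfall > 0 means the line
    item needs manual review (e.g., partially out of stock).
    Hypothetical sketch -- names are assumptions, not Fresh Flow's API.
    """
    remaining = qty_requested
    allocations = []
    for lot_id, on_hand in lots:
        if remaining == 0:
            break
        take = min(on_hand, remaining)
        if take:
            allocations.append((lot_id, take))
            remaining -= take
    return allocations, remaining

# Example: 10 lbs of shrimp requested, filled from the two oldest lots
allocs, short = allocate_fifo([("lot-07", 6), ("lot-09", 8)], 10)
# allocs == [("lot-07", 6), ("lot-09", 4)], short == 0
```

A nonzero shortfall is exactly the "needs manual review" signal described above: the order still generates, but the line is flagged rather than silently confirmed.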

How we built it

- Architecture (Hybrid RAG): We implemented a custom Retrieval-Augmented Generation pipeline. Instead of relying on slow agentic tool-calling, our Python backend pre-fetches exact catalog matches and historical customer data, injecting it directly into the LLM context for single-pass generation.
- Backend: Built with Python, utilizing WebSockets for real-time, event-driven broadcasting to the frontend dashboard.
- Database: PostgreSQL enriched with pgvector to perform lightning-fast semantic similarity searches across our product catalog during the retrieval phase.
- AI Engine: Amazon Bedrock using the Nova models for high-speed natural language reasoning and JSON structuring.
- Frontend: A responsive React application built with Vite and Tailwind CSS to serve as the live command center for distributors.
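The single-pass context injection might look roughly like the sketch below: catalog matches (e.g., from pgvector) and the customer's order history are retrieved up front, then bundled into one prompt so the model never has to call tools. All function names, field names, and prompt wording here are assumptions for illustration:

```python
import json

def build_order_prompt(message, catalog_matches, usual_order):
    """Bundle pre-fetched context into a single LLM prompt (hybrid RAG).

    Instead of letting the model query the database itself, the backend
    injects verified context and asks for one-pass JSON generation.
    Hypothetical sketch; not the production schema or prompt.
    """
    return "\n".join([
        "You are an order-entry assistant for a wholesale distributor.",
        f"Customer message: {message!r}",
        "Candidate catalog matches (pre-retrieved):",
        json.dumps(catalog_matches, indent=2),
        "Customer's usual order (from history):",
        json.dumps(usual_order, indent=2),
        "Return a JSON order with line items, quantities, and flags "
        "for anything that needs manual review.",
    ])
```

Because the context is assembled deterministically in Python, the LLM's only job is reasoning over verified data and emitting structured JSON, which is what makes single-pass generation feasible.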

Challenges we ran into

Our biggest challenge was building an AI architecture that was both fast and reliable. Initially, we implemented an Agentic Tool-Calling pattern. We gave the LLM access to our database tools and asked it to autonomously parse the order and search the catalog.

However, we found that lightweight LLMs struggle with this autonomy. The model would pass entire phrases like "2 cases of salmon" directly into the vector database, which ruined the mathematical embeddings (yielding terrible similarity scores) and caused the system to return zero products. Relying on the LLM to govern its own tool usage resulted in slow execution times (7–10 seconds) and frequent parsing failures.

Accomplishments that we're proud of

We completely re-architected our AI pipeline in the middle of development, pivoting from Agentic Tool-Calling to a Hybrid RAG (Retrieval-Augmented Generation) pattern.

Instead of asking the LLM to search the database, we built a Python pipeline using regex to extract item strings, natively ran pgvector semantic searches to find the exact catalog matches, and pulled the customer's historical order data. We then bundled all of this verified context into a single prompt for the LLM.
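The extraction step could look roughly like this, pulling (quantity, unit, item) triples out of the raw message so that only the clean item name, not the whole phrase, gets embedded for the pgvector search. The pattern and function names are illustrative assumptions, not the production regex:

```python
import re

# Matches "2 cases of salmon", "10lbs of shrimp", etc.
# Hypothetical pattern for illustration only.
LINE_ITEM = re.compile(
    r"(\d+)\s*(cases?|lbs?|boxes?)?\s*(?:of\s+)?([a-zA-Z ]+?)(?=,|and\b|$)",
    re.IGNORECASE,
)

def extract_items(message):
    """Return (qty, unit, item) tuples from a raw order message.

    Contextual phrases like "the usual" carry no quantity and are
    handled by a separate history lookup, not by this regex.
    """
    items = []
    for qty, unit, name in LINE_ITEM.findall(message):
        items.append((int(qty), (unit or "").lower(), name.strip().lower()))
    return items

# extract_items("Send me 2 cases of salmon, 10lbs of shrimp")
# -> [(2, "cases", "salmon"), (10, "lbs", "shrimp")]
```

Embedding just "salmon" or "shrimp" keeps the query vector close to the catalog entries, which is precisely what the full phrase "2 cases of salmon" was ruining in the agentic version.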

This pivot dropped our processing latency to 1–2 seconds, eliminated hallucinations, and allowed the system to flawlessly handle complex logic like combining "the usual order" with new line items in a single pass.

What we learned

We learned a massive architectural lesson: Agents aren't always the answer. While autonomous agents are exciting, using traditional software (regex + SQL/vector search) to pre-fetch context and using the LLM strictly as a powerful "reasoning and formatting" engine (RAG) results in a much cheaper, faster, and infinitely more robust production system.

What's next for Fresh Flow

The immediate next step is expanding our integration layer. We plan to build connectors for legacy ERP systems so Fresh Flow can sit on top of existing infrastructure as an intelligent routing layer. Furthermore, we are expanding our autonomous capabilities to include automated procurement: having Fresh Flow automatically generate Purchase Orders (POs) for suppliers the moment an incoming order dips a product below its safety stock threshold.
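A minimal sketch of that planned safety-stock trigger might look like the following; the function, field names, and thresholds are all hypothetical, since this feature is not built yet:

```python
def maybe_raise_po(sku, on_hand, order_qty, safety_stock, reorder_qty):
    """Draft a purchase order when an incoming order would dip a SKU
    below its safety stock threshold; return None otherwise.

    Hypothetical sketch of the planned auto-procurement step.
    """
    projected = on_hand - order_qty
    if projected < safety_stock:
        return {
            "sku": sku,
            "qty": reorder_qty,
            "reason": f"projected stock {projected} below safety {safety_stock}",
        }
    return None

# An order for 15 units against 20 on hand projects to 5, below the
# safety stock of 10, so a draft PO for the reorder quantity is raised.
po = maybe_raise_po("SAL-01", on_hand=20, order_qty=15,
                    safety_stock=10, reorder_qty=50)
```

In practice the trigger would fire inside the same pipeline that confirms the incoming order, so procurement reacts at the moment stock is committed rather than on a nightly batch.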
