Inspiration
In the academic world, a paper is more than just text; it is intellectual property (IP). We noticed a dangerous trend: researchers are forced to choose between powerful AI assistance (which leaks private drafts to cloud servers) and local privacy (which leaves them struggling with scary LaTeX errors and manual formula formatting).
We were inspired by the "Local-First" movement and the release of powerful reasoning models like Gemma3. We asked: Why can't we have a LaTeX editor that is as smart as Overleaf + Copilot, but as private as a physical notebook?
What it does
FoTex is a privacy-centric, AI-powered LaTeX editor designed for the modern researcher. It bridges the gap between local hardware and large language model (LLM) intelligence:
Local AI Co-author: Fixes LaTeX compilation errors, converts descriptions to complex formulas, and completes paragraphs—all running locally via Ollama.
Zero-Config Environment: Ships with a built-in Tectonic engine sidecar, eliminating the need for massive 5GB+ TeX Live installations.
Instant Error Diagnosis: Instead of cryptic logs, FoTex provides one-click AI fixes that understand the context of your specific document.
Data Sovereignty: Your drafts and research never leave your machine. No cloud, no telemetry, no leaks.
How we built it
- Tauri + Rust Backend: Utilizes a lightweight Rust core for secure file I/O and high-speed process management.
- Local LLM Integration: Direct bridge to Ollama API; documents never leave your private hardware.
- Monaco Core: Leveraging the VS Code engine for professional-grade syntax highlighting and performance.
- Sidecar Integration: Seamlessly bundles Tectonic as a sidecar to eliminate the need for 5GB TeX Live installations.
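The Ollama bridge amounts to POSTing a JSON body to the local `/api/generate` endpoint that Ollama exposes on localhost. A minimal sketch of the request construction, in std-only Rust (the model name `gemma3:12b` is an assumption for illustration; FoTex's actual prompt templates and HTTP client are not shown):

```rust
/// Build the JSON body for Ollama's local /api/generate endpoint.
/// `"stream": false` requests one complete response instead of chunks.
fn build_generate_request(model: &str, prompt: &str) -> String {
    format!(
        r#"{{"model":"{}","prompt":"{}","stream":false}}"#,
        escape_json(model),
        escape_json(prompt)
    )
}

/// Minimal JSON string escaping (quotes, backslashes, newlines) so
/// multi-line LaTeX prompts survive the round trip.
fn escape_json(s: &str) -> String {
    s.chars()
        .flat_map(|c| match c {
            '"' => vec!['\\', '"'],
            '\\' => vec!['\\', '\\'],
            '\n' => vec!['\\', 'n'],
            other => vec![other],
        })
        .collect()
}
```

In the app this body is sent to `http://localhost:11434/api/generate`, Ollama's default local address, so the document text never leaves the machine.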
Challenges we ran into
Robust Output Parsing: LLMs often wrap code in conversational text or tags. We engineered a custom regex-based extraction layer in Rust to "rescue" valid LaTeX code even when the model's output is truncated or improperly formatted.
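The idea behind that rescue layer can be shown with a simplified std-only sketch (the production version uses regexes and handles more tag variants): try fenced code blocks first, then fall back to scanning for a bare `\documentclass` span, tolerating a missing closing fence from truncated output.

```rust
/// Pull LaTeX source out of a chatty LLM reply. Tries a fenced code
/// block first, then falls back to the \documentclass..\end{document}
/// span, so truncated or unfenced output can still be "rescued".
fn extract_latex(reply: &str) -> Option<String> {
    // Case 1: a ```latex / ```tex / plain ``` fenced block.
    for fence in ["```latex", "```tex", "```"] {
        if let Some(start) = reply.find(fence) {
            let body = &reply[start + fence.len()..];
            // Tolerate truncation: a missing closing fence means
            // "take everything to the end".
            let end = body.find("```").unwrap_or(body.len());
            let code = body[..end].trim();
            if !code.is_empty() {
                return Some(code.to_string());
            }
        }
    }
    // Case 2: bare LaTeX embedded in conversational prose.
    let start = reply.find("\\documentclass")?;
    let end = reply[start..]
        .find("\\end{document}")
        .map(|i| start + i + "\\end{document}".len())
        .unwrap_or(reply.len());
    Some(reply[start..end].trim().to_string())
}
```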
Context Management: Feeding large LaTeX files into local models while maintaining speed required precise prompt engineering to ensure the AI returns complete, valid documents rather than just small snippets.
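One illustrative tactic (a sketch of the general technique, not necessarily FoTex's exact strategy): instead of sending the whole file, send only a window of lines around the reported error line, which keeps local-model latency low while preserving enough context.

```rust
/// Cut a context window of `radius` lines around the (0-indexed)
/// error line, so a small local model sees the relevant LaTeX
/// instead of the entire document.
fn context_window(source: &str, error_line: usize, radius: usize) -> String {
    let lines: Vec<&str> = source.lines().collect();
    let start = error_line.saturating_sub(radius).min(lines.len());
    let end = (error_line + radius + 1).min(lines.len());
    lines[start..end].join("\n")
}
```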
Asynchronous State Syncing: Managing state between the Rust-based compilation logs, the Monaco editor, and the Zustand store in React required a complex event-driven architecture to keep the UI responsive during long-running AI inference.
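The event-driven flow can be sketched as a plain event enum plus a reducer (names here are illustrative; the real app routes these events through Tauri's event system into the Zustand store on the React side):

```rust
/// Events flowing from the Rust backend toward the UI layer.
enum AppEvent {
    CompileLog(String),
    AiChunk(String),
    CompileDone { success: bool },
}

/// UI-side view state, mirroring what the Zustand store holds.
#[derive(Default)]
struct UiState {
    log: String,
    ai_output: String,
    compiling: bool,
}

/// Fold a backend event into the view state. Because each event is
/// small and self-describing, the UI stays responsive while the
/// backend streams logs and AI output concurrently.
fn reduce(state: &mut UiState, event: AppEvent) {
    match event {
        AppEvent::CompileLog(line) => {
            state.compiling = true;
            state.log.push_str(&line);
        }
        AppEvent::AiChunk(chunk) => state.ai_output.push_str(&chunk),
        AppEvent::CompileDone { success: _ } => state.compiling = false,
    }
}
```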
Accomplishments that we're proud of
Truly Local Workflow: We shipped a full-stack academic writing suite that functions entirely offline.
The "Fix with AI" Button: Creating a seamless loop where a user can go from a "Missing $ inserted" error to a perfectly compiled PDF in a single click.
Performance: Achieving near-instant PDF previews and editor responsiveness despite the heavy lifting of running local neural networks.
What we learned
Prompt Precision over Model Size: We discovered that a well-tuned prompt for a 12B model can often outperform a generic prompt on a much larger cloud model for specialized tasks like LaTeX debugging.
The Value of Resilience: We learned that when working with AI, you must code for failure—building robust fallback mechanisms for when the AI "hallucinates" non-existent LaTeX packages.
What's next for FoTex
Semantic Literature Search: A Retrieval-Augmented Generation (RAG) system over local .bib files, allowing the AI to suggest citations from the user’s own library.
P2P Collaboration: Enabling real-time co-authoring between two local machines using encrypted peer-to-peer protocols, staying true to the "No-Cloud" philosophy.
Built With
- rust
- tauri
- typescript