Inspiration

Ever returned to a topic months later, only to realize you’ve forgotten your notes, insights, or where you left off? With Constellation, that fragmented learning experience becomes a thing of the past. As you take new notes, Constellation automatically surfaces related ideas from your past entries, helping you make meaningful connections across time. It ensures you’re building on your existing knowledge instead of repeating yourself—turning note-taking into an evolving, intelligent conversation with your own mind.

What it does

Constellation seamlessly analyzes your text/markdown notes using a lightweight local LLM (Llama 3.2-1B) to identify semantic relationships between concepts across your personal knowledge base. Without any manual tagging or organization required, it creates an intuitive mind map of your ideas, highlighting connections you might never have discovered on your own. Whether you're a student connecting concepts across disciplines, a researcher tracking the evolution of your thinking, a writer synthesizing ideas for your next project, or someone journaling to track personal growth over time, Constellation helps you leverage your entire knowledge history. For journal writers, it surfaces past reflections that reveal how your perspective has evolved on important life themes, allowing you to recognize patterns, celebrate progress, and gain deeper self-awareness through the constellation of your own thoughts.

How we built it

  • Frontend: A responsive React interface designed for focused writing, with seamless suggestions and connections.
  • Backend: A Django backend handles note management, similarity queries, and knowledge graph logic.
  • AI Engine: A locally hosted Llama 3.2-1B model via Ollama analyzes notes and generates vector embeddings—ensuring privacy and speed.
  • Vector Similarity: Semantic similarity is calculated using cosine similarity between note embeddings.
  • Storage: PostgreSQL is used to store notes and their corresponding embeddings, containerized with Docker for easy deployment.
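The similarity step above can be sketched in a few lines of plain Python. This is a minimal illustration of cosine similarity over note embeddings, not the project's actual code; the function names and the `(note_id, embedding)` storage shape are assumptions for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, notes, top_k=3):
    """Rank stored (note_id, embedding) pairs against a query embedding."""
    scored = [(note_id, cosine_similarity(query_vec, emb))
              for note_id, emb in notes]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

In the real system the embeddings would come from the SentenceTransformer model and live in PostgreSQL; here they are just Python lists.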

Challenges we ran into

  • Configuring and deploying Django and Docker containers
  • Troubleshooting frontend-backend communication
  • Getting the local LLM to reliably return JSON-formatted outputs
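A common way to cope with the JSON-reliability problem is to validate the model's output and retry on failure. The sketch below is illustrative, not the project's actual code: `generate` stands in for whatever callable wraps the Ollama model, and the brace-extraction heuristic is one assumed approach among several:

```python
import json

def parse_json_with_retries(generate, prompt, max_attempts=3):
    """Call a text-generation function and retry until it yields valid JSON.

    `generate` is any callable taking a prompt string and returning text;
    in a setup like ours it would wrap the local Llama 3.2-1B model.
    """
    last_error = None
    for _ in range(max_attempts):
        raw = generate(prompt)
        # Small models often wrap JSON in prose or code fences,
        # so extract the outermost brace-delimited span first.
        start, end = raw.find("{"), raw.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(raw[start:end + 1])
            except json.JSONDecodeError as exc:
                last_error = exc
        prompt = prompt + "\nRespond with valid JSON only."
    raise ValueError(f"No valid JSON after {max_attempts} attempts: {last_error}")
```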

Accomplishments we're proud of

  • Creating a full-stack system from scratch using Django, Docker, React and Ollama
  • Using a local LLM to analyze notes and extract key concepts
  • Using a SentenceTransformer to create vector embeddings for each note

What we learned

  • How to run an LLM locally
  • How to create vector embeddings
  • How to use Docker
  • How to build a backend with Django

What we will build next

  • Knowledge graph visualizer: Build an interactive graph view to explore how your thoughts connect over time.
  • Keyword & theme mapping: Develop an inverted index based on recurring themes or keywords to support instant retrieval, even as the note archive scales.
  • Unified deployment: Package the frontend, backend, and Llama model into a single container.
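The planned inverted index could start as simply as the sketch below. The tokenization is deliberately naive (lowercase split with basic punctuation stripping) and the note-storage shape is an assumption; a real version would add stemming, stop words, and theme extraction:

```python
from collections import defaultdict

def build_inverted_index(notes):
    """Map each keyword to the set of note ids that mention it.

    `notes` is a dict of note_id -> text (illustrative shape only).
    """
    index = defaultdict(set)
    for note_id, text in notes.items():
        for token in text.lower().split():
            index[token.strip(".,!?")].add(note_id)
    return index

def lookup(index, keyword):
    """Return the ids of notes containing the keyword (case-insensitive)."""
    return index.get(keyword.lower(), set())
```

Lookups stay fast as the archive grows because each query touches only one dictionary entry rather than scanning every note.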