AI Infrastructure Engineer, Researcher, and Open-Source Contributor. I build AI systems that ship to production — where architecture decisions compound into products developers actually use. Previously built and led engineering on two widely adopted open-source AI projects.
Currently pursuing Electronics & Computer Science at VIT Mumbai, working toward research in neural systems and brain-inspired computation.
> cat ~/work/featured.md

SQL-native memory layer for LLMs and AI agents.
- Independently architected & built the first two production versions of the platform
- Multi-database backends (SQLite, PostgreSQL, MySQL, MongoDB, Oracle), DigitalOcean one-click deployment, Gradient AI playground
- Authored developer documentation, API reference, and cookbook for onboarding and adoption
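The SQL-native idea can be sketched in a few lines with stdlib `sqlite3`: agent memory lives in ordinary rows and is recalled with plain SQL. The `remember`/`recall` helpers and the `memories` table below are hypothetical illustrations, not the platform's actual API.

```python
# Minimal sketch of an SQL-native memory layer: conversation turns are
# persisted as rows and recalled with plain SQL queries.
import sqlite3

def connect(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS memories ("
        "id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
    )
    return conn

def remember(conn, role, content):
    conn.execute(
        "INSERT INTO memories (role, content) VALUES (?, ?)", (role, content)
    )
    conn.commit()

def recall(conn, keyword, limit=5):
    # Plain keyword search; a real backend could layer FTS or embeddings on top.
    rows = conn.execute(
        "SELECT role, content FROM memories WHERE content LIKE ? "
        "ORDER BY id DESC LIMIT ?",
        (f"%{keyword}%", limit),
    )
    return rows.fetchall()

conn = connect()
remember(conn, "user", "My favorite database is PostgreSQL.")
remember(conn, "assistant", "Noted: PostgreSQL it is.")
print(recall(conn, "PostgreSQL"))
```

Because the store is just SQL, swapping the SQLite connection for PostgreSQL, MySQL, or another backend changes the driver, not the model.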
Enterprise multi-agent orchestration framework.
- Integrated an AI Co-Scientist system and enhanced agent utilities (vision, streaming, memory)
- Authored architecture guides, cloud deployment docs, and vector DB integration walkthroughs
- Built automated test suites and error-handling infrastructure, and refactored the codebase for scalability
> ls -la ~/projects/

- Fine-tuned Gemma-3B on a custom dataset from The Almanack of Naval Ravikant
- Quantized (Q8_0) and deployed fully offline via Ollama for private inference
- Built with Unsloth, llama.cpp, and a custom PDF scraping pipeline
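Serving a quantized GGUF offline through Ollama typically goes through a Modelfile; a minimal sketch, where the filename and system prompt are assumptions:

```
# Hypothetical Modelfile; the GGUF filename and system prompt are assumptions.
FROM ./gemma-3b-naval-q8_0.gguf
PARAMETER temperature 0.7
SYSTEM """You are a reflective assistant grounded in The Almanack of Naval Ravikant."""
```

Registered and run locally with `ollama create naval -f Modelfile` then `ollama run naval`, so inference never leaves the machine.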
- Local LLM journaling system with RAG + feedback loops for evolving context
- Chat history summarized, embedded, and made searchable via Qdrant
- Built with LangChain, bge-m3, and Whisper
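The summarize, embed, and search loop can be illustrated fully offline. The hashing embedder below is a crude stand-in for bge-m3, and the linear scan stands in for Qdrant's vector search; everything here is illustrative, not the project's code.

```python
# Toy sketch of the journaling retrieval loop: summaries are embedded
# and later entries are recalled by cosine similarity to a query.
import hashlib
import math

def embed(text, dim=64):
    # Deterministic bag-of-words hashing embedding (stand-in for bge-m3).
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class JournalIndex:
    def __init__(self):
        self.entries = []  # (vector, summary) pairs

    def add(self, summary):
        self.entries.append((embed(summary), summary))

    def search(self, query, k=3):
        # Brute-force scan; Qdrant replaces this with indexed vector search.
        scored = [(cosine(embed(query), v), s) for v, s in self.entries]
        return [s for _, s in sorted(scored, key=lambda p: -p[0])[:k]]

index = JournalIndex()
index.add("Felt focused after a morning run")
index.add("Struggled with the database migration")
print(index.search("morning run", k=1))
```

The feedback loop in the real system adds each new summarized chat turn back into the index, so the retrievable context evolves with the journal.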
- Real-time inventory system with sub-500ms avg data load and 10-20ms sync
- Performance optimized — LCP 0.75s, CLS 0.01, INP 80ms
- PostgreSQL as source of truth, Supabase for real-time pub/sub, Redis for instant stats
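The read path behind numbers like these is typically cache-aside: serve hot counts from the cache, fall back to the source of truth on a miss, and backfill. A minimal sketch with plain dicts standing in for Redis and PostgreSQL; function and key names are hypothetical.

```python
# Cache-aside sketch: a Redis-like cache in front of a Postgres-like
# source of truth, both modeled here as plain dicts.
def get_stock(sku, cache, db):
    key = f"stock:{sku}"
    if key in cache:                  # cache hit: the instant-stats path
        return cache[key]
    value = db[sku]                   # miss: read the source of truth
    cache[key] = value                # backfill so the next read is hot
    return value

def set_stock(sku, qty, cache, db):
    db[sku] = qty                     # write to the source of truth first
    cache[f"stock:{sku}"] = qty       # then keep the cache in sync

db, cache = {"SKU-1": 12}, {}
print(get_stock("SKU-1", cache, db))  # miss -> 12, cache now warm
print(get_stock("SKU-1", cache, db))  # hit -> 12
```

Real-time pub/sub (Supabase in the project) then pushes each `set_stock`-style write out to connected clients instead of having them poll.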
> git log --author="harshalmore31" --oneline

- Contributed a Gradio-based chat interface with function-calling support and browser search for real-time context
- Matched the Streamlit version's functionality with native GPT-OSS response handling
- Lowered the barrier for developers to experiment with OpenAI's open-weight models via Hugging Face/Gradio ecosystem
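At its core, a function-calling chat loop is parse, dispatch, respond: the model's reply either is plain text or names a registered tool. A toy sketch, where the tool registry and JSON shape are assumptions rather than GPT-OSS's actual format:

```python
# Minimal function-calling dispatch behind a chat UI: a reply that
# parses as JSON names a registered tool; anything else is shown as-is.
import json

TOOLS = {
    "browser_search": lambda query: f"Top result for {query!r}",
}

def dispatch(model_reply):
    try:
        call = json.loads(model_reply)
    except json.JSONDecodeError:
        return model_reply            # plain text: render it directly
    fn = TOOLS[call["tool"]]
    result = fn(**call["arguments"])  # run the named tool...
    return result                     # ...and feed the result back to the chat

print(dispatch('{"tool": "browser_search", "arguments": {"query": "GPT-OSS"}}'))
print(dispatch("Hello!"))             # prints "Hello!"
```

In the real interface this loop sits inside the Gradio chat callback, with the tool result appended to the conversation for the model's next turn.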
> echo $TECH_STACK


