Amr Hassan


Backend Engineer specializing in production APIs and AI/ML systems.
Cairo, Egypt · Open to remote

LinkedIn Portfolio Email


About

Backend engineer focused on building production-grade APIs and AI systems end-to-end — from architecture to deployment.

  • Cut PDF processing from 3 minutes to 17 seconds (~10x) at 9/10 accuracy using async pipelines and vector search
  • Built a semantic code search system with 85% retrieval accuracy at 38ms latency
  • Trained a 124M-parameter Transformer from scratch — 41% validation-loss reduction on consumer GPUs
  • Shipped an automated PR review agent with sub-200ms end-to-end responses via Groq LPU inference

Open to: Backend Engineer · AI Engineer · Python Developer — Cairo or remote


Featured Projects

SmartDoc API — Production RAG

GitHub Demo

Problem: Processing large PDFs for semantic search was slow and inaccurate.
Solution: Async RAG pipeline with sliding window chunking and in-database vector search.
Result: 3min → 17s processing · 9/10 accuracy · 28% accuracy gain over fixed chunking · <5ms search latency

Django Celery PostgreSQL Redis
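The sliding-window chunking SmartDoc credits for its accuracy gain can be sketched roughly as follows. Function name and the size/overlap parameters are illustrative, not taken from the actual codebase:

```python
def sliding_window_chunks(text: str, chunk_size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks so a sentence that straddles a
    boundary still appears intact in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already covers the tail
    return chunks
```

The overlap is the reason sliding windows beat fixed chunking for retrieval: context cut off at one chunk's edge is repeated at the start of the next, so an embedding of either chunk can still match a query about it.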

CodeLens — Semantic Code Search

GitHub Demo

Problem: Navigating large codebases is slow, and plain keyword search loses semantic context.
Solution: Hybrid search combining semantic embeddings with BM25 + AST-based chunking.
Result: 85% retrieval accuracy · 38ms latency · 2,000+ code chunks indexed

FastAPI ChromaDB Python
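Hybrid retrieval like CodeLens's needs a way to merge the semantic and BM25 result lists into one ranking. Reciprocal rank fusion is one common choice; this sketch is illustrative, and the project may combine scores differently:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document ids into one ranking.
    Each document earns 1/(k + rank) per list it appears in; the constant
    k damps the influence of any single list's top few ranks."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: the two retrievers disagree on order and coverage
semantic = ["chunk_a", "chunk_b", "chunk_c"]
bm25 = ["chunk_b", "chunk_d", "chunk_a"]
fused = reciprocal_rank_fusion([semantic, bm25])
```

Rank fusion works on positions rather than raw scores, which sidesteps the problem that cosine similarities and BM25 scores live on incompatible scales.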

CodeSense AI — Automated Code Review

GitHub Demo

Problem: Manual PR reviews are slow and miss security vulnerabilities.
Solution: Event-driven agent processing GitHub webhooks with Groq LPU inference.
Result: Sub-200ms end-to-end · automated inline PR comments · real-time bug and vulnerability detection

FastAPI Docker Groq
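An agent driven by GitHub webhooks, as CodeSense is, has to verify GitHub's HMAC-SHA256 signature on each delivery before trusting the payload. A stdlib-only sketch of that check, with the framework wiring omitted (GitHub documents the `X-Hub-Signature-256` header format as `sha256=<hex digest>` of the raw body):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a webhook's X-Hub-Signature-256 header against the raw request body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, avoiding a timing side channel
    return hmac.compare_digest(expected, signature_header)
```

In a FastAPI handler this would run on the raw bytes of the request before the JSON payload is parsed, rejecting anything that fails with a 401.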

FabulaGPT — GPT from Scratch

GitHub

Problem: Understanding Transformer internals beyond API calls.
Solution: 124M parameter decoder-only Transformer built in raw PyTorch, no shortcuts.
Result: 41% validation loss reduction (8.56→5.05) · gradient accumulation on dual T4 GPUs

PyTorch CUDA Python
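Gradient accumulation, which FabulaGPT uses to fit a large effective batch onto consumer GPUs, rests on a simple identity: for a mean loss, the full-batch gradient equals the average of equal-sized micro-batch gradients. A dependency-free numeric sketch of that identity with a scalar least-squares model (the real training loop does the same thing by calling loss.backward() per micro-batch and optimizer.step() once per accumulation cycle):

```python
def grad(w: float, batch: list[tuple[float, float]]) -> float:
    """Gradient of the mean loss 0.5*(w*x - y)^2 over a batch, w.r.t. w."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def accumulated_grad(w: float, micro_batches: list[list[tuple[float, float]]]) -> float:
    """Sum micro-batch gradients and normalize once at the end,
    as a training loop with gradient accumulation does before stepping."""
    total = 0.0
    for mb in micro_batches:
        total += grad(w, mb)  # analogous to loss.backward() adding into .grad
    return total / len(micro_batches)  # divide by the number of accumulation steps

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 7.0)]
w = 0.5
full = grad(w, data)                              # one big batch
accum = accumulated_grad(w, [data[:2], data[2:]]) # two micro-batches
# full and accum agree up to float rounding when micro-batches are equal-sized
```

This is why accumulation trades only wall-clock time, not gradient quality: each optimizer step sees the same direction it would have seen with the full batch in memory.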


Tech Stack

Backend & APIs
Python Django FastAPI PostgreSQL Redis Celery

AI & ML
PyTorch LangChain ChromaDB Groq

DevOps & Tools
Docker GitHub Actions Linux Nginx


Backend Engineer · AI Systems · Python
Cairo, Egypt · Open to remote opportunities

LinkedIn Portfolio Email
