Kaggle Gemma 4 Good Hackathon submission · Fully offline · No accounts · No cloud
DreamLit is a native Android app that combines AI-powered dream journaling with a personal lucid dreaming coach — all running entirely on your device. Gemma 4 decodes your dreams, identifies Jungian archetypes, tracks recurring symbols, and coaches you toward lucidity. Your dreams, emotions, and inner life never leave your phone.
| Feature | Details |
|---|---|
| Voice-First Dream Capture | Record dreams on wake via SpeechRecognizer; listen back to original audio |
| AI Dream Interpretation | Symbols · Jungian archetypes · Emotional tone · Waking connections · Reflection prompt |
| Dream Visualization | On-device Stable Diffusion turns your dream into unique artwork |
| Personal Profile | Age group · Occupation · Life context injected into AI prompts for grounded analysis |
| Lucid Dreaming Coach | MILD · WBTB · WILD · SSILD technique guides; nightly intention setting; morning check-ins |
| Dream Sign Recognition | AI identifies recurring symbols across all your entries as personal lucidity triggers |
| Reality Check Reminders | Configurable WorkManager notifications every 1–4 hours |
| WBTB Alarm | AlarmManager alarm after 5–6 hrs with in-app wake-window guidance |
| Analytics & Streaks | Recall score · Word count trends · 60-day heatmap · Current & longest streaks |
| 24 Achievements | Milestones for streaks, lucid counts, recall, word count, techniques tried |
| CSV Import / Export | Full data portability; import from any DreamLit-format CSV |
| 100% Offline & Private | AICore (Pixel 8+/S24+) or LiteRT fallback — no network calls, ever |
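At its core, the dream-sign recognition row above reduces to counting symbols that recur across journal entries. A minimal pure-Kotlin sketch (the function name and threshold are illustrative, not DreamLit's actual code):

```kotlin
// Illustrative sketch: surface symbols that appear in at least
// `minEntries` distinct dream entries as candidate lucidity triggers.
// Each entry is a Set, so a symbol counts once per entry.
fun recurringSymbols(entries: List<Set<String>>, minEntries: Int = 3): List<String> =
    entries.flatten()
        .groupingBy { it }
        .eachCount()
        .filterValues { it >= minEntries }
        .entries
        .sortedByDescending { it.value }
        .map { it.key }
```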
```
InferenceEngine (factory)
├── AICoreEngine  ← Android AICore / ML Kit GenAI (Pixel 8+, S24+)
└── LiteRTEngine  ← LiteRT-LM fallback (Android 8+, 4 GB+ RAM)

VisualEngine      ← MediaPipe on-device Stable Diffusion

Feature layer (engine-agnostic):
DreamInterpreter · LucidCoach · PatternAnalyzer
```
All text generation engines implement the same LLMEngine interface, so every feature works identically on both AI paths.
Full architecture diagram: see ARCHITECTURE.md
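A minimal Kotlin sketch of this abstraction. Only `LLMEngine`'s members (`generate`, `isReady`, `close`) and the factory's fallback order (AICore → LiteRT → NoOp) come from this README; everything else, including the exact signatures, is an assumption:

```kotlin
// Sketch of the shared engine contract; the real LLMEngine.kt
// may differ (e.g. suspend functions, streaming output).
interface LLMEngine {
    val isReady: Boolean
    fun generate(prompt: String): String
    fun close()
}

// Stub used when neither AI path is available.
class NoOpEngine : LLMEngine {
    override val isReady = false
    override fun generate(prompt: String) = ""
    override fun close() = Unit
}

// Factory: try each candidate in order (AICore, then LiteRT),
// swallowing construction failures, and fall back to the stub.
object InferenceEngine {
    fun create(candidates: List<() -> LLMEngine?>): LLMEngine =
        candidates.firstNotNullOfOrNull { runCatching { it() }.getOrNull() }
            ?: NoOpEngine()
}
```

Because features depend only on the interface, swapping engines never touches feature code.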
| Tier | Devices | AI Engine | Notes |
|---|---|---|---|
| Tier 1 | Pixel 8, Pixel 8 Pro, Pixel 9, Samsung Galaxy S24+ | Android AICore | NPU-accelerated, best battery, model managed by OS |
| Tier 2 | Any Android 8.0+ with 4 GB+ RAM | LiteRT-LM | ~2.6 GB model download on first launch |
Dream visualization (Stable Diffusion) requires an additional ~1.5 GB download, deferred until the user first requests it.
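The tier selection above can be sketched as a simple decision function (the enum and function are hypothetical, not DreamLit's actual API; Android 8.0 corresponds to API level 26):

```kotlin
// Illustrative tier selection mirroring the device table.
enum class EngineTier { AICORE, LITERT, UNSUPPORTED }

fun selectTier(aiCoreAvailable: Boolean, ramGb: Int, sdkInt: Int): EngineTier = when {
    aiCoreAvailable            -> EngineTier.AICORE   // Tier 1: NPU path, OS-managed model
    sdkInt >= 26 && ramGb >= 4 -> EngineTier.LITERT   // Tier 2: Android 8.0+, 4 GB+ RAM
    else                       -> EngineTier.UNSUPPORTED
}
```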
- Android Studio Meerkat (2024.3) or later
- JDK 17+
- Android SDK with API 35 (targetSdk)
```bash
git clone https://github.com/JJRPF/dreamlit.git
cd dreamlit
```

Open the project in Android Studio; it will sync Gradle automatically.
AICore requires enrollment in the beta program:
- Join the `aicore-experimental` Google Group
- On your Pixel 8+/S24+, opt into the Android AICore beta via the Google Play Store (search "Android AICore", tap "Join beta")
- Wait for the Gemma model to download in the background (usually 1–2 hours on Wi-Fi)
- The app detects AICore automatically — no code changes needed
Note: If AICore is unavailable (wrong device, model not yet downloaded), the app automatically falls back to LiteRT-LM.
The LiteRT model (`gemma-4-e2b.task`, ~2.6 GB) is downloaded at first launch via the in-app download screen. To skip the wait during development, push the model file manually:

```bash
adb push gemma-4-e2b.task /data/data/com.dreamlit/files/gemma-4-e2b.task
```

Obtain the model file from Google AI Edge Model Explorer.
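When side-loading, it helps to verify the file before bypassing the download screen, since a truncated push would otherwise fail at engine load. A hypothetical guard (not DreamLit's actual code; the size threshold is an assumption):

```kotlin
import java.io.File

// Hypothetical check: treat the model as present only if the file
// exists and is non-trivially sized (guards against truncated pushes).
fun modelReady(
    filesDir: File,
    name: String = "gemma-4-e2b.task",
    minBytes: Long = 1_000_000,
): Boolean {
    val model = File(filesDir, name)
    return model.isFile && model.length() >= minBytes
}
```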
```bash
# Debug build
./gradlew assembleDebug

# Install on connected device
./gradlew installDebug

# Release build (requires signing config in app/build.gradle.kts)
./gradlew assembleRelease
```

For voice capture without internet:
- On the device: open the speech-recognition settings, e.g. Settings → General Management → Language and input → Voice input (exact path varies by OEM)
- Enable offline speech recognition for your language
- Or: Settings → Accessibility → Live Transcribe → download offline model
```
app/src/main/java/com/dreamlit/
├── ai/
│   ├── LLMEngine.kt            # Common interface (generate, isReady, close)
│   ├── AICoreEngine.kt         # ML Kit GenAI Prompt API
│   ├── LiteRTEngine.kt         # LiteRT-LM fallback
│   ├── InferenceEngine.kt      # Factory: AICore → LiteRT → NoOp
│   ├── VisualEngine.kt         # MediaPipe on-device Stable Diffusion
│   ├── DreamInterpreter.kt     # Transcript → structured JSON analysis
│   ├── LucidCoach.kt           # Dream sign extraction + technique recommendation
│   └── PatternAnalyzer.kt      # Batched cross-entry weekly insight
├── data/
│   ├── JournalEntry.kt         # Room entity
│   ├── JournalDao.kt           # All DB queries
│   ├── AppDatabase.kt          # Room database singleton
│   └── CsvManager.kt           # Import/export
├── hardware/
│   ├── AudioRecorder.kt        # MediaRecorder M4A wrapper
│   └── VoiceToText.kt          # SpeechRecognizer Flow wrapper
├── notifications/
│   ├── RealityCheckWorker.kt   # WorkManager periodic reality checks
│   ├── WbtbAlarmReceiver.kt    # AlarmManager WBTB alarm
│   └── MorningCheckInWorker.kt # Daily morning dream prompt
└── ui/
    ├── DreamLitNavHost.kt      # Bottom nav + NavHost
    ├── theme/                  # Midnight indigo dark theme
    ├── capture/                # Voice recorder + text input + backdater
    ├── dreams/                 # Dream list + AI detail screen
    ├── lucid/                  # Intentions, technique guides, dream signs
    ├── insights/               # Heatmap, charts, achievements
    └── profile/                # User context, CSV, engine status
```
- No network permissions — the app cannot make internet requests
- No analytics, no tracking, no accounts
- All AI inference runs locally via Android AICore or LiteRT-LM
- All data stored in Room (SQLite) on-device only
- Audio recordings saved to app-private storage, never shared
- Full data export/import via CSV at any time
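The CSV export mentioned above has to escape free-form dream text (commas, quotes, newlines). A minimal RFC 4180-style sketch, not DreamLit's actual `CsvManager` code:

```kotlin
// Quote a field if it contains a delimiter, quote, or line break,
// doubling any embedded quotes (RFC 4180 escaping).
fun csvField(value: String): String =
    if (value.any { it == ',' || it == '"' || it == '\n' || it == '\r' })
        "\"" + value.replace("\"", "\"\"") + "\""
    else value

// Join one row of already-escaped fields.
fun csvRow(fields: List<String>): String =
    fields.joinToString(",") { csvField(it) }
```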
The Kaggle notebook at `kaggle_notebook.ipynb` documents the dual-path inference design, on-device image generation pipeline, privacy architecture, and "for good" impact narrative for the Gemma 4 Good Hackathon.
MIT License — see LICENSE for details.
Built for the Kaggle Gemma 4 Good Hackathon · April 2026