ADR-0035: LLM Model Download Timing and Delivery
Date: March 1, 2026
Category: Onboarding & Features
Tags: model-download, onboarding
Context
- LocusFlow uses an on-device LLM (LFM2.5-1.2B-Instruct, GGUF Q4_K_M, ~700 MB) for reflection
summaries, morning briefings, and weekly synthesis (ADR-0021).
- The LLM model is not bundled with the APK; it must be downloaded on-demand.
- The existing ModelManager (ADR-0021) handles download-on-first-use, lifecycle management,
and state tracking via ModelState.
- Feature flags (ADR-0028) gate LLM-powered features (feature_llm_reflection_summary,
feature_llm_morning_briefing, feature_llm_weekly_synthesis).
- The onboarding flow (ADR-0034) includes an AI opt-in screen (screen 4) and a conditional model
download screen (screen 5).
- A feature_llm_enabled master toggle already exists in the feature flag system (referenced in
the Settings UI spec). All LLM sub-features are subordinate to this toggle.
- The model is ~700 MB — large enough that a blocking foreground download is a poor experience,
but small enough to complete in a few minutes on most connections.
Constraints:
- Download requires network access — the only network operation in the app (ADR-0005 permits
model downloads as a carve-out).
- Mid-range device storage must be considered (~700 MB persistent, ~900 MB runtime).
- Users who decline AI should never be prompted to download.
- The download must survive process death and Activity recreation.
Decision
1. AI Opt-In Gate in Onboarding
The onboarding AI opt-in screen (screen 4, ADR-0034) presents a clear choice:
- Enable AI features: Sets feature_llm_enabled = true (ADR-0028), advances to the model
download screen (screen 5).
- Skip AI features: Leaves feature_llm_enabled = false (the default), skips screen 5, and
advances directly to the Ready screen (screen 6). All LLM-powered features remain unavailable
until the user enables AI from Settings.
This opt-in is the sole mechanism that gates LLM feature availability. It is not a separate
preference — it maps directly to the existing feature_llm_enabled feature flag.
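The branching above can be sketched as a pure routing rule. The enum and function names below are illustrative assumptions, not taken from the real OnboardingViewModel:

```kotlin
// Hypothetical sketch of the screen-4 branching rule (names assumed).
enum class OnboardingScreen { AiOptIn, ModelDownload, Ready }

// Opting in routes to the model download screen (screen 5); declining leaves
// feature_llm_enabled at its false default and jumps straight to Ready (screen 6).
fun nextScreenAfterOptIn(aiEnabled: Boolean): OnboardingScreen =
    if (aiEnabled) OnboardingScreen.ModelDownload else OnboardingScreen.Ready
```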
2. Background Download with Foreground Progress
When the user opts in to AI and reaches the model download screen:
- The download starts immediately and automatically in the background.
- The screen displays:
- A progress bar showing download percentage.
- Estimated remaining time (when calculable).
- A model information card (see §2a below) to keep the user engaged while downloading.
- The user can advance to the next onboarding screen at any time — the download continues in
the background.
- If the user advances before the download completes, a non-intrusive persistent banner in the
main app displays download progress until the model is ready.
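The banner visibility, progress percentage, and "estimated remaining time (when calculable)" rules above can be driven by simple pure helpers over the ModelState hierarchy (reproduced from ADR-0021 so this sketch compiles standalone). The helper names and the ETA formula are illustrative assumptions:

```kotlin
// ModelState reproduced from ADR-0021 for a self-contained sketch.
sealed interface ModelState {
    data object NotDownloaded : ModelState
    data class Downloading(val progress: Float) : ModelState
    data object Downloaded : ModelState
    data class Loading(val progress: Float) : ModelState
    data object Ready : ModelState
    data class Error(val message: String) : ModelState
}

// The in-app banner stays visible only while work is still in flight.
fun showsDownloadBanner(state: ModelState): Boolean =
    state is ModelState.Downloading || state is ModelState.Loading

// Progress for the bar as a percentage, or null when indeterminate.
fun progressPercent(state: ModelState): Int? = when (state) {
    is ModelState.Downloading -> (state.progress * 100).toInt()
    is ModelState.Loading -> (state.progress * 100).toInt()
    else -> null
}

// "When calculable": null until a throughput sample exists or once complete.
// The formula is a simplified assumption, not the real estimator.
fun etaSeconds(totalBytes: Long, progress: Float, bytesPerSecond: Long): Long? =
    if (bytesPerSecond <= 0L || progress >= 1f) null
    else (totalBytes * (1f - progress) / bytesPerSecond).toLong()
```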
2a. Model Information Card Content
The download screen includes a static informational section covering:
| Topic | Content |
|---|---|
| What it is | "LFM2.5-1.2B — a 1.2-billion-parameter language model from Liquid AI" |
| What it does | Reflection summaries, weekly synthesis, morning briefings — on device |
| Privacy guarantee | "All processing happens on your phone. Nothing leaves your device." |
| File size | "~700 MB download, ~900 MB memory when active" |
| Benchmarks | Key quality metrics: MMLU score, IFEval instruction-following rank |
| External links | "Learn more" links to Liquid AI's model page and benchmark results |
External links on this screen open in the system browser via Intent.ACTION_VIEW. This is
the only user-initiated external navigation in the app, consistent with ADR-0005 (no in-app
network stack — browser intents are Android platform behaviour, not app network calls).
Suggested links to include:
- Liquid AI model page:
https://www.liquid.ai/lfm2
- LEAP SDK documentation:
https://docs.liquid.ai/leap
The exact URLs and benchmark numbers should be verified at implementation time and may be
updated in a future release without a new ADR.
3. Download Implementation
The download is orchestrated by the existing ModelManager (ADR-0021) and LeapModelDownloader:
```kotlin
// Existing ModelState sealed hierarchy (ADR-0021)
sealed interface ModelState {
    data object NotDownloaded : ModelState
    data class Downloading(val progress: Float) : ModelState
    data object Downloaded : ModelState
    data class Loading(val progress: Float) : ModelState
    data object Ready : ModelState
    data class Error(val message: String) : ModelState
}
```
- WorkManager backs the download to survive process death. The existing LeapModelDownloader
already integrates with WorkManager for progress tracking and retry.
- The OnboardingViewModel (ADR-0034) observes ModelManager.modelState: StateFlow<ModelState>
to drive the progress bar on the download screen.
- On download completion, SettingsRepository.setLlmModelDownloaded(true) is called (ADR-0027).
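The completion hook in the last bullet can be sketched with a trimmed two-state stand-in for ModelState and a fake repository; only the setLlmModelDownloaded(true) call comes from this ADR, the rest is scaffolding:

```kotlin
// Trimmed stand-in for ModelState (two states suffice here).
sealed interface DownloadPhase {
    data class Downloading(val progress: Float) : DownloadPhase
    data object Downloaded : DownloadPhase
}

// Fake in place of the real SettingsRepository (ADR-0027).
class FakeSettingsRepository {
    var llmModelDownloaded = false
        private set
    fun setLlmModelDownloaded(value: Boolean) { llmModelDownloaded = value }
}

// Invoked for every emission of ModelManager.modelState in the real app;
// persists the downloaded flag exactly once the terminal state is reached.
fun onModelState(state: DownloadPhase, settings: FakeSettingsRepository) {
    if (state is DownloadPhase.Downloaded) settings.setLlmModelDownloaded(true)
}
```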
4. Post-Onboarding Download Access
If the user skips AI during onboarding but later enables it in Settings:
- The Settings AI toggle sets feature_llm_enabled = true.
- The existing ModelDownloadScreen (already in ui/features/llm/) handles download with
the same progress UI.
- No onboarding replay is required — the Settings path is self-contained.
5. Error Handling
| Scenario | Behavior |
|---|---|
| No network at download | Show error with "Retry" button; user can skip to next screen |
| Download interrupted | WorkManager resumes automatically; progress bar reflects state |
| Insufficient storage | Show storage requirement (~700 MB); explain how to free space |
| Download corrupt/failed | Delete partial file, show "Retry"; log error locally |
Errors on the download screen do not block onboarding progression. The user can always
advance and retry the download later from Settings.
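The table above can be modeled as data: each failure maps to a user-visible action, and none of them blocks onboarding progression. The enum, type, and message strings below are illustrative assumptions, not real codebase names or final UI copy:

```kotlin
// Failure scenarios from the error-handling table.
enum class DownloadFailure { NoNetwork, Interrupted, InsufficientStorage, CorruptFile }

// What the download screen shows; onboarding is never blocked by an error.
data class FailureUi(
    val message: String,
    val showRetry: Boolean,
    val blocksOnboarding: Boolean = false,
)

fun uiFor(failure: DownloadFailure): FailureUi = when (failure) {
    DownloadFailure.NoNetwork ->
        FailureUi("No connection. Retry now, or later from Settings.", showRetry = true)
    DownloadFailure.Interrupted ->
        // WorkManager resumes automatically, so no retry button is needed.
        FailureUi("Download paused; it will resume automatically.", showRetry = false)
    DownloadFailure.InsufficientStorage ->
        FailureUi("About 700 MB of free space is required.", showRetry = true)
    DownloadFailure.CorruptFile ->
        // The partial file is deleted before retry (handled by the downloader).
        FailureUi("Download failed. Tap Retry to start again.", showRetry = true)
}
```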
6. Storage and Cleanup
- The model file is stored in the app's internal files directory (already managed by
ModelManager).
- If the user disables AI features from Settings, the model file is not automatically deleted.
A separate "Delete model" action in Settings allows explicit cleanup.
- Storage requirements (~700 MB on disk) are communicated on the AI opt-in screen before download.
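A storage pre-check for the opt-in screen can be sketched as a pure predicate. On Android the available-byte count would come from StatFs(context.filesDir.path).availableBytes; the headroom figure below is an assumption, not a measured requirement:

```kotlin
// ~700 MB on disk, per this ADR.
val MODEL_SIZE_BYTES = 700L * 1024 * 1024
// Assumed safety margin for temp/partial files during download.
val HEADROOM_BYTES = 100L * 1024 * 1024

// True when the device has room for the model plus margin.
fun hasEnoughStorage(availableBytes: Long): Boolean =
    availableBytes >= MODEL_SIZE_BYTES + HEADROOM_BYTES
```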
Rationale
- Background download with progress respects the user's time. 700 MB can take 1–5 minutes on
typical connections; blocking the entire onboarding on a download is unacceptable.
- Non-blocking advancement ensures the onboarding flow is never stalled by network conditions.
The user sees the app's value proposition regardless of download speed.
- Informational content during download ("keep the user occupied") turns wait time into
education — explaining what the model does, privacy guarantees, and on-device processing.
- Feature flag integration (not a separate preference) keeps the opt-in consistent with the
existing flag system and avoids preference fragmentation.
- WorkManager is the correct Android primitive for large, network-dependent, survivable
downloads — it handles retry, backoff, and constraints (network availability) natively.
- Settings as a fallback path ensures users who skip AI during onboarding are never locked out.
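The retry-with-backoff behaviour WorkManager provides (BackoffPolicy.EXPONENTIAL) can be illustrated with a simplified model. The 30-second initial delay and 5-hour cap mirror WorkManager's documented defaults, but the scaling below is a sketch of the idea, not the platform implementation:

```kotlin
// WorkManager's default initial backoff delay (30 s) and its backoff cap (5 h).
val INITIAL_BACKOFF_SECONDS = 30L
val MAX_BACKOFF_SECONDS = 5L * 60 * 60

// Simplified exponential backoff: 30 s, 60 s, 120 s, ... capped at 5 h.
fun backoffSeconds(attempt: Int): Long {
    require(attempt >= 1) { "attempt is 1-based" }
    val delay = INITIAL_BACKOFF_SECONDS shl (attempt - 1)
    return minOf(delay, MAX_BACKOFF_SECONDS)
}
```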
Consequences
- Positive:
- Users who want AI get it downloading in the background while they continue onboarding.
- Users who don't want AI are never burdened by download prompts or storage use.
- The download shares existing infrastructure (ModelManager, LeapModelDownloader,
ModelState) — minimal new code.
- Negative:
- Background download during onboarding may compete with the UI thread for resources on very
low-end devices. Mitigated by WorkManager's built-in constraints and low-priority threading.
- Users on metered connections may be surprised by the ~700 MB download. Mitigated by clear
size disclosure on the opt-in screen.
- Follow-up:
- The AI opt-in screen content and model explanation text will be defined in a UI/UX spec.
- If a future model upgrade changes the file size significantly, the opt-in screen text must
be updated.
- Metered-connection detection and a "Download on Wi-Fi only" option may be added in a
future iteration.
Alternatives Considered
- Blocking foreground download — rejected. A 700 MB download blocking the onboarding flow
for 1–5+ minutes is a poor first-run experience and increases abandonment risk.
- Fully deferred download (no prompt during onboarding) — rejected. Users who opt in to AI
would discover on their first use of a feature that they need to wait for a download,
which is a worse surprise than downloading during onboarding.
- Bundling the model in the APK — rejected. Adding ~700 MB to the APK violates Play Store
size guidelines and penalizes users who don't want AI features. The model is also expected to
be updated independently of app releases.
- Separate SharedPreferences for AI opt-in — rejected. ADR-0027 standardized on DataStore;
the existing feature_llm_enabled flag in DataStore (via ADR-0028) is the correct home.
Notes
- Related ADRs: ADR-0021 (LLM runtime), ADR-0027 (settings persistence), ADR-0028 (feature
flags), ADR-0034 (onboarding flow structure).
- The ModelDownloadScreen composable in ui/features/llm/ may be refactored to extract a
shared ModelDownloadContent composable used by both the onboarding model download screen
(screen 5, ADR-0034) and the standalone Settings download path.
- The external links (Liquid AI model page, LEAP SDK docs) should be verified for accuracy
at implementation time. They require no in-app network code — system browser handles them.