InitSpark is a lightweight, coroutine-based startup orchestration library for Kotlin Multiplatform (KMP). It provides a structured way to declare, sequence, and execute initialization tasks (called *sparks*) during your app's startup phase, natively across JVM, Android, and iOS.
InitSpark is published to Maven Central.
Add the dependency to your `build.gradle.kts`:

```kotlin
sourceSets {
    commonMain.dependencies {
        implementation("io.github.ktomek:initspark:1.0.0") // Replace with the latest version
    }
}
```

For JVM or Android-only projects:

```kotlin
dependencies {
    implementation("io.github.ktomek:initspark:1.0.0")
}
```

- 🔥 Declarative DSL to define sparks
- ⏱️ Time tracking for individual sparks and phases
- ⚙️ Three execution modes: `await`, `async`, and `spark`
- 🔲 Dependency management between sparks (with cycle detection)
- ⚠️ Spark importance levels: `CRITICAL` (fail-fast) and `OPTIONAL` (failure-tolerant)
- 🔁 Configurable retry policies with `None`, `Fixed`, and `Exponential` backoff
- 📡 Reactive `SparkEvent` stream for lifecycle monitoring
- 🔑 Flexible `Key` interface for spark identification
- 🧪 Built-in testing support
- 📊 Performance metrics via `SparkTimingInfo`
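Dependency cycles between sparks are rejected up front. To make the idea concrete, here is a minimal stdlib-only sketch of cycle detection over a `needs` map; the function name and shape are illustrative, not InitSpark's implementation:

```kotlin
// Detects a cycle in a spark dependency graph via depth-first search.
// `needs` maps each spark key to the keys of the sparks it depends on.
// (Illustrative sketch only -- not the InitSpark implementation.)
fun hasCycle(needs: Map<String, Set<String>>): Boolean {
    val visiting = mutableSetOf<String>() // keys on the current DFS path
    val done = mutableSetOf<String>()     // keys already proven acyclic

    fun visit(key: String): Boolean {
        if (key in done) return false
        if (!visiting.add(key)) return true // back edge => cycle
        val cyclic = needs[key].orEmpty().any { visit(it) }
        visiting.remove(key)
        done.add(key)
        return cyclic
    }

    return needs.keys.any { visit(it) }
}
```

For example, `A needs B, B needs A` is rejected, while a plain chain such as `Analytics needs Database` passes.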
Alternatively, InitSpark is also available via JitPack. Add to your `build.gradle`:

```kotlin
repositories {
    maven("https://jitpack.io")
}

dependencies {
    implementation("com.github.ktomek:initspark:<version>")
}
```

Implement the `Spark` interface for each initialization task:

```kotlin
class DatabaseSpark @Inject constructor(...) : Spark {
    override suspend fun execute() { /* initialize database */ }
}
```

Then collect your sparks:

```kotlin
val sparks = setOf(
    DatabaseSpark(),
    NotificationSpark(),
    AnalyticsSpark(),
    /* ... */
)
```
Describe the execution plan with the builder DSL:

```kotlin
val config = buildSparks(sparks) {
    // Sequential: must complete before the next spark starts
    await { System.loadLibrary("crypto-lib") }
    await<LoggerSpark>()
    await<ActivityLifecycleSpark>()

    val ioContext = Dispatchers.IO
    val coreDeps = setOf(Key("Database"))

    // Parallel: completion is tracked
    async<DatabaseSpark>(
        key = "Database".asKey(),
        context = ioContext
    )
    async<NotificationSpark>(
        context = ioContext,
        needs = coreDeps,
        policy = SparkPolicy(importance = SparkImportance.OPTIONAL)
    )
    async<AnalyticsSpark>(
        context = ioContext,
        needs = coreDeps,
        policy = SparkPolicy(
            retry = RetryPolicy(
                retryCount = 3,
                backoff = Backoff.Exponential(initialDelayMillis = 200)
            )
        )
    )

    // Parallel, fire-and-forget (not tracked)
    spark<ConsentManagerSpark>(context = ioContext, needs = coreDeps)
}
```

Create the orchestrator and run initialization:

```kotlin
val initSpark = InitSpark(config, CoroutineScope(Dispatchers.Default))

// Suspending version (preferred)
initSpark.initialize()

// Blocking version (for Java interop or legacy code)
initSpark.initializeBlocking()
```

| Builder function | Execution | Tracked | Default key |
|---|---|---|---|
| `await { }` / `await<T>()` | Sequential | ✅ | Class simple name |
| `async { }` / `async<T>()` | Parallel | ✅ | Class simple name |
| `spark { }` / `spark<T>()` | Parallel | ❌ | Class simple name |
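The difference between the three modes can be sketched without the library. The following stdlib-only illustration uses threads in place of coroutines; `runModes` and the task names are hypothetical stand-ins, not InitSpark API:

```kotlin
import java.util.Collections
import kotlin.concurrent.thread

// Stdlib-only sketch of the three execution modes, using threads in
// place of coroutines. Names and structure are illustrative only.
fun runModes(): List<String> {
    val completed = Collections.synchronizedList(mutableListOf<String>())

    // await: runs inline; nothing later starts until it finishes
    completed += "logger"

    // async: runs in parallel, and its completion is tracked (joined)
    val tracked = thread { completed += "database" }

    // spark: fire-and-forget; started but never joined or awaited
    thread { /* e.g. consent-manager warm-up */ }

    tracked.join()
    return completed.toList()
}
```

Here `logger` always finishes first, `database` is guaranteed complete once the tracked thread is joined, and the fire-and-forget thread may still be running when `runModes` returns.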
Each builder function accepts:

| Parameter | Type | Description |
|---|---|---|
| `key` | `Key?` | Optional unique identifier (defaults to the class name) |
| `needs` | `Set<Key>` | Keys of sparks that must complete first |
| `context` | `CoroutineContext` | Coroutine dispatcher to run on |
| `policy` | `SparkPolicy` | Importance and retry configuration |
`Key` is an interface, letting you use any type with proper equality: a data object, an enum entry, or a plain string.

```kotlin
// String-backed key (default)
"Database".asKey() // or Key("Database")

// Custom key types (recommended for robustness)
data object DatabaseKey : Key
enum class AppKey : Key { DATABASE, ANALYTICS }
```

Control how failures propagate using `SparkPolicy`:

```kotlin
// CRITICAL (default): failure throws and halts initialization
async<DatabaseSpark>(policy = SparkPolicy(importance = SparkImportance.CRITICAL))

// OPTIONAL: failure is logged and emitted as SparkEvent.Failed, but other sparks continue
async<AnalyticsSpark>(policy = SparkPolicy(importance = SparkImportance.OPTIONAL))
```

Attach a `RetryPolicy` to automatically retry failing sparks:

```kotlin
val policy = SparkPolicy(
    retry = RetryPolicy(
        retryCount = 3,
        backoff = Backoff.Exponential(initialDelayMillis = 100L, factor = 2.0)
    )
)
```

| Strategy | Description |
|---|---|
| `Backoff.None` | No delay between retries (default) |
| `Backoff.Fixed(delayMillis)` | Constant delay between retries |
| `Backoff.Exponential(initialDelayMillis, factor)` | Delay multiplied by `factor` on each attempt |
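The resulting delay sequence is easy to compute by hand. A minimal stdlib-only helper, assuming attempt *n* waits `initialDelayMillis × factor^(n-1)` (the helper name and that convention are illustrative, not the library's exact semantics):

```kotlin
import kotlin.math.pow

// Delay before each retry attempt (1-based): attempt n waits
// initialDelayMillis * factor^(n-1). Illustrative helper only,
// not the library's RetryPolicy type.
fun backoffDelays(retryCount: Int, initialDelayMillis: Long, factor: Double): List<Long> =
    (1..retryCount).map { attempt ->
        (initialDelayMillis * factor.pow(attempt - 1)).toLong()
    }
```

Under this convention, `retryCount = 3` with `initialDelayMillis = 200` and `factor = 2.0` yields delays of 200 ms, 400 ms, and 800 ms.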
Use the `events` flow to receive real-time lifecycle updates from the orchestrator:

```kotlin
launch {
    initSpark.events.collect { event ->
        when (event) {
            is SparkEvent.Started -> log("▶ ${event.key} started")
            is SparkEvent.Completed -> log("✅ ${event.key} done in ${event.duration}")
            is SparkEvent.Failed -> log("❌ ${event.key} failed: ${event.error}")
            is SparkEvent.Retry -> log("🔁 ${event.key} retry #${event.retryCount}")
        }
    }
}
```

Wait for completion with suspending calls, or observe readiness as state:

```kotlin
// Suspend until all TRACKABLE sparks are done
initSpark.waitUntilTrackableInitialized()

// Suspend until ALL sparks (including fire-and-forget) are done
initSpark.waitUntilInitialized()

// Or observe via StateFlow
initSpark.isTrackableInitialized.collect { ready -> if (ready) onReady() }
initSpark.isInitialized.collect { ready -> if (ready) onFullyReady() }
```

Access detailed performance metrics after initialization:
```kotlin
initSpark.waitUntilInitialized()

with(initSpark.timing) {
    // Per-spark durations
    allDurations().forEach { (declaration, duration) ->
        Timber.d("Spark '${declaration.key}' [${declaration.type}] took $duration")
    }

    // Cumulative total (sum of all individual durations)
    Timber.d("Sum of all durations: ${sumOfDurations()}")
    Timber.d("Sum by type: ${sumOfDurationsByType()}")

    // Wall-clock window (first start → last finish)
    Timber.d("Total wall-clock time: ${executionDelta()}")
    Timber.d("Wall-clock by type: ${executionDeltaByType()}")
}
```

| Method | Returns |
|---|---|
| `durationOf(declaration)` | Duration for one spark, or `null` |
| `allDurations()` | `Map<SparkDeclaration, Duration>` |
| `sumOfDurations()` | Cumulative sum of all measured durations |
| `sumOfDurationsByType()` | Cumulative sum grouped by `SparkType` |
| `executionDelta()` | Wall-clock window (first start → last stop) |
| `executionDeltaByType()` | Wall-clock window grouped by `SparkType` |
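The distinction between the cumulative and wall-clock figures is worth making concrete: when sparks run in parallel, the cumulative sum exceeds the wall-clock window. A stdlib-only illustration with hypothetical `(start, stop)` timestamps in milliseconds, independent of `SparkTimingInfo`:

```kotlin
// Hypothetical spark timing spans as (startMillis, stopMillis) pairs.
// Cumulative time sums the individual durations; wall-clock time spans
// first start to last stop. With overlap, cumulative > wall-clock.
fun cumulativeMillis(spans: List<Pair<Long, Long>>): Long =
    spans.sumOf { (start, stop) -> stop - start }

fun wallClockMillis(spans: List<Pair<Long, Long>>): Long =
    spans.maxOf { it.second } - spans.minOf { it.first }
```

For two sparks running over 0–300 ms and 100–400 ms, the cumulative time is 600 ms while the wall-clock window is only 400 ms.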
Contributions are welcome!
Please review our CONTRIBUTING.md for details on code style, testing, and how to submit pull requests.
This project is licensed under the MIT License.
MIT License
Copyright (c) 2023 ktomek
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.