<![CDATA[Adit Lal]]>https://aditlal.dev/https://aditlal.dev/favicon.pngAdit Lalhttps://aditlal.dev/Ghost 5.88Thu, 19 Mar 2026 18:24:37 GMT60<![CDATA[Introducing Rebound: context-aware recomposition budgets for Compose]]>https://aditlal.dev/compose-rebound/69ad0e90570d4bfde14a5631Sun, 08 Mar 2026 07:19:34 GMT

Your Compose app recomposes 10 times a second. Is that a problem?

The Compose ecosystem has solid tooling for tracking recompositions. Layout Inspector shows counts and skip rates. Compiler reports surface stability issues. Rebugger logs argument changes. ComposeInvestigator traces recomposition causes automatically. Each of these tools answers an important question well.

But none of them answer this one:

"Is this composable recomposing too much for what it does?"

A HomeScreen recomposing 10/s is a problem. A gesture-driven animation recomposing 10/s is fine. The number is the same. The answer is completely different. Without knowing the composable's role, you can't tell which is which.

Here's what Rebound showed me in a production app last month:

Composable        Rate   Budget  Skip%  Status
ShimmerBox        18/s   5/s     0%     OVER
MenuItem          13/s   30/s    0%     NEAR
DestinationItem   9/s    5/s     0%     OVER
AppScaffold       2/s    3/s     68%    OK

ShimmerBox at 18/s is a fire. AppScaffold at 2/s is fine. MenuItem at 13/s has headroom because it's interactive. Same app, same moment, completely different answers depending on what each composable is supposed to do.

No existing tool gives you that column: Budget.


The flat threshold trap

Every recomposition monitoring approach I've seen does the same thing: pick a number, flag everything above it. 5/s, 10/s, whatever feels right.

This is wrong for most of your composables.

Set it at 5/s and your animations light up red all day. Set it at 60/s, and your screen-level state leak never gets caught. You end up ignoring the warnings entirely, which is worse than not having them.

I kept tuning thresholds per project, per screen, per interaction. Then I realized: the composable's role should determine the threshold. A Screen needs a different budget than a LazyColumn item, which needs a different budget than an animate* call. The compiler already knows which is which.

What already exists

Here's the current landscape. Each tool answers a different question — and leaves a different gap:

Compose recomposition tool landscape — analysis depth vs developer effort

The ecosystem already has good answers for individual questions. Compiler Reports tell you what's skippable. Layout Inspector shows recomposition counts and, since 1.10.0, which state reads triggered them. Rebugger logs argument diffs. ComposeInvestigator traces recomposition causes automatically. VKompose highlights hot composables with colored borders. Perfetto gives you the full rendering pipeline.

I use most of these. But they all share the same blind spot: a count of 847 means nothing without knowing what the composable does. None of them answer "is this rate acceptable for this composable's role?"

From principles to practice

The Compose team's guidance is principle-based: minimize unnecessary recompositions, use stable types, hoist state. This is the right approach. Ben Trengrove's articles on stability and debugging recomposition, Leland Richardson's deep dives into the compiler and runtime — they all reinforce the same idea: make parameters stable and the compiler handles the rest. @Stable, @Immutable, compiler reports, Strong Skipping Mode (default since Kotlin 2.0.20) — the framework gives you the tools to get structural correctness right.

Where it gets harder is triage. Your types are stable, compiler metrics look clean, but a screen still janks. Layout Inspector shows a recomposition count of 847 on a composable. Is that a lot? Depends entirely on what that composable does — and nothing in the current tooling connects the count to the context.

The natural instinct is to set a flat threshold. Pick a number — say 10 recompositions per second — and flag anything above it. I've tried this. It falls apart fast:

  • An animated composable at 12/s gets flagged. It shouldn't.
  • A screen composable at 8/s passes. It shouldn't.
  • A list item at 40/s during fast scroll looks alarming. That's expected.

You either raise the threshold until the false positives go away (and miss real issues) or lower it until real issues surface (and drown in noise). Any single number you pick is wrong for most of your composables.

Budget depends on what the composable does

A screen composable has a different recomposition budget than an animation-driven one. A leaf Text() with no children has a different budget than a LazyColumn recycling items during scroll. This seems obvious in hindsight.

Recomposition budgets by composable role

Match the budget to the role and the useful warnings stop hiding behind false ones.

Rebound's compiler plugin classifies every @Composable at the IR level:

  • Screen (3/s) — name contains Screen or Page. If this recomposes more than 3 times a second, state is leaking upward.
  • Leaf (5/s) — no child @Composable calls. Text, Icon, Image. Individually cheap but shouldn't thrash.
  • Animated (120/s) — calls animate*, Transition, or Animatable APIs. Give it room to run at 60-120fps without false alarms.

There are six classes total (Container at 10/s, Interactive at 30/s, List Item at 60/s round out the set), but those three tell the story. The point is: a single threshold cannot be correct for all of them simultaneously.

During scrolling, budgets double. During animation, they go up 1.5x. During user input, 1.5x. The system knows context and adjusts.
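The budget logic above can be sketched in a few lines. This is an illustrative model, not Rebound's actual source; the class names, base rates, and multipliers are taken from the numbers in this post.

```kotlin
// Illustrative model of role-based budgets with context multipliers.
enum class BudgetClass(val baseRatePerSec: Int) {
    SCREEN(3), CONTAINER(10), LEAF(5),
    INTERACTIVE(30), LIST_ITEM(60), ANIMATED(120)
}

enum class InteractionContext(val multiplier: Double) {
    IDLE(1.0), SCROLLING(2.0), ANIMATING(1.5), USER_INPUT(1.5)
}

// Effective budget = base rate for the role, scaled by what the user is doing.
fun effectiveBudget(cls: BudgetClass, ctx: InteractionContext): Int =
    (cls.baseRatePerSec * ctx.multiplier).toInt()

fun verdict(ratePerSec: Int, cls: BudgetClass, ctx: InteractionContext): String =
    if (ratePerSec > effectiveBudget(cls, ctx)) "OVER" else "OK"

fun main() {
    // Same 8/s rate, different verdicts depending on context:
    println(verdict(8, BudgetClass.LEAF, InteractionContext.IDLE))      // OVER
    println(verdict(8, BudgetClass.LEAF, InteractionContext.SCROLLING)) // OK
}
```

The point the model makes: the rate alone decides nothing. The same number flips verdicts when the role or the interaction context changes.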

The 0% skip rate discovery

Here's the moment that convinced me this approach works.

I was running Rebound on a travel app. The Hot Spots tab flagged TravelGuideCard at 8/s against a LEAF budget of 5/s. That alone is useful but not surprising — cards in a list recompose during scroll.

The interesting part was the skip rate: 0%.

Zero percent means every single recomposition did actual work. The Compose runtime never skipped it. That's unusual for a card component — most of the time, at least some recompositions should skip because the inputs haven't changed.

I pulled up the Stability tab. The $changed bitmask showed the destinations parameter was DIFFERENT on every frame. But the data wasn't changing — the list was the same destinations, same order, same content.

Traced it back to the data layer. A helper function was calling listOf(...) on every invocation instead of caching the result. Every call created a new List instance. Same content, new reference. Compose saw a different object and recomposed.

One remember {} block. Skip rate went from 0% to 89%. Rate dropped from 8/s to under 1/s.

Layout Inspector would have told me "8 recompositions per second." It would not have told me that 8/s is over budget for a leaf, that the skip rate was zero, or that a single parameter was DIFFERENT every frame. I would have shrugged and moved on.
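The shape of that bug and its fix can be reproduced outside Compose. The names below are hypothetical, not the travel app's actual code:

```kotlin
// Hypothetical reproduction: a helper that builds a fresh List on every call.
// Same content, new reference, so Compose treats the parameter as DIFFERENT.
fun destinationsUncached(): List<String> = listOf("Lisbon", "Kyoto", "Oslo")

// The fix in a composable is `val destinations = remember { listOf(...) }`.
// Outside Compose, a single cached instance demonstrates the same idea:
val destinationsCached: List<String> = listOf("Lisbon", "Kyoto", "Oslo")

fun main() {
    println(destinationsUncached() == destinationsUncached())   // true: equal content
    println(destinationsUncached() === destinationsUncached())  // false: new reference per call
    println(destinationsCached === destinationsCached)          // true: stable reference
}
```

Structural equality passes, reference equality fails, and the reference is what Compose compares for unstable parameters.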

What I actually found

I tested this on an app with draggable elements, physics animations, and sensor-driven UI. 29 composables instrumented, zero config.

A gesture-driven composable at 13/s? ANIMATED budget is 120/s. Fine. A flat threshold of 10 would've flagged this on every drag.

A remember-based state holder at 11/s? LEAF budget is 5/s. Real violation. A sensor was pushing continuous updates into recompositions. Two-line fix: debounce the input. I would've missed this with flat thresholds because I was busy dismissing animation warnings.

The interaction context matters too. Rebound detects whether the app is in IDLE, SCROLLING, ANIMATING, or USER_INPUT state. A list item at 40/s during scroll is expected — the same rate during idle is a problem. Same composable, same number, different verdict.

Solving the <anonymous> problem

Compose uses lambdas everywhere. Scaffold, NavHost, Column, Row, LazyColumn — all take lambdas. Every one of those lambdas is a @Composable function that gets instrumented. When you inspect the IR, you get back names like:

com.example.HomeScreen.<anonymous>
com.example.ComposableSingletons$MainActivityKt.lambda-3.<anonymous>

The tree is 80% <anonymous>. You're staring at a recomposition violation and you have no idea if it's the Scaffold content, the NavHost builder, or a Column's children.

Layout Inspector doesn't have this problem. It reads sourceInformation() strings from the slot table — compact tags the Compose compiler injects into every composable call. The name is right there, but nothing outside Layout Inspector reads those tags.

Rebound takes a different approach: resolve names at compile time in the IR transformer. When the transformer visits an anonymous composable lambda, it walks the function body, finds the first user-visible @Composable call that isn't a runtime internal, and uses that call's name as the key.

Lambda name resolution — before and after

A lambda whose body calls Scaffold(...) becomes HomeScreen.Scaffold{}. A lambda that calls Column(...) becomes ExerciseCard.Column{}. The {} suffix distinguishes a content lambda from the composable function itself.

private fun resolveComposableKey(function: IrFunction): String {
    val raw = function.kotlinFqName.asString()
    if (!raw.contains("<anonymous>")) return raw

    val pkg = extractPackage(raw)
    val parentName = findEnclosingName(function)
    val primaryCall = findPrimaryComposableCall(function)

    if (primaryCall != null) {
        return "$pkg$parentName.$primaryCall{}"
    }
    // fallback to counter-based λN
    ...
}

So com.example.HomeScreen.Scaffold{} displays as HomeScreen.Scaffold{} in the tree instead of <anonymous>.

Reading the $changed bitmask

The Compose compiler injects $changed parameters into every @Composable function. Each parameter gets 2 bits encoding its stability state.

Decoding the $changed bitmask — 2 bits per parameter

Rebound collects these at compile time and decodes them at runtime: bits 01 mean SAME, 10 mean DIFFERENT, 11 mean STATIC, 00 mean UNCERTAIN. When a composable recomposes with a parameter marked DIFFERENT, you know exactly which argument the caller changed.

Rebound goes further — it separates forced recompositions (parent invalidated) from parameter-driven ones. When a violation fires, you see both: which parameters changed and whether the recomposition was forced by a parent or triggered by the composable's own state.
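A toy decoder for the 2-bits-per-parameter layout described above. The real Compose runtime reserves additional low bits and encodes more information, so treat this as a model of the idea, not the actual slot layout:

```kotlin
enum class ParamState { UNCERTAIN, SAME, DIFFERENT, STATIC }

// Reads 2 bits per parameter slot out of a $changed-style bitmask.
// Assumes slot 0 starts at bit 0 for simplicity.
fun decodeChanged(changed: Int, paramCount: Int): List<ParamState> =
    (0 until paramCount).map { slot ->
        when ((changed shr (slot * 2)) and 0b11) {
            0b01 -> ParamState.SAME
            0b10 -> ParamState.DIFFERENT
            0b11 -> ParamState.STATIC
            else -> ParamState.UNCERTAIN
        }
    }

fun main() {
    // slot 0 = 01 (SAME), slot 1 = 10 (DIFFERENT)
    println(decodeChanged(0b1001, 2)) // [SAME, DIFFERENT]
}
```

Given a mask and a parameter count, you get back one verdict per argument, which is exactly the "which parameter changed" column Rebound surfaces.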

Introducing Rebound

Rebound — Compose Recomposition Budget Monitor
Budget-based recomposition monitoring for Jetpack Compose. A Screen at 3/s. An Animation at 120/s. Zero config. Debug builds only.

Rebound is a Kotlin compiler plugin and an Android Studio plugin. Here's how the pieces connect:

Rebound architecture — compile time to runtime to IDE

The compiler plugin runs after the Compose compiler in the IR pipeline: it classifies each composable into a budget based on name patterns and call tree structure, resolves human-readable keys for anonymous lambdas, and injects tracking calls. At runtime, it monitors recomposition rates against those budgets. The IDE plugin connects over a socket — not logcat — so you get structured data instead of string-parsed log lines.

When something exceeds its budget:

BUDGET VIOLATION: ProfileHeader rate=11/s exceeds LEAF budget=5/s
  -> params: avatarUrl=CHANGED, displayName=CHANGED
  -> forced: 0 | param-driven: 11 | interaction: IDLE

The composable name. The rate. The budget. The parameters that changed. Whether it was forced by a parent or driven by its own state. What the user was doing at the time.


Here's Rebound running on StickerExplode — an app with draggable stickers, tilt-sensor physics, and haptic feedback. The tilt sensor pushes continuous updates, so rememberTiltState, rememberTiltSensorProvider, and rememberHapticFeedback all recompose at 7–17/s. Their default LEAF budget is 5/s, so Rebound flags them.

But that's the point — these composables are sensor-driven. They should recompose frequently. The violations aren't saying the code is broken. They're saying the classification needs tuning: LEAF → INTERACTIVE (30/s budget). The budget system surfaces the mismatch. You adjust the role, the noise disappears, and the real problems stay visible.

The sparkline at the bottom shows the rate history. The event log timestamps every violation. Double-click any row in Hot Spots and it jumps to the source.

Zero config. Debug builds only, no overhead in release. Three lines in your build file. KMP — Android, JVM, iOS, Wasm.

The IDE plugin: a Compose performance cockpit

The first version of the IDE plugin was a tree with numbers. Useful, but you still had to do most of the interpretation yourself. v2 is a full performance cockpit.

Rebound IDE Plugin — 5 tabs, gutter icons, event log

Monitor tab — The live composable tree, now with sparkline rate history per composable and a scrolling event log. Violations, rate spikes, state transitions — all timestamped. This was the entire plugin before. Now it's tab 1.

Hot Spots tab — A flat, sortable table of every composable. Sort by rate, budget ratio, skip percentage. Summary card at the top: "3 violations | 12 near budget | 85 OK." Double-click any row and it jumps to the source file. Like a profiler's method list, but for recompositions.

Timeline tab — A composable-by-time heatmap. Green, yellow, red cells. Scroll back 60 minutes. You can see temporal patterns: "UserList was hot for 5 seconds during scroll, then calmed down." Helps separate one-off spikes from sustained problems.

Gutter icons — Red, yellow, green dots next to every @Composable function in the editor. Click for rate, budget, and skip percentage. No tool window switching needed. This is the single most impactful UX change — the research on developer tooling is clear that context-switching between a profiler window and source code is where time goes to die.

We had stable data in prod for months. Then a feature change made one of our lists unstable. We shipped it without catching it. Rebound would have caught it locally — a gutter icon going from green to red the moment the change was made.

Git history to track regressions.
Visualize the hot spots.
Monitor each composition in real time.
Stability checks.
Timeline view of how much the app is recomposing.

Production testing

I tested this on a CMP app with a messy home screen. LazyRows nested inside a LazyColumn, animated list items, async images. 29 composables instrumented, zero config.

A card component was recomposing 8 times with a 0% skip rate, peaking at 8/s. The whole tree went together: Column, Image, Text, painterResource. Rebound traced it to a MutableIntState layout measurement change cascading through. It turned out a helper function was creating a new List on every call. The contents were static but the container was a fresh allocation, so Strong Skipping couldn't help. One remember {} fixed it.

A destination item had the same shape of problem. 10 compositions, 0% skip. Rebound flagged destination=UNCERTAIN, paramType=unstable because the data class was passed inline without @Stable.

Layout Inspector would have shown me "this composable recomposed 10 times." What it can't tell me is whether 10 is a problem. For a LEAF composable with a 5/s budget and a 0% skip rate, it absolutely is.

Try it

// build.gradle.kts
plugins {
    id("io.github.aldefy.rebound") version "0.2.1"
}

Add the Gradle plugin, build in debug, and see which of your composables are over budget. Works on Kotlin 2.0 through 2.3, Android and iOS. The budget numbers come from testing across several Compose apps — if your app has different composition patterns and the defaults don't fit, open an issue. That's how the numbers get better.

The sample module has Rebound pre-configured. For a real stress test, StickerExplode is a particle-effect demo that exercises every budget class.

Source, docs, and CLI: github.com/aldefy/compose-rebound

If your AI coding tool supports skills, the rebound-skill repo teaches it how to diagnose violations. Works with Claude Code, Gemini CLI, Cursor, Copilot, and others.


@AditLal on X / aldefy on GitHub

]]>
<![CDATA[The Compose Styles API: Building 8 Labs to Master Declarative Styling]]>https://aditlal.dev/compose-styles/69a31b3e570d4bfde14a55d7Sat, 28 Feb 2026 17:13:11 GMT

Compose just got a styling system. A first-party API in Foundation that replaces InteractionSource boilerplate with declarative style blocks. Here's what three days of testing it looked like.

Demo repo: https://github.com/aldefy/compose-style-lab — 8 interactive labs, clone and run.

Where we are today with the Compose API

Every Compose developer knows this ritual. You want a button that shrinks and changes color when pressed. Nothing exotic. Here is what you write today:

val interactionSource = remember { MutableInteractionSource() }
val isPressed by interactionSource.collectIsPressedAsState()
val backgroundColor by animateColorAsState(
    if (isPressed) pressedColor else defaultColor
)
val scale by animateFloatAsState(if (isPressed) 0.95f else 1f)
Box(
    modifier = Modifier
        .graphicsLayer { scaleX = scale; scaleY = scale }
        .background(backgroundColor, RoundedCornerShape(16.dp))
        .clickable(interactionSource = interactionSource, indication = null) { }
)

Five declarations, three state subscriptions, and a graphicsLayer to get a scale animation that CSS handles with transition: transform 0.2s.

Styles API is awesome 🙌🏼

Styles in Compose | Jetpack Compose | Android Developers
Customize Jetpack Compose UI with Styles. Boost performance, simplify state-based styling, and streamline component APIs.

Now here is the same behavior with the Styles API, which shipped in compose-foundation:1.11.0-alpha06 on February 25, 2026:

val style = Style {
    background(defaultColor)
    shape(RoundedCornerShape(16.dp))
    pressed { animate { background(pressedColor); scale(0.95f) } }
}
Box(Modifier.styleable(style = style))

One declarative definition. No animateAsState. No graphicsLayer. (This is simplified — in alpha06 you still need a MutableStyleState with a shared InteractionSource for pressed detection. Lab 3 covers the full pattern.) I spent three days building a demo app with eight lab screens to figure out what this API actually delivers, where it falls short, and what it means for how we build components. This is what I found.

How Compose handles styling today

Compose's existing styling story is fine for simple cases. You set a background color. You pick a shape. You move on. The friction starts the moment you need visual responses to interaction state.

InteractionSource is the mechanism. You create one, wire it into your clickable or toggleable modifier, then collect flows like collectIsPressedAsState(), collectIsHoveredAsState(), or collectIsFocusedAsState(). Each flow gives you a boolean. You map those booleans to visual properties using animateColorAsState, animateFloatAsState, or animateDpAsState. Then you feed the animated values into the right modifiers: background(), graphicsLayer {}, border().

It works. It is also completely manual. There is no reusable "style object" you can define once and apply to multiple components. If three buttons share the same pressed behavior, you copy-paste the InteractionSource plumbing three times or extract a custom composable. Want to share that behavior? You write a helper function that returns a Modifier, but then you lose the ability to override individual properties without rewriting the whole chain. There is no composition mechanism. You cannot take a "base card style" and layer a "dark theme style" on top of it. You just write more modifiers and hope the ordering is right.

State-driven visual changes get worse at scale. A card that looks different when selected, disabled, and pressed needs a when block or a series of if checks to compute each visual property. The logic scatters across the composable function. You end up with five animateXAsState declarations, three boolean state collectors, and a graphicsLayer block for the transforms. Six months later, a new team member reads the code and has to mentally reconstruct which visual properties change in which states. The intent is buried under plumbing.

These are not hypothetical complaints. I have shipped production apps where the styling logic for a single component was longer than the layout logic. Components that should have been twenty lines ballooned to sixty because each interaction state needed its own animation pipeline. It felt wrong every time.

When I saw compose-foundation:1.11.0-alpha06 land on February 25, 2026, with the @ExperimentalFoundationStyleApi annotation and roughly fifty new style properties, I wanted to find out what it actually delivers. Not the API docs. The real behavior on a device.

Building Compose Style Lab

I built Compose Style Lab, an Android app with eight interactive lab screens. Each lab isolates a specific part of the Styles API: interaction states, composition, state driving, transforms, micro-interactions, text styling, theme integration, and custom component patterns.


The labs are progressive. Lab 1 is a pressed button. Lab 8 is a full-component API that follows the pattern the Compose team recommends. Every lab has live toggles so you can flip states and watch the style respond in real time. No static screenshots pretending to be demos. I also added property readouts that display the current resolved values of the style properties, so you can see exactly what the style system is doing at any moment.

The goal was not to build a polished app. It was to find the edges of the API. What works as documented? What silently fails? What patterns will scale when this reaches stable?

Before getting into the labs:

Here is the 30-second API overview. The Style {} block is a builder where you set visual properties: background(), shape(), contentPadding(), scale(), borderWidth(), contentColor(), fontSize(), and about forty more. State blocks like pressed(), hovered(), focused(), selected(), checked(), and disabled() each accept another Style that activates when the component enters that state. Wrap a state style in animate() and the transitions are smooth. Apply the whole thing with Modifier.styleable(style = myStyle). That is the entire model.

Now, eight labs. Eight lessons.

8 labs, 8 lessons

Lab 1: Interaction states without the boilerplate

One Style handles pressed, hovered, and focused with animation.


This is where I started. A single composable that responds to pressed, hovered, and focused states, all defined in one Style block:

val showcaseStyle = Style {
    background(baseColor)
    shape(RoundedCornerShape(16.dp))
    contentPadding(horizontal = 32.dp, vertical = 24.dp)
    pressed {
        animate {
            background(Color(0xFF1A237E))
            scale(0.92f)
        }
    }
    hovered {
        animate {
            background(Color(0xFF536DFE))
            scale(1.04f)
            borderWidth(2.dp)
            borderColor(Color.White.copy(alpha = 0.5f))
        }
    }
    focused {
        animate {
            borderWidth(3.dp)
            borderColor(Color.White)
            background(Color(0xFF304FFE))
        }
    }
}

The thing I noticed right away is the structure. Each state is a named block. Each block contains exactly the properties that change. The animate() wrapper means those changes transition smoothly. Reading this code six months from now, you know exactly what the component looks like in every state without tracing through boolean variables and animateAsState calls.

What you learn:

  • pressed(), hovered(), focused() each take a Style argument. Since Style is a fun interface, both pressed(Style { ... }) and the trailing lambda pressed { ... } work; use whichever reads best in context.
  • Wrap state styles in animate() for smooth transitions. Without it, property changes are instant.
  • One definition replaces the entire InteractionSource + collectAsState + animateColorAsState + graphicsLayer chain.

Lab 2: Composing styles like modifiers

Build reusable style layers and compose them with .then().


This lab explores what I think is the real long-term win of the API: composition. You define small, focused styles and combine them.

val baseCard = Style {
    background(LabCyan.copy(alpha = 0.15f))
    shape(RoundedCornerShape(16.dp))
    contentPadding(horizontal = 24.dp, vertical = 20.dp)
}
val elevatedCard = Style {
    borderWidth(2.dp)
    borderColor(Color(0xFFB0BEC5))
    scale(1.02f)
}
val darkTheme = Style {
    background(Color(0xFF1E1E2E))
    contentColor(Color.White)
}

// Later styles override earlier ones:
val composed = baseCard.then(elevatedCard).then(darkTheme)

The .then() operator works like Modifier chaining. Properties from later styles override those from earlier styles. In the example above, darkTheme overrides the background from baseCard, but the shape from baseCard and the border from elevatedCard both survive. This is exactly how CSS specificity works, except here it is explicit and ordered. No cascade confusion. No !important.

You can also use the factory form Style(s1, s2, s3) if you prefer a flat call over a chain. The merge behavior is identical.
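The merge rule is easy to model with plain maps. This is not the Foundation implementation, just an illustration of the ordering semantics:

```kotlin
// A map-based model of `.then()` merge semantics: later properties
// override earlier ones, key by key. Not the real Style type.
typealias StyleProps = Map<String, Any>

fun StyleProps.then(other: StyleProps): StyleProps = this + other

fun main() {
    val baseCard: StyleProps =
        mapOf("background" to "cyan/15%", "shape" to "rounded16", "padding" to "24x20")
    val elevatedCard: StyleProps = mapOf("borderWidth" to "2dp", "scale" to 1.02)
    val darkTheme: StyleProps = mapOf("background" to "0xFF1E1E2E", "contentColor" to "white")

    val composed = baseCard.then(elevatedCard).then(darkTheme)
    println(composed["background"])  // darkTheme overrides baseCard
    println(composed["shape"])       // baseCard survives
    println(composed["borderWidth"]) // elevatedCard survives
}
```

Kotlin's map plus operator already has the "rightmost wins per key" behavior, which is why the model stays this small.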

If you are building a design system, this is the pattern to pay attention to. Define your spacing tokens as one style, your color tokens as another, your elevation tokens as a third. Compose them per component. When the design team changes the spacing scale, update one style definition and every component that uses it updates. This is the kind of reuse that Compose's modifier system never cleanly supported.

What you learn:

  • .then() works like Modifier chaining. Later properties override earlier ones.
  • Style(s1, s2, s3) factory is an alternative to chaining when you already have all the pieces.
  • This enables design tokens. Define a baseCard, elevation, and theme style once. Compose them per screen. Change the base and every composed style updates.

Lab 3: Driving visual state declaratively

selected(), checked(), and disabled() with explicit state driving.


Labs 1 and 2 felt smooth. Lab 3 is where I hit the wall. I defined disabled() and checked() state blocks, applied them with Modifier.styleable(style = ...), and nothing happened. Tapping a toggle did not change the visual state. The style just sat there showing defaults.

The manual wiring is intentional — earlier versions had auto-detection but it conflicted with the interactionSource on clickable/toggleable.

val cardStyle = Style {
    background(AccentOrange.copy(alpha = 0.15f))
    shape(RoundedCornerShape(12.dp))
    borderWidth(2.dp)
    borderColor(AccentOrange)
    disabled {
        background(Color(0xFFE0E0E0))
        contentColor(Color(0xFF9E9E9E))
        scale(0.98f)
    }
}

// Explicit state driving:
val styleState = remember { MutableStyleState(interactionSource) }
styleState.isEnabled = enabled
Box(Modifier.styleable(styleState = styleState, style = cardStyle))

Once I switched to this pattern, everything worked. Selected cards highlighted. Disabled cards grayed out. Checked toggles animated.

What you learn:

  • selected(), checked(), disabled() are state blocks just like pressed().
  • State is driven explicitly via MutableStyleState. You set styleState.isChecked, styleState.isEnabled, styleState.isSelected yourself.

Gotcha: Modifier.styleable(style = ...) alone does not detect state from toggleable() or clickable(). You must use MutableStyleState and drive state explicitly. This is by design, not a bug. The clickable/interactionSource/ripple integration is being reworked, so expect this pattern to evolve.

Lab 4: Animated transforms in 3 lines

scale(), rotationZ(), and translationX/Y() inside animate blocks.


This lab explores the transform properties. In current Compose, any transform requires graphicsLayer {}. With Styles, transforms are just properties.

val spinStyle = Style {
    background(Color(0xFF3D5AFE))
    shape(RoundedCornerShape(16.dp))
    contentPadding(20.dp)
    checked {
        animate {
            rotationZ(360f)
            background(Color(0xFF00C853))
        }
    }
}

val slideStyle = Style {
    background(Color(0xFF00BCD4))
    shape(RoundedCornerShape(16.dp))
    contentPadding(20.dp)
    checked {
        animate {
            translationX(50f)
            translationY(-10f)
        }
    }
}

Toggle the checked state and the first box spins 360 degrees while changing from blue to green. The second slides 50px right and 10px up. Both animate smoothly because of the animate() wrapper. No graphicsLayer. No animateFloatAsState. Three lines of transform code.

The brevity is nice, but colocation is the real win. The transform, the color change, and the trigger condition all live in the same block. In the old approach, the rotation lives in a graphicsLayer, the color lives in a background() modifier, and the state check lives in a collectAsState call. Three different locations for one visual behavior. Here it is one nested block.

What you learn:

  • Transform properties (scale, rotationZ, translationX, translationY) work inside animate() just like color and shape properties.
  • No graphicsLayer needed. The Style system handles the layer internally.
  • You can combine transforms with color changes in a single state block. The spin and the color change happen together, no extra wiring.

Lab 5: Real-world micro-interactions

Favorite buttons, nav bars, pill toggles: practical patterns.


Labs 1 through 4 are isolated concepts. Lab 5 applies them to real UI patterns. The favorite button is the most satisfying one to tap:

val favoriteStyle = Style {
    background(Color(0xFFF5F5F5))
    shape(CircleShape)
    contentPadding(16.dp)
    contentColor(Color.Gray)
    checked {
        animate {
            background(Color(0xFFFFEBEE))
            contentColor(Color(0xFFE53935))
            scale(1.2f)
        }
    }
}

Tap the heart. The background warms to pink, the icon turns red, and the whole thing scales up 20%. Tap again and it shrinks back to gray. The contentColor() property is doing something important here: it propagates to child Text and Icon composables through CompositionLocal. You set the color on the container, and the icon inside picks it up automatically.

This same pattern extends to navigation bar items, pill-shaped toggle buttons, and notification badges. Define the default state, define the active state with checked() or selected(), wrap in animate(). Done.

What you learn:

  • contentColor() propagates to child Text and Icon composables via CompositionLocal. Set it on the parent and children inherit it.
  • CircleShape combined with scale() creates satisfying micro-interactions with minimal code.
  • The same checked/selected pattern works for nav bar items, toggle pills, and notification badges.

Lab 6: Text properties you didn't know you could style

fontSize(), fontWeight(), contentBrush(), letterSpacing(), and textDecoration().


I did not expect the Styles API to cover text properties, but it does. Some of them surprised me.

val pressTextStyle = Style {
    contentColor(Color.Black)
    fontSize(18.sp)
    letterSpacing(0.sp)
    pressed {
        animate {
            contentColor(Color(0xFFFF6D00))
            letterSpacing(4.sp)
            textDecoration(TextDecoration.Underline)
            scale(0.96f)
        }
    }
}

val gradientStyle = Style {
    contentBrush(Brush.linearGradient(listOf(Color.Magenta, Color.Cyan)))
    fontSize(28.sp)
    fontWeight(FontWeight.Bold)
}

The first style makes text spread its letters apart and underline when pressed. It looks good. The second applies a gradient brush to the text. No custom drawBehind or TextStyle with Brush. Just contentBrush() in the style block.

letterSpacing() animating on press is a subtle effect that feels premium. I had never seen it done in a Compose app, mostly because doing it with the current API would require animateDpAsState plus a custom TextStyle rebuild on every frame. Here it is one line inside an animate() block.

What you learn:

  • Text properties are first-class in the Style system: fontSize(), fontWeight(), letterSpacing(), textDecoration(), and contentBrush().
  • contentBrush() enables gradient text without custom drawing code. Pass any Brush and the text renders with it.
  • letterSpacing() and textDecoration() can animate on interaction state changes with zero manual setup.

Lab 7: Theme-aware styles

Styles read MaterialTheme colors and auto-update on dark/light toggle.


One concern I had going in: can styles read the current theme? If they are static objects, they would not respond to dark mode toggles. Turns out, StyleScope extends CompositionLocalAccessorScope, which means you can read any CompositionLocal inside a Style {} block.

val primary = MaterialTheme.colorScheme.primary
val onPrimary = MaterialTheme.colorScheme.onPrimary
val surface = MaterialTheme.colorScheme.surface
val onSurface = MaterialTheme.colorScheme.onSurface

val buttonStyle = Style {
    background(primary)
    contentColor(onPrimary)
    shape(RoundedCornerShape(12.dp))
    contentPadding(16.dp)
    pressed {
        animate {
            background(surface)
            contentColor(onSurface)
            scale(0.95f)
        }
    }
}

Toggle dark mode. The button updates its colors immediately. No extra wiring. The Style {} block captures the CompositionLocal values, and when the theme changes, the style recomposes with the new values. This is how it should work, and I was relieved it did.

What you learn:

  • StyleScope extends CompositionLocalAccessorScope. You can read MaterialTheme.colorScheme, LocalContentColor, or any custom CompositionLocal inside a Style block.
  • Styles react to theme changes automatically. Swap light to dark, and the style picks up the new palette.
  • No isSystemInDarkTheme() checks needed. No conditional style selection.

Lab 8: Custom components with style parameters

The API guidelines pattern: Defaults object + style parameter + .then() override.


This is the lab that matters most for library authors and design system teams. The Compose team has published guidelines for how components should expose styling, and the pattern looks like this:

object StyledChipDefaults {
    @Composable
    fun style(): Style {
        val bg = MaterialTheme.colorScheme.secondaryContainer
        val fg = MaterialTheme.colorScheme.onSecondaryContainer
        return Style {
            background(bg)
            shape(RoundedCornerShape(8.dp))
            contentPadding(horizontal = 16.dp, vertical = 8.dp)
            contentColor(fg)
            pressed { animate { scale(0.95f) } }
        }
    }
}

@Composable
fun StyledChip(
    onClick: () -> Unit,
    modifier: Modifier = Modifier,
    style: Style = StyledChipDefaults.style(),
    content: @Composable () -> Unit,
)

The Defaults object provides a @Composable style() function that reads the theme. The component accepts a style parameter with the defaults as the default value. Callers who want to customize use StyledChipDefaults.style().then(Style { ... }) to override specific properties while keeping the rest.

This mirrors how Material3 components already work with colors, elevation, and contentPadding parameters, but collapses them all into a single style parameter. One parameter instead of five. One override mechanism instead of five separate Defaults functions.

Consider what this does to API surface. Today, a Material3 Button has colors, elevation, shape, contentPadding, and border parameters. Each has its own Defaults object and its own override pattern. With Styles, all of that collapses to one style parameter. Callers learn one override mechanism. Library maintainers expose one customization surface.
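Here is roughly what that override looks like at a call site. This is a sketch against the alpha06 surface described above; the colors and label are mine:

```kotlin
// Hypothetical call site: keeps StyledChipDefaults' padding, contentColor,
// and pressed animation, but overrides shape and background via then().
StyledChip(
    onClick = { /* handle click */ },
    style = StyledChipDefaults.style().then(
        Style {
            shape(RoundedCornerShape(50))   // pill instead of 8.dp corners
            background(Color(0xFFE8F5E9))
        }
    ),
) {
    Text("Kotlin")
}
```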

What you learn:

  • Follow the same pattern as Material3: a Defaults object with a @Composable fun style() that reads theme values.
  • Callers override with style = StyledChipDefaults.style().then(Style { ... }). They get the base behavior plus their customizations.
  • If you are building a component library, start designing your APIs around this pattern now.

What I learned building 8 labs

Here are the six things that stuck with me.

  1. MutableStyleState is non-negotiable in alpha06.

If you use Modifier.styleable(style = myStyle) and expect checked() or selected() to just work when paired with toggleable(), they won't. This is by design: earlier alphas had auto-detection, but it conflicted with the interactionSource on clickable/toggleable.

You create a MutableStyleState, share the MutableInteractionSource, and explicitly set styleState.isChecked or styleState.isSelected yourself. The clickable/interactionSource/ripple integration is being reworked, so expect this to evolve.

For pressed state specifically, share the InteractionSource:

val src = remember { MutableInteractionSource() }
val ss = remember { MutableStyleState(src) }
Box(
    Modifier
        .styleable(styleState = ss, style = myStyle)
        .clickable(interactionSource = src, indication = null) { }
)

If your styles aren't responding to state, this is almost certainly why.
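For checked() rather than pressed(), the wiring is the same plus the explicit state write. A sketch against alpha06 (toggleable is the standard foundation modifier; favoriteStyle is the Lab 5 style from earlier):

```kotlin
// Sketch: alpha06 does not auto-detect checked state, so mirror it into
// the MutableStyleState by hand alongside the shared InteractionSource.
var checked by remember { mutableStateOf(false) }
val src = remember { MutableInteractionSource() }
val ss = remember { MutableStyleState(src) }
ss.isChecked = checked   // the explicit write the alpha06 contract requires
Box(
    Modifier
        .styleable(styleState = ss, style = favoriteStyle)
        .toggleable(value = checked, interactionSource = src, indication = null) {
            checked = it
        }
)
```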

  2. Style composition is the real win.

The individual style properties are convenient. The state blocks are nice. But .then() composition is what turns this into a design system tool. Define your tokens as styles. Compose them. Override selectively. This is the pattern that scales from a demo app to a production system.
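Concretely, token composition might look like this. The token names are mine, not from the API; only Style and then() come from the labs above:

```kotlin
// Hypothetical design tokens composed with then(). Later styles override
// earlier ones, so customizations stay local and the base tokens untouched.
val surfaceToken = Style {
    background(Color(0xFFF5F5F5))
    shape(RoundedCornerShape(12.dp))
}
val pressFeedbackToken = Style {
    pressed { animate { scale(0.95f) } }
}
val cardStyle = surfaceToken
    .then(pressFeedbackToken)
    .then(Style { contentPadding(16.dp) })
```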

  3. Some things do not work yet.

dropShadow() exists in the API surface but has an internal constructor. I could not use it. Some properties appear in autocomplete but do not render visibly. This is alpha software. Ship your experiments in debug builds, not your production APK.

  4. contentColor propagation works well.

Set contentColor() on a parent style, and child Text and Icon composables pick it up through LocalContentColor. This is not new behavior for Compose, but having it work through the Style system means you define your icon and text colors once in the style, not on each child. For the favorite button in Lab 5, the icon color changes from gray to red purely because the parent style switches contentColor in the checked() block.

  5. Theme integration works.

I was worried styles might be static and disconnect from CompositionLocal values. They don't. StyleScope extends CompositionLocalAccessorScope, so you read MaterialTheme.colorScheme.primary inside a Style {} block and it recomposes when the theme changes. Dark mode works. Custom themes work.

  6. Where this is headed.

Looking at the full API surface, this looks like Compose's answer to CSS-in-JS. A declarative styling system with state variants, composition, animation, and theme integration. When it reaches stable, it could change how component libraries are built. The pattern in Lab 8, where a component exposes a single style parameter with composable defaults, is cleaner than the current Material3 approach of separate colors, elevation, shape, and contentPadding parameters.

The caveat is obvious: this is alpha. The API surface could change. MutableStyleState behavior will almost certainly evolve. Property names might shift. But the direction is clear, and the developer experience in these eight labs, once I worked around the alpha06 bugs, was better than the InteractionSource approach.

I think the .then() composition and the Defaults object pattern from Lab 8 will be the most impactful features when this stabilizes. Not because they're flashy. Because they give Compose a real answer to something annoying since 1.0: how do you let callers override a component's look without exposing five separate parameters?

Try it yourself

The full source for all eight labs is on GitHub: Compose Style Lab. Clone it, run it, tap things. Every lab has live toggles, property readouts, and state controls. Break the styles. Compose new ones. The best way to learn this API is to play with it.

To use the Styles API in your own project, add compose-foundation:1.11.0-alpha06 (or newer) and opt in with @OptIn(ExperimentalFoundationStyleApi::class).

P.S. If you build something with the Styles API, I'd like to see it.

]]>
<![CDATA[Hunting the Play Store Heisenbug: R8, ART Verify Mode, and Firebase Init Races]]>https://aditlal.dev/play-store-heisenbug-art-verify/69a088e2570d4bfde14a557aThu, 26 Feb 2026 18:29:15 GMT

java.lang.IllegalStateException: Default FirebaseApp is not initialized in this process
    at com.google.firebase.FirebaseApp.getInstance()
    at com.google.firebase.remoteconfig.FirebaseRemoteConfig.getInstance()

If you are reading this, you are probably staring at a Play Store Pre-Launch Report or a Firebase Test Lab result, throwing the exact crash above.

Here is the maddening part: You have zero crashes in production. You cannot reproduce this locally. You've cold-started your app on an 8-core physical device 50 times, and it works flawlessly every single time.

You aren't crazy. Your code is experiencing a Heisenbug - a race condition that only exists under the exact, hostile conditions of the Google Play Console's automated testing environment. Attach a debugger, add a log statement, change the timing by a microsecond, and the bug vanishes.

The TL;DR:

When Google runs your app in a Pre-Launch Report, it is a fresh install running in ART's compiler-filter=verify mode. Your app is running purely interpreted, with zero Ahead-Of-Time (AOT) compilation. Combined with the aggressive structural changes of R8 Full Mode and an emulated environment starved for CPU cycles, a 1ms initialization window that always succeeds on your local device stretches into a 100ms+ bottleneck.

Your background coroutines are losing a race against your main thread's dependency injection.

Here is the exact mechanism of why your app is failing in review, how to force your local emulator to replicate this environment, and the cross-module latch mechanism required to fix it.

👉 Jump straight to the cross-module CountDownLatch fix


1. The Pre-Launch Environment: Running on Hard Mode

Every AAB uploaded to the Play Console triggers a Pre-Launch Report powered by Firebase Test Lab's Robo test. The automated crawler installs the app on physical and virtual devices, exercises the UI, and looks for crashes, accessibility issues, and security vulnerabilities.

The critical detail nobody talks about: freshly installed apps run with compiler-filter=verify.

This means:

  • DEX bytecode is verified but not AOT-compiled.
  • The app runs in interpreted + JIT mode, which is 30% to 40% slower than AOT.
  • Cloud Profiles are not available on a fresh install.
  • Baseline Profiles require a background dexopt pass before they take effect.

In Android 14+, the ART Service relies on a background job (pm.dexopt.bg-dexopt=speed-profile) to compile the app. Crucially, this job only executes when the device is idle and charging. Test Lab provisions a device, installs the app, and immediately launches the crawler. The device is never idle. It never compiles.

Google's own Pre-Launch Report documentation says the tests use "real Android devices running Android 9+." It never discloses the ART compilation mode. We verified this — the page describes errors, warnings, performance metrics, and accessibility checks. It says nothing about compiler-filter=verify. This is the gap.


This is a fundamentally different execution environment from your local device, where repeated installs and profile-guided compilation mean your app is running with speed-profile or better.

Factor                  | Local dev                            | Play Store pre-review
ART compilation         | speed-profile or speed (AOT)         | verify (interpreted + JIT)
Execution speed         | Coroutine launches in ~5ms           | Coroutine launch can take 100ms+
CPU contention          | Dedicated cores, no background load  | dex2oat running + crawler consuming CPU
Cloud/Baseline profiles | Available (repeated installs)        | Not available (fresh install)
Race window             | ~1ms (Firebase always wins)          | 10-50ms+ (Firebase may lose)

2. The Multiplier: R8 Full Mode

R8 full mode — enabled by default since AGP 8.0 — applies aggressive optimizations that change code structure in ways that alter initialization timing:

  • Vertical class merging: Single-implementation interfaces get merged into their concrete class. A Google engineer confirmed in kotlinx.coroutines #1304 that this specific optimization prevents coroutine dispatch optimization, filing a separate R8 bug.
  • Visibility relaxation: Private methods are made public to bypass JVM access checks for cross-class inlining.
  • Factory inlining: Hilt factory code (Module_ProvideXFactory.get()) is inlined directly at call sites.
  • Constructor removal: Default constructors are stripped when R8 determines they are unnecessary.

What used to be a lightweight virtual dispatch becomes a massive, contiguous block of bytecode directly inside your Application.onCreate().

In AOT mode, this is pre-compiled machine code that executes in microseconds. But in verify mode, the JIT compiler must parse and compile this bloated method on the fly, directly on the main thread. This JIT overhead acts as a massive speed bump, wildly expanding the window for race conditions.

This is not theoretical. Retrofit #3751 documents R8 full mode stripping the generic type information Retrofit needs for reflection. Dagger #1859 shows DoubleCheck.get() contention appearing in production ANR traces when scoped providers fight for the same lock. These are real crashes, in real apps, caused by R8 full mode restructuring code that was never designed for it.

The Double Whammy

The combination is lethal: R8 full mode changes the structure of your code. verify mode changes the speed of execution. Together, they create a runtime environment that has almost nothing in common with your local debug build.


3. The Race Condition: Firebase vs. Hilt

The ContentProvider Trap

The Android initialization sequence is strictly ordered:

  1. ContentProvider.onCreate() — all registered providers
  2. Application.attachBaseContext()
  3. Application.onCreate()
  4. Hilt component creation

Many libraries historically relied on ContentProviders for auto-initialization. Firebase used FirebaseInitProvider — a ContentProvider that guaranteed Firebase was ready before Application.onCreate() even ran. It was a silent, invisible dependency that just worked.

The problem: ContentProviders are expensive to instantiate and slow down startup. So many of us migrated to Jetpack App Startup, which replaces multiple ContentProviders with a single one and lets you define explicit dependencies between initializers.

The Typical Migration

  1. Remove FirebaseInitProvider from the manifest (tools:node="remove").
  2. Fire a Dispatchers.IO coroutine in an App Startup Initializer to call FirebaseApp.initializeApp().
  3. Let Hilt resolve dependencies like FirebaseRemoteConfig.getInstance() synchronously during Application.onCreate().

This pattern is everywhere. And it's a ticking time bomb.


Why It Crashes in Pre-Review

On a flagship phone, that coroutine launches in ~5ms. Firebase always wins the race.

In Test Lab, three heavy processes fight for limited vCPU cycles: your app's main thread, dex2oat verifying DEX files, and the Robo crawler. Because Dispatchers.IO uses a shared thread pool, CPU starvation causes scheduling delays. That coroutine might take 150ms+ to launch. Hilt resolves synchronously on the main thread, beating Firebase to the punch. Result: IllegalStateException.

You're Not Alone

This exact pattern has been reported across the Firebase ecosystem — always with the same bewildered observation that it "only happens on first install from Google Play":

  • firebase-android-sdk #4693: "FirebaseApp is not initialized in this process." Multiple reporters confirm it happens "mostly only the very first time the app is started, likely after being installed from Google Play."
  • firebase-android-sdk #6145: Utils.awaitEvenIfOnMainThread() caused a 100% reproducible ANR. The stack trace shows CountDownLatch.await() blocking the main thread — Crashlytics' own internal synchronization failing under the same conditions.
  • FlutterFire #8837: Firebase.initializeApp() takes 7.5 seconds until first frame on low-end devices in Play pre-launch reports. The reporter notes it is "not CPU-bound" — suggesting lock contention or I/O bottleneck, not raw computation.

That last one is the closest anyone has come to documenting this publicly. But none of these issues connect the dots to ART compilation mode.


4. The Baseline Profile Trap: Why Your Flagship is Lying to You

If you are dealing with Pre-Launch report crashes and slow startups, you are likely already looking at your Baseline Profiles. But how you generate them — and how fresh they are — dictates whether they survive the real world.

How Baseline Profile Generation Actually Works

A common misconception: the Macrobenchmark profiler works like a CPU sampling profiler, recording which methods are "hot" based on execution time. It does not.


The BaselineProfileRule records which methods were executed during your test journeys. A method is either called or it isn't. It does not matter how fast the device is — the same code paths produce the same profile entries. A method that takes 1 microsecond on a Pixel 9 Pro Fold produces the same profile entry as one that takes 100ms on a Pixel 4a.

What does matter is code path coverage. Your test journeys define which methods get profiled. If your profileBlock only calls startActivityAndWait(), you only capture startup methods. If you also scroll lists, navigate screens, and trigger network calls, you capture those paths too.
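A profileBlock with broader coverage might look like the sketch below. The package name and resource ids are placeholders; the rule and scope methods come from the Macrobenchmark library:

```kotlin
import androidx.benchmark.macro.junit4.BaselineProfileRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.uiautomator.By
import androidx.test.uiautomator.Direction
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Sketch: record scroll and navigation paths, not just cold start.
// "your.package.name" and the resource ids are placeholders.
@RunWith(AndroidJUnit4::class)
class StartupProfileGenerator {
    @get:Rule
    val rule = BaselineProfileRule()

    @Test
    fun generate() = rule.collect(packageName = "your.package.name") {
        pressHome()
        startActivityAndWait()                 // startup paths
        device.findObject(By.res(packageName, "feed_list"))
            ?.fling(Direction.DOWN)            // list/scroll paths
        device.findObject(By.res(packageName, "nav_history"))
            ?.click()                          // navigation paths
        device.waitForIdle()
    }
}
```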

Where Device Choice Actually Matters

The device affects the profile in three indirect ways:

  1. Async content and timeouts: If your test calls startActivityAndWait() and the device is so slow that async content fails to load before the framework timeout, you miss those code paths. Conversely, extremely fast devices always complete async work, but that's true of any reasonable device.
  2. Reproducibility: A Pixel 9 Pro Fold is not reproducible across team members and CI servers. Google's recommended Gradle Managed Device config — a Pixel 6 API 31 with aosp system image — is reproducible anywhere.
  3. Unique code paths: A foldable device may exercise code paths specific to multi-window or large screen layouts that don't represent your median user.

What Meta Learned at Scale

Meta Engineering published a detailed account of their Baseline Profile infrastructure in October 2025. The key insights:

  • For complex apps like Facebook and Instagram, benchmarks aren't representative enough. They collect class and method usage data from real users via a custom ClassLoader at a low sample rate.
  • Inclusion threshold matters more than device choice. They started conservatively at 80-90% frequency and lowered it to ≥20% — a method needs to appear in at least 20% of cold start traces to be included.
  • Profile size has a ceiling. Compiled machine code is ~10x larger than interpreted code. A bloated profile increases I/O cost through page faults and cache misses. They've occasionally seen regressions from profiles that were too large.
  • They optimize beyond startup — feed scrolling, DM navigation, surface transitions.
  • Results: up to 40% improvement across critical performance metrics.

The Staleness Problem

For your app, the bigger issue is probably staleness. When you enable R8 full mode, the compiler restructures your code — merging classes, inlining factories, relaxing visibility. The method signatures change. A Baseline Profile generated before R8 full mode was enabled references methods that may no longer exist in the optimized binary.

Since AGP 8.2, R8 rewrites profile rules to match the obfuscated release build, increasing method coverage by ~30%. But this only works if the profile is regenerated from an unminified build in the same pipeline. A 5-week-old profile against a post-R8-full-mode binary is a stale profile.

The Rule: Regenerate every release. Automate it with ./gradlew :app:generateBaselineProfile in CI. Use a Pixel 6 API 31 GMD with systemImageSource = "aosp". And make your profileBlock cover your critical user journeys, not just startup.
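In Gradle Kotlin DSL, that managed device looks roughly like this. A sketch: the device name pixel6Api31 is arbitrary, and on recent AGP versions the container is localDevices rather than devices:

```kotlin
// build.gradle.kts (app module): reproducible GMD for profile generation.
import com.android.build.api.dsl.ManagedVirtualDevice

android {
    testOptions {
        managedDevices {
            devices {
                create<ManagedVirtualDevice>("pixel6Api31") {
                    device = "Pixel 6"
                    apiLevel = 31
                    systemImageSource = "aosp"   // no GMS image, runs anywhere
                }
            }
        }
    }
}
// CI then runs: ./gradlew :app:generateBaselineProfile
```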


5. Local Testing: Simulating Pre-Launch Conditions

To prove this to yourself, you need to strip the AOT artifacts from your local device and force it into verify mode.

Run these ADB commands:

# Strip AOT, force interpreted mode
adb shell cmd package compile -m verify -f your.package.name

# Cold start with timing
adb shell am force-stop your.package.name
adb shell am start-activity -W -S your.package.name/.MainActivity

# Verify compilation state
adb shell dumpsys package dexopt | grep -A5 "your.package.name"

# Simulate background dexopt with profile (what happens hours after install)
adb shell cmd package compile -m speed-profile -f your.package.name

# Reset to trigger dex2oat on next boot
adb shell cmd package compile --reset your.package.name

The AOSP documentation on ART configuration confirms the compiler filters: verify = DEX code verification only (no AOT compilation), speed-profile = AOT-compile profiled hot methods, speed = AOT-compile everything.

The hard truth: Even with verify mode on a modern 8-core physical device, you might still be too fast to trigger the crash. A 4-core emulator under verify mode is the closest approximation to Test Lab. We ran 30+ cold starts across a Pixel 9 Pro Fold (physical), a Pixel 9a emulator (4 cores), and a custom 1-core/1GB RAM emulator — all in verify mode — and reproduced zero crashes. The Play Store pre-launch environment has additional constraints we can't fully replicate: CPU contention from the Robo crawler itself, whatever specific VM configuration Google uses, and dex2oat running concurrently with app launch.


6. The Fix: Cross-Module Latch Coordination

We know the root cause: Hilt is resolving dependencies synchronously on the main thread faster than our background coroutine can initialize Firebase.

We need to force Hilt to wait, but we have a structural problem. Our FirebaseInitializer lives in the app module, but our dependency injection module lives in a shared core module. We cannot directly reference the background job across module boundaries.

The solution is a thread-safe, cross-module synchronization point.


Step 1: Create the readiness object

In your shared core module, define a simple object to hold a CountDownLatch:

package your.package.common.di

import java.util.concurrent.CountDownLatch
import java.util.concurrent.atomic.AtomicBoolean

object FirebaseReadiness {
    val initLatch = CountDownLatch(1)
    val initSucceeded = AtomicBoolean(false)
}

Step 2: Release the latch in your Initializer

In your app module, update your Jetpack App Startup initializer to count down the latch the moment Firebase is ready:

import android.content.Context
import androidx.startup.Initializer
import com.google.firebase.FirebaseApp
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

class FirebaseInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        CoroutineScope(Dispatchers.IO).launch {
            try {
                FirebaseApp.initializeApp(context)
                FirebaseReadiness.initSucceeded.set(true)
            } catch (e: Exception) {
                // Log initialization failure
            } finally {
                // Always release the latch so we don't permanently block the main thread
                FirebaseReadiness.initLatch.countDown()
            }
        }
    }

    override fun dependencies(): List<Class<out Initializer<*>>> = emptyList()
}

Step 3: Block the injection until ready

Back in your core module, update your Hilt @Provides function to wait for the latch.

Crucially: Add a timeout. Never block the main thread indefinitely. If Firebase fails to initialize within 5 seconds, it is better to crash cleanly or provide a fallback than to trigger a guaranteed ANR. Firebase's own Utils.awaitEvenIfOnMainThread() caused 100% reproducible ANRs by doing exactly this — blocking without a reasonable timeout.

import com.google.firebase.remoteconfig.FirebaseRemoteConfig
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.components.SingletonComponent
import java.util.concurrent.TimeUnit
import javax.inject.Singleton

@Module
@InstallIn(SingletonComponent::class)
object RemoteConfigModule {

    @Provides
    @Singleton
    fun providesFirebaseRemoteConfig(): FirebaseRemoteConfig {
        // Wait up to 5 seconds for the background initializer to finish
        val isReady = FirebaseReadiness.initLatch.await(5, TimeUnit.SECONDS)

        check(isReady && FirebaseReadiness.initSucceeded.get()) {
            "Firebase initialization timed out or failed in background coroutine."
        }

        return FirebaseRemoteConfig.getInstance()
    }
}

A Note on CountDownLatch and Hilt's DoubleCheck

There is a subtle deadlock risk here. Hilt resolves @Singleton-scoped providers through DoubleCheck.get(), which uses synchronized. If your latch producer also needs a scoped dependency from the same Hilt component, you can deadlock: thread A holds the DoubleCheck lock waiting on the latch, thread B needs the DoubleCheck lock to produce the latch value.

Our FirebaseReadiness object avoids this entirely — it is a plain Kotlin object with no DI involvement. The latch is released from a coroutine that has no dependency on any Hilt-provided object.


7. The Connective Tissue: Why This Article Exists

Domain                                       | Documentation status
ART compiler-filter=verify behavior          | Well-documented in AOSP, never connected to Play Store
Firebase initialization race conditions      | Widely reported on GitHub, root cause left vague
Pre-launch report "cannot reproduce" crashes | Anecdotally common in forums and issue trackers, no systematic analysis

The closest anyone has gotten:

  • FlutterFire #8837 documents 7.5-second Firebase init in pre-launch but doesn't identify verify mode as the cause.
  • Redex #528 documents Firebase/GMS classes like com.google.firebase.iid.zzac triggering "Class failed lock verification and will run slower" — with a measured 200-300ms startup hit. This is the missing link: classes that fail soft verification in ART fall back to interpreted execution, creating the exact timing expansion we describe. The Android team's own article on mitigating soft verification issues documents up to 22% degradation on a Nexus 5X.
  • Google Issue Tracker #160907013 has developers asking Google to fix "pre-launch report false positives." No explanation of why they occur.
  • android/tuningfork #42 shows a native crash reproducible in Firebase Test Lab but not on dev devices — the same pattern, different layer of the stack.

Nobody wrote the article that connects all five: R8 restructures your code. verify mode slows it down. Firebase init moves to a background coroutine. Hilt resolves synchronously. The race window expands from invisible to catastrophic.

Until now.


The Takeaway

When the Play Store Pre-Launch crawler boots your app in verify mode, the CountDownLatch absorbs the timing variance. If the JIT compiler stalls the main thread, the latch waits. If the emulated CPU is starved for cycles and the coroutine takes 200ms to launch, the latch waits.

The Play Store pre-launch environment runs your app in a fundamentally different way than your development machine. R8 full mode restructures your code, and verify compilation mode changes execution timing. Together, they expose initialization race conditions that are invisible locally.

The fix is not to suppress the crashes, but to eliminate the timing dependencies:

  • Use explicit initialization ordering — CountDownLatch or Jetpack App Startup's dependency graph.
  • Never block the main thread indefinitely during DI resolution — always use a timeout.
  • Test under verify mode locally before upload — adb shell cmd package compile -m verify -f your.package.
  • Regenerate Baseline Profiles every release — stale profiles against R8-restructured code are worse than no profile.
  • Cover your critical user journeys in the profile generator, not just startActivityAndWait().

This article is based on a real root cause analysis from a production Android app. The crash appeared during Play Store pre-review, was traced to a three-piece race condition between FirebaseInitProvider removal, background coroutine initialization, and synchronous Hilt DI resolution, and was fixed with the cross-module latch pattern described above.

About the Author 


Adit Lal is the CTO and Co-Founder of Travv World, with over 14 years of experience in Android development. When he isn't hunting down Heisenbugs, architecting reactive state machines at scale, or pushing the limits of Kotlin Multiplatform and Jetpack Compose, you can find him sharing mobile performance insights on X/Twitter and GitHub.



]]>
<![CDATA[Building StickerExplode(Part 1): Gestures, physics, and making stickers feel real]]>Part 1 of three. This one covers the gesture system, spring physics, peel-off animation, and die-cut rendering. Part 2 gets into the holographic shader, tilt sensing, haptics, and cross-platform architecture.


I built a sticker canvas app. You slap stickers on a surface, drag them around, pinch to resize, rotate with

]]>
https://aditlal.dev/building-stickerexplode-part-1-gestures-physics-and-making-stickers-feel-real/6999d5818b1b8505361c03fcSat, 21 Feb 2026 16:30:00 GMT

Part 1 of three. This one covers the gesture system, spring physics, peel-off animation, and die-cut rendering. Part 2 gets into the holographic shader, tilt sensing, haptics, and cross-platform architecture.


I built a sticker canvas app. You slap stickers on a surface, drag them around, pinch to resize, rotate with two fingers, and when you grab one it peels up like you're lifting real vinyl off a sheet. There's a holographic shimmer that responds to your phone's tilt, spring physics on everything, haptic feedback on every interaction.

I was watching Apple's WWDC sticker segment and thinking about how good the peel-and-stick interactions felt. Then I saw this tweet by Daniel Korpai:

And I thought: can I build that in Compose? Not a static grid. Something where stickers feel like physical objects you can grab, lift, and stick down. Shadows that grow as they rise. Shimmer when you tilt the phone.

It also felt like the right project to push Compose Multiplatform past the usual form-app demos. I wanted to hit the hard parts: platform sensors, native haptics, custom shaders, layered gestures. All shared between Android and iOS.

What it actually does

Tap the + button, and a bottom sheet slides up with 16 stickers: emoji, text, Canvas-drawn shapes, and Material icons on gradient backgrounds. Pick one, and it drops onto the canvas at a random position with a slight rotation.

From there, you can drag with one finger, pinch to resize (0.5x to 3x), rotate with two fingers, tap to bring a sticker to the front, or double-tap for a bouncy 2x zoom. Grab and release to feel the peel-off with haptics.

Every sticker has a white die-cut border (like a real vinyl sticker), a dynamic drop shadow that grows when you lift it, and an iridescent holographic shimmer that shifts as you tilt your phone.

There's also a history screen that logs every sticker you've placed, with relative timestamps. The entire canvas state (positions, rotations, scales, z-ordering, history) persists across app launches.

Tech stack

I kept the dependencies minimal:

Library                      What it does
Compose Multiplatform 1.7    UI across Android and iOS
Navigation Compose           Two screens: canvas and history
Lifecycle ViewModel          Shared state management with coroutine scope
DataStore Preferences        Persistence across launches
kotlinx.serialization        JSON for canvas state
Material 3                   Bottom sheet, FABs, top bar, icons

No image loading library. No third-party gesture library. No external animation framework. Every sticker is rendered with pure Compose: Text for emoji, Canvas for the Kotlin logo path, Icon in a gradient Box for the tool stickers. Zero image assets to manage.

Architecture at a glance

The architecture is simple on purpose. One CanvasViewModel owns two StateFlows, one for the sticker list and one for the history log. A CanvasRepository wraps DataStore for persistence. The view model loads saved state on init and debounce-saves after every change.
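As a rough sketch (the exact fields and init logic are my reading of the prose above, not the app's actual source), the view model might look like:

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.launch

// Illustrative shape of CanvasViewModel: two StateFlows, load-on-init,
// with every later mutation scheduling a debounced save (shown in Part 3).
class CanvasViewModel(private val repository: CanvasRepository) : ViewModel() {
    private val _stickers = MutableStateFlow<List<StickerItem>>(emptyList())
    val stickers: StateFlow<List<StickerItem>> = _stickers.asStateFlow()

    private val _history = MutableStateFlow<List<HistoryEntry>>(emptyList())
    val history: StateFlow<List<HistoryEntry>> = _history.asStateFlow()

    init {
        // Restore persisted canvas state once on startup
        viewModelScope.launch {
            repository.loadCanvasState()?.let { saved ->
                _stickers.value = saved.stickers
                _history.value = saved.history
            }
        }
    }
}
```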

commonMain/
  App.kt                    -- NavHost with two routes
  StickerCanvas.kt          -- Canvas composable with draggable stickers
  StickerTray.kt            -- Bottom sheet picker
  ShimmerGlow.kt            -- Holographic modifier node
  HistoryScreen.kt          -- History log
  model/                    -- StickerItem, StickerType, HistoryEntry
  data/                     -- CanvasRepository, DataStore factory
  viewmodel/                -- CanvasViewModel
  sensor/                   -- TiltSensor expect/actual
  haptics/                  -- HapticFeedback expect/actual

androidMain/                -- SensorManager, View haptics, AGSL shader
iosMain/                    -- CMMotionManager, UIKit haptics, fallback shader

Five expect/actual boundaries handle the platform differences:

  1. Tilt sensors (Android SensorManager vs iOS CMMotionManager)
  2. Haptic feedback (Android View.performHapticFeedback vs iOS UIImpactFeedbackGenerator)
  3. Holographic rendering (AGSL shader on Android 13+ vs ShaderBrush fallback)
  4. DataStore file paths
  5. System clock

The common code never imports anything from Android or iOS. Each platform implementation is a thin wrapper around native APIs.

The sticker data model

Each sticker on the canvas is a data class:

@Serializable
data class StickerItem(
    val id: Int,
    val type: StickerType,
    val initialFractionX: Float,
    val initialFractionY: Float,
    val rotation: Float = 0f,
    val offsetX: Float = Float.NaN,
    val offsetY: Float = Float.NaN,
    val pinchScale: Float = 1f,
    val zIndex: Float = 0f,
)

New stickers start with fractional coordinates (0..1 relative to canvas size) so the default layout works on any screen. Once you drag a sticker, pixel offsets take over. The composable checks for Float.NaN on first render to decide which positioning to use.

Z-ordering is a monotonically increasing counter. Tap a sticker and it gets the next value. Compose's .zIndex() modifier handles the rest.
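The fractional-vs-pixel decision is easy to express as a small pure function. This helper is my own illustration (the name and signature are not from the app's source); it shows the `Float.NaN` sentinel logic described above.

```kotlin
// Illustrative helper: a sticker positions itself from fractional coordinates
// until the first drag writes real pixel offsets. Float.NaN is the
// "never dragged" sentinel stored in StickerItem.
fun resolvePositionPx(
    offsetX: Float, offsetY: Float,         // NaN until the first drag
    fractionX: Float, fractionY: Float,     // 0..1 layout defaults
    canvasWidthPx: Float, canvasHeightPx: Float,
): Pair<Float, Float> =
    if (offsetX.isNaN() || offsetY.isNaN()) {
        fractionX * canvasWidthPx to fractionY * canvasHeightPx
    } else {
        offsetX to offsetY
    }
```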

What's coming in Part 2

Part 2 goes deep on two features: the peel-off grab (spring physics, dynamic shadows, layered gesture handling) and the holographic shimmer (thin-film optics, AGSL shader, cross-platform fallback).

Read Part 2: The peel-off effect and holographic shimmer


Built with Kotlin 2.1 and Compose Multiplatform 1.7.

]]>
<![CDATA[Building StickerExplode(Part 2): The peel-off effect and holographic shimmer]]>Part 2 of three. Part 1 covers what the app is and the tech stack. This one goes deep on two features. Part 3 is the full end-to-end build.


Two things people ask about most: the peel-off grab (sticker lifts with a shadow when you touch it) and the holographic

]]>
https://aditlal.dev/stickerexplode-part-2/6999db2e8b1b8505361c0436Sat, 21 Feb 2026 16:20:00 GMT

Part 2 of three. Part 1 covers what the app is and the tech stack. This one goes deep on two features. Part 3 is the full end-to-end build.


Two things people ask about most: the peel-off grab (sticker lifts with a shadow when you touch it) and the holographic shimmer (tilt your phone and stickers glimmer like foil cards). Both took more work than I expected.

Feature 1: The peel-off grab

When you touch a sticker, I wanted it to feel like peeling vinyl off a sheet. That means several things have to happen at once.

What happens when you grab a sticker

Four properties animate simultaneously the moment your finger touches down:

  1. The sticker scales up to 1.08x. It's coming toward you.
  2. It tilts -6 degrees on the X axis. The top edge lifts first, like peeling from the top.
  3. It translates up by 8 pixels. Physical lift.
  4. The drop shadow grows larger and shifts downward. Farther from the surface means a bigger, softer shadow.

When you release, everything reverses.

The gesture system: three pointerInput blocks

Each sticker needs drag, pinch, rotate, single-tap, and double-tap. Compose doesn't have one detector that does all of these, so each DraggableSticker composable chains three separate pointerInput blocks.

The first block handles drag, pinch, and rotate:

.pointerInput(Unit) {
    detectTransformGestures { _, pan, zoom, gestureRotation ->
        if (!isDragging) {
            isDragging = true
            haptics.perform(HapticType.LightTap)
        }
        pinchScale = (pinchScale * zoom).coerceIn(0.5f, 3f)
        rotation += gestureRotation
        offset += pan
    }
}

detectTransformGestures gives you deltas each frame: how far the centroid moved (pan), how much fingers spread (zoom as a multiplier), and the angle change between two fingers (gestureRotation). The zoom multiplier means you multiply the existing scale, not set it. 1.1 means "10% bigger than before."

The second block detects the drop (all fingers lifted):

.pointerInput(Unit) {
    awaitPointerEventScope {
        while (true) {
            val event = awaitPointerEvent()
            if (event.changes.all { !it.pressed } && isDragging) {
                isDragging = false
                haptics.perform(HapticType.MediumImpact)
                onTransformChanged(offset.x, offset.y, pinchScale, rotation)
            }
        }
    }
}

Why? detectTransformGestures fires during the gesture but has no onGestureEnd callback. This raw pointer watcher fills that gap. It checks if every pointer has lifted while we were dragging. That's the "drop" moment, which triggers the heavier haptic and persists the new position.

The third block handles taps:

.pointerInput(Unit) {
    detectTapGestures(
        onTap = {
            haptics.perform(HapticType.SelectionClick)
            onTapped()
        },
        onDoubleTap = {
            haptics.perform(HapticType.MediumImpact)
            isZoomedIn = !isZoomedIn
        },
    )
}

Tap detection needs its own block because it watches for a quick down-then-up without movement. If you mixed it into the transform handler, every drag would also register as a tap.

Why don't these conflict? Each pointerInput block runs in its own coroutine. They process the same event stream concurrently. The transform handler waits for movement or a second finger before claiming the gesture. The tap handler waits for a quick lift. If you drag, transforms win. If you tap, taps win. Block 2 is passive: it never consumes events, just watches.

Spring physics: why not tween

Every animation in the peel effect uses spring() instead of a duration-based tween(). The difference matters.

A tween(durationMillis = 300) has a fixed timeline. If you interrupt it (grab a sticker mid-bounce from a previous drop), the animation restarts awkwardly. Springs are velocity-aware. When interrupted, they pick up the current position and velocity and continue naturally. You can grab a bouncing sticker and it doesn't stutter.

Compose's spring() takes two parameters:

  • dampingRatio: below 1.0 = bouncy (undershoots target, oscillates). 1.0 = fastest path without overshoot. Above 1.0 = sluggish.
  • stiffness: how fast it responds. Higher = snappier.

The peel-off effect uses four springs with slightly different parameters:

// Scale: bouncy pop
val peelScale by animateFloatAsState(
    targetValue = if (isDragging) 1.08f else 1f,
    animationSpec = spring(dampingRatio = 0.55f, stiffness = 300f),
)

// Tilt: slightly bouncier, slightly slower
val peelRotationX by animateFloatAsState(
    targetValue = if (isDragging) -6f else 0f,
    animationSpec = spring(dampingRatio = 0.5f, stiffness = 250f),
)

// Lift: less bouncy, smooth
val peelTranslateY by animateFloatAsState(
    targetValue = if (isDragging) -8f else 0f,
    animationSpec = spring(dampingRatio = 0.6f, stiffness = 300f),
)

// Shadow: nearly critically damped, no bounce
val liftFraction by animateFloatAsState(
    targetValue = if (isDragging) 1f else 0f,
    animationSpec = spring(dampingRatio = 0.7f, stiffness = 200f),
)

The staggered damping is intentional. Scale (0.55) overshoots more than tilt (0.50), which overshoots more than translate (0.60). They all start at the same instant but settle differently. The sticker pops up, then tilts, then the lift smooths out. If you're not looking for it you won't notice the stagger consciously, but the motion feels less robotic than if everything animated identically.

Applying the transforms

These four values feed into graphicsLayer:

.graphicsLayer {
    scaleX = combinedScale * peelScale
    scaleY = combinedScale * peelScale
    rotationZ = rotation           // user's pinch rotation
    rotationX = peelRotationX      // peel tilt
    translationY = peelTranslateY  // peel lift
    cameraDistance = 12f * density
}

cameraDistance = 12f * density matters. rotationX tilts the sticker in 3D space. Without setting the camera distance, the default perspective makes the tilt look warped and flat. Pushing the camera back gives a subtler, more physical-looking tilt.

The dynamic shadow

The shadow is what makes the peel effect actually convincing. It's driven by liftFraction (0 when resting, 1 when fully lifted):

val shadowAlpha = 0.06f + liftFraction * 0.06f                            // darker when lifted
val shadowSpread = outlinePx + (1.dp.toPx() + liftFraction * 3.dp.toPx()) // wider when lifted
val shadowOffsetY = 1.dp.toPx() + liftFraction * 3.dp.toPx()              // shifts down when lifted

Real contact shadows work this way. An object resting on a surface casts a tight, dark shadow. Lift it and the shadow gets softer, wider, and offsets downward. The interpolation from liftFraction handles the transition smoothly.


Feature 2: The holographic shimmer

Tilt your phone and every sticker glimmers with an iridescent effect. Building this meant understanding some actual optics.

The three layers

Real holographic foil gets its look from three optical phenomena happening at once. I simulate all three and composite them.

First, thin-film iridescence. When light hits a thin transparent film (soap bubble, oil slick, holographic foil), some reflects off the top surface and some off the bottom. These two reflected waves interfere. At certain angles, specific wavelengths add up (constructive interference) and others cancel (destructive interference). Tilting the surface changes which wavelengths survive. That's why you see shifting rainbow bands.

In code, I approximate this with phase-shifted cosine waves:

vec3 iridescence(float t) {
    float phase = t * 6.2832;  // 2*PI
    return vec3(
        0.7 + 0.3 * cos(phase + 0.0),     // red
        0.7 + 0.3 * cos(phase + 2.094),    // green, 120° offset
        0.75 + 0.25 * cos(phase + 4.189)   // blue, 240° offset
    );
}

Each color channel peaks at a different value of t. As t varies across the sticker (driven by tilt and pixel position), you get a color sweep through lavender, sky blue, mint, cream, blush. The 0.7 + 0.3*cos() range keeps everything pastel. Real holographic foil is desaturated, not neon.
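Since this is plain trigonometry, the function can be ported to Kotlin to sanity-check the pastel claim on the JVM (the port is for inspection only; the shader stays in AGSL):

```kotlin
import kotlin.math.cos

// Kotlin port of the shader's iridescence() with the same constants.
// Returns (r, g, b), each in 0..1.
fun iridescence(t: Float): Triple<Float, Float, Float> {
    val phase = t * 6.2832f // 2*PI
    return Triple(
        0.7f + 0.3f * cos(phase + 0.0f),    // red
        0.7f + 0.3f * cos(phase + 2.094f),  // green, 120 degrees offset
        0.75f + 0.25f * cos(phase + 4.189f) // blue, 240 degrees offset
    )
}
```

Every channel stays in roughly the 0.4 to 1.0 band, which is why the sweep never hits a saturated primary.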

Second, specular reflection. A bright spot where light bounces directly toward your eye. On holographic foil this is a white glint that slides around as you tilt.

float2 specCenter = float2(0.5 + roll * 0.4, 0.5 + pitch * 0.4);
float dist = distance(uv, specCenter);
float specular = exp(-dist * dist * 8.0);

Gaussian falloff centered on a tilt-driven point. Bright in the middle, fades smoothly.

Third, Fresnel edge glow. Surfaces are more reflective at grazing angles. Look straight down at water and you see through it. Look across at a shallow angle and it's a mirror. On stickers, the edges glow slightly brighter.

float edgeDist = distance(uv, float2(0.5, 0.5)) * 2.0;
float fresnel = pow(clamp(edgeDist, 0.0, 1.0), 2.5);

Distance from center approximates viewing angle. The pow(_, 2.5) exponent concentrates the effect at the edges.

The AGSL shader (Android 13+)

On Android API 33+, all three layers run in one AGSL shader on the GPU:

uniform float2 resolution;
uniform float2 tilt;  // roll, pitch, each -1..1

half4 main(float2 fragCoord) {
    float2 uv = fragCoord / resolution;
    float roll = tilt.x;
    float pitch = tilt.y;
    float tiltMag = clamp(length(tilt), 0.0, 1.0);

    // Iridescent sweep, angle rotates with roll
    float angle = radians(45.0 + roll * 45.0);
    float2 dir = float2(cos(angle), sin(angle));
    float gradientT = dot(uv - 0.5, dir) + 0.5 + pitch * 0.3;
    vec3 iriColor = iridescence(gradientT);
    float iriAlpha = 0.12 + tiltMag * 0.06;

    // Specular glint, tracks tilt position
    float2 specCenter = float2(0.5 + roll * 0.4, 0.5 + pitch * 0.4);
    float dist = distance(uv, specCenter);
    float specular = exp(-dist * dist * 8.0);
    float specAlpha = specular * 0.25;

    // Fresnel edge glow
    float edgeDist = distance(uv, float2(0.5, 0.5)) * 2.0;
    float fresnel = pow(clamp(edgeDist, 0.0, 1.0), 2.5);
    float fresnelAlpha = fresnel * (0.03 + tiltMag * 0.06);

    // Composite
    vec3 color = iriColor * iriAlpha
               + vec3(1.0) * specAlpha
               + vec3(1.0) * fresnelAlpha;
    float alpha = iriAlpha + specAlpha + fresnelAlpha;

    return half4(half3(color), half(clamp(alpha, 0.0, 0.45)));
}

The clamp(alpha, 0.0, 0.45) cap prevents the overlay from washing out the sticker content. Where all three layers overlap, combined alpha could hit near 1.0 without the cap.

The gradient direction rotates with roll. At rest it runs diagonal (45 degrees). Tilt left and it goes horizontal. Tilt right and it goes vertical. dot(uv - 0.5, dir) projects pixel position onto the direction vector. pitch * 0.3 shifts the gradient based on forward/back tilt.
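The projection is easier to see pulled out of the shader. This is a Kotlin port of the gradient coordinate for illustration (the function name is mine):

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin

// Projects a pixel's uv position (0..1) onto the tilt-rotated sweep
// direction, mirroring the shader's gradientT computation.
fun gradientT(u: Float, v: Float, roll: Float, pitch: Float): Float {
    val angle = (45f + roll * 45f) * (PI.toFloat() / 180f)
    val dirX = cos(angle)
    val dirY = sin(angle)
    // dot(uv - 0.5, dir) + 0.5, shifted by forward/back tilt
    return (u - 0.5f) * dirX + (v - 0.5f) * dirY + 0.5f + pitch * 0.3f
}
```

At rest the center pixel sits exactly at t = 0.5; tilting forward (pitch = 1) shifts the whole gradient by 0.3.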

The fallback (iOS + older Android)

No AGSL on iOS or Android below 33. The fallback uses three Compose ShaderBrush subclasses that produce the same visual with LinearGradientShader and RadialGradientShader:

class HolographicFallbackNode(
    override var tiltState: State<TiltData>,
) : HolographicBaseNode() {
    override fun ContentDrawScope.draw() {
        drawContent()

        val tilt = tiltState.value
        val roll = tilt.roll.coerceIn(-1f, 1f)
        val pitch = tilt.pitch.coerceIn(-1f, 1f)
        val tiltMagnitude = min(sqrt(roll * roll + pitch * pitch), 1f)
        val fresnelAlpha = 0.03f + tiltMagnitude * 0.06f

        // Layer 1: Iridescent gradient
        drawRect(
            brush = IridescentBrush(angleDeg = 45f + roll * 45f,
                offset = Offset(roll * size.width * 0.4f, pitch * size.height * 0.4f)),
            alpha = 0.12f + tiltMagnitude * 0.06f,
            blendMode = BlendMode.SrcAtop,
        )

        // Layer 2: Specular glint
        drawRect(
            brush = SpecularBrush(0.5f + roll * 0.4f, 0.5f + pitch * 0.4f),
            alpha = 0.25f,
            blendMode = BlendMode.Screen,
        )

        // Layer 3: Fresnel edge glow
        drawRect(
            brush = FresnelBrush(intensity = fresnelAlpha * 2.5f),
            alpha = fresnelAlpha,
            blendMode = BlendMode.SrcAtop,
        )
    }
}

Three draw calls instead of one GPU pass. The AGSL version is smoother during fast tilts, but the fallback looks good in practice.

How the shader gets wired into Compose

The holographic effect is a single modifier call: .holographicShine(tiltState). Under the hood it uses Compose's modifier node system:

fun Modifier.holographicShine(tiltState: State<TiltData>): Modifier =
    this then HolographicShineElement(tiltState)

private data class HolographicShineElement(
    val tiltState: State<TiltData>,
) : ModifierNodeElement<HolographicBaseNode>() {
    override fun create(): HolographicBaseNode = createHolographicNode(tiltState)
    override fun update(node: HolographicBaseNode) {
        node.tiltState = tiltState
    }
}

createHolographicNode() is an expect/actual function. Android returns the AGSL node on API 33+ and the fallback on older versions. iOS always returns the fallback.

The node reads tiltState during drawing. When the tilt value changes, Compose invalidates the draw pass for that node only. No recomposition of the composable tree. This matters because the tilt sensor fires 30 times per second on every sticker simultaneously.
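For completeness, here's a sketch of what the Android AGSL node might look like (API 33+). The class and the HOLO_SHADER_SRC constant are illustrative; RuntimeShader and its setFloatUniform calls are real platform APIs.

```kotlin
import android.graphics.RuntimeShader
import androidx.annotation.RequiresApi
import androidx.compose.runtime.State
import androidx.compose.ui.graphics.BlendMode
import androidx.compose.ui.graphics.ShaderBrush
import androidx.compose.ui.graphics.drawscope.ContentDrawScope

// Sketch: HOLO_SHADER_SRC holds the AGSL source from above.
@RequiresApi(33)
class HolographicAgslNode(
    override var tiltState: State<TiltData>,
) : HolographicBaseNode() {
    private val runtimeShader = RuntimeShader(HOLO_SHADER_SRC)

    override fun ContentDrawScope.draw() {
        drawContent()
        val tilt = tiltState.value  // draw-phase read: tilt changes invalidate draw only
        runtimeShader.setFloatUniform("resolution", size.width, size.height)
        runtimeShader.setFloatUniform("tilt", tilt.roll, tilt.pitch)
        // One GPU pass composites all three layers over the sticker content
        drawRect(brush = ShaderBrush(runtimeShader), blendMode = BlendMode.SrcAtop)
    }
}
```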


Next up

Part 3 covers everything else: the die-cut outline rendering technique, tilt sensor bridging across platforms, the haptic feedback system, DataStore persistence with debounced saves, and the full expect/actual architecture.

Read Part 3: The full end-to-end build


Built with Kotlin 2.1 and Compose Multiplatform 1.7.

]]>
<![CDATA[Building StickerExplode (Part 3): The full end-to-end build]]>Part 3 of three. Part 1 covers what the app is. Part 2 goes deep on the peel-off effect and holographic shimmer. This one walks through everything else: die-cut rendering, tilt sensors, haptics, persistence, and the cross-platform architecture.


StickerExplode demo

Part 2 covered the peel-off grab and holographic shimmer. But a sticker

]]>
https://aditlal.dev/stickerexplode-part-3/6999dc7f8b1b8505361c044cSat, 21 Feb 2026 16:15:00 GMT

Part 3 of three. Part 1 covers what the app is. Part 2 goes deep on the peel-off effect and holographic shimmer. This one walks through everything else: die-cut rendering, tilt sensors, haptics, persistence, and the cross-platform architecture.


Part 2 covered the peel-off grab and holographic shimmer. But a sticker canvas that feels complete needs more than two cool effects. Die-cut outlines that work on any shape. Tilt sensors bridged across platforms. Haptic feedback mapped to the right gestures. Canvas state that survives app restarts. All wired together through five expect/actual boundaries.

This post is the rest of the build.

Die-cut outlines: the stamp technique

Real vinyl stickers have a white border where they're cut from the backing sheet. In StickerExplode, every sticker gets this treatment. The challenge: stickers aren't simple rectangles. They're emoji, Canvas-drawn paths, icons. Arbitrary shapes.

How it works

The stickerCutout modifier uses drawWithContent to draw the content multiple times with different effects layered underneath:

  1. Draw the content 16 times, each offset in a different direction around a circle, all tinted black. This creates the shadow.
  2. Draw the content 16 more times, offset around a circle at a smaller radius, all tinted white. This creates the outline.
  3. Draw the actual content on top.

private fun Modifier.stickerCutout(
    outlineWidth: Dp = 3.dp,
    liftFraction: Float = 0f,
) = this.drawWithContent {
    val outlinePx = outlineWidth.toPx()
    val pad = outlinePx * 3
    val layerBounds = Rect(-pad, -pad, size.width + pad, size.height + pad)

    // Shadow properties change with lift
    val shadowAlpha = 0.06f + liftFraction * 0.06f
    val shadowSpread = outlinePx + (1.dp.toPx() + liftFraction * 3.dp.toPx())
    val shadowOffsetY = 1.dp.toPx() + liftFraction * 3.dp.toPx()

    // Shadow: 16 copies, black tint
    val shadowPaint = Paint().apply {
        colorFilter = ColorFilter.tint(
            Color.Black.copy(alpha = shadowAlpha / 2f), BlendMode.SrcIn
        )
    }
    for (i in 0 until 16) {
        val angle = (2.0 * PI * i / 16).toFloat()
        val dx = shadowSpread * cos(angle)
        val dy = shadowSpread * sin(angle) + shadowOffsetY
        drawIntoCanvas { canvas ->
            canvas.save()
            canvas.translate(dx, dy)
            canvas.saveLayer(layerBounds, shadowPaint)
        }
        drawContent()
        drawIntoCanvas { canvas ->
            canvas.restore()
            canvas.restore()
        }
    }

    // White outline: 16 copies, white tint
    val whitePaint = Paint().apply {
        colorFilter = ColorFilter.tint(Color.White, BlendMode.SrcIn)
    }
    for (i in 0 until 16) {
        val angle = (2.0 * PI * i / 16).toFloat()
        val dx = outlinePx * cos(angle)
        val dy = outlinePx * sin(angle)
        drawIntoCanvas { canvas ->
            canvas.save()
            canvas.translate(dx, dy)
            canvas.saveLayer(layerBounds, whitePaint)
        }
        drawContent()
        drawIntoCanvas { canvas ->
            canvas.restore()
            canvas.restore()
        }
    }

    drawContent()
}

Why 16 copies

With n copies evenly spaced at radius r, the maximum gap between adjacent stamps is 2r * sin(PI/n). At 16 copies and 3dp radius:

gap = 2 * 3dp * sin(PI/16) ≈ 1.17dp

Sub-pixel on any phone screen. The outline looks perfectly smooth. 8 copies leaves visible scalloping at the corners. 32 looks the same as 16 but doubles the draw calls.
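The chord-length bound is easy to check in Kotlin (the helper name is illustrative):

```kotlin
import kotlin.math.PI
import kotlin.math.sin

// With n stamps evenly spaced on a circle of radius r, adjacent stamps
// are 2 * r * sin(PI / n) apart -- the worst-case gap in the outline.
fun maxStampGap(copies: Int, radiusDp: Float): Float =
    2f * radiusDp * sin(PI / copies).toFloat()
```

At 16 copies and 3dp the gap is about 1.17dp; at 8 copies it jumps past 2dp, which is where the scalloping becomes visible.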

BlendMode.SrcIn

BlendMode.SrcIn is why this works on arbitrary shapes. ColorFilter.tint(Color.White, BlendMode.SrcIn) replaces every opaque pixel with white while preserving the alpha channel. Transparent areas stay transparent. The white stamp exactly matches the shape of whatever content you drew, whether it's a heart emoji, the Kotlin logo path, or a gradient-filled rounded rectangle.

Same trick for the shadow: tint with semi-transparent black instead of white.

Dynamic shadow tied to lift

The shadow parameters interpolate with liftFraction from the peel-off effect (covered in Part 2). Resting stickers have a tight, faint shadow. Lifted stickers have a wider, darker shadow offset further down. This connects the die-cut rendering to the drag interaction. They're two separate systems but liftFraction ties them together.

Tilt sensing across platforms

The holographic shimmer (Part 2) needs real-time tilt data from the device's motion sensors. Android and iOS have completely different sensor APIs.

The common interface

data class TiltData(val pitch: Float = 0f, val roll: Float = 0f)

expect class TiltSensorProvider {
    fun start(callback: (TiltData) -> Unit)
    fun stop()
}

@Composable
expect fun rememberTiltSensorProvider(): TiltSensorProvider

Both pitch and roll are normalized to [-1, 1], where the extremes are 90-degree tilts. The holographic shader only sees this normalized data. It doesn't know or care which platform it's on.

Android: SensorManager

actual class TiltSensorProvider(private val context: Context) {
    private var sensorManager: SensorManager? = null
    private var listener: SensorEventListener? = null

    actual fun start(callback: (TiltData) -> Unit) {
        val sm = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
        sensorManager = sm
        val sensor = sm.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR) ?: return

        val rotationMatrix = FloatArray(9)
        val orientation = FloatArray(3)

        listener = object : SensorEventListener {
            override fun onSensorChanged(event: SensorEvent) {
                SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
                SensorManager.getOrientation(rotationMatrix, orientation)
                val pitch = (orientation[1] / (PI / 2.0)).toFloat().coerceIn(-1f, 1f)
                val roll = (orientation[2] / (PI / 2.0)).toFloat().coerceIn(-1f, 1f)
                callback(TiltData(pitch, roll))
            }
            override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {}
        }
        sm.registerListener(listener, sensor, SensorManager.SENSOR_DELAY_GAME)
    }

    actual fun stop() {
        listener?.let { sensorManager?.unregisterListener(it) }
        listener = null
    }
}

TYPE_ROTATION_VECTOR is a fusion sensor. Android combines accelerometer, gyroscope, and magnetometer data internally. Much more stable than reading raw accelerometer values. SENSOR_DELAY_GAME (~20ms) is fast enough for smooth animation without burning the battery.

iOS: CMMotionManager

actual class TiltSensorProvider {
    private val motionManager = CMMotionManager()

    actual fun start(callback: (TiltData) -> Unit) {
        if (!motionManager.isDeviceMotionAvailable()) return
        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdatesToQueue(
            NSOperationQueue.mainQueue
        ) { motion, _ ->
            motion?.let {
                val pitch = (it.attitude.pitch / (PI / 2.0)).toFloat().coerceIn(-1f, 1f)
                val roll = (it.attitude.roll / (PI / 2.0)).toFloat().coerceIn(-1f, 1f)
                callback(TiltData(pitch, roll))
            }
        }
    }

    actual fun stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}

Core Motion gives Euler angles in radians. Dividing by PI/2 normalizes to [-1, 1]. 30 Hz update rate.

Spring-smoothed sensor data

Raw sensor readings are jittery. Instead of writing a manual low-pass filter, I feed the values into Compose's spring animation:

@Composable
fun rememberTiltState(): State<TiltData> {
    val provider = rememberTiltSensorProvider()
    var rawPitch by remember { mutableStateOf(0f) }
    var rawRoll by remember { mutableStateOf(0f) }

    DisposableEffect(provider) {
        provider.start { data ->
            rawPitch = data.pitch
            rawRoll = data.roll
        }
        onDispose { provider.stop() }
    }

    val smoothPitch by animateFloatAsState(
        targetValue = rawPitch,
        animationSpec = spring(dampingRatio = 0.8f, stiffness = 200f),
    )
    val smoothRoll by animateFloatAsState(
        targetValue = rawRoll,
        animationSpec = spring(dampingRatio = 0.8f, stiffness = 200f),
    )

    return remember {
        derivedStateOf { TiltData(smoothPitch, smoothRoll) }
    }
}

This works because animateFloatAsState continuously animates toward its target. When sensor readings jump, the spring absorbs the noise. Damping of 0.8 (nearly critically damped) tracks the actual tilt closely without oscillating. Stiffness of 200 keeps latency to about 50ms, which is imperceptible.

A spring tracking a moving target is effectively a second-order low-pass filter, which is what a hand-written smoothing filter would approximate anyway. The spring spec just lets you tune it with two intuitive parameters instead of working out cutoff frequencies.
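To see the filtering behavior concretely, here's a toy discrete simulation of a damped spring chasing a noisy signal. The semi-implicit Euler integrator, step size, and noise model are my own illustration, not Compose's internal solver; the damping and stiffness loosely mirror the spec above.

```kotlin
import kotlin.math.sqrt

// Toy damped-spring tracker: each step, accelerate toward the latest noisy
// sample and damp the velocity. High-frequency jitter gets absorbed.
fun springSmooth(
    samples: FloatArray,
    stiffness: Float = 200f,
    dampingRatio: Float = 0.8f,
    dt: Float = 1f / 60f,
): FloatArray {
    var pos = samples.first()
    var vel = 0f
    val damping = 2f * dampingRatio * sqrt(stiffness)
    return FloatArray(samples.size) { i ->
        val accel = stiffness * (samples[i] - pos) - damping * vel
        vel += accel * dt
        pos += vel * dt
        pos
    }
}
```

Feed it a signal that flips between 0.4 and 0.6 every frame and the output settles near 0.5 with only a tiny ripple, which is exactly the jitter suppression the shimmer needs.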

derivedStateOf wraps the output so the shader sees a single State<TiltData> that updates smoothly.

Haptic feedback

Every interaction has a corresponding haptic. The common code defines four types:

enum class HapticType {
    LightTap,       // grab a sticker
    MediumImpact,   // drop a sticker, double-tap zoom
    HeavyImpact,    // reserved for future use
    SelectionClick, // tap to bring forward, open tray, pick from tray
}

expect class HapticFeedbackProvider {
    fun perform(type: HapticType)
}

Android implementation

actual class HapticFeedbackProvider(private val view: View) {
    actual fun perform(type: HapticType) {
        val constant = when (type) {
            HapticType.LightTap -> HapticFeedbackConstants.CLOCK_TICK
            HapticType.MediumImpact -> HapticFeedbackConstants.CONFIRM
            HapticType.HeavyImpact -> HapticFeedbackConstants.LONG_PRESS
            HapticType.SelectionClick -> {
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
                    HapticFeedbackConstants.GESTURE_START
                } else {
                    HapticFeedbackConstants.CONTEXT_CLICK
                }
            }
        }
        view.performHapticFeedback(constant)
    }
}

Uses View.performHapticFeedback(), which respects the user's system haptic settings. On API 34+, GESTURE_START gives a crisper click for selection actions.

iOS implementation

actual class HapticFeedbackProvider {
    private val lightGenerator = UIImpactFeedbackGenerator(
        style = UIImpactFeedbackStyle.UIImpactFeedbackStyleLight)
    private val mediumGenerator = UIImpactFeedbackGenerator(
        style = UIImpactFeedbackStyle.UIImpactFeedbackStyleMedium)
    private val heavyGenerator = UIImpactFeedbackGenerator(
        style = UIImpactFeedbackStyle.UIImpactFeedbackStyleHeavy)
    private val selectionGenerator = UISelectionFeedbackGenerator()

    actual fun perform(type: HapticType) {
        when (type) {
            HapticType.LightTap -> lightGenerator.impactOccurred()
            HapticType.MediumImpact -> mediumGenerator.impactOccurred()
            HapticType.HeavyImpact -> heavyGenerator.impactOccurred()
            HapticType.SelectionClick -> selectionGenerator.selectionChanged()
        }
    }
}

Generators are pre-allocated rather than created per call; Apple's docs recommend preparing generators ahead of time to avoid latency on the first trigger.

Why the mapping matters

The grab/drop pairing is the most important one. Light haptic on grab, medium on drop. It creates a physical narrative: you picked something up (light touch) and put it down (heavier thud). This is the same principle iOS uses for its own drag-and-drop haptics.

Selection clicks are for UI actions (tap to front, open tray, pick a sticker). They feel distinct from impact haptics, which are for physical interactions. Mixing them up makes the app feel wrong even if you can't articulate why.

State persistence

The entire canvas state persists across launches. Every sticker position, rotation, scale, z-index, the complete history log, and the ID/z counters.

The repository

@Serializable
data class CanvasState(
    val stickers: List<StickerItem> = emptyList(),
    val history: List<HistoryEntry> = emptyList(),
    val nextId: Int = 0,
    val zCounter: Int = 0,
)

class CanvasRepository(private val dataStore: DataStore<Preferences>) {
    private val json = Json { ignoreUnknownKeys = true }

    companion object {
        private val CANVAS_STATE_KEY = stringPreferencesKey("canvas_state")
    }

    suspend fun loadCanvasState(): CanvasState? {
        val prefs = dataStore.data.first()
        val raw = prefs[CANVAS_STATE_KEY] ?: return null
        return try {
            json.decodeFromString<CanvasState>(raw)
        } catch (_: Exception) { null }
    }

    suspend fun saveCanvasState(state: CanvasState) {
        dataStore.edit { prefs ->
            prefs[CANVAS_STATE_KEY] = json.encodeToString(state)
        }
    }
}

Everything is serialized as a single JSON string in one DataStore key. Not the most efficient storage format, but simple and debuggable (you can read the raw JSON if something goes wrong).

ignoreUnknownKeys = true on the JSON config is forward-compatibility insurance. If I add a new field to StickerItem in a future version, old persisted data still loads. Unknown keys get skipped, new fields get their default values.

Debounced saves

During a drag gesture, updateStickerTransform fires every frame (60 times per second). Serializing the full canvas state on every frame would thrash DataStore. Instead, saves are debounced with a 500ms delay:

private var saveJob: Job? = null

private fun debouncedSave() {
    saveJob?.cancel()
    saveJob = viewModelScope.launch {
        delay(500)
        repository.saveCanvasState(
            CanvasState(
                stickers = _stickers.value,
                history = _history.value,
                nextId = nextId,
                zCounter = zCounter,
            )
        )
    }
}

Every call cancels the previous pending save and schedules a new one. The actual write only happens when the user stops interacting for 500ms or drops the sticker (which also calls debouncedSave).

Platform-specific DataStore paths

DataStore needs a file path, which is platform-dependent.

// commonMain
const val DATA_STORE_FILE_NAME = "sticker_explode_prefs.preferences_pb"

fun createDataStore(producePath: () -> String): DataStore<Preferences> =
    PreferenceDataStoreFactory.createWithPath(
        produceFile = { producePath().toPath() }
    )

expect fun createPlatformDataStore(): DataStore<Preferences>

Android resolves the path from Activity.filesDir:

// androidMain
private lateinit var appDataStore: DataStore<Preferences>

fun initDataStore(filesDir: String) {
    if (!::appDataStore.isInitialized) {
        appDataStore = createDataStore { "$filesDir/$DATA_STORE_FILE_NAME" }
    }
}

actual fun createPlatformDataStore(): DataStore<Preferences> = appDataStore

iOS resolves from NSDocumentDirectory:

// iosMain
actual fun createPlatformDataStore(): DataStore<Preferences> {
    val directory = NSFileManager.defaultManager.URLForDirectory(
        directory = NSDocumentDirectory,
        inDomain = NSUserDomainMask,
        appropriateForURL = null,
        create = false,
        error = null,
    )!!.path!!
    return createDataStore { "$directory/$DATA_STORE_FILE_NAME" }
}

Android needs the extra initDataStore() step because filesDir comes from the Activity context. It gets called in MainActivity.onCreate(). iOS can resolve the documents directory statically.
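
The Android side of that wiring might look like the following sketch; the Activity shape and the `App()` composable name are assumptions, since the post only says the call happens in `MainActivity.onCreate()`:

```kotlin
// androidMain, hypothetical wiring sketch
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Must run before anything calls createPlatformDataStore()
        initDataStore(filesDir.absolutePath)
        setContent { App() }
    }
}
```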

The sticker tray

The picker is a Material 3 ModalBottomSheet with a LazyVerticalGrid of all 16 sticker types.


Each grid item has a press animation:

val interactionSource = remember { MutableInteractionSource() }
val isPressed by interactionSource.collectIsPressedAsState()
val scale by animateFloatAsState(
    targetValue = if (isPressed) 0.85f else 1f,
    animationSpec = spring(dampingRatio = 0.6f, stiffness = 400f),
)
val bgColor by animateColorAsState(
    targetValue = if (isPressed) Color(0xFFE8E8FF) else Color(0xFFF5F5FA),
)

Squish to 85% on press, spring back on release. Background tints purple at the same time. High stiffness (400) makes it snappy.

Each tray item renders the same StickerVisual composable used on the canvas, without the die-cut outline or holographic effect. What you see in the tray is what ends up on the canvas.

The ViewModel

CanvasViewModel owns two StateFlows and handles all mutations:

class CanvasViewModel(private val repository: CanvasRepository) : ViewModel() {

    private val _stickers = MutableStateFlow<List<StickerItem>>(emptyList())
    val stickers: StateFlow<List<StickerItem>> = _stickers.asStateFlow()

    private val _history = MutableStateFlow<List<HistoryEntry>>(emptyList())
    val history: StateFlow<List<HistoryEntry>> = _history.asStateFlow()

    private var nextId = 0
    private var zCounter = 0

    init {
        viewModelScope.launch {
            val saved = repository.loadCanvasState()
            if (saved != null && saved.stickers.isNotEmpty()) {
                _stickers.value = saved.stickers
                _history.value = saved.history
                nextId = saved.nextId
                zCounter = saved.zCounter
            } else {
                _stickers.value = defaultStickers
                nextId = defaultStickers.size
            }
        }
    }

    fun addSticker(type: StickerType) {
        val id = nextId++
        zCounter++
        val sticker = StickerItem(
            id = id,
            type = type,
            initialFractionX = 0.15f + Random.nextFloat() * 0.5f,
            initialFractionY = 0.2f + Random.nextFloat() * 0.4f,
            rotation = -15f + Random.nextFloat() * 30f,
            zIndex = zCounter.toFloat(),
        )
        _stickers.value = _stickers.value + sticker
        _history.value = _history.value + HistoryEntry(
            stickerType = type,
            timestampMillis = currentTimeMillis(),
        )
        debouncedSave()
    }

    fun updateStickerTransform(
        id: Int, offsetX: Float, offsetY: Float, scale: Float, rotation: Float,
    ) {
        _stickers.value = _stickers.value.map { s ->
            if (s.id == id) s.copy(
                offsetX = offsetX, offsetY = offsetY,
                pinchScale = scale, rotation = rotation,
            ) else s
        }
        debouncedSave()
    }

    fun bringToFront(id: Int) {
        zCounter++
        _stickers.value = _stickers.value.map { s ->
            if (s.id == id) s.copy(zIndex = zCounter.toFloat()) else s
        }
        debouncedSave()
    }
}

New stickers get random positions within the center area of the canvas (0.15..0.65 horizontal, 0.2..0.6 vertical) and random rotation between -15 and +15 degrees. This scatters them naturally instead of stacking them on top of each other.

Every mutation calls debouncedSave(). The view model doesn't know when or if the save actually happens. It just signals intent and the debounce logic handles the rest.

The five expect/actual boundaries

Stepping back, the full project has five places where common code delegates to platform code:

Boundary Common type Android iOS
Tilt sensor TiltSensorProvider SensorManager + TYPE_ROTATION_VECTOR CMMotionManager
Haptics HapticFeedbackProvider View.performHapticFeedback() UIImpactFeedbackGenerator
Holographic renderer HolographicBaseNode AGSL RuntimeShader (API 33+) or fallback Fallback only
DataStore path createPlatformDataStore() Activity.filesDir NSDocumentDirectory
System clock currentTimeMillis() System.currentTimeMillis() NSDate().timeIntervalSince1970 * 1000

Each boundary is a narrow interface. The common code defines what it needs (normalized tilt data, a perform(HapticType) function, a DataStore instance) and doesn't care how it's implemented. The platform code is a thin wrapper around native APIs.

None of these boundaries leaked during development. I never had to import an Android or iOS type in common code, and I never needed to pass platform context through the UI layer. The rememberTiltSensorProvider() and rememberHapticFeedback() composables handle context injection on each platform.
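
The system clock row is the smallest example of the pattern. Its expect/actual pair presumably looks something like this sketch (it spans three source sets, so it is a fragment rather than a single compilable file):

```kotlin
// commonMain
expect fun currentTimeMillis(): Long

// androidMain
actual fun currentTimeMillis(): Long = System.currentTimeMillis()

// iosMain
actual fun currentTimeMillis(): Long =
    (NSDate().timeIntervalSince1970 * 1000).toLong()
```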

What I'd change

Modifier nodes were worth the boilerplate. The holographic shimmer redraws 30 times per second on every sticker. Modifier.composed {} would have triggered recomposition each time. Modifier nodes run draw() directly in the draw pass. If I'd used the older API I think I would have hit performance problems pretty quickly.

I was about to write a manual low-pass filter for the tilt sensors before realizing that a nearly-critically-damped spring does the same thing. Ended up using springs for physics simulation, UI feedback, sensor smoothing, and state transitions. Two parameters, covers everything. They also handle interruption without any special state machines, which I didn't appreciate until I started grabbing stickers mid-bounce.

The ShaderBrush fallback works on iOS, but the AGSL version on Android is noticeably smoother during fast tilts. If I were starting this over, I'd look at Metal shaders for iOS.

There's also no undo/redo. The debounced save handles persistence fine but if you accidentally drag a sticker off screen, tough luck. A command stack would fix that.


The full project is about 800 lines of shared Compose code across commonMain. MIT licensed.

Built with Kotlin 2.1 and Compose Multiplatform 1.7. Tested on Pixel 8 Pro and iPhone 15 Pro.

Part 1 | Part 2 | Part 3

]]>
<![CDATA[Introducing Lumen: Transparent Coachmarks for Jetpack Compose]]>https://aditlal.dev/introducing-lumen-transparent-coachmarks-for-jetpack-compose/69837c532e24034344d3770aWed, 04 Feb 2026 17:18:44 GMT

Every coachmark library I tried felt incomplete—not customizable enough for what I needed.

Screenshot-based overlays that feel disconnected. Solid scrims that hide the actual UI. Z-index battles that never end.

I saw the gap. So I built Lumen.

Notice how the FAB remains tappable through the overlay:

Lumen renders real transparent cutouts in Jetpack Compose. Your buttons pulse. Your animations play. Your UI breathes—all visible through the spotlight.

What Makes Lumen Different

Genuine Transparent Cutouts

No screenshots. No faking it. Lumen renders real transparent regions in its scrim overlay.

Five Cutout Shapes

Shape Use Case
Circle FABs, icon buttons
RoundedRect Cards, text fields
Squircle iOS-style, modern apps
Star Gamification, achievements
Rect Full-width elements

Six Highlight Animations

Pulse · Glow · Ripple · Shimmer · Bounce · None

Multi-Step Sequences

Build complete onboarding flows with progress indicators. Users navigate forward and back at their own pace.

Dialog Coordination

Lumen automatically dismisses coachmarks when dialogs appear—no awkward z-index battles.

The API

Three steps. That's it.

1. Create a controller

val controller = rememberCoachmarkController()

2. Tag your target

CoachmarkHost(controller = controller) {
    IconButton(
        onClick = { /* ... */ },
        modifier = Modifier.coachmarkTarget(controller, "settings")
    ) {
        Icon(Icons.Default.Settings, "Settings")
    }
}

3. Show the coachmark

controller.show(
    CoachmarkTarget(
        id = "settings",
        title = "Settings",
        description = "Customize your preferences here.",
        shape = CutoutShape.Circle(),
    )
)

What You Can Build

Single Spotlight: Highlight a FAB with pulse animation
Multi-Step Tours: Full onboarding with progress dots

The sample app includes 11 interactive demos covering animations, connectors, theming, LazyColumn support, and dialog coordination.

Get Started

implementation("io.github.aldefy:lumen:1.0.0-beta01")

Open source under Apache 2.0. Available on Maven Central.

GitHub · Docs · Sample App

]]>
<![CDATA[Compose Performance Bottlenecks: Anti-Patterns That Ship Bugs]]>https://aditlal.dev/compose-bottleneck-antipatterns-performance/829594572bbd7ceae93a114bSat, 17 Jan 2026 10:02:35 GMT

Jetpack Compose is elegant. It's also a landmine.

The same reactive model that makes Compose declarative can silently swallow your coroutines, fire effects multiple times, and leave users staring at frozen spinners. I've seen these bugs ship to production—in my own apps and in code reviews across teams.

At Droidcon India 2025, I presented a talk on these patterns. This post goes deeper: the actual bugs, why Compose behaves this way internally, and how we fixed them in production at Equal AI.

The OTP That Never Verified: LaunchedEffect Self-Cancellation

This bug cost us a week of debugging. Users reported that SMS auto-fill "didn't work"—the OTP would populate, but verification never happened. The spinner just spun forever.

Here's the code that shipped:

var wasAutoFilled by remember { mutableStateOf(false) }

LaunchedEffect(wasAutoFilled) {
    if (!wasAutoFilled) return@LaunchedEffect

    wasAutoFilled = false   // Reset for next time
    delay(300)              // Small delay to feel natural
    onVerifyOTP(otpCode)    // Verify the OTP
    isVerifying = false     // Hide spinner
}

What we expected:

SMS fills OTP → Effect triggers → OTP verified → Done ✓

What actually happened:

SMS fills OTP → Effect triggers → Flag resets → CANCELLED ✗

Why This Happens: Compose's Recomposition Timing

Here's the timeline:

// 0ms
wasAutoFilled = true ← SMS arrives, LaunchedEffect starts
// ~1ms
wasAutoFilled = false ← KEY CHANGES, recomposition scheduled
// ~1ms
delay(300) ← coroutine suspends here...
// ~16ms — FRAME BOUNDARY
Recomposition runs → key changed → effect recreates
Old coroutine: CancellationException
onVerifyOTP() NEVER RUNS

The bug is subtle: changing the LaunchedEffect key schedules cancellation, but cancellation executes at the next suspension point. If you have no suspension, the code runs to completion before the frame boundary. Add a delay(), and you're dead.

The Fix: snapshotFlow Decouples Observation from Lifecycle

LaunchedEffect(Unit) {  // Key is Unit — NEVER changes
    snapshotFlow { wasAutoFilled }
        .filter { it }
        .collect {
            wasAutoFilled = false   // Safe! Just emits to the flow
            delay(300)              // Completes normally
            onVerifyOTP(otpCode)    // Actually executes
            isVerifying = false
        }
}

Why this works:

  • LaunchedEffect(Unit) starts once and never restarts
  • snapshotFlow observes state changes as Flow emissions
  • Changing wasAutoFilled emits a new value—it doesn't cancel the collector

The Shopping Cart That Lost Items: Mutable Collection Mutation

A user reported: "I added 5 items to my cart, but only 2 showed up." We checked the database—all 5 were there. The bug was in the UI.

var cartItems by remember {
    mutableStateOf(mutableListOf<CartItem>())
}

Button(onClick = {
    cartItems.add(newItem)  // Items added internally...
    // ...but UI never updates!
})

Why This Happens: Reference Equality

Compose uses reference equality to detect state changes. When you call cartItems.add(), you're mutating the same list object:

❌ Mutation
List@a1b2c3 → [2 items]
List@a1b2c3 → [3 items]
Same reference = no recomposition
✓ New list
List@a1b2c3 → [2 items]
List@d4e5f6 → [3 items]
New reference = recomposition ✓

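
The same rule is runnable outside Compose; a small demo (not from the post) that checks the references directly:

```kotlin
// Mutation keeps the same reference; `+` builds a new list object.
fun referenceEqualityDemo(): Pair<Boolean, Boolean> {
    val mutated = mutableListOf(1, 2)
    val before = mutated
    mutated.add(3)
    val sameRef = before === mutated     // true: Compose would skip recomposition

    val original = listOf(1, 2)
    val updated = original + 3
    val newRef = original !== updated    // true: Compose would recompose
    return sameRef to newRef
}
```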
The Fix: Immutable Updates or mutableStateListOf

// Option 1: Create new list
var cartItems by remember { mutableStateOf(listOf<CartItem>()) }
cartItems = cartItems + newItem  // New reference

// Option 2: Use Compose's observable list
val cartItems = remember { mutableStateListOf<CartItem>() }
cartItems.add(newItem)  // Automatically triggers recomposition

The Duplicate Snackbar: Events Are Not State

Error handling seemed simple:

// In ViewModel
var errorMessage by mutableStateOf<String?>(null)

// In Composable
LaunchedEffect(viewModel.errorMessage) {
    viewModel.errorMessage?.let { error ->
        snackbarHostState.showSnackbar(error)
    }
}

Bug: Rotate the device while the snackbar is showing. It shows again. And again on every configuration change.

Why This Happens

Configuration Change Timeline:
1. Activity recreates
2. ViewModel survives → errorMessage = "Save failed"
3. LaunchedEffect observes non-null
4. Snackbar shows AGAIN

The Fix: Use Channels for One-Time Events

// In ViewModel
private val _events = Channel<UiEvent>(Channel.BUFFERED)
val events = _events.receiveAsFlow()

// In Composable
LaunchedEffect(Unit) {
    viewModel.events.collect { event ->
        when (event) {
            is UiEvent.ShowError -> snackbarHostState.showSnackbar(event.message)
        }
    }
}

The Janky Scroll: State Read Too High

Performance profiling showed our list was recomposing on every frame during scroll. 60 recompositions per second.

@Composable
fun ProductListScreen() {
    val scrollState = rememberLazyListState()
    val showScrollToTop = scrollState.firstVisibleItemIndex > 5  // ← Read here!

    Column {
        TopBar(showScrollToTop)   // Recomposes on scroll
        ProductList(scrollState)  // Recomposes on scroll
        BottomNav()               // Recomposes on scroll (!)
    }
}

Why This Happens: Recomposition Scope

ProductListScreen (reads scrollState)
├── TopBar recomposes
├── ProductList recomposes
└── BottomNav recomposes
Every scroll = entire tree recomposes

The Fix: Push State Reads Down

@Composable
fun ScrollAwareTopBar(scrollState: LazyListState) {
    val showScrollToTop by remember {
        derivedStateOf { scrollState.firstVisibleItemIndex > 5 }
    }
    // Only THIS composable recomposes on scroll
    TopBar(showScrollToTop)
}

State Machines: Making Impossible States Impossible

All these bugs share a root cause: invalid state combinations that shouldn't exist.

// 4 booleans = 16 combinations. Valid states? Maybe 5.
data class CheckoutState(
    val isLoading: Boolean,
    val isError: Boolean,
    val isSuccess: Boolean,
    val isProcessingPayment: Boolean
)

The Fix: Sealed Interfaces

sealed interface CheckoutState {
    data object Idle : CheckoutState
    data object Loading : CheckoutState
    data class ProcessingPayment(val order: Order) : CheckoutState
    data class Success(val receipt: Receipt) : CheckoutState
    data class Error(val message: String) : CheckoutState
}

Now only valid states can exist. The compiler enforces exhaustive handling.
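
Here is what that exhaustiveness buys you, as a trimmed runnable sketch (`statusLabel` and the reduced state set are illustrative, not from the production code):

```kotlin
sealed interface CheckoutState {
    object Idle : CheckoutState
    object Loading : CheckoutState
    data class Error(val message: String) : CheckoutState
}

fun statusLabel(state: CheckoutState): String = when (state) {
    CheckoutState.Idle -> "Ready"
    CheckoutState.Loading -> "Processing"
    is CheckoutState.Error -> "Failed: ${state.message}"
    // No else branch: adding a new state turns this `when` into a
    // compile error until every call site handles it.
}
```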


Production Results

After implementing these patterns at Equal AI:

Metric Before After
Crash rate 0.4% 0.1%
ANR rate 0.2% 0.05%
"UI stuck" reports 23/week 3/week
Test coverage (state) 34% 89%

Resources

Slides: View on SpeakerDeck

Code: compose-patterns-playground

The playground includes interactive broken/fixed demos for all 12 anti-patterns.

Key Takeaways

  1. LaunchedEffect keys control lifecycle — Changing the key cancels the coroutine
  2. Compose uses reference equality — Mutating collections doesn't trigger recomposition
  3. Events are not state — Use Channel for one-time events
  4. State reads define recomposition scope — Read state as low as possible
  5. Sealed interfaces prevent impossible states — Boolean combinations explode

Compose isn't slow. Misusing Compose is slow. Learn the patterns, avoid the traps, ship fewer bugs.

Presented at Droidcon India 2025

]]>
<![CDATA[The OkHttp API You're Not Using]]>https://aditlal.dev/okhttp-network-observability-android/d08e645fa4b85b5257ce54e4Fri, 16 Jan 2026 15:08:17 GMT

The 45-Second Mystery

Last month, our Crashlytics lit up:

UnknownHostException: Unable to resolve host "api.example.com"
  Occurrences: 2,847
  Users affected: 1,203
  Context: ¯\_(ツ)_/¯

Backend team checked their dashboards: "API response time is 47ms p95. Not our problem."

They were right. The API was fast. But users were staring at spinners for 45 seconds before seeing "Something went wrong."

Where did those 45 seconds go?


The Visibility Gap

Here's what most Android apps measure:

What You Track What Actually Happens
Total request time Yes
HTTP status code Yes
DNS resolution time No
TCP handshake time No
TLS negotiation time No
Time waiting for first byte No
Which phase failed No

You're measuring the destination, but you're blind to the journey.

That UnknownHostException? It could mean:

  • DNS server unreachable (2-30 second timeout)
  • Domain doesn't exist (instant failure)
  • Network switched mid-request (random timing)
  • DNS poisoning in certain regions (varies)

Without phase-level visibility, you're debugging with a blindfold.


Where Time Actually Goes

We instrumented 50,000 requests across different network conditions. Here's what we found:

Good Network (WiFi, 4G LTE)

Phase P50 P95 P99
DNS Lookup 5ms 45ms 120ms
TCP Connect 23ms 89ms 156ms
TLS Handshake 67ms 142ms 203ms
Time to First Byte 52ms 187ms 412ms
Total 147ms 463ms 891ms

Degraded Network (3G, Poor Signal)

Phase P50 P95 P99
DNS Lookup 234ms 8,200ms 29,000ms
TCP Connect 456ms 2,100ms 5,600ms
TLS Handshake 312ms 890ms 1,400ms
Time to First Byte 178ms 1,200ms 3,400ms
Total 1,180ms 12,390ms 39,400ms

The culprit in our 45-second mystery? DNS timeout on degraded networks.

But we only discovered this after adding proper instrumentation.


The Logging Interceptor Trap

This is in 90% of Android codebases:

class LoggingInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val start = System.currentTimeMillis()
        val response = chain.proceed(chain.request())
        val duration = System.currentTimeMillis() - start

        Timber.d("Request took ${duration}ms") // <-- This number lies
        return response
    }
}

Why it lies:

Scenario What Happened What You See
3 retries due to connection reset 3 separate failures, then success "Request took 12,000ms"
Cache hit Instant response from disk "Request took 2ms" (good!)
Redirect chain (3 hops) 3 network round trips Single timing
DNS timeout + success 30s DNS, 200ms request "Request took 30,200ms"

You're seeing the outcome, not the story.


The OkHttp Timeout Trap

Here's a "reasonable" timeout configuration:

val client = OkHttpClient.Builder()
    .connectTimeout(10, TimeUnit.SECONDS)
    .readTimeout(30, TimeUnit.SECONDS)
    .writeTimeout(30, TimeUnit.SECONDS)
    .build()

Pop quiz: What's the maximum time a user could wait?

If you said 70 seconds, you're wrong. It's potentially infinite.

The Timeout Truth Table

Timeout What It Actually Controls Resets?
connectTimeout DNS + TCP + TLS combined No
readTimeout Max time between bytes Yes, per chunk
writeTimeout Max time between bytes Yes, per chunk
callTimeout Entire operation end-to-end No

A server trickling 1 byte every 25 seconds will never trigger your 30-second readTimeout. Each byte resets the clock.

callTimeout is the only timeout that represents actual user experience.

val client = OkHttpClient.Builder()
    .connectTimeout(15, TimeUnit.SECONDS)
    .readTimeout(30, TimeUnit.SECONDS)
    .writeTimeout(30, TimeUnit.SECONDS)
    .callTimeout(45, TimeUnit.SECONDS)  // <-- The one that matters
    .build()

The Solution: EventListener

OkHttp has a hidden API that most developers don't know exists. EventListener gives you callbacks for every phase of the request lifecycle.

Phase Events
Start callStart
DNS dnsStart → dnsEnd
TCP connectStart → connectEnd
TLS secureConnectStart → secureConnectEnd
Request connectionAcquired → requestHeadersStart → requestHeadersEnd → requestBodyStart → requestBodyEnd
Response responseHeadersStart → responseHeadersEnd → responseBodyStart → responseBodyEnd
Cleanup connectionReleased → callEnd

Production Implementation

class NetworkMetricsListener(
    private val onMetrics: (NetworkMetrics) -> Unit
) : EventListener() {

    private var callStart = 0L
    private var dnsStart = 0L
    private var connectStart = 0L
    private var secureConnectStart = 0L
    private var requestStart = 0L
    private var responseStart = 0L

    private var connectionReused = false

    override fun callStart(call: Call) {
        callStart = System.nanoTime()
    }

    override fun dnsStart(call: Call, domainName: String) {
        dnsStart = System.nanoTime()
    }

    override fun dnsEnd(call: Call, domainName: String, inetAddressList: List<InetAddress>) {
        // DNS complete - connection reuse skips this entirely
    }

    override fun connectStart(call: Call, inetSocketAddress: InetSocketAddress, proxy: Proxy) {
        connectStart = System.nanoTime()
    }

    override fun secureConnectStart(call: Call) {
        secureConnectStart = System.nanoTime()
    }

    override fun connectionAcquired(call: Call, connection: Connection) {
        connectionReused = (connectStart == 0L)  // No connect phase = reused
    }

    override fun requestHeadersStart(call: Call) {
        requestStart = System.nanoTime()
    }

    override fun responseHeadersStart(call: Call) {
        responseStart = System.nanoTime()
    }

    override fun callEnd(call: Call) {
        emitMetrics(success = true)
    }

    override fun callFailed(call: Call, ioe: IOException) {
        emitMetrics(success = false, error = ioe)
    }

    private fun emitMetrics(success: Boolean, error: IOException? = null) {
        val now = System.nanoTime()
        // Guard each phase: on reused connections or early failures some
        // timestamps are never set, and subtracting from 0L would yield garbage.
        onMetrics(NetworkMetrics(
            dnsMs = if (dnsStart > 0 && connectStart > 0) (connectStart - dnsStart).toMillis() else 0,
            connectMs = if (connectStart > 0 && secureConnectStart > 0) (secureConnectStart - connectStart).toMillis() else 0,
            tlsMs = if (secureConnectStart > 0 && requestStart > 0) (requestStart - secureConnectStart).toMillis() else 0,
            ttfbMs = if (requestStart > 0 && responseStart > 0) (responseStart - requestStart).toMillis() else 0,
            totalMs = (now - callStart).toMillis(),
            connectionReused = connectionReused,
            success = success,
            errorType = error?.javaClass?.simpleName
        ))
    }

    private fun Long.toMillis() = TimeUnit.NANOSECONDS.toMillis(this)
}
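
Wiring the listener into a client goes through `eventListenerFactory`. A sketch, assuming the metrics callback just logs; a factory creates one listener per call, which matters because the listener holds per-call timestamps:

```kotlin
val client = OkHttpClient.Builder()
    .eventListenerFactory(
        EventListener.Factory {
            // One listener instance per call: the timestamps are per-call state.
            NetworkMetricsListener { metrics ->
                Timber.d("net: $metrics")  // or forward to your analytics pipeline
            }
        }
    )
    .callTimeout(45, TimeUnit.SECONDS)
    .build()
```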

What You Get

Before EventListener:

Field Value
Duration 32,450ms
Error UnknownHostException
Context ???

After EventListener:

Phase Duration Status
DNS Lookup 30,120ms TIMEOUT
TCP Connect -- --
TLS Handshake -- --
TTFB -- --
Total 30,120ms
Error UnknownHostException
Failed Phase DNS

Now you know: The DNS resolver on this user's network is broken. Not your API. Not your code. Their ISP.


Level Up: Distributed Tracing with OpenTelemetry

EventListener tells you what happened on the client. But what about the full journey?

Android App → CDN → API Gateway → Service → Database

Where is the slowness?

With OpenTelemetry, you can trace a request from button tap to database query:

class TracingEventListener(
    private val tracer: Tracer
) : EventListener() {

    private var rootSpan: Span? = null

    override fun callStart(call: Call) {
        rootSpan = tracer.spanBuilder("HTTP ${call.request().method}")
            .setSpanKind(SpanKind.CLIENT)
            .setAttribute("http.url", call.request().url.toString())
            .startSpan()
    }

    override fun dnsStart(call: Call, domainName: String) {
        tracer.spanBuilder("DNS Lookup")
            .setParent(Context.current().with(rootSpan!!))
            .startSpan()
    }

    // ... create child spans for each phase

    override fun callEnd(call: Call) {
        rootSpan?.setStatus(StatusCode.OK)
        rootSpan?.end()
    }
}

The Trace Waterfall

Now you can answer: "Is it DNS, the network, or the backend?"

HTTP GET /api/users (Total: 1,247ms)

Phase Duration
DNS Lookup 45ms
TCP Connect 89ms
TLS Handshake 156ms
Request Send 12ms
Response Receive 945ms

Observability Stack Options

Solution Cost Setup Best For
Honeycomb Paid (20M events free) 5 min Best query experience
Grafana Cloud Free 50GB/mo 10 min Already using Grafana
Jaeger Free (self-host) 1-2 hrs Full control
Datadog Paid 15 min Enterprise, existing DD

Results

After implementing EventListener + OpenTelemetry in our production app:

Metric Before After Change
MTTR for network issues 4.2 hours 23 minutes -91%
"Network error" bug reports 847/week 312/week -63%
P95 false timeout errors 2.3% 0.4% -83%

The biggest win? We stopped blaming the backend for DNS problems.


Real Device Testing: The Proof

Theory is nice. Data is better. I built a test app and ran it on real devices to see what actually happens.

Test App: github.com/aldefy/okhttp-network-metrics

EventListener: See What Interceptors Can't

What Interceptor Sees: Request → Response (Total: 7362ms)
What EventListener Reveals: DNS: 5081ms · TCP: 1313ms · TLS: 964ms · TTFB: 7359ms

The Doze Mode Discovery

Doze Mode Recovery Timeline (Pixel 9 Pro Fold • JIO 4G)

Scenario Result
Immediately after doze exit TCP TIMEOUT (15,000ms)
After 5s wait, then retry 1,431ms ✓
After 30s, fully recovered 1,777ms ✓

Key insight: the Pixel fails immediately after doze exit because the radio is still waking up. Wait 5 seconds and the connection succeeds.

💡 Moto Razr behavior is the opposite: it works immediately, then loses network after 5s. Same error, different root cause.

Same error message. Completely different root causes. This is why EventListener matters.

Baseline Performance

Device Network Cold DNS TCP TLS TTFB Total
Pixel 9 Pro Fold JIO 4G 229ms 972ms 665ms 1989ms 1991ms
Moto Razr 40 Ultra Airtel 5081ms 1313ms 964ms 7359ms 7362ms

That 5-second DNS on Motorola? That's not a typo. Airtel's DNS is slow on first lookup.

Completely opposite behavior!

  • Pixel: Fails immediately post-doze, then recovers after 5 seconds
  • Moto: Works immediately, then loses network entirely

Without EventListener, both would show the same error. With it, you can see the Pixel fails at TCP (SocketTimeoutException), while Moto loses the network interface completely.


TL;DR

  1. Your logging interceptor is lying. It shows outcomes, not phases.

  2. callTimeout is the only timeout that matters for user experience.

  3. EventListener exists. Use it. You'll finally understand your network failures.

  4. Add distributed tracing if you need end-to-end visibility across services.

  5. DNS is usually the culprit for those mysterious 30+ second timeouts.


The UnknownHostException you've been catching with a generic error message? It deserves better. Your users certainly do.


What This Means For You

EventListener transforms network debugging from guesswork into science. Instead of:

"Users are reporting slow network. Maybe it's the backend?"

You get:

"Airtel users on Motorola devices have 5s DNS resolution. Jio users on Pixel fail TCP immediately after doze but recover in 5s. Backend is fine - it's carrier DNS and OS power management."

That's the difference between filing a ticket with your backend team and actually fixing the problem.

Your Action Items

  1. Add EventListener to your OkHttp client - 50 lines of code, infinite debugging value
  2. Log phase timings to your analytics - segment by carrier, device, network type
  3. Test doze recovery on YOUR devices - the behavior varies wildly
  4. Set callTimeout - it's the only timeout that reflects user experience

Try It Yourself

I open-sourced the test app: github.com/aldefy/okhttp-network-metrics

Run it on your devices. See what YOUR carrier's DNS looks like. Find out how YOUR devices recover from doze.

Download the test app — run Baseline + Post-Doze, and DM me your results on Twitter. I'll add your device to this post.

Because the network will always be unreliable - but now you can see exactly where and why.


The next time someone says "it's a network issue" you'll know which part of the network, on which carrier, on which device, under which conditions. That's not debugging - that's engineering.


Tags: Android, OkHttp, Kotlin, Network, Observability, OpenTelemetry

]]>
<![CDATA[Building a Design System with Jetpack Compose - Andromeda]]>https://aditlal.dev/design-systems-with-jetpack-compose/696a519f5bc9cc89ec009d74Tue, 08 Feb 2022 09:47:23 GMT

Building a Design System with Jetpack Compose - Andromeda


Feb 08, 2022

In today's world of modern Android development, a consistent user interface layer in our mobile apps is more critical than ever. And with the Jetpack Compose framework, building one has never been more fun and straightforward.

In this post, we look at building a complex Design system for our Android apps. What is a design system, one may ask? It is a set of standards to manage design at scale by reducing redundancy while creating a shared language and visual consistency across different pages and channels.

Design systems, when implemented well, provide many benefits to a design team, and in turn to the engineers on the team:

  • It can help create a visual consistency across products, channels, and different teams.
  • It can be a tailor-made solution based explicitly on product teams' requirements. An in-house Design System will adapt to the company's needs and not the other way around.
  • An open-source Design system allows high-quality and consistent products built with less effort in design and development as it is a ready-made solution waiting to be adopted.
  • Most importantly, it allows everyone on the team to create/reuse user interface components that give consistency to products, thereby bringing focus to a consistent user experience.

Google provides an excellent design-system framework called Material Design, which lets us start with simple yet powerful drop-in components that cover most everyday use cases. However, for a more complex real-world use case a question arises: what if the design system we use needs to be platform-independent, with most of the components, colors, and branding shared not just across Android apps but also across Web, iOS, and Desktop? In my opinion, Material Design does not fit perfectly in such cases.

What design system do you use with your Jetpack Compose apps? Also, what would you customise? #AndroidDev #JetpackCompose
— Adit Lal (@aditlal) January 26, 2022

Investing in an in-house design system can get very time-consuming, and you need dedicated members of your team to keep things updated and documented and to answer questions that arise during the development lifecycle - for example, when components need to be tweaked. The payoff is that once these components are built, it becomes very simple to rapidly build new feature screens. Anyone designing or building pages should have a very clear understanding of what is and isn't a "top-level" component, how interactions with these components work, how these components react to new semantics provided by call sites, and more.

So, today I introduce you to a brand new library for Jetpack Compose Andromeda - an open-source design system with custom components and a customizable theme that focuses on the following key areas:

  • The library should adhere to a clear design spec to ensure "top-level" components and their subcomponents variations are easy to use and adopt well to provided semantic colors and can be operated very easily as there would be documented guidelines.
  • We should be able to cross-reference components and their implementation via a Catalog app which would be ever-growing with new and improved versions as the library evolves.
  • The library should be clear and concise with easy solutions for components.
  • Have I mentioned that there is plenty of documentation for every tiny bit?
  • Add tools / CLI companion helpers to enable library users to scale the design and customize to need and build a brand with their own typography/colors/illustrations and/or icons representing the Tokens in this Design system.
  • Top-level components - the first-class citizens of the library - are basic stateful composables that just work out of the box; alongside them sit more complex, compound components that are use-case or feature specific.
  • Reusability and well-defined structure are vital.

I will be covering more such details in the following posts of this blog series, which will detail the thought process behind building a custom design system for Jetpack Compose. Stay tuned; in the meanwhile, check out this snapshot of the Catalog app:

Catalog app showcasing a circular reveal and different components working closely with a custom theme.

Adit Lal

Adit Lal © 2022  •  Published with Ghost


]]>
<![CDATA[Learning from failures at scale]]>https://aditlal.dev/learning-from-failures-at-scale/696a519f5bc9cc89ec009d73Sat, 07 Nov 2020 12:22:01 GMT

Learning from failures at scale

testing•Nov 07, 2020

Some fun and interesting bits of how we manage a super app at @gojek:

  • What goes into making and maintaining large projects.
  • How a developer caters to new features.
  • Some of the troubles they face.
  • How junior developers collaborate with senior developers.
  • What it takes to cut it and work with an amazing team.
  • How engineers troubleshoot, with some tips on managing tech debt and finding balance.
  • Avoiding burnout and enjoying the process.

Watch it here.


]]>
<![CDATA[Ad-hoc polymorphism in JSON with Kotlin]]>https://aditlal.dev/polymorphic-json/696a519f5bc9cc89ec009d71Sun, 02 Jun 2019 10:18:00 GMT

Ad-hoc polymorphism in JSON with Kotlin

json•Jun 02, 2019

For a long time now, JSON has been the de facto standard for all kinds of data serialization between client and server. Among its strengths are simplicity and human-readability. But with simplicity come some limitations, one of which I would like to talk about today: storing and retrieving polymorphic objects.

The need to parse JSON and also convert objects to JSON is pretty much universal, so in all likeliness, you are already using a JSON library in your code.

First off, there is the Awesome-Kotlin list of JSON libraries. Then, there are multiple articles like this one, talking about how to handle Kotlin data classes with JSON.

We want to use Kotlin data classes for concise code, non-nullable types for null-safety, and default arguments so the data class constructor works when a field is missing in a given JSON. We also want explicit exceptions when the mapping fails completely (a required field is missing), near-zero-overhead automatic mapping from JSON to objects and back, and - on Android - a small APK size, which means few and small dependencies. Therefore:

  • We don't want to use Android's org.json, because it has very limited capabilities and no mapping functionality at all.
  • To my knowledge, to make use of the described Kotlin features like null-safety and default arguments, all libraries with full Kotlin support use kotlin-reflect, which is around 2MB in size and therefore might not be an option.
  • We might not be able to use a library like Moshi with integrated Kotlin support, because the project already uses the popular Gson or Jackson library.

This post describes a way of using the Gson and Moshi libraries with Kotlin data classes, with the least overhead possible, to map polymorphic JSON to Kotlin data classes with null-safety and default values.

First we need to understand the kind of polymorphism in the data we are trying to parse.

Polymorphism by field value, aka discriminator - a discriminator helps detect the object type: an API can add the discriminator/propertyName keyword to model definitions. This keyword points to the property that specifies the data type.

Discriminator embedded in object -polymorphic classes

[
   {
      "type":"CIRCLE",
      "radius":10.0
   },
   {
      "type":"RECTANGLE",
      "width":20.0
   }
]

In this case, the type field acts as the discriminator: since only Circle has a radius field, the first object in the list will be deserialized into the Circle class.

One solution for this could be:

sealed class Shape
data class Circle(val radius: Double) : Shape()
data class Rectangle(val width: Double) : Shape()
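Before reaching for a library, the dispatch itself is easy to see in plain Kotlin. This is a library-free sketch of embedded-discriminator dispatch (the shapeFrom helper and the Map representation are mine, purely for illustration); a polymorphic JSON adapter makes essentially this decision internally:

```kotlin
sealed class Shape
data class Circle(val radius: Double) : Shape()
data class Rectangle(val width: Double) : Shape()

// Read the discriminator value, then delegate to the matching subtype.
fun shapeFrom(fields: Map<String, Any>): Shape = when (fields["type"]) {
    "CIRCLE" -> Circle(fields["radius"] as Double)
    "RECTANGLE" -> Rectangle(fields["width"] as Double)
    else -> error("Unknown shape type: ${fields["type"]}")
}

fun main() {
    val shapes = listOf(
        mapOf("type" to "CIRCLE", "radius" to 10.0),
        mapOf("type" to "RECTANGLE", "width" to 20.0)
    ).map(::shapeFrom)
    check(shapes == listOf(Circle(10.0), Rectangle(20.0)))
}
```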

Discriminator is external - polymorphic fields

[
   {
      "type":"CIRCLE",
      "data":{
         "radius":10.0
      }
   },
   {
      "type":"RECTANGLE",
      "data":{
         "width":20.0
      }
   }
]

In this case, since the discriminator is external, we need a mechanism to decide the data type and deserialize our JSON into the respective data classes.

Gson performs the serialization/deserialization of objects using its inbuilt adapters. It also supports custom adapters.

Imagine the API returns a list of family members of a few different types. There are a few dogs, cats, and some humans, and there is no particular order.

{
   "family":[
      {
         "id":"5c91012fdbd7835c6720a578",
         "members":[
            {
               "id":"5c91012f57e3c8f1f54499be",
               "type":"dog",
               "data":{
                  "photo":"http://placehold.it/32x32",
                  "name":"sit",
                  "tag":{
                     "id":"5c91012fb0ae1089c92057a4",
                     "city":"Manchester"
                  }
               }
            },
            {
               "id":"5c91012fb79ec88645ad7f69",
               "type":"cat",
               "data":{
                  "photo":"http://placehold.it/32x32",
                  "name":"tempor",
                  "color":"black"
               }
            },
            {
               "id":"5c91012fb2e05582cbb207da",
               "type":"human",
               "data":{
                  "photo":"http://placehold.it/32x32",
                  "name":"magna",
                  "sex":"male"
               }
            },
            {
               "id":"5c91012fa77bba8d3a2f7e1a",
               "type":"human",
               "data":{
                  "photo":"http://placehold.it/32x32",
                  "name":"aliqua",
                  "sex":"female"
               }
            }
         ]
      }
   ],
   "total_count":4
}

To parse this, we would:

Break the JSON down by type into subtypes; we have 3 subtypes - Dog, Cat, and Human.

const val HUMAN_TYPE = "human"
const val DOG_TYPE = "dog"
const val CAT_TYPE = "cat"

We need to register our JSON subtypes with Gson:

private fun buildGson(): Gson {
    return GsonBuilder()
        .registerTypeAdapterFactory(getTypeAdapterFactory())
        .create()
}

private fun getTypeAdapterFactory(): RuntimeTypeAdapterFactory<Data> {
    return RuntimeTypeAdapterFactory
        .of(Data::class.java, "type")
        .registerSubtype(DogData::class.java, DOG_TYPE)
        .registerSubtype(CatData::class.java, CAT_TYPE)
        .registerSubtype(HumanData::class.java, HUMAN_TYPE)
}

Each of these subtypes shares a few common parameters, which can be abstracted into our base type class:

sealed class Data(
    @SerializedName("name")
    val name: String = "",
    @SerializedName("photo")
    val photo: String = "",
    @SerializedName("type")
    val type: String
)

This sealed class becomes the base from which we classify our types.

name and photo are common to all 3 types; type becomes our discriminator that our JSON library can parse.

The following classes extend functionality from Data:

data class CatData(val color: String) : Data(type = CAT_TYPE)
data class DogData(val tag: Tag) : Data(type = DOG_TYPE)
data class HumanData(val sex: String) : Data(type = HUMAN_TYPE)

To deserialize our family response JSON, our call site would be:

familyResponse.members.forEach { familyMember ->
    when (familyMember.data.type) {
        DOG_TYPE -> Log.d(
            "TYPEConverter",
            "${familyMember.data.name} is a dog ${(familyMember.data as DogData).tag}"
        )
        CAT_TYPE -> Log.d(
            "TYPEConverter",
            "${familyMember.data.name} is a cat ${(familyMember.data as CatData).color}"
        )
        HUMAN_TYPE -> Log.d(
            "TYPEConverter",
            "${familyMember.data.name} is a human ${(familyMember.data as HumanData).sex}"
        )
    }
}
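Since the hierarchy is sealed, a when over the subtype gives smart casts and compiler-checked exhaustiveness, so the manual as casts can be avoided entirely. Here is a dependency-free sketch of that idea (the class and function names here are illustrative stand-ins, not the article's Gson-annotated classes):

```kotlin
sealed class PetData
data class DogData(val tag: String) : PetData()
data class CatData(val color: String) : PetData()
data class HumanData(val sex: String) : PetData()

// No casts needed: each branch smart-casts `d` to the matched subtype,
// and the compiler forces us to cover every subtype of the sealed class.
fun describe(d: PetData): String = when (d) {
    is DogData -> "dog tagged ${d.tag}"
    is CatData -> "${d.color} cat"
    is HumanData -> "${d.sex} human"
}

fun main() {
    check(describe(DogData("Manchester")) == "dog tagged Manchester")
    check(describe(CatData("black")) == "black cat")
    check(describe(HumanData("male")) == "male human")
}
```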

Similarly with Moshi:

private fun buildMoshi(): Moshi {
    return Moshi.Builder()
        .add(
            PolymorphicJsonAdapterFactory.of(Data::class.java, "type")
                .withSubtype(DogData::class.java, DOG_TYPE)
                .withSubtype(CatData::class.java, CAT_TYPE)
                .withSubtype(HumanData::class.java, HUMAN_TYPE)
        )
        // if you have more adapters, add them before this line:
        .add(KotlinJsonAdapterFactory())
        .build()
}

private fun parseData() {
    val adapter = moshi.adapter(FamilyResponse::class.java)
    val familyResponse = adapter.fromJson(jsonData)
}

The value offered by ad-hoc polymorphism is closely tied to the language you're using it in. In other words, it's not a universal tool but one that's heavily dependent on how well it is supported in your language. Ad-hoc polymorphism is a critical component of Haskell, where it has given rise to a high degree of reuse and elegant abstractions, but I'm not sure Kotlin would benefit as much from it.


]]>
<![CDATA[Kotlin DSL - let's express code in "mini-language" - Part 5 of 5]]>https://aditlal.dev/kotlin-dsl-part-5/696a519f5bc9cc89ec009d70Sat, 23 Mar 2019 12:50:16 GMT

Kotlin DSL - let's express code in "mini-language" - Part 5 of 5

dsl•Mar 23, 2019

In this post, we take a look at building our test cases by using a simpler language-like DSL.

I wanted a "simple", low-overhead way of setting up, expressing, and testing many combinations of inputs and outputs.

The goal is simple: create a DSL for expressing the tests clearly, and find a concise way of writing tests that makes creating new cases a breeze.

An example of a test case before applying a DSL:

@Test
fun logsInWhenUserSelectsLogin() {
    ...
    resetLoginInPref() //sets login pref key as false

    instrumentation.startActivitySync(loginIntent)

    onView(allOf(withId(R.id.login_button), withText(R.string.login)))
          .perform(click())

    val expectedText = context.getString(R.string.is_logged_in, "true")
    onView(AllOf.allOf(withId(R.id.label), withText(expectedText)))
          .perform(ViewActions.click())
}

With this, it's still a pretty good test case, but we could improve the syntax to take advantage of a DSL.

Benefits: Converting Tests to a DSL

  • Correctness: Fix tests that are not exercising the intended target.
  • Build Speed: Remove Robolectric and PowerMock where they are not needed.
  • Cruft Clean Up: Clean up test code, annotations, and throws clauses that are unnecessary or unneeded.
  • Readability: Further enhancement of the test style such as increased readability of tests in either // Given or // Then sections by use of method extraction.
  • Readability Again: Restating the tests in sentences revealed missing assumptions in their names.

RxTest is a simple example of how internal DSL support can build domain-specific grammar for tests.

// Example of RxTest
Observable.just("Hello RxTest!")
    .test {
        it shouldEmit "Hello RxTest!"
        it should complete()
        it shouldHave noErrors()
    }

Let's break down our test case

  • Update preferences to make sure that the user is logged out before the test starts
  • The user launches the app
  • The user clicks on “Log In”
  • We assert that the user sees the logged in text

Setup, actions and assertions.


Given the user is logged out
When the user launches the app
When the user clicks "Log In"
Then the user sees the logged in text

The DSL implementation of the same is as follows:

@Test
fun logsInWhenUserSelectsLogin() {

    given(user).has().loggedOut()

    whenever(user).launches().app()
    whenever(user).selects().login()

    then(user).sees().loggedIn()
}

(Note that when is a hard keyword in Kotlin, so we name the function whenever.)

For this we would use :

infix fun Any.given(block: () -> Unit) = block.invoke()

infix fun Any.whenever(block: () -> Unit) = block.invoke()

infix fun Any.then(block: () -> Unit) = block.invoke()

We have infix extension functions which accept a function block: () -> Unit and execute it.

What this lets us do is chain the 'given, whenever, then' calls like a sentence, which gets us one step closer to our DSL.
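Those three helpers can be exercised on their own. A self-contained sketch (the steps log list is my addition, just to make the chaining observable):

```kotlin
infix fun Any.given(block: () -> Unit) = block.invoke()
infix fun Any.whenever(block: () -> Unit) = block.invoke()
infix fun Any.then(block: () -> Unit) = block.invoke()

object user // stand-in for the test subject

val steps = mutableListOf<String>()

fun main() {
    // The infix modifier lets each call read like a sentence.
    user given { steps += "user is logged out" }
    user whenever { steps += "user selects login" }
    user then { steps += "user sees logged in" }
    check(steps == listOf("user is logged out", "user selects login", "user sees logged in"))
}
```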

Next we have our User class, which is an object.

An object is not "a static class" per se; rather, it is a single static instance of a class - otherwise known as a singleton.

Perhaps the best way to show the difference is to look at the decompiled Kotlin code in Java form.

Here is a Kotlin object and class:

object ExampleObject {
  fun example() {
  }
}

class ExampleClass {
  fun example() {
  }
}

In order to use the ExampleClass, you need to create an instance of it: ExampleClass().example(). With an object, Kotlin creates a single instance for you, and you don't ever call its constructor; instead you just access its static instance by using the name: ExampleObject.example().
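A tiny runnable illustration of the difference (the calls counter is mine, added so the behavior can be checked):

```kotlin
object ExampleObject {
    var calls = 0
    fun example() { calls++ }
}

class ExampleClass {
    var calls = 0
    fun example() { calls++ }
}

fun main() {
    // The object is one shared instance; we never call a constructor.
    ExampleObject.example()
    ExampleObject.example()
    check(ExampleObject.calls == 2)

    // Each class instance carries its own state.
    val a = ExampleClass().apply { example() }
    val b = ExampleClass()
    check(a.calls == 1 && b.calls == 0)
}
```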

object User {
  infix fun selects(block: SelectsActions.() -> Unit): User {
    block.invoke(SelectsActions)
    return this
  }
}

Similarly, selects is an infix function on User. It takes a function with SelectsActions as the receiver, letting us call functions on SelectsActions in the lambda passed in. We invoke the function and return the User so that we can chain actions. Making it infix lets us drop the dot and parentheses, so the call reads more like a sentence.

This just leaves the actions and assertions of the test. This is where the actual Espresso code lives, as below:

object SelectsActions {
    fun logout() {
        onView(allOf(withId(R.id.login_button), withText(R.string.logout)))
        .perform(click())
    }

    fun login() {
        onView(allOf(withId(R.id.login_button), withText(R.string.login)))
        .perform(click())
    }
}

When you put all of the above pieces together you can write nice, human-readable tests.

This concludes the series, Part 5 of 5 - thanks for sticking around till the end.

Summary

[On Kotlin] A general language with lambda receivers and the invoke convention means the ability to support internal DSLs. Internal DSLs give the ability for higher-level readability and understandability through structured grammar, with an additional benefit that declared languages cannot easily provide - type safety through compilation.

Extras:

Some great libraries that provide a DSL interface are:

Spek

KotlinX.Html

Anko


]]>
<![CDATA[Kotlin DSL - let's express code in "mini-language" - Part 4 of 5]]>https://aditlal.dev/kotlin-dsl-part-4/696a519f5bc9cc89ec009d6fSat, 16 Mar 2019 11:58:01 GMT

Kotlin DSL - let's express code in "mini-language" - Part 4 of 5

dsl•Mar 16, 2019

In this post, we take a look at building a simpler API to work with a BroadcastReceiver that registers and unregisters itself automatically based on the Activity lifecycle.

Broadcast Receiver

To build this, we first create a class which observes lifecycle events:

class BroadcastReceiver<T>(
  context: T,
  constructor: Builder.() -> Unit
) : LifecycleObserver where T : Context, T : LifecycleOwner {

  
  @OnLifecycleEvent(ON_START)
  fun start() {
    appContext.registerReceiver(broadcastReceiver, filter)
  }

  @OnLifecycleEvent(ON_DESTROY)
  fun stop() = appContext.unregisterReceiver(broadcastReceiver)
}

To attach lifecycle events to our custom BroadcastReceiver class, we implement LifecycleObserver, constrain T to be both a Context and a LifecycleOwner, and register the observer:

context.lifecycle.addObserver(this)

During initialization, we set up our Builder:

init {
    val builder = Builder()
    constructor(builder)
    filter = builder.filter()
    instructions = builder.instructions()

    context.lifecycle.addObserver(this)
  }

Builder is another class:

class Builder internal constructor() {

  private val filter = IntentFilter()
  private val instructions = mutableListOf<Instructions>()

  fun onAction(
    action: String,
    execution: Execution
  ) {
    filter.addAction(action)
    instructions.add(OnAction(action, execution))
  }

  fun onDataScheme(
    scheme: String,
    execution: Execution
  ) {
    filter.addDataScheme(scheme)
    instructions.add(OnDataScheme(scheme, execution))
  }

  fun onCategory(
    category: String,
    execution: Execution
  ) {
    filter.addCategory(category)
    instructions.add(OnCategory(category, execution))
  }

  internal fun filter() = filter

  internal fun instructions() = instructions
}

in which we have 3 main functions - onAction, onDataScheme, and onCategory - where we operate on the filter for this broadcast receiver.

Instructions is a set of data classes where the checks for action, scheme, and category are handled.

typealias Execution = (Intent) -> Unit

sealed class Instructions {

  abstract fun matches(intent: Intent): Boolean

  abstract fun execution(): Execution

  data class OnAction(
    val action: String,
    val execution: Execution
  ) : Instructions() {

    override fun matches(intent: Intent): Boolean {
      return intent.action == action
    }

    override fun execution() = execution
  }

  data class OnDataScheme(
    val scheme: String,
    val execution: Execution
  ) : Instructions() {
    override fun matches(intent: Intent): Boolean {
      return intent.data?.scheme == scheme
    }

    override fun execution() = execution
  }

  data class OnCategory(
    val category: String,
    val execution: Execution
  ) : Instructions() {
    override fun matches(intent: Intent): Boolean {
      return intent.hasCategory(category)
    }

    override fun execution() = execution
  }
}

Here, typealias allows us to refer to the type (Intent) -> Unit as Execution.
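The typealias-plus-sealed-class matching pattern works independently of Android. A dependency-free sketch of the same idea, swapping Intent for a plain String event (all names here - Rule, OnPrefix, dispatch - are illustrative, not from the library code above):

```kotlin
typealias Execution = (String) -> Unit

sealed class Rule {
    abstract fun matches(event: String): Boolean
    abstract val execution: Execution
}

data class OnPrefix(val prefix: String, override val execution: Execution) : Rule() {
    override fun matches(event: String) = event.startsWith(prefix)
}

// Mirror of onReceive: run the first matching rule, then stop.
fun dispatch(event: String, rules: List<Rule>) {
    rules.firstOrNull { it.matches(event) }?.execution?.invoke(event)
}

fun main() {
    val seen = mutableListOf<String>()
    val rules = listOf(
        OnPrefix("app.") { seen += "action:$it" },
        OnPrefix("file://") { seen += "scheme:$it" }
    )
    dispatch("app.SOME_ACTION", rules)
    dispatch("file:///tmp/x", rules)
    check(seen == listOf("action:app.SOME_ACTION", "scheme:file:///tmp/x"))
}
```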

Now to connect it all :

class BroadcastReceiver<T>(
  context: T,
  constructor: Builder.() -> Unit
) : LifecycleObserver where T : Context, T : LifecycleOwner {

  private val appContext = context.applicationContext
  private val filter: IntentFilter
  private val instructions: List<Instructions>

  init {
    val builder = Builder()
    constructor(builder)
    filter = builder.filter()
    instructions = builder.instructions()

    context.lifecycle.addObserver(this)
  }

  private val broadcastReceiver = object : BroadcastReceiver() {
    override fun onReceive(
      context: Context,
      intent: Intent
    ) {
      for (ins in instructions) {
        if (ins.matches(intent)) {
          ins.execution()
              .invoke(intent)
          break
        }
      }
    }
  }

  @OnLifecycleEvent(ON_START)
  fun start() {
    appContext.registerReceiver(broadcastReceiver, filter)
  }

  @OnLifecycleEvent(ON_DESTROY)
  fun stop() = appContext.unregisterReceiver(broadcastReceiver)
}

Our Broadcast receiver DSL can now be called in the following way :

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Registers and unregisters itself automatically with the lifecycle.
    BroadcastReceiver(this) {
      onAction("app.SOME_ACTION") {
        // Do something
      }
      onCategory("messages") {
        // Do something
      }
      onDataScheme("file://") {
        // Do something
      }
    }
}

(credits : Aidan Follestad https://goo.gl/Mi7Z9x)

In Part 5 of this series, we will take a look at how to apply DSLs to writing easier-to-understand tests for Android.


]]>
<![CDATA[Kotlin DSL - let's express code in "mini-language" - Part 3 of 5]]>https://aditlal.dev/kotlin-dsl-part-3/696a519f5bc9cc89ec009d6eSat, 16 Mar 2019 09:51:37 GMT

Kotlin DSL - let's express code in "mini-language" - Part 3 of 5

dsl•Mar 16, 2019

In this third post of the series, we take a look at some of the use cases of DSLs in Android.

Spans

Custom spans builders for Android Text

Our Textview example with multiple spans

In this example, we have a simple screen where some text is displayed; each line has some words wrapped up in spans - bold, italics, colored text.

To achieve this, we would have to write the following:

val spannable1 = SpannableString("some formatted text")

spannable1.setSpan(StyleSpan(Typeface.BOLD), 0, 4, SPAN_EXCLUSIVE_EXCLUSIVE)
spannable1.setSpan(StyleSpan(Typeface.ITALIC), 5, 14, SPAN_EXCLUSIVE_EXCLUSIVE)
spannable1.setSpan(ForegroundColorSpan(Color.RED), 15, 19, SPAN_EXCLUSIVE_EXCLUSIVE)

val spannable2 = SpannableString("nested text")

spannable2.setSpan(StyleSpan(Typeface.BOLD), 0, 6, SPAN_EXCLUSIVE_EXCLUSIVE)
spannable2.setSpan(StyleSpan(Typeface.ITALIC), 0, 6, SPAN_EXCLUSIVE_EXCLUSIVE)
spannable2.setSpan(URLSpan(url), 7, 11, SPAN_EXCLUSIVE_EXCLUSIVE)

val spannable3 = SpannableString("no wrapping")

spannable3.setSpan(StyleSpan(Typeface.BOLD), 0, 3, SPAN_EXCLUSIVE_EXCLUSIVE)
spannable3.setSpan(SubscriptSpan(), 3, 11, SPAN_EXCLUSIVE_EXCLUSIVE)

What if we could simplify this with DSLs?

First we have :

fun spannable(func: () -> SpannableString) = func()

As you can see, spannable has a parameter of a function type, which it can call in its method body. If we now want to use this higher-order function, we can make use of lambdas, also referred to as "function literals".

Next, we have a function span, which takes a CharSequence param and another param which accepts Any:

private fun span(s: CharSequence, o: Any) =
  (if (s is String) SpannableString(s) else s as? SpannableString
   ?: SpannableString(""))
   .apply { setSpan(o, 0, length, SPAN_EXCLUSIVE_EXCLUSIVE) }

This adds the span to the CharSequence and returns a SpannableString.

Next, we declare:

operator fun SpannableString.plus(s: SpannableString) =
    SpannableString(this concat s)
operator fun SpannableString.plus(s: String) =
    SpannableString(this concat s)

When you use operator in Kotlin, its corresponding member function is called. For example, the expression a + b transforms to a.plus(b) under the hood.
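The same convention is runnable without Android. Here Styled is a made-up stand-in for SpannableString, purely to illustrate the operator overloads:

```kotlin
data class Styled(val text: String)

// a + b compiles to a.plus(b) under the hood, so these two overloads
// let us concatenate wrappers with each other and with plain strings.
operator fun Styled.plus(other: Styled) = Styled(this.text + other.text)
operator fun Styled.plus(other: String) = Styled(this.text + other)

fun main() {
    val combined = Styled("no ") + Styled("wrapping") + "!"
    check(combined == Styled("no wrapping!"))
}
```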

For creating bold and italic spans, we could declare them like this:

fun bold(s: CharSequence) =
    span(s, StyleSpan(android.graphics.Typeface.BOLD))
fun italic(s: CharSequence) =
    span(s, StyleSpan(android.graphics.Typeface.ITALIC))

More spans could be declared:

fun sub(s: CharSequence) =
    span(s, SubscriptSpan()) // baseline is lowered
fun size(size: Float, s: CharSequence) =
    span(s, RelativeSizeSpan(size))
fun color(color: Int, s: CharSequence) =
    span(s, ForegroundColorSpan(color))
fun url(url: String, s: CharSequence) =
    span(s, URLSpan(url))

Finally, we can put it all together:

val spanned = spannable {
    bold("some") +
    italic(" formatted") +
    color(Color.RED, " text")
}

val nested = spannable {
    bold(italic("nested ")) +
    url("www.google.com", "text")
}

val noWrapping = bold("no ") + sub("wrapping")

Intents

Simplifying Android Intents

Launching an activity with a few extras normally looks like this:

val intent = Intent(myActivity, TargetActivity::class.java)
intent.putExtra("myIntVal", 10)
intent.putExtra("myStrVal", "Hello String")
intent.putExtra("myBoolVal", false)
myActivity.startActivity(intent)

With a custom DSL, the same launch collapses to:

myActivity.launchActivity<TargetActivity> {
    putExtra("myIntVal", 10)
    putExtra("myStrVal", "Hello String")
    putExtra("myBoolVal", false)
}

A more realistic call site:

launchActivity<UserDetailActivity> {
        putExtra(INTENT_USER_ID, user.id)
        addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP)
    }

Its implementation would be as follows:

inline fun <reified T : Any> Activity.launchActivity(
        requestCode: Int = -1,
        options: Bundle? = null,
        noinline init: Intent.() -> Unit = {}) {

    val intent = newIntent<T>(this)
    intent.init()
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
        startActivityForResult(intent, requestCode, options)
    } else {
        startActivityForResult(intent, requestCode)
    }
}

And a matching extension function for Context:

inline fun <reified T : Any> Context.launchActivity(
        options: Bundle? = null,
        noinline init: Intent.() -> Unit = {}) {

    val intent = newIntent<T>(this)
    intent.init()
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
        startActivity(intent, options)
    } else {
        startActivity(intent)
    }
}

Both delegate to a small newIntent helper:

inline fun <reified T : Any> newIntent(context: Context): Intent =
        Intent(context, T::class.java)
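Because newIntent is inline with a reified T, the actual class is known at the call site and T::class.java can be used directly, something an ordinary generic function cannot do due to type erasure. A plain-Kotlin analogue (typeName is a hypothetical helper for illustration):

```kotlin
// With a reified type parameter, T survives into the function
// body, so we can reflect on it, just as newIntent does with
// T::class.java.
inline fun <reified T : Any> typeName(): String = T::class.java.simpleName

fun main() {
    println(typeName<String>())  // String
}
```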

When using lambdas, the extra memory allocation and extra virtual method call introduce some runtime overhead, so executing the same code directly would be more efficient. Marking a function inline tells the compiler to substitute its body, and (unless a parameter is marked noinline) its lambda arguments, directly at the call site, which removes that overhead.
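Kotlin's inline modifier addresses exactly this: the compiler copies the function body and the lambda into the caller, so no Function object is allocated. A small sketch with made-up helper names; the behavior is identical either way, only the generated bytecode differs:

```kotlin
// Without inline: the lambda becomes a Function0 object allocation
// and an invoke() virtual call.
fun runOnce(block: () -> Int): Int = block()

// With inline: block's body is pasted into the caller; no object,
// no virtual call.
inline fun runOnceInline(block: () -> Int): Int = block()

fun main() {
    println(runOnce { 21 } + runOnceInline { 21 })  // 42
}
```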

In Part 4 of this series, we will take a look at how to make a BroadcastReceiver react to lifecycle events.

Adit Lal


Adit Lal © 2020  •  Published with Ghost


Kotlin DSL - let's express code in "mini-language" - Part 3 of 5


]]>