TechYourChance — https://www.techyourchance.com/

I’m Going All-In on Kotlin Multiplatform, Here is Why
https://www.techyourchance.com/kotlin-multiplatform-here-is-why/
Sun, 06 Jul 2025 16:27:12 +0000

My take on why Kotlin Multiplatform is the best long-term choice for mobile development, how it stacks up against other frameworks, and why both JetBrains and Google are heavily invested in its success.

The post I’m Going All-In on Kotlin Multiplatform, Here is Why appeared first on TechYourChance.

Long story short, I decided to jump into Kotlin Multiplatform development after actively ignoring it for years, because I believe KMP is the best long-term investment in the mobile tech stack today.

My Long-Standing Skepticism Towards Multiplatform Frameworks

For more than a decade, I watched the multiplatform wars, revolutions, and failures in mobile development from a distance. I built toy projects with some of these technologies, but never used them professionally. This strategy worked well, as it saved me and my clients a lot of time and frustration.

Kotlin Multiplatform used to be just another framework I chose to ignore. In fact, in some ways, it seemed like the riskiest bet, because it was the youngest option, used a unique and complex architecture, and Google was heavily invested in promoting Flutter.

In September 2020, I invited Aleksey Mikhailov, the CTO of IceRock Development, for a long conversation about KMP. Aleksey is one of the top KMP experts in the world and had been using this technology to build apps for clients since the early days. However, even though Aleksey was confident in KMP and shared real success stories, I left that discussion feeling that KMP was still too immature for me. [Summary of that conversation here].

Fast-forward a few years, and it feels like we’ve reached a tipping point with multiplatform frameworks and they are finally ready to go mainstream. In my 2025 annual review of the Android development ecosystem, I said that it’s time for me to pick and master one of these tools. But which one? The options were React Native, Flutter, and KMP. So I spent time researching and comparing them.

Then, a few months ago, I had the pleasure of hosting Aleksey Mikhailov again. We talked in detail about the state of Kotlin Multiplatform in 2025, covering both the core tech and the ecosystem around it. That conversation sealed my decision to adopt KMP.

Kotlin Multiplatform is Strategic to JetBrains

Mastering a new framework is a big investment. That’s why I want to see real strategic interest and long-term support from respected and reliable companies. For me, this is more important than any technical detail, like the programming language or architecture of the framework.

KMP is built by JetBrains, one of the best tech companies in the world. JetBrains has technical excellence in its DNA and is known for dogfooding its own tools.

For JetBrains, the rise of other multiplatform solutions, most of which promote the free VSCode as their “main” IDE, puts pressure on their position in the IDE market. On top of that, the market for AI tools (like Cursor) might become even bigger than the IDE market, so JetBrains obviously wants to get its fair share. In this situation, attracting developers into their own technology stack, which is free to use, is a great way for JetBrains to promote their premium products.

So, the success of Kotlin Multiplatform is strategic for JetBrains. In fact, if KMP fails, it could even threaten the company’s long-term future.

Kotlin Multiplatform is Strategic to Google

For Google, KMP is a bitter-sweet pill. As I explained several years ago, after Google won the lawsuit against Oracle, both Fuchsia and Flutter lost their importance as strategic hedges in case Oracle had won. For Fuchsia, this was probably the end of the road. Flutter, however, could still be useful as a hedge against React Native gaining significant developer mindshare.

But Flutter didn’t succeed in that role. It mostly attracted individual Android developers and agencies, while React Native remained more popular among product companies and institutions, especially in the US. When Shopify adopted React Native, it gave the framework even more credibility and support. On top of that, React Native benefited the most from the AI revolution, since AI tools tend to work better with web technologies.

So, Google found itself at real risk of losing control over the developer ecosystem to Meta. What seems to have happened next is that Google chose Kotlin Multiplatform to serve the same goal: give developers a solid multiplatform solution, which is at least partially under their control, and prevent them from switching to React Native. This explains the much stronger focus and investment in KMP by Google over the past year or so.

Therefore, Kotlin Multiplatform is strategic to Google, because they now face a serious threat of Meta taking over the mobile development ecosystem. That would make it easier for Meta to launch a competing operating system or hardware, if they ever choose to.

Kotlin Multiplatform Offers the Best Migration Path for Existing Projects

The authors of KMP chose to play on hard mode when they started the project. While Flutter and React Native try to abstract the details of the underlying platforms (using different methods), KMP allows for relatively smooth integration with both Android and iOS native frameworks. This architectural choice makes building the framework more difficult and raises the entry barrier for new developers, but I believe it will prove to be better in the long run, for several reasons.

First of all, while most multiplatform discussions focus on building new apps, the existing native apps, developed over the past 15 years, make up a much larger part of the market and employ more developers. So, offering older projects a low-risk and convenient migration path to multiplatform is a big advantage. KMP is great at this because its native-friendly design supports truly incremental adoption. Whether you want to share just some complex algorithms, the networking layer, the business logic, or the user interface code, Kotlin Multiplatform supports each of these, as well as any combination of them. On top of that, KMP produces standard library formats for each platform (.aar for Android, .xcframework for iOS), so you won’t need another toolchain and sharing non-UI logic won’t add tens of megabytes to your app’s binary size.
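To make the packaging story concrete, here is a minimal sketch of what a shared module’s build.gradle.kts might look like (the module name `shared` and framework name `Shared` are my own placeholders, and plugin versions are omitted). It declares an Android target that compiles to a regular .aar and bundles the iOS targets into a single .xcframework:

```kotlin
// shared/build.gradle.kts -- hypothetical sketch, not a complete build file
import org.jetbrains.kotlin.gradle.plugin.mpp.apple.XCFramework

plugins {
    kotlin("multiplatform")
    id("com.android.library")
}

kotlin {
    // Compiles to a standard .aar that the existing Android app consumes
    // like any other library dependency.
    androidTarget()

    // The iOS targets are bundled into one Shared.xcframework that Xcode
    // consumes like any other binary framework.
    val xcf = XCFramework("Shared")
    listOf(iosArm64(), iosSimulatorArm64()).forEach { target ->
        target.binaries.framework {
            baseName = "Shared"
            xcf.add(this)
        }
    }

    sourceSets {
        val commonMain by getting {
            dependencies {
                // shared algorithms, networking, business logic, etc.
            }
        }
    }
}
```

From the Android side, such a module is consumed as an ordinary Gradle dependency, so nothing about the app’s existing toolchain has to change.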

Kotlin Multiplatform Supports Both Multiplatform And Native User Interfaces

The user interface story in Kotlin Multiplatform deserves a mention on its own. Basically, KMP lets you choose whether to share the UI logic or not. In fact, until recently, it didn’t even provide a multiplatform UI option, so almost all KMP apps were built with separate native UIs, such as Jetpack Compose for Android and SwiftUI for iOS.

Going forward, with Compose Multiplatform maturing, I expect more and more new apps will share almost 100% of their code. For example, IceRock Development reached this level in one of their recent projects.

While it might seem like Compose Multiplatform makes the native UI option obsolete, that’s not the case at all. For example, when migrating native apps to KMP, being able to keep the existing UIs while refactoring the rest of the codebase is crucial for reducing risk. Another reason to use native UIs is when projects need to meet specific accessibility targets. Despite all their benefits, multiplatform UI frameworks often fall short in accessibility specifically.

So, by supporting both shared multiplatform and native UI frameworks, KMP can meet a much wider range of business needs and constraints.

Kotlin Multiplatform is Almost Native on Android

When it comes to tooling and developer experience, KMP feels very similar to native Android development. There are some small differences in how projects are organized, and some libraries need to be swapped for multiplatform analogs, but, overall, Android developers can switch to KMP with little effort. [Unfortunately, the iOS experience in KMP is still far from ideal due to less mature tooling, longer build times, etc., so a lot of infrastructure work is still required in this regard.]

The fact that KMP is almost native on Android is a big deal. First, it means that about half of the mobile developers in the world today can start using KMP and become productive very quickly. Second, for businesses, it means that even if KMP adoption doesn’t work out, the effort isn’t wasted. Instead, it becomes the foundation for their native Android app. That’s not the case with any other multiplatform framework I know of, where a failed attempt usually means throwing all the work away.

Gradual and Steady Evolution of Kotlin Multiplatform

Unlike many other new technologies, KMP’s growth and adoption weren’t driven by hype or big PR budgets. Instead, it matured slowly over the years, steadily improving its foundations and developer experience. For example, JetBrains launched KMP without a multiplatform UI framework for Android and iOS, at a time when Flutter and React Native were already well established. That was an almost absurdly bold move. It took years to close this gap, and Compose Multiplatform only reached a stable milestone on iOS in 2025.

Some developers may see this slow pace as a downside, especially if they tried KMP too early and got burned by it. But to me, it shows serious and thoughtful leadership, which again highlights the strategic importance of KMP for JetBrains. If you’re building the foundation for long-term success, then taking time to do it right and allowing space for mistakes and rework makes sense.

This steady progress also makes it easier to look ahead and predict where KMP will be in the next year or two, both in terms of its technical capabilities and community adoption.

AI Poses The Biggest Risk to Kotlin Multiplatform

I don’t want to give the impression that KMP is perfect, because it’s not. From immature iOS support, to the lack of third-party libraries in the ecosystem, to limited documentation and tutorials, there are plenty of challenges. Still, I feel confident that most of these issues will be solved in the next two to three years.

In my view, the biggest risk KMP faces today is limited support and coverage by LLMs.

Software development has changed in recent years. Today, using LLMs to generate working code is becoming the norm, and this trend will continue into the future. React Native has benefited the most from this AI revolution, as these tools are much better at working with web technologies, probably because there’s more training data available for them.

On the other hand, there’s been a sharp drop in community-generated content like StackOverflow posts, blogs, and video tutorials. This means that training data for newer technologies has already shrunk and will likely keep shrinking in the future.

While Kotlin Multiplatform isn’t a brand-new framework, its adoption still lags behind React Native and Flutter. You can see this clearly using tools like Google Trends.

This situation could lead to a positive feedback loop: existing solutions like React Native keep attracting more developers, which improves their LLM support, which in turn attracts even more developers, with the whole industry gradually converging on those options. As a result, new technologies might face a very high, possibly insurmountable, barrier to entry.

Since developers are less motivated to create content in the age of LLMs (I feel like a dinosaur writing this post myself), JetBrains and Google may need to step up their efforts. They should invest more in documentation, tutorials, and community-generated content. This would help ensure that future LLMs have plenty of high-quality, up-to-date training data to work with.

Conclusion

Long-time readers of this blog will remember that for years, I’ve been pouring cold water on the hype around Kotlin and Jetpack Compose. In 2021, in my post titled Kotlin vs Java in Android, Four Years Later, I wrote:

Finally, having said all the above, it’s clear that Kotlin, Coroutines, Compose and, maybe, even Kotlin Multiplatform are the future of Android development. Therefore, whether I like them or not, I’ll be forced to learn them one day (or have already been forced). However, this doesn’t mean that I need to jump on every hype train the moment it arrives. Just like with Kotlin, there is no rush to adopt any of these technologies. And if you do decide to become an early adopter, then it’s totally fine. After all, we, developers, like playing with new ideas and learning new ways to work. That’s what makes our industry innovative. However, be honest with yourself and your employer about the motivations behind becoming early adopters and the most probable outcomes.

Today, I’m happy to say that JetBrains delivered on their vision and built the best multiplatform solution for mobile development. That’s why I no longer see KMP as a risky choice, but rather as a smart investment in the long-term success of your mobile projects. It still comes with some challenges and risks, but they’re much smaller than they were a few years ago. In addition, JetBrains has shown that they are committed to KMP for the long term and won’t be discouraged by tough challenges or short-term distractions.

All in all, in my opinion, Kotlin Multiplatform really is the future of Android development. And, probably, iOS too.

P.S. It looks like Apple has also started to worry about multiplatform taking over their developer ecosystem. They recently launched the Swift Android Workgroup. It feels like we’re about to witness the grand finale of the multiplatform wars in the next couple of years.


The State of Android and Cross-Platform Development in 2025
https://www.techyourchance.com/the-state-of-android-and-cross-platform-development-in-2025/
Mon, 27 Jan 2025 20:53:52 +0000

This article continues my tradition of yearly reviews of the Android development ecosystem. So, let’s reflect on 2024 and discuss the trends we might expect in 2025.

The post The State of Android and Cross-Platform Development in 2025 appeared first on TechYourChance.

This article continues my tradition of yearly reviews of the Android ecosystem. This year, I added “cross-platform” to the title because, as you’ll see shortly, we’ll discuss these technologies extensively. So, let’s reflect on 2024 and make some predictions for 2025.

Last Year’s Predictions Review

As usual, I’ll start by reviewing my predictions from one year ago.

In 2023, Google Play introduced a closed testing phase requiring at least 20 testers as a prerequisite for publishing new apps by individuals. Reflecting on this policy change, I wrote:

The stated reason for this change is to increase the quality of new applications on Google Play. However, I tend to think that the real reason is that Google just wants to reduce the overall number of new apps. All those apps that Android devs and aspiring indies upload to Google Play and then abandon cost Google money to host and manage, so introducing a higher barrier for entry will immediately reduce their overhead.

I wonder whether this change will mark the end of the era of Android enthusiasts.

I feel that Google succeeded in its objective to pour a bucket of cold water on Android development enthusiasts. In fact, it seems they overshot, as the number of required testers was reduced to 12 last year. I see this reduction as an encouraging sign, because it shows that Google acknowledges the importance of indie developers to the ecosystem.

I predicted that Compose Multiplatform would be in the spotlight in 2024:

One of the technologies I’m going to watch in 2024 is Compose Multiplatform. As far as I understand, it’s pretty much JetBrains’ solo initiative which aims to build a UI framework along the lines of Jetpack Compose, but for multiplatform use. This tech is in its infancy, so, just for the record, I wouldn’t dare to actually use it in a production setting.

This prediction turned out to be accurate, and I have much more to say about Compose Multiplatform this year.

Then I followed up with some thoughts on Kotlin Multiplatform:

There are numerous reasons to be skeptical about KMP, but the main one for me is the lack of strategic business alignment with either Apple or Google. When Google had been fighting a legal battle against Oracle over Android, it made sense for them to invest into a backup plan, so they collaborated on KMP. But once Google prevailed in that lawsuit, I just can’t see a reason for them to share the development ecosystem with JetBrains, especially given they already have their own multiplatform framework – Flutter. Apple is even less interested in KMP than Google. This leaves JetBrains to do all the heavy lifting by themselves, facing potential road blocks from Google and Apple.

That said, JetBrains is an amazing company with strong leadership and very technical DNA. Furthermore, Kotlin Multiplatform is not just a side project for them, but a strategic initiative, so they’re heavily invested into it. With the addition of Compose Multiplatform into the toolbox, maybe KMP will finally break out in 2024.

Did KMP break out in 2024? I’ll share my thoughts later in this post.

Flutter received a neutral rating:

All in all, looks like Flutter gained a considerable market share, but isn’t going to replace either native Android development or even React Native any time soon.

This prediction turned out to be accurate.

Unsurprisingly, last year’s edition included AI discussion:

What I’m looking forward to in 2024 is the rise of on-device AI. There are already free models that can be run locally, so I guess the question now is performance and power optimizations to allow their widespread adoption for specific use cases.

Progress in this area has been underwhelming. At the very least, I expected AI-enabled APIs to allow Android apps to be controlled by user voice commands. Such features could significantly enhance accessibility and unlock “hands-free” experiences. Unfortunately, this hasn’t materialized yet.

I also shared a personal dream:

On the tooling side of AI, I hope we’ll get a tool to convert Figma mockups into UI code automatically. Every time I build UIs, it feels like copying a book by hand before the invention of print.

Still waiting for a magical AI-enabled UI generator. Some startups are working on this, so I wish them luck.

HarmonyOS NEXT

Last year, Huawei, the Chinese behemoth OEM, launched a new version of their HarmonyOS operating system called HarmonyOS NEXT. Unlike previous versions, NEXT doesn’t include any AOSP code and isn’t compatible with existing Android apps. Therefore, while it retains the HarmonyOS title, NEXT is a completely new operating system.

Launching a new OS might sound like a suicide mission, but HarmonyOS NEXT is a calculated bet. Beyond business considerations, major geopolitical interests are at play here. China aims to develop its software and hardware capabilities, and Huawei’s move is likely backed by the Chinese government.

This development is worth watching. While HarmonyOS NEXT may not affect Android immediately, it has the potential to become the world’s third mainstream mobile platform in the long run.

Kotlin Multiplatform

In 2024, Kotlin Multiplatform gained traction. Google Trends data shows that interest in KMP nearly tripled during the year.

Plotting Compose Multiplatform on the same chart shows that, indeed, the lack of a cross-platform UI framework was the missing piece for KMP.

Moreover, it seems that KMP received increased support from Google this year. This surprises me, as I still don’t see a clear business incentive for Google. Nonetheless, the newly found support is evident.

Is Kotlin Multiplatform the future of Android development? It increasingly appears so, though it’s still a niche technology (especially Compose Multiplatform), so I wouldn’t fully commit to it yet.

React Native

Last year I noted the surprising fact that Flutter doesn’t seem to be cannibalizing React Native’s market share. What I didn’t predict is the surprising renaissance of React Native.

At least two major React Native milestones were achieved in 2024:

  • Rearchitecture of the React Native framework, which removed the notorious “bridge”, has been completed.
  • Shopify finalized the rewrite of their main application in React Native, resulting in better performance, fewer crashes, and 86% code reuse between Android and iOS.

This “behind the scenes” story of React Native at Shopify is very interesting, as it shows how carefully these folks approached the pivot to cross-platform development.

As surprising as it might sound, it looks like React Native is back in the cross-platform race. In fact, React Native might actually be the current favorite in that race.

Flutter

The biggest Flutter story in 2024 was the Sonos fiasco: a publicly traded, respected company released a buggy rewrite of their application to all customers. Worse yet, they couldn’t re-release the old application to mitigate the damage, so the company and its customers got stuck in this unpleasant situation.

Frankly, I don’t think that Flutter played a major role in this story. All the available information points to a surprising level of incompetence among the company’s leadership, so they would have gotten into trouble regardless of which framework they used. However, since Sonos had been very vocal about their use of Flutter, this business disaster became associated with the framework.

There were also layoffs at Google that affected the Flutter team. Even though other teams had been affected as well, a Flutter doomsday narrative emerged on the spot.

My biggest issue with Flutter remains the fact that I don’t see a compelling business case for it for Google. After Oracle lost their lawsuit against Google, I thought that Flutter might serve as a strategic hedge against React Native. However, since Flutter doesn’t seem to hurt React Native that much, this is very weak motivation.

I think that 2024 wasn’t a good year for Flutter, and I don’t see a reason to expect a breakthrough in 2025 either.

Native Android Development

Where does native Android development stand? Donn Felker summed it up well:

If a company needs iOS and Android applications, it doesn’t make sense to roll out fully native apps in 2025, unless there are special constraints. In this sense, native Android development loses its ground.

That said, I think that ignoring native development and going all-in on one of the cross-platform frameworks is not a good career move. Sure, cross-platform seems to be approaching a critical tipping point, but native will remain mainstream for a long time. Most of the jobs, especially the best-paying ones, will be in native development in 2025. Furthermore, even if you’re hired as a cross-platform developer, you still want to be able to drop down to the native level of at least one platform when issues arise.

I still see native Android development as my main area of expertise and focus in 2025.

Mobile Web

I don’t think that mobile web will ever become a mainstream option because app stores are multi-billion businesses for Apple and Google, so they’ll do everything possible to maintain their dominance. Furthermore, given the current state of cross-platform frameworks and access to the global talent pool, rolling out simple mobile application(s) has never been easier and cheaper.

AI

I’m still enthusiastic about on-device AI and its applications for accessibility and user experience. I also remain bullish about AI development tools, especially UI code generators.

Conclusion

2025 might turn out to be the tipping point for cross-platform frameworks in mobile development. While these technologies remain niche, momentum is building.

Therefore, I’ve decided to master a cross-platform framework myself. Surprisingly, the hardest part of this plan is choosing which framework to focus on. This meme I created sums up my dilemma, with React Native as a third option.

You might wonder why, as an Android developer, I don’t immediately choose Kotlin Multiplatform. After all, it’s the closest to my existing skillset, and I greatly admire JetBrains, the company behind this technology. However, KMP (along with CMP) remains the least popular and least mature option among the available frameworks, as reflected in Google Trends data.

So, deciding which cross-platform framework to adopt is tough. Once I make up my mind, I’ll keep you updated on my direction and further progress.

That’s it for this year’s predictions. Thank you for reading! Feel free to leave your comments and questions below.


Reactive Programming Considered Harmful
https://www.techyourchance.com/reactive-programming-considered-harmful/
Fri, 10 Jan 2025 18:12:40 +0000

Reactive programming, while powerful, brings complexity, a steep learning curve, and architectural lock-in, making it a poor choice for most use cases.

The post Reactive Programming Considered Harmful appeared first on TechYourChance.

Reactive programming has gained significant popularity over the past decade. Frameworks and libraries such as RxJava, Project Reactor, and even JavaScript’s RxJS have become cornerstones of many developers’ toolkits. In the Android realm, modern reactive programming is represented by the Kotlin Flow framework. While reactive programming is undoubtedly a powerful technique, there are compelling reasons to question whether its drawbacks outweigh its benefits.

Reactive Construct vs Reactive Programming

Consider this example:

class Observable {
    private val _sharedFlow = MutableSharedFlow<String?>()
    val sharedFlow: SharedFlow<String?> get() = _sharedFlow
    ...
    private suspend fun emitValue(value: String?) {
        _sharedFlow.emit(value)
    }
}

class Observer(observable: Observable) {
    init {
        CoroutineScope(Dispatchers.Default).launch {
            observable.sharedFlow.filterNotNull().collect { value ->
                ...
            }
        }
    }
}

Here, I use SharedFlow to establish communication between two classes. SharedFlow is a reactive construct, but I would argue that this example doesn’t represent the reactive programming paradigm. Essentially, this code illustrates a classical Observer design pattern with some syntactic sugar on top. In my view, this is a completely valid way to implement your Observers. Therefore, that’s not what I’ll be discussing when addressing the downsides of reactive programming in this article.
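To underline that point, here is the same interaction expressed as a plain-Kotlin Observer pattern, with no reactive construct at all (the class names are mine, for illustration):

```kotlin
// A framework-free Observer pattern: listener registration replaces Flow
// collection, and a plain method call replaces emit().
class PlainObservable {
    private val listeners = mutableListOf<(String) -> Unit>()

    fun addListener(listener: (String) -> Unit) {
        listeners += listener
    }

    fun emitValue(value: String?) {
        // The filterNotNull() from the Flow version becomes a null check.
        if (value != null) {
            listeners.forEach { it(value) }
        }
    }
}

class PlainObserver(observable: PlainObservable) {
    val received = mutableListOf<String>()

    init {
        observable.addListener { value -> received += value }
    }
}
```

Both versions carry the same design: one class exposes a subscription point, and the other reacts to incoming values. The SharedFlow version adds coroutine integration on top, but conceptually it’s the same pattern.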

Now, let’s extend our example:

class Observable(service1: Service1, service2: Service2, service3: Service3) {

    private val _sharedFlow = combine(service1.flow1, service2.flow2, service3.flow3, ::ObservableIntermediate)
        .filter { data -> data.intValue > 0 && data.boolValue }
        .flatMapConcat { data ->
            flowOf("${data.stringValue}-${data.intValue}-${data.boolValue}")
        }
        .distinctUntilChanged()
        .shareIn(CoroutineScope(Dispatchers.Default), SharingStarted.Eagerly)

    val sharedFlow: SharedFlow<String> get() = _sharedFlow
    
    private data class ObservableIntermediate(val intValue: Int, val stringValue: String, val boolValue: Boolean)
}

class Observer(observable: Observable, service4: Service4) {

    private val combinedFlow = observable.sharedFlow
        .combine(service4.flow4) { observableValue, doubleValue ->
            ObserverIntermediate(observableValue, doubleValue)
        }
        .flatMapLatest { intermediateData ->
            processIntermediateData(intermediateData)
        }

    init {
        CoroutineScope(Dispatchers.Default).launch {
            combinedFlow.collect { result ->
                ...
            }
        }
    }

    private fun processIntermediateData(data: ObserverIntermediate): Flow<String> {
        return flowOf(...)
    }

    private data class ObserverIntermediate(val observableValue: String, val flow4Value: Double)
}

This is what we’ll call reactive programming and discuss in this article.

Clearly, the two examples are related: they represent two points on a continuous reactive programming “scale.” Therefore, when I say that I don’t mind the first approach but will advocate against the second, I suggest that, up to a certain point on that scale, you can use “reactive constructs.” Beyond that point, you’re in the realm of reactive programming.

I can’t formally define the exact threshold of reactive programming, but, in my experience, problems begin when you start combining flows, using flatMap and other advanced operators, or accumulate long reactive chains spanning multiple components.
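To make that threshold more tangible, note that the transformation buried in the extended example’s operator chain is, at its core, an ordinary function. Here is a stdlib-only sketch of the same logic (the names are mine, for illustration):

```kotlin
// The combined value from the extended example's three source flows.
data class Intermediate(val intValue: Int, val stringValue: String, val boolValue: Boolean)

// Equivalent of the filter + flatMapConcat steps: returns null where the
// reactive chain would drop the value.
fun transform(data: Intermediate): String? =
    if (data.intValue > 0 && data.boolValue) {
        "${data.stringValue}-${data.intValue}-${data.boolValue}"
    } else {
        null
    }
```

A function like this can be unit-tested in isolation and stopped on with a breakpoint; the exact same logic wrapped in operators can’t be exercised without wiring up live flows.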

Benefits of Reactive Programming

Reactive programming is a powerful paradigm with numerous strengths.

  • The operators provided by reactive frameworks are concise and allow you to handle many standard programming tasks with ease. From handy shortcuts like filter, map, and distinctUntilChanged to versatile flatMap and others, you can implement complex requirements with just a few lines of code.
  • Reactive frameworks are well-suited to dealing with real-time streams and managing backpressure.
  • Frameworks like Project Reactor and Spring WebFlux can leverage non-blocking I/O, reducing load in highly concurrent, I/O-bound systems (such as high-throughput servers).

These are real benefits, although the last two are irrelevant to most software systems in general, and to the vast majority of Android applications specifically. Yet, I’d argue that the downsides of reactive programming still outweigh the benefits.

Increased Complexity

Martin Fowler famously said:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

Martin Fowler

Code written by good developers reads like well-written prose. It is expressive, straightforward, and leverages properly named abstractions, allowing the reader to focus on individual features in isolation. When junior developers see such code, they are not impressed and are fully convinced they could write it themselves.

The best developers write the simplest code.

Reactive programming introduces a paradigm shift that requires developers to think in terms of streams, observables, operators, and event propagation. While this may be second nature to some, for most developers this shift represents a steep learning curve. And I’m not talking only about junior developers: even experts can’t understand reactive code without formal training in the technique. Reactive programming isn’t something you can pick up from context.

In essence, the reader of fully reactive code must have the same level of skill as its author, which inverts the standard author-reader skill relationship. This coupling of skill levels may not be an issue for a single-developer project, but on any sufficiently large project with multiple developers, it leads to problems due to skill mismatches. It also makes onboarding new developers harder and more costly.

In my opinion, this drawback alone outweighs all the benefits of reactive programming. But it’s not the only one.

Architectural Lock-In

Many developers don’t perceive reactive programming as part of their app’s architecture, but it is.

According to Grady Booch:

Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change.

Grady Booch

Reactive programming is an alternative, complex, and very different programming paradigm. Adopting reactive programming essentially couples everything in your application to a specific framework. Since it is so different and intrusive, reversing this decision becomes prohibitively expensive. In practice, migrating a fully reactive application back to a traditional approach often amounts to a complete rewrite.

To emphasize this point further, contrast reactive programming with another architectural pattern: dependency injection.

If you implement proper dependency injection in your app, you generally don’t see it or need to think about it. If you encounter a bug in your logic, you can investigate and fix it without interacting with the dependency injection infrastructure. In contrast, if all components in your app are reactive, you face the reactive programming framework at every step.

Switching dependency injection frameworks can be challenging but manageable. These frameworks typically live on the periphery of your code, so they aren’t deeply coupled to your application’s logic. On the other hand, swapping one reactive programming framework for another likely requires refactoring all the reactive classes in your project. This is a massive undertaking.

In summary, reactive programming introduces a very strong architectural lock-in.

Debugging and Tooling Challenges

Reactive programming heavily relies on powerful operators, which can be a double-edged sword. While these operators let you implement complex requirements with ease, they also obscure details, making debugging exceptionally difficult. You can no longer place breakpoints wherever you want, and logging becomes cumbersome. Errors in reactive streams often surface far from their origin, and stack traces frequently lack actionable information.
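When breakpoints are useless, a common fallback is to insert side-effecting operators into the chain purely for logging (onEach in Kotlin, doOnNext in RxJava). A dependency-free sketch of this "poor man's breakpoint" technique, again using a plain Sequence as a stand-in for a reactive stream:

```kotlin
// Logging through the pipeline with onEach, since a regular breakpoint
// inside a fused operator chain often shows nothing useful.
val trace = mutableListOf<String>()
val result = sequenceOf(1, 2, 3)
    .map { it * 2 }
    .onEach { trace.add("after map: $it") } // side effect purely for debugging
    .filter { it > 2 }
    .toList()
println(result) // [4, 6]
println(trace)  // [after map: 2, after map: 4, after map: 6]
```

Note that this pollutes production code with debugging scaffolding, which is itself a symptom of the problem described above.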

Additionally, the tooling ecosystem for reactive programming is still catching up. While progress has been made, most IDEs and debugging tools struggle to offer the same level of support for reactive code as they do for traditional code.

Conclusion

Reactive programming is not the silver bullet it is often made out to be. Its complexities and trade-offs make it a poor fit for most common use cases, and its benefits are frequently overshadowed by the challenges it introduces. While there are specific use cases where reactive programming can shine, adopting this paradigm in a “standard” project, or across an entire codebase, rarely makes sense.

In most cases, traditional approaches are more than sufficient to achieve the desired results without the downsides of reactive programming. I believe that developers should consider the long-term maintainability, readability, and simplicity of their code before allowing reactive programming to gain a foothold in their codebases.

The post Reactive Programming Considered Harmful appeared first on TechYourChance.

SharedPreferences Commit vs Apply Performance Benchmark https://www.techyourchance.com/android-sharedpreferences-performance-commit-vs-apply/ https://www.techyourchance.com/android-sharedpreferences-performance-commit-vs-apply/#comments Sat, 06 Jul 2024 10:22:54 +0000 https://www.techyourchance.com/?p=12575 Review of a benchmark that compares the performance of SharedPreferences commit and apply write modes in Android applications.

The post SharedPreferences Commit vs Apply Performance Benchmark appeared first on TechYourChance.

SharedPreferences is a persistence mechanism that provides a simple way to store and retrieve key-value pairs in your Android application. Since it deals with file system storage, the SharedPreferences framework can have a negative performance impact, especially if used incorrectly or abused.

To better understand the performance profile of SharedPreferences, I wrote a benchmark to measure and compare the speeds of SharedPreferences commit and apply operations on real Android devices. In this article, I’ll share and discuss the results of this benchmark.

SharedPreferences Commit vs Apply

When you write data into SharedPreferences, you can choose between commit'ing your changes or apply'ing them:

val sharedPrefs = context.getSharedPreferences(SHARED_PREFS_NAME, Context.MODE_PRIVATE)

sharedPrefs.edit().putString("key1", "value1").commit()

sharedPrefs.edit().putString("key2", "value2").apply()

Commit operation writes the changes to the file that backs your SharedPreferences object right away. In technical terms, we say that commit is a blocking file system write operation. Such operations are known to be relatively time-consuming, so there is a valid concern about the performance impact of this approach.

Apply operation doesn’t write to the file system right away, but only stores the changes in the internal in-memory cache. Then, at a later time, these changes will be persisted by a worker thread managed by the SharedPreferences framework. Since there is no blocking interaction with the file system, apply is faster than commit. That’s why the official guidelines recommend using apply over commit, and there is a default lint warning in Android Studio that will pop up if you use commit in your code.

The potential downside of apply is that, since it doesn’t persist the changes right away, there is a slight chance of losing this state. For example, if the application crashes in the middle of a flow that modifies SharedPreferences using apply, then, in theory, the SharedPreferences framework might not have enough time to transfer the changes from the in-memory buffer to the file system, effectively corrupting the application’s state. This can be a major issue, but, fortunately, it’s a very unlikely edge case.

Performance Benchmark of Commit vs Apply

In general, the recommendation to use apply over commit sounds reasonable. However, I’ve never seen any performance profiling of these operations. Therefore, I decided to build a benchmark to compare these approaches in my open-sourced TechYourChance application.

The flow of the benchmark is:

  1. Clear SharedPreferences.
  2. Perform multiple consecutive edits, each time putting an entry of the form keyN=X into SharedPreferences (where N is the iteration number and X is a constant string).
  3. Measure the duration of each edit operation.
  4. Execute steps 1-3 M times to obtain multiple measurements for averaging.
  5. Execute steps 1-4 for commit and apply methods, independently.
  6. Compute the results.
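The steps above can be sketched as a small timing harness. Everything here is a hypothetical reconstruction (the real benchmark lives in the TechYourChance app); a MutableMap stands in for SharedPreferences so the sketch stays runnable off-device:

```kotlin
import kotlin.system.measureNanoTime

// Hypothetical harness: averages the duration of each edit over `repetitions` runs.
fun benchmarkEdits(
    iterations: Int,
    repetitions: Int,
    clear: () -> Unit,
    edit: (Int) -> Unit,
): DoubleArray {
    val totalsNanos = DoubleArray(iterations)
    repeat(repetitions) {
        clear() // step 1: start from empty storage
        for (n in 0 until iterations) {
            totalsNanos[n] += measureNanoTime { edit(n) }.toDouble() // steps 2-3
        }
    }
    return DoubleArray(iterations) { totalsNanos[it] / repetitions } // step 4: average
}

// Usage with a MutableMap standing in for SharedPreferences:
val fakePrefs = mutableMapOf<String, String>()
val averages = benchmarkEdits(
    iterations = 100,
    repetitions = 10,
    clear = { fakePrefs.clear() },
    edit = { n -> fakePrefs["key$n"] = "constant-value" },
)
println("first avg: ${averages[0]} ns, last avg: ${averages[99]} ns")
```

In the real benchmark, the edit lambda would call commit() or apply() on a SharedPreferences.Editor instead of writing to a map.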

If you install the application and run this benchmark, you’ll get a results screen like this:

The results reported by the benchmark:

  1. Averaged durations of each incremental edit operation.
  2. The coefficients of a linear fit to the averaged durations.
  3. The maximal duration of an edit operation.

The chart shows the averaged durations as a function of the number of entries in SharedPreferences, and it becomes immediately clear that apply operation is indeed much faster than commit.

In addition to the chart, we also use the linear fit’s coefficients to estimate the constant overhead associated with the respective operation, and the additional overhead for each incremental edit. The last “max” data point shows “how bad it can get” at the extreme.
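The fit itself is ordinary least squares over (entry count, average duration) pairs, assuming the model y ≈ constant + increment · n. A minimal version:

```kotlin
// Ordinary least-squares fit: y ≈ intercept + slope * x.
// The intercept estimates the constant overhead, the slope the per-entry increment.
fun linearFit(xs: DoubleArray, ys: DoubleArray): Pair<Double, Double> {
    require(xs.size == ys.size && xs.size >= 2)
    val meanX = xs.average()
    val meanY = ys.average()
    var numerator = 0.0
    var denominator = 0.0
    for (i in xs.indices) {
        numerator += (xs[i] - meanX) * (ys[i] - meanY)
        denominator += (xs[i] - meanX) * (xs[i] - meanX)
    }
    val slope = numerator / denominator
    return (meanY - slope * meanX) to slope
}

val xs = doubleArrayOf(1.0, 2.0, 3.0, 4.0)
val ys = doubleArrayOf(5.0, 8.0, 11.0, 14.0) // exactly y = 2 + 3x
val fit = linearFit(xs, ys)
println("intercept=${fit.first} slope=${fit.second}")
```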

Benchmark Results

I ran the benchmark on several Samsung Galaxy devices that, in my estimation, correspond to three performance profiles of Android users: average, low and very low.

The results of a single benchmark’s invocation on each device:

| Device | Commit constant [ms] | Commit increment [ns] | Commit max [ms] | Apply constant [ms] | Apply increment [ns] | Apply max [ms] |
|---|---|---|---|---|---|---|
| Samsung Galaxy S22 [average perf] | 0.61 | 1200 | 2.63 | 0.07 | 111 | 0.27 |
| Samsung Galaxy S20FE [low perf] | 0.62 | 500 | 5.12 | 0.05 | 60 | 0.21 |
| Samsung Galaxy S7 [very low perf] | 7.62 | 10000 | 18.74 | 0.03 | 984 | 3.1 |

The results of the benchmark on each device varied considerably between invocations. In theory, increasing the number of iterations should help smooth out this variance, but, in practice, I didn’t observe this effect. I suspect that the variance is caused mostly by factors outside of the benchmark’s control, such as the activity of other applications and the Android OS itself.

Discussion

As expected, the performance of apply is much better than commit: it’s an order of magnitude faster. No surprise there.

More interestingly, the incremental overhead of each additional entry when using commit is relatively low. Much lower than I expected, actually. Even on the weakest device, writing 100 entries of approximately 105 characters each (key + value lengths) adds only ~1ms to the average write duration, which constitutes about 13% of the constant overhead. This means that, for SharedPreferences of reasonable sizes (say, less than 10,000 characters), write time is dominated by the constant overhead (probably related to file system access) and not by the amount of data you store in there.

We can also see that the max write durations can be considerably longer than the average values.

Conclusion

As for the main question that I wanted to answer with this benchmark, whether the performance of commit is acceptable for production use, my personal takeaway is that I can use commit in my code. The reasons are as follows:

  • Commit yields deterministic results.
  • I mostly use SharedPreferences to store configuration entries that don’t change often.
  • Most of the time, I perform just one commit operation.
  • At least some of the edits of SharedPreferences will be performed on background threads, in which case the performance difference doesn’t matter.
  • On modern devices, the average performance of commit is acceptable even at 120Hz on the UI thread.
  • If the usage of commit causes a skipped frame once in a while, users won’t notice it.
  • On very low end devices, commit can have a more pronounced effect, but these devices are relatively rare. Furthermore, users of these devices suffer from poor performance all the time, so skipping several frames once in a while won’t make much difference to their experience.

Sure enough, that’s just my personal conclusion, derived from my personal assumptions. If I used SharedPreferences much more often, or stored megabytes of data in there, I’d probably change my mind. Therefore, I invite you to run the benchmark on the devices that your users use and derive your own conclusions from it (please share your results in the comments if you do).

As anecdotal evidence, I’ve been using commit for years in my Settings Helper library (which is an Object-Oriented wrapper around SharedPreferences) and, so far, I haven’t had any issues with this approach. Now I finally have quantitative data to support my decision. Glad the results didn’t come in differently, because that would be embarrassing 🙂

As usual, thanks for reading and subscribe to my email list if you’d like to get notifications about new articles.

Bottom Bar Navigation in Android with Compose Navigation https://www.techyourchance.com/bottom-bar-navigation-android-compose-navigation/ https://www.techyourchance.com/bottom-bar-navigation-android-compose-navigation/#respond Fri, 03 May 2024 08:40:08 +0000 https://www.techyourchance.com/?p=12542 Complete guide to implementing Bottom Bar Navigation in an Android app using Jetpack Compose and Compose Navigation frameworks

The post Bottom Bar Navigation in Android with Compose Navigation appeared first on TechYourChance.

In this article I’ll show you two alternative ways to implement the bottom bar navigation pattern in Android with Jetpack Compose and Compose Navigation library.

Bottom Bar Navigation Pattern

This pattern helps you divide your application into multiple high-level screen groups and lets users switch between them quickly. That’s how it looks in the tutorial app that we’ll implement in the next section:

Standard Implementation of Bottom Bar Navigation

You can find a full working code example of this approach in my open-sourced TechYourChance application. Below I’ll highlight and explain the main parts.

First, define application’s navigation routes:

// Navigation routes for screens
sealed class Route(val route: String, val title: String) {
    data object HomeRoot : Route("home", "Home root screen")
    data object HomeChild : Route("home/{num}", "Home child screen")
    data object SettingsRoot : Route("settings", "Settings root screen")
    data object SettingsChild : Route("settings/{num}", "Settings child screen")
}

Then define bottom tabs:

// Bottom tabs (note how each tab has a root route)
sealed class BottomTab(val title: String, val icon: ImageVector?, val rootRoute: Route) {
    data object Home : BottomTab("Home", Icons.Rounded.Home, Route.HomeRoot)
    data object Settings : BottomTab("Settings", Icons.Rounded.Settings, Route.SettingsRoot)
}

Now we need to establish the navigation hierarchy for screens:

// Navigation hierarchy (i.e. mapping routes to screens)
@Composable
fun MainScreenContent(navController: NavHostController) {
    val navigateToNextScreen: (String) -> Unit =  { destinationRoute ->
        val currentScreenNum = navController.currentBackStackEntry?.arguments?.getString("num") ?: "0"
        val nextScreenNum = currentScreenNum.toInt() + 1
        navController.navigate(destinationRoute.replace("{num}", "$nextScreenNum"))
    }
    NavHost(navController, startDestination = Route.HomeRoot.route) {
        composable(Route.HomeRoot.route) {
            SimpleScreen(
                title = Route.HomeRoot.title,
                onNavigateToNextScreenClicked = { navigateToNextScreen(Route.HomeChild.route) }
            )
        }
        composable(Route.HomeChild.route) { backStackEntry ->
            val screenNum = backStackEntry.arguments?.getString("num") ?: "0"
            SimpleScreen(
                title = "${Route.HomeChild.title} $screenNum",
                onNavigateToNextScreenClicked = { navigateToNextScreen(Route.HomeChild.route) }
            )
        }
        composable(Route.SettingsRoot.route) {
            SimpleScreen(
                title = Route.SettingsRoot.title,
                onNavigateToNextScreenClicked = { navigateToNextScreen(Route.SettingsChild.route) }
            )
        }
        composable(Route.SettingsChild.route) { backStackEntry ->
            val screenNum = backStackEntry.arguments?.getString("num") ?: "0"
            SimpleScreen(
                title = "${Route.SettingsChild.title} $screenNum",
                onNavigateToNextScreenClicked = { navigateToNextScreen(Route.SettingsChild.route) }
            )
        }
    }
}

Lastly, this is the implementation of the bottom bar and the tabs navigation logic:

// Bottom bar UI element and the associated tabs navigation logic
@Composable
fun MyBottomAppBar(
    navController: NavController,
) {
    val currentRoute = navController.currentBackStackEntryFlow.map { backStackEntry ->
        backStackEntry.destination.route
    }.collectAsState(initial = Route.HomeRoot.route)

    val items = listOf(
        BottomTab.Home,
        BottomTab.Settings
    )

    var selectedItem by remember { mutableIntStateOf(0) }

    items.forEachIndexed { index, navigationItem ->
        if (navigationItem.rootRoute.route == currentRoute.value) {
            selectedItem = index
        }
    }

    NavigationBar {
        items.forEachIndexed { index, item ->
            NavigationBarItem(
                alwaysShowLabel = true,
                icon = { Icon(item.icon!!, contentDescription = item.title) },
                label = { Text(item.title) },
                selected = selectedItem == index,
                onClick = {
                    selectedItem = index
                    navController.navigate(item.rootRoute.route) {
                        navController.graph.startDestinationRoute?.let { route ->
                            popUpTo(route) {
                                saveState = true
                            }
                        }
                        launchSingleTop = true
                        restoreState = true
                    }
                }
            )
        }
    }
}

The “magic” of this approach happens inside onClick lambda of NavigationBarItem:

  • When a bottom tab is clicked, pop the entire backstack, up to the start destination of the graph.
  • Before popping the backstack, save its current state.
  • After the backstack is popped, navigate to the desired route and restore its state.

This is basically a clever hack: we store the backstacks associated with the root routes of all the tabs, and automatically restore the respective backstack when the user navigates to a tab.
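The per-tab backstack bookkeeping can be modeled with plain Kotlin. This is only a toy illustration of the concept (one ArrayDeque per tab), not how Compose Navigation implements saveState/restoreState internally:

```kotlin
// Toy model: one backstack per bottom tab, preserved across tab switches.
class TabBackStacks(tabs: List<String>) {
    private val stacks: Map<String, ArrayDeque<String>> =
        tabs.associateWith { root -> ArrayDeque(listOf(root)) } // each stack starts at its root route
    var currentTab: String = tabs.first()
        private set

    fun navigate(route: String) {
        stacks.getValue(currentTab).addLast(route) // push onto the current tab's stack
    }

    fun switchTab(tab: String) {
        currentTab = tab // the previous tab's stack is kept, like saveState/restoreState
    }

    fun currentRoute(): String = stacks.getValue(currentTab).last()
}

val nav = TabBackStacks(listOf("home", "settings"))
nav.navigate("home/1")      // Home tab: home -> home/1
nav.switchTab("settings")   // Settings tab shows its own root
println(nav.currentRoute()) // settings
nav.switchTab("home")       // back on Home, its stack was "restored"
println(nav.currentRoute()) // home/1
```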

Limitations of the Standard Implementation

The above implementation works great when you switch tabs from the bottom bar. Unfortunately, I noticed that it breaks when you navigate back using either the system’s back gesture or a custom button in the top bar.

For example, imagine that you have tabs A and B, and screens A1, A2 shown in tab A, and B1 shown in tab B. If you navigate: A1 -> A2 -> B1 and then invoke back navigation, you’d expect to switch to tab A and see screen A2, but, instead, you’ll see A1. You can reproduce this scenario in my open-source application.

I haven’t found a simple solution for this problem. As far as I can tell, the fundamental root cause here is that you can’t specify restoreState = true for either gesture navigation or navController.popBackStack() calls. Therefore, after the system navigates back to the root route of the previous tab, the saved backstack isn’t restored.

Bottom Bar Navigation with Nested NavHost’s and Multiple Backstacks

The solution that I ended up with involves using multiple NavHost’s.

You can find a working implementation of this approach in the tutorial application for my Android Architecture Masterclass course here. Please note that this link will take you to one of the first commits in the course, before we refactor the app to a clean state, so the code is a bit dirty.

The first core part of this implementation is nesting of NavHost’s:

@Composable
private fun MainScreenContent(
    padding: PaddingValues,
    parentNavController: NavHostController,
    stackoverflowApi: StackoverflowApi,
    favoriteQuestionDao: FavoriteQuestionDao,
    currentNavController: MutableState<NavHostController>,
) {
    Surface(
        modifier = Modifier
            .padding(padding)
            .padding(horizontal = 12.dp),
    ) {
        NavHost(
            modifier = Modifier.fillMaxSize(),
            navController = parentNavController,
            enterTransition = { fadeIn(animationSpec = tween(200)) },
            exitTransition = { fadeOut(animationSpec = tween(200)) },
            startDestination = Route.MainTab.routeName,
        ) {
            composable(route = Route.MainTab.routeName) {
                val nestedNavController = rememberNavController()
                currentNavController.value = nestedNavController
                NavHost(navController = nestedNavController, startDestination = Route.QuestionsListScreen.routeName) {
                    composable(route = Route.QuestionsListScreen.routeName) {
                        QuestionsListScreen(
                            stackoverflowApi = stackoverflowApi,
                            onQuestionClicked = { clickedQuestionId, clickedQuestionTitle ->
                                nestedNavController.navigate(
                                    Route.QuestionDetailsScreen.routeName
                                        .replace("{questionId}", clickedQuestionId)
                                        .replace("{questionTitle}", clickedQuestionTitle)
                                )
                            },
                        )
                    }
                    composable(route = Route.QuestionDetailsScreen.routeName) { backStackEntry ->
                        QuestionDetailsScreen(
                            questionId = backStackEntry.arguments?.getString("questionId")!!,
                            stackoverflowApi = stackoverflowApi,
                            favoriteQuestionDao = favoriteQuestionDao,
                            onError = {
                                nestedNavController.popBackStack()
                            }
                        )
                    }
                }
            }

            composable(route = Route.FavoritesTab.routeName) {
                val nestedNavController = rememberNavController()
                currentNavController.value = nestedNavController
                NavHost(navController = nestedNavController, startDestination = Route.FavoriteQuestionsScreen.routeName) {
                    composable(route = Route.FavoriteQuestionsScreen.routeName) {
                        FavoriteQuestionsScreen(
                            favoriteQuestionDao = favoriteQuestionDao,
                            onQuestionClicked = { favoriteQuestionId, favoriteQuestionTitle ->
                                nestedNavController.navigate(
                                    Route.QuestionDetailsScreen.routeName
                                        .replace("{questionId}", favoriteQuestionId)
                                        .replace("{questionTitle}", favoriteQuestionTitle)
                                )
                            }
                        )
                    }
                    composable(route = Route.QuestionDetailsScreen.routeName) { backStackEntry ->
                        QuestionDetailsScreen(
                            questionId = backStackEntry.arguments?.getString("questionId")!!,
                            stackoverflowApi = stackoverflowApi,
                            favoriteQuestionDao = favoriteQuestionDao,
                            onError = {
                                nestedNavController.popBackStack()
                            }
                        )
                    }
                }
            }
        }
    }
}

Since we have nested NavHost’s, we have nested NavHostController’s as well. Therefore, the code makes an explicit distinction between “parent” and “current” (i.e. current nested) controllers:

val parentNavController = rememberNavController()

val currentNavController = remember {
    mutableStateOf(parentNavController)
}

Parent NavHostController, for example, is used when switching tabs:

BottomAppBar(modifier = Modifier) {
    MyBottomTabsBar(
        bottomTabs = bottomTabsToRootRoutes.keys.toList(),
        currentBottomTab = currentBottomTab,
        onTabClicked = { bottomTab ->
            parentNavController.navigate(bottomTabsToRootRoutes[bottomTab]!!.routeName) {
                parentNavController.graph.startDestinationRoute?.let { startRoute ->
                    popUpTo(startRoute) {
                        saveState = true
                    }
                }
                launchSingleTop = true
                restoreState = true
            }
        }
    )
}

Current nested NavHostController, for example, is used to navigate within the same tab and in the implementation of back navigation:

onBackClicked = {
    if (!currentNavController.value.popBackStack()) {
        parentNavController.popBackStack()
    }
}

This implementation with nested NavHost’s and NavHostController’s is more versatile than the previously described “standard” hack. I believe that you can implement any navigation pattern with it. Unfortunately, it’s also more complex. That said, you need to deal with most of this complexity only once, when you set up the navigation logic initially. Adding new tabs and screens becomes relatively straightforward afterwards.

In my course about Android Architecture I demonstrate how you can encapsulate the navigation logic in a standalone ScreensNavigation class. This allows you to “hide” most of the complexity behind a single abstraction and expose a simple navigation interface, like this:

// Switch tab
screensNavigator.toTab(bottomTab)
// Switch screen and pass arguments
screensNavigator.toRoute(Route.QuestionDetailsScreen(clickedQuestionId, clickedQuestionTitle))

If you need bottom bar navigation in your production Android app, I highly recommend going through this material.

Conclusion

Implementing a proper bottom bar navigation pattern with Jetpack Compose and the Compose Navigation framework is challenging. Hopefully, after reading this article and reviewing my open-sourced examples, you’ll have a much easier time integrating it into your own applications.

As always, thank you for reading, and please leave your comments and questions below.

Robolectric Tests in Android: Benefits and Drawbacks https://www.techyourchance.com/robolectric-android-benefits-and-drawbacks/ https://www.techyourchance.com/robolectric-android-benefits-and-drawbacks/#comments Mon, 25 Mar 2024 11:23:47 +0000 https://www.techyourchance.com/?p=12521 This article explains the role of Robolectric framework for Android and discusses its benefits and drawbacks.

The post Robolectric Tests in Android: Benefits and Drawbacks appeared first on TechYourChance.

In this article I’ll explain what the Robolectric framework for Android is, why automated tests that use this framework aren’t unit tests, and highlight the issues caused by over-reliance on Robolectric in your test suite.

Unit Testing

Unit testing is a practice of writing code that verifies the correctness of another part of code. Unit tests fall within a broader category of automated tests and are characterized by the following properties:

  • Unit tests exercise the smallest logically cohesive parts of the code in the codebase, known as units.
  • Unit tests don’t depend on external resources.
  • Unit tests can be executed locally, in isolation.
  • Unit tests are fast.

Unfortunately, there is no canonical definition of what constitutes a unit. Therefore, the scope of the code that a unit test can exercise is up for debate (and, boy, there are debates about that). However, the isolated nature and the speed are generally accepted and uncontroversial characteristics of unit tests.

Android SDK Stubs

To understand what Robolectric is, we shall start with the Android SDK. That’s a software bundle that we install on our computers to develop Android apps.

If you browse the directory where you installed a specific Android SDK, you’ll notice a file named android-stubs-src.jar. Unzip this file, and you’ll find a hierarchy of directories inside it, containing .java files that correspond to various Android components.

For example, that’s the content of android/app/Activity.java from this unzipped jar:

package android.app;

@SuppressWarnings({"unchecked", "deprecation", "all"})
public class Activity extends android.view.ContextThemeWrapper implements android.view.LayoutInflater.Factory2, android.view.Window.Callback, android.view.KeyEvent.Callback, android.view.View.OnCreateContextMenuListener, android.content.ComponentCallbacks2 {

public Activity() { throw new RuntimeException("Stub!"); }

public android.content.Intent getIntent() { throw new RuntimeException("Stub!"); }

public void setIntent(android.content.Intent newIntent) { throw new RuntimeException("Stub!"); }

... more code ...

}

What we see here is a class that has all the methods of Android Activity object, but, for some reason, its methods have weird implementations and just throw RuntimeException. What’s going on here?

To answer this question, we shall recall that:

  • To verify that your Android code uses the Android framework correctly (i.e. perform static type checks), the compiler needs to know about Android framework’s classes.
  • The compiler is only interested in the high-level public API because that’s the only part that your code uses directly.
  • The full Android framework is very complex and heavyweight, and depends on external components that it expects to find on real devices (e.g. SQLite database).

In summary, the compiler needs to “know” the public APIs of Android classes, but these classes contain lots of implementing code and depend on various components not present in your local environment.

A clever hack was employed to optimize the performance, limit the amount of code that the compiler and other tools need to deal with and remove external dependencies: create a representation of the Android SDK that has the same public APIs, but strip all implementing code. This eliminates all the aforementioned issues and enables the tools to do their part. It is this alternative representation that’s packaged into android-stubs-src.jar.

[Technically speaking, the tools use android.jar file from the same directory, but the distinction isn’t important for our discussion here]

Testing with Calls to Android APIs

Even though our source code is compiled against empty stubs of the Android SDK, when you deploy your app to a real device, it’ll use the full Android SDK installed on it. But what happens if you write a test for a method that calls to an Android API and execute this test on your machine?

When you run tests locally, the stubs that we saw earlier will be used to represent the Android SDK. Since all the methods in these stubs throw exceptions, the test will fail the moment it encounters an Android API call.

There are several ways to work around this issue:

  • Remove calls to Android APIs from the tested code.
  • Replace Android APIs with your custom test-doubles (i.e. “mock them”).
  • Use Robolectric framework.

The last option is the topic of this article, so let’s understand what it does.

Robolectric

At a high level, Robolectric framework is a test-double of type fake of the entire Android SDK. It re-implements the Android APIs to simulate Android’s feature set even when executed on your own machine. When you use Robolectric, all the calls to the Android APIs, which would normally reach the stubs, are redirected to the respective Robolectric’s implementations.

Robolectric is a simple way to work around the inability to use the Android APIs in local tests. It is also an amazing tool for testing code that depends on location, filesystem, SQLite, and many other “external” dependencies. For example, I leveraged Robolectric to test my SettingsHelper library, which is a wrapper around SharedPreferences. Furthermore, Robolectric allows you to run your tests against the test-doubles of multiple versions of the Android SDK, so you can even capture “fragmentation” bugs with it.

All in all, Robolectric is a very powerful framework that opens new amazing opportunities for local automated testing. The only caveat is that tests that use Robolectric aren’t unit tests.

Integration Testing with Robolectric

Here is an interesting question: when you test code that uses Android APIs, what do you test fundamentally?

Well, since calls to Android APIs are the points where your application integrates with Android, tests that exercise this code are integration tests, by definition.

Another way to arrive at the same conclusion is to realize that, unlike other third-party libraries and frameworks that your app might use, Android SDK doesn’t become part of the distributable artifacts during a build process. It is an external dependency from the point of view of your application.

Therefore, even if you replace the stub implementation of the Android SDK with Robolectric, fundamentally, these tests will still be integration tests, not unit tests.

Issues with Robolectric Tests

In my experience, there are two main issues with Robolectric tests: speed of execution and reliability of the results.

Let’s start with the smaller of the problems: reliability of the results.

Since Robolectric is a complex test-double of the entire Android SDK, there can be small deviations in its behavior compared to the original. These situations are rare, but when they happen, a passing test gives you a false sense of confidence and a bug slips into production. Furthermore, after the bug is discovered, a green test covering the feature makes the issue harder to pinpoint because, naturally, you assume that you can trust your tests.

The bigger problem with Robolectric tests is their speed.

It’s only natural for integration tests, which are larger in scope than unit tests, to take more time. Robolectric tests are actually relatively quick in the context of integration testing in general, but they are much slower than unit tests. The exact numbers vary between setups, of course, but just to give you some perspective, consider the following example.

In this configuration, tutorialTest executes in less than 1 millisecond on my machine:

@FixMethodOrder(value = MethodSorters.NAME_ASCENDING)
class TutorialTest {
    @Test
    fun aWarmupTest() {
        1.shouldBe(1)
    }

    @Test
    fun tutorialTest() {
        1.shouldBe(1)
    }
}

If I run the same tests with Robolectric, tutorialTest takes 6 milliseconds to complete:

@FixMethodOrder(value = MethodSorters.NAME_ASCENDING)
@RunWith(RobolectricTestRunner::class)
class TutorialTest {
    @Test
    fun aWarmupTest() {
        1.shouldBe(1)
    }

    @Test
    fun tutorialTest() {
        1.shouldBe(1)
    }
}

Please note how the execution time skyrocketed, even though the test case doesn’t actually call any Android APIs. If it did, the overhead would likely be even higher.

In the context of a small test suite, an average overhead of tens of milliseconds per test case doesn’t sound like a big issue. However, once your project reaches hundreds and then thousands of test cases, milliseconds add up to seconds, and then to minutes. At some point, this overhead becomes a considerable drag on developers’ productivity.

I was involved in one project where developers did a great job with test coverage, but most of the tests used Robolectric. Even though the project was of moderate size (less than 40k lines of code), it took more than 4 minutes to run the tests locally and even longer on the CI machine. For comparison, executing a comparable number of unit tests would probably take less than 20 seconds. This overhead caused by Robolectric tests became a major productivity issue for me during that engagement, and I’m sure that it similarly affected the other team members as well.

Conclusion

Robolectric is a powerful integration testing framework for Android apps and libraries that opens new and exciting possibilities for automated testing.

Unfortunately, Robolectric is often seen as a unit testing framework and, consequently, gets overused. This leads to a major increase in the execution time of larger test suites, which, in turn, translates into productivity loss.

Therefore, I recommend avoiding Robolectric as much as possible and using it only when the benefits clearly justify its cost and there is no simple alternative. In my experience, you can eliminate most of the use cases for this framework by writing decoupled, testable code, and then using simpler test-doubles to isolate the tests from individual Android API calls.

As usual, thanks for reading and please leave your comments and questions below.

The post Robolectric Tests in Android: Benefits and Drawbacks appeared first on TechYourChance.

How to Refactor an Android Application https://www.techyourchance.com/how-to-refactor-android-application/ https://www.techyourchance.com/how-to-refactor-android-application/#comments Sat, 16 Mar 2024 14:38:07 +0000 https://www.techyourchance.com/?p=12458 A comprehensive framework and checklist for executing a refactoring of an Android application.

Refactoring an Android application represents a substantial challenge, often requiring a commitment ranging from several weeks to many months. Therefore, a solid plan and focused execution are indispensable for the success of any refactoring project.

I’ve been involved in several refactorings of Android applications over the years and learned a great deal about this intricate subject. This article summarizes my experience and insights.

Refactoring

Refactoring is the process of restructuring existing code without changing its external behavior. Simply put, you refactor when the code works, but is no longer optimal in some ways and should be improved.

You can refactor at any level of abstraction, from a single line of code to the entire application. The larger the scope, the more challenging and time-consuming the refactoring task becomes.

In this article, I’ll use the term “refactoring” to refer to an optimization of a relatively large part of an existing Android codebase. For the sake of being quantitative, let’s say that this post applies to refactoring projects that take more than one man-week of effort. For smaller refactoring tasks, you probably don’t need a framework like the one I’ll lay down below.

Refactoring vs Rewrite

Much like refactoring, rewrite of your Android application lets you enhance its source code while preserving the external functionality. However, despite these apparent similarities, refactoring and rewrite are fundamentally distinct processes, each with its own set of goals and methodologies.

The factors affecting the decision whether to refactor or rewrite your Android project are outside the scope of this post. This topic warrants an article on its own. Here, I just want to warn you about “accidental rewrites”. See, it’s surprisingly simple, and even tempting, to set out to refactor your Android application, and end up rewriting it almost from scratch. This happens more often than you’d think.

When does refactoring become a rewrite? Unfortunately, as far as I know, there is no clear threshold that indicates you’re in rewrite territory. What’s certain, though, is that the indication is not as simple as whether you start by creating a new project in an empty directory. You can do that and then copy large chunks of the existing project over to refactor, or you can work in the existing codebase and end up rewriting most of the code.

So, I’ll take the liberty to define a “rewrite threshold” myself:

  • If your refactoring activities change more than 50% of the code in the project, then it’s a rewrite.
  • If the code becomes non-releasable during the refactoring project and disassociates from the legacy code, then it’s a rewrite.

An immediate corollary from the above definition is that refactored code should be integrated into the main code branch continuously.

Now, as we’ll discuss below, there are some refactoring tasks that can take considerable time. For example, changing from one database technology to another is a big step that can’t easily be decomposed into smaller ones. So, it’s OK for this step to “live” on a side branch for a while. However, if you find yourself creating a branch called “refactoring”, or keeping many longer-lived side branches “waiting to be merged”, then you’re at risk of losing compatibility with the existing code and unintentionally shifting to rewrite. Watch out for these warning signs.

Identify Reasons for Refactoring

You must have very good reasons to launch a large refactoring project. Otherwise, you’ll risk wasting a lot of time for very little gain, if any.

In this context, “modernizing the codebase”, “migrating to a new framework”, “adopting the latest architecture”, etc. ARE NOT valid reasons. These are refactoring goals. Reasons are pain points that you experience right now and want to fix. Valid refactoring reasons might include: slowdown in release cadence, degradation in the overall quality of the application, repeated bugs related to specific features in the app, developers are afraid to touch parts of the codebase, and more.

There is also developers’ desire to use new technologies. There is nothing bad or shameful about this, but this aspect is rarely discussed explicitly as a reason for refactoring. I believe that’s because most developers realize that there is little business value in migrating to newer technologies, so they can’t make a good case for this position. Unfortunately, not discussing this aspect explicitly doesn’t make it go away. It just becomes an implicit bias affecting many other decisions. Therefore, I recommend addressing it head-on and discussing how much effort can reasonably be allocated to new technologies, independently of any specific pain points.

Once you have a list of reasons for refactoring written down, you’re ready to set refactoring goals.

Set Refactoring Goals

A refactoring project must have a set of end goals. These goals describe the desired state of the codebase after the refactoring completes, and they should address the pain points identified earlier.

If you hadn’t gone through the trouble of identifying and writing down the reasons for refactoring, you’d be at risk of setting unrelated or unimportant goals. But since you compiled your list of reasons in the previous step, now you just need to plan how to address each pain point.

After the actual refactoring activities start, you might be tempted to broaden the scope and “clean up that other part since I’m already here”. Fight off these temptations and only work on tasks that directly correspond to the defined project goals. Otherwise, the refactoring project can become much longer and harder than anticipated.

Naming

In my opinion, good naming is the most important feature of source code. Poorly written code with good names is much simpler to read and work with than clean code with unfortunate and/or inconsistent naming.

Therefore, I suggest adding “improve naming in the codebase” to the list of your refactoring goals. I make this universal recommendation because, whatever reasons motivated your refactoring project, better naming will likely address at least part of them.

To be clear, by “better naming” I primarily mean using proper business domain terminology. Sure, standardizing your “managers”, “helpers”, “use cases”, “repositories”, etc. can be great, as well as getting rid of kitchen-sink “utils”, but it’s nowhere as important as making sure that the business terms are used correctly and consistently across the codebase.

To align all team members on naming, I recommend starting by compiling a glossary of the business domain terms used in the existing codebase. Discuss these terms with the team and see which of them need to change. Then write down the new glossary that you want to adopt, so that team members can use it as a reference when carrying out refactoring tasks.
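As a toy illustration of a glossary-driven rename (the insurance domain and all names here are hypothetical, not from any real project), the point is to replace generic technical vocabulary with the terms your domain experts actually use:

```kotlin
// Before the rename, this logic might have lived in something like
// "DataManager.process(items)". After adopting a (hypothetical) insurance
// glossary, the same code speaks the business language: claims are
// submitted and counted, and the class name says what it stores.
data class Claim(val id: String)

class ClaimsRepository {
    private val claims = mutableListOf<Claim>()

    fun submitClaim(claim: Claim) {
        claims.add(claim)
    }

    fun openClaimsCount(): Int = claims.size
}
```

Notice that no behavior changed; only the names did, but the code now reads like the glossary.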

Democracy During Refactoring Project

New developers don’t have the experience and the insight of a seasoned tech lead, so software engineering shouldn’t be a democracy.

That said, since refactoring projects aim to resolve the pain points of all team members, I do believe that most of the project’s goals should be up to vote. Whether it’s the choice of a programming language, an architecture, a framework – let the team decide. Tech leads should have a veto right in case they strongly disagree, though.

Once you are done setting the refactoring goals, there should be no more democracy. Appoint a technical lead for the refactoring project. This can be the tech lead of the entire application, or one of the more senior developers. The important part is that there should be a single team member who has full authority over the refactoring activities. Sure enough, you want someone in that position who is open to feedback from others, but, at the end of the day, they should have the final say on all refactoring-related technical topics.

Not having a “refactoring tech lead” can lead to team members spending much time on arguments over little details. Furthermore, on bigger projects involving multiple teams, “refactoring tech lead” is crucial to ensure that all developers follow the same practices and guidelines.

Allocate Ongoing Effort to Refactoring Instead of Setting Deadlines

Almost all the refactoring projects that I was involved in had strict deadlines. Unfortunately, refactoring is a highly unpredictable activity. Therefore, deadlines resulted in either incomplete refactoring and frustrated developers, or missed deadlines and tensions with the management.

I think a better model for a refactoring project is ongoing effort allocation. For example, the team can decide that they’ll spend 40% of their time, approximately two days a week, working on dedicated refactoring activities. This will allow them to be less stressed about deadlines and do a better job. They’ll also be working on other ongoing tasks, so the application will evolve and external stakeholders will see some progress, instead of a complete halt.

There is a caveat here, though: if you need to accomplish a big, monolithic refactoring task, splitting the work across multiple weeks is a bad idea. Therefore, if, for example, you want to set up dependency injection in the codebase, at least one developer will need to work on this full-time until it’s done. That developer shouldn’t have a deadline either, of course, because they can’t estimate how long this task will take (unless they’ve done it several times in the past).

Less is More in Refactoring

The most important technical tip in the context of a big refactoring is to make small changes. Sounds simple, but it’s very complicated in practice.

You probably set out to refactor your codebase because it’s somewhat messy: excessive coupling, classes with hundreds or even thousands of lines of code, circular dependencies, “clever” abstractions and other issues. In such a codebase, you can start refactoring a small piece of code and then discover that it’s coupled to many other parts of the app that require refactoring as well. So, you proceed to refactor those parts and, after a while, you’re buried under a pile of changes, the code is broken, and you forgot where you started. This happens all the time.

So, before you make the next refactoring step, spend a bit of time planning it. Draw a mental boundary around the area you’re going to refactor and commit to not going outside of it. Identify inter-dependencies with other parts of the code and decide what you’ll do about them. Draw diagrams that reflect the current and the desired designs of the affected code. Review your plan with the refactoring tech lead.

If you start a refactoring session and it gets messy and long, or the app breaks and you aren’t sure why, don’t sweat it. Just drop all your current changes using Git and start over. In most cases, this will be more efficient than trying to salvage a derailed refactoring step.

Isolated vs Wide-Scope Refactoring

Some refactoring tasks on your project will be relatively small, affecting isolated parts of the codebase. For example, even though migrating the entire app from one MVx architectural pattern to another can be very time-consuming, you can usually do that screen-by-screen. This is a natural level of isolation, so it’s relatively straightforward to break this big refactoring goal of “migrate to MVx” into smaller steps.

Unfortunately, there are wide-scope refactoring tasks that can’t be decomposed into smaller steps so easily. These usually relate to cross-cutting concerns in the application and can affect larger parts of the source code. For example, setting up a Dependency Injection, replacing one database implementation with another, cleanup of navigation logic, etc.

Wide-scope refactoring tasks are complicated and require long periods of continuous, concentrated effort. Therefore, I recommend assigning them to the most experienced developers. I also found great value in doing wide-scope refactorings as pair-programming sessions because it’s very useful to have someone to consult in real time and watch over your shoulder for mistakes. Though, this means drawing effort from two team members, of course.

A major challenge with wide-scope refactorings is merging the changes with the rest of the team. Whether other team members refactor the code or add new features, wide-scope refactorings tend to introduce conflicting changes. That’s another reason to dedicate concentrated effort to these tasks and get them done as quickly as possible – to reduce the number and the severity of merge conflicts.

Dedicated vs Enabling Refactoring

When refactoring your Android application alongside ongoing product development, you’ll have two main types of refactoring tasks: dedicated and enabling refactorings.

Dedicated refactoring is when you refactor a piece of code as a standalone task, unrelated to the ongoing product development.

In contrast, enabling refactoring is when you perform preliminary refactoring as part of a new feature development. For example, when tasked with adding some elements on a screen, you might start by refactoring that screen (if that’s part of the plan), and then add the required feature into the cleaned up code.

Enabling refactoring is a powerful technique because it lets you integrate refactorings into ongoing product development. This means that, instead of jumping into that part of the code twice, you ramp up once and then perform both tasks. That said, you shouldn’t combine the refactoring and the new feature into a single step. Instead, perform the refactoring first, test the app, merge the code, and only then add the new feature. This separation will spare you a lot of time and energy.

Regression Testing

Probably the most important part of any refactoring project is regression testing, i.e. verifying that your changes didn’t degrade the existing functionality. In the ideal case, you’ll have a high-quality suite of automated tests, alongside a dedicated QA team to handle the testing. In the more typical scenario that I’ve observed, one or both of these components are missing.

Whatever your situation is, I recommend doing regression testing after each refactoring step. This is tedious and time-consuming, but the alternative of introducing a bug can be much worse. For reasons that I can’t explain, refactoring bugs tend to be very challenging to find and fix, especially if you encounter them later in the refactoring cycle.

Communication with Non-Technical Stakeholders

Long refactoring projects can add friction between the R&D and non-technical stakeholders, like product and project managers. Even if they support the refactoring initiative in principle, after a month of invisible activity that consumes a significant part of the R&D time, non-technical stakeholders can become impatient.

The best way to prevent this friction is through communication. Make sure the non-technical stakeholders understand the reasons for the refactoring project and give them the list of the project’s goals that you compiled. Update them each time a goal is completed and removed from that list. Make them “feel” the progress and keep them in the loop as much as possible.

Don’t Aim for Perfection

Last tip is to remain practical and not aim for perfection.

Sure, you started refactoring project to tidy up the codebase, so you don’t want to make any compromises. Clean code and the latest best practices exclusively, please.

Unfortunately, the real world is complex and messy. In any non-trivial application, you’ll find code that is too challenging or risky to refactor. With a few notable exceptions, you can probably leave this code in its current form. If other features depend on it and you want to clean up the interfaces between them, you can use the Facade or Adapter design patterns to wrap the problematic code and expose a more convenient API.
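A minimal sketch of the Facade approach could look like this. LegacyBillingEngine and all other names are made up for illustration; the point is that callers depend on the clean wrapper, not on the messy legacy API:

```kotlin
// Imagine this legacy class is too risky to refactor internally:
// a clunky init/flags API that the rest of the app shouldn't see.
class LegacyBillingEngine {
    fun init(cfg: Map<String, String>): Int = if (cfg.containsKey("apiKey")) 0 else -1
    fun doCharge(amountCents: Long, flags: Int): Boolean = amountCents > 0
}

// The Facade wraps the problematic code and exposes a convenient API.
// New code depends only on BillingFacade, never on the engine directly.
class BillingFacade(private val engine: LegacyBillingEngine) {
    fun charge(amountCents: Long): Boolean {
        val status = engine.init(mapOf("apiKey" to "placeholder"))
        return status == 0 && engine.doCharge(amountCents, flags = 0)
    }
}
```

Later, if you ever do refactor or replace the legacy engine, only the Facade’s internals need to change.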

The same principle applies at lower levels of abstraction, like a single class. You might want to refactor every class that falls within the scope of the refactoring project, but this wouldn’t be optimal. If the implementation of the class is readable and isn’t excessively coupled to other classes, just refactor its external API. It’s not worth spending two hours refactoring something that’s already encapsulated and decoupled, and can be understood and maintained without much trouble.

Conclusion

Huh, I suddenly realized that there is very little discussion of Android in this article. It ended up containing a checklist for a general refactoring project. Well, in retrospect, this makes total sense because all Android applications are unique, so it’s impossible to give universal advice related to specific tools or patterns.

In conclusion, I want to share one curious observation. Developers are usually very enthusiastic and eager to kick-off new refactoring projects. However, towards the end of these projects, the same developers will often find themselves exhausted, demotivated and willing to just wrap it up in any shape. So, make sure your reasons for refactoring justify this likely outcome. [If you had a different experience, please share it in the comments section below.]

As always, thank you for reading and don’t forget to subscribe to my newsletter if you want to receive notifications about new posts.

The post How to Refactor an Android Application appeared first on TechYourChance.

Continuous Integration for Android Apps with Bitrise: Full Guide https://www.techyourchance.com/continuous-integration-android-bitrise/ https://www.techyourchance.com/continuous-integration-android-bitrise/#respond Fri, 08 Mar 2024 13:16:38 +0000 https://www.techyourchance.com/?p=12355 A comprehensive guide to setting up continuous integration for your Android app using the Bitrise platform.

I used the Bitrise platform to set up a continuous integration flow for my Android app’s build and release processes. This means that all I have to do now is push new code and tags into my app’s repository, and Bitrise will test the app, build it and publish new releases automatically. This will spare me a lot of manual work going forward and prevent silly mistakes.

In this article, I’ll explain how you can do that yourself in your own Android project.

Sign Up for Bitrise Continuous Integration

The first step, just like with any other service, is to create an account on Bitrise. So, proceed to bitrise.io and sign up using one of the supported methods.

After you sign up and verify your email address, go ahead and select Bitrise CI. At this point, you’ll be enrolled in a free 30-day trial period. If you’re an individual, like me, and just need to build a relatively small project, you can switch to the free “Hobby” plan afterwards.

Generate New Access Token to Grant Bitrise Access to Your Git Repository

In the next step, I’ll be adding a new app project on Bitrise, so I’ll need to either grant Bitrise access to my Git hosting account (through simple integration), or manually specify the repository’s URL and provide a targeted access token. I prefer the latter option. Therefore, as a preliminary step, I had to generate an access token for my repo.

Note: this step changes according to which Git hosting provider you use.

I host my application on GitHub, so, to allow Bitrise to access the repo’s contents, I generated a new Fine-Grained Personal Access Token (PAT). “Fine-Grained” means that this PAT can be scoped to just one specific repository, which is more secure than opening up the entire account.

At the minimum, the PAT should have “Contents Read-Only” permission to clone the source code. I gave it “Contents Read-Write” permission because I want Bitrise to upload new releases to GitHub after successful builds (explained later).

Add New Android Application to Your Bitrise Workspace

Click on “Add New App” button and proceed to fill in the required information.

When selecting the repository, you can either authorize Bitrise to access your Git hosting directly, or just copy-paste the repository’s URL. As I explained earlier, I prefer the latter approach. If you’re like me, then, during the authorization step, paste in the GitHub PAT that you generated previously (or an analogous access token from another Git hosting provider).

Bitrise can inspect your app’s repo and infer its build configuration automatically. If this process doesn’t yield correct results, you’ll need to help it a bit with manual configuration. For example, this configuration corresponds to the default Android project layout that uses Groovy in Gradle scripts:

After you provide all the required information, click on a button to add the application to your workspace.

Inspect and Edit the Workflows

A workflow on Bitrise is a sequence of individual steps executed as a standalone flow. When you add a new Android application to Bitrise, it’ll create two default workflows for you: build_apk and run_tests.

To edit workflows, select your application in the dashboard and then click on the Workflows button. Now you can select a specific workflow to inspect and edit its configuration.

Let’s review the layout of the Workflow Editor:

The main parts of this interface are:

  1. Workflow selector: shows the name of the current workflow and lets you switch between different workflows.
  2. Workflow steps editor: this element shows the workflow’s steps and lets you change them.
  3. Details pane: when no step is selected, it will show the information about the workflow; once you select a step, this pane will show that step’s information and let you change it.
  4. Run Workflow button: runs the selected workflow.
  5. Workflow editor menu: use the entries in this menu to configure the environment for all workflows (more on this later).

Tip: when you edit your workflows, don’t forget to click the grayed-out Save Changes button at the top 😉

Declare Environment Variables and Secrets

If your app’s build files read any environment variables, then you can easily declare them in the Env Vars menu entry of the Workflow Editor. Once you do that, these variables will be added to all workflows.

There is also Secrets menu entry, which is somewhat similar to Env Vars, but can be used for injecting sensitive data into workflows that you want to store securely. For example, you should probably declare passwords and authentication tokens to third-party services that your app might be using as secrets.

Adjust App’s Gradle Configuration for Bitrise’s Environment (Optional)

Note: this step is optional and, depending on your build files configuration, you might not even need it.

In my case, the build scripts expected to find a local file containing passwords and authentication tokens for various services. Since this file isn’t committed to Git (security precaution), Bitrise wouldn’t get it by just cloning the app’s repository. Therefore, my app wouldn’t build on Bitrise out-of-the-box.

There were two straightforward solutions to this problem:

  • Upload the file to Bitrise and change the build files to accommodate its alternative location.
  • Declare a set of Secrets corresponding to the entries in that file and use them as environment variables in the build files.

I chose the latter approach. Therefore, I had to edit my app’s build files to first check whether the respective env variables exist, and, if not, fall back to looking for the local file.
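A sketch of that fallback logic in Gradle Kotlin DSL might look like the following. The file name (local-secrets.properties) and the helper name are assumptions for illustration, not my actual setup:

```kotlin
// build.gradle.kts fragment (illustrative). Looks up a value in the CI
// environment first, then falls back to a git-ignored local properties file.
fun secret(name: String): String {
    // On Bitrise, Secrets are exposed to the build as environment variables.
    System.getenv(name)?.let { return it }
    val props = java.util.Properties()
    rootProject.file("local-secrets.properties").inputStream().use { props.load(it) }
    return props.getProperty(name)
        ?: error("Secret '$name' not found in environment or local-secrets.properties")
}

// Hypothetical usage inside a signing or API-key configuration:
// val mapsApiKey = secret("MAPS_API_KEY")
```

With this in place, the same build files work both locally (reading the file) and on Bitrise (reading the injected Secrets).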

Switch Your Workflows to Java 17

Starting with version 8.0, the Android Gradle Plugin requires Java 17 to build Android apps. Bitrise uses an earlier version by default, so I changed it by adding the “Set Java Version” step into my workflow and selecting 17 as the “Java version” input variable:

Adding a new step into a workflow is very simple: just click that small + button in the place where you need that step and start typing the step’s name. You can see in the above screenshot that I added the new step right after the default Git Clone step.
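In bitrise.yml terms, the resulting workflow fragment looks roughly like this. The step IDs, versions and the input name are my assumptions based on the UI labels, so verify them against your generated configuration:

```yaml
# Fragment of bitrise.yml (illustrative; check step IDs in your own config)
workflows:
  build_apk:
    steps:
    - git-clone@8: {}
    - set-java-version@1:
        inputs:
        - set_java_version: "17"
```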

Upload Android Keystore File to Bitrise

To sign the artifacts of the build process, for example APKs, Bitrise will need a valid keystore file. So, you’ll need to provide it.

Go to your app’s main page on Bitrise and then click on the gear button to open its settings. Find “Code signing” menu entry, select it, and then go to “Android” tab. Upload your keystore file and enter its password, key alias and key password.

That’s it, now Bitrise can sign build artifacts on your behalf!

Select the Required Build Artifact: APK or AAB

By default, “Android Build” step is configured to produce APK files. If you want to build AABs, change this step’s configuration accordingly.

Switch to Apksigner Tool (Optional)

“Android Sign” step, which is responsible for signing the artifacts, uses jarsigner tool by default. This is the correct tool to use if you build AABs. However, in my special case, I build APKs, so I modified “Android Sign” step to use apksigner instead, which is a newer tool for signing APKs.

Add Webhook to Trigger Workflows Automatically (Optional)

In most cases, you’ll want Bitrise to run workflows in response to external events: pushes into specific branches, new pull requests, new tags, etc. This means that Bitrise should be notified when these events occur in the app’s repository. That’s where webhooks enter the picture.

Webhooks are basically URLs that Git hosting services (GitHub, GitLab, etc.) can use to send information about various events that occur in the app’s repository. For example, whenever a new commit is pushed into the repo, Git hosting service will send a request to that URL with some info describing what change took place.

So, the first step in setting up a webhook is getting its URL. Go to your app’s settings in Bitrise, select the “Integrations” menu entry, then go to the “Webhooks” tab. Scroll down to the “Incoming Webhooks” section. Here you’ll have to choose whether to set up the webhook automatically or manually. I prefer the manual route, so I selected this option, chose my Git hosting service (GitHub), and copied the resulting Webhook URL.

Next, I opened my app’s main page on GitHub, went to settings, selected the Webhooks menu entry and added a new webhook using the URL that I copied from Bitrise. To make life simpler, I chose to be notified of all events.

That’s it, now GitHub will notify Bitrise about new events in my app’s repository.

Configure Webhook Triggers for Workflows (Optional)

After the previous step, Bitrise will receive notifications about all new events in my app’s repository, but I don’t want to run workflows on every possible event. Luckily, you can configure specific triggers that will invoke individual workflows.

In Workflow Editor, select “Triggers” menu entry:

Here you have three types of triggers: Push, Pull Request and Tag.

By default, Bitrise sets up run_tests workflow to run on each push to master and each pull request. I didn’t need these triggers, so I deleted them. Instead, I wanted to invoke build_apk workflow whenever I push new tags into the repo (that’s how I mark releases), and that’s the configuration that you see on the above screenshot.
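For reference, the equivalent trigger_map section in bitrise.yml would look roughly like this (a sketch; use whatever tag pattern matches your release convention):

```yaml
# Fragment of bitrise.yml (illustrative): run build_apk on every pushed tag
trigger_map:
- tag: "*"
  workflow: build_apk
```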

Auto-Deploying the APK (Optional)

The default configuration of build_apk workflow makes the produced artifacts available for download from Bitrise. I wanted to take it one step further and automate the deployment of new APKs to GitHub releases section.

It turned out that releasing to GitHub is very simple with Bitrise. I appended the “Github Release” step to the build_apk workflow, configured its variables, and, voila, newly built APKs appear on GitHub. Magic!

Of course, releasing APKs to GitHub isn’t the main deployment strategy for most apps. If you want to automate deployment to Google Play, replace the “Github Release” step with the “Google Play Deploy” step.

Conclusion

Alright, that’s how I set up continuous integration for my Android application using Bitrise.

All in all, it took me several hours to figure out the details of Bitrise’s platform and adjust my app’s build configuration for the CI environment. I’m pretty sure I’ll get this time back relatively soon because, now that this flow is automated, I’ll be “earning” time back on each future release. I hope this guide will spare you at least part of the initial setup effort.

In conclusion, I want to thank Bitrise for sponsoring this infrastructure work in the TechYourChance application, as well as this article. It’s great to see companies that invest in bringing the best resources to their clients.

As usual, thank you for reading and please leave your comments and questions below.

The post Continuous Integration for Android Apps with Bitrise: Full Guide appeared first on TechYourChance.

]]>
https://www.techyourchance.com/continuous-integration-android-bitrise/feed/ 0
Test Firebase Cloud Messaging in Android Using ADB https://www.techyourchance.com/test-firebase-cloud-messaging-in-android-using-adb/ https://www.techyourchance.com/test-firebase-cloud-messaging-in-android-using-adb/#respond Sun, 18 Feb 2024 08:45:03 +0000 https://www.techyourchance.com/?p=12334 Learn how to use ADB for simulating Firebase Cloud Messaging (FCM) pushes, streamlining your development process.

The post Test Firebase Cloud Messaging in Android Using ADB appeared first on TechYourChance.

]]>
Firebase Cloud Messaging (FCM) is the most popular choice for sending push notifications to Android devices. Unfortunately, there is no official way to test the integration of FCM into your app “locally”. Therefore, if you want to test push behavior while developing your Android app, you need to send an actual push message using the FCM web console. That’s time-consuming, cumbersome, and can lead to accidental “test” push notifications being sent in production.

In this article I’ll share a quick hack that will allow you to send test pushes to your app using ADB.

FirebaseInstanceIdReceiver

The FCM library for Android uses a BroadcastReceiver called FirebaseInstanceIdReceiver to receive messages inside the application. This is the declaration of this receiver in the FCM library’s manifest:

<receiver
     android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver"
     android:exported="true"
     android:permission="com.google.android.c2dm.permission.SEND" >
     <intent-filter>
         <action android:name="com.google.android.c2dm.intent.RECEIVE" />
     </intent-filter>

     <meta-data
         android:name="com.google.android.gms.cloudmessaging.FINISHED_AFTER_HANDLED"
         android:value="true" />
 </receiver>

Note that this BroadcastReceiver is protected by com.google.android.c2dm.permission.SEND permission. Neither non-Google apps nor ADB can get this permission, so this receiver isn’t accessible “from outside” under normal circumstances.

Replace FirebaseInstanceIdReceiver Configuration

One of the build steps of an Android app is called “manifest merging”. During this step, the manifests of third-party libraries are merged into your app’s manifest. The result is a single merged manifest that goes into the APK (or AAB) archive. The declaration of FirebaseInstanceIdReceiver that you saw earlier is merged into the final manifest during this step.

Fortunately for us, Android provides a way to override merged manifest declarations. So, we’ll take advantage of this capability and alter the declaration of FirebaseInstanceIdReceiver to open it up to the outside world. To achieve that, add this to your app’s manifest:

<receiver
    android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver"
    android:exported="true"
    android:permission="@null"
    tools:replace="android:permission"/>

Note how I set the required permission to @null and declare that this permission should replace the one inherited from the FCM library. This will remove the original protection from this receiver, so anyone will be able to send broadcasts to it.

Important: you shouldn’t release your application with this hack in place because it’s a security hole. So, make sure that the above code doesn’t make it into your release build. [I wonder whether Google Play checks for this kind of vulnerability when you upload new artifacts?]
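One practical way to keep the hack out of release builds (my suggestion, not something the FCM docs prescribe) is to put the override into the debug source set’s manifest instead of the main one; Gradle merges that file into debug builds only:

```xml
<!-- app/src/debug/AndroidManifest.xml: merged into debug builds only,
     so the unprotected receiver never ships in a release APK -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools">
    <application>
        <receiver
            android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver"
            android:exported="true"
            android:permission="@null"
            tools:replace="android:permission" />
    </application>
</manifest>
```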

Use ADB to Send Broadcasts to FirebaseInstanceIdReceiver

Once FirebaseInstanceIdReceiver is stripped of its permission protection, this component will start accepting broadcasts from ADB. For example, this command will simulate a push message with title and body string extras (replace com.yourapp.android with your app’s application ID):

adb shell 'am broadcast -a com.google.android.c2dm.intent.RECEIVE -n com.yourapp.android/com.google.firebase.iid.FirebaseInstanceIdReceiver --es "gcm.n.e" "1" --es "gcm.n.title" "Test title" --es "gcm.n.body" "Test message"'

The full list of supported extras can be found in the FCM library’s Constants.MessageNotificationKeys class. For your convenience, I’m attaching its current version here:

  /**
   * Keys used by Google Play services in bundle representing a Remote Message, to describe a
   * Notification that should be rendered by the client.
   */
  public static final class MessageNotificationKeys {

    public static final String RESERVED_PREFIX = "gcm.";

    public static final String NOTIFICATION_PREFIX = RESERVED_PREFIX + "n.";

    // TODO(morepork) Remove this once the server is updated to only use the new prefix
    public static final String NOTIFICATION_PREFIX_OLD = RESERVED_PREFIX + "notification.";

    /** Parameter to "enable" the display notification */
    public static final String ENABLE_NOTIFICATION = NOTIFICATION_PREFIX + "e";

    /**
     * Parameter to disable Android Q's "proxying" feature. Notifications with this set will never
     * be proxied.
     */
    public static final String DO_NOT_PROXY = NOTIFICATION_PREFIX + "dnp";

    /**
     * Parameter to make this into a fake notification that is only used for enabling analytics for
     * a control group. No notification is shown, nor are any service callbacks invoked.
     */
    public static final String NO_UI = NOTIFICATION_PREFIX + "noui";

    public static final String TITLE = NOTIFICATION_PREFIX + "title";
    public static final String BODY = NOTIFICATION_PREFIX + "body";
    public static final String ICON = NOTIFICATION_PREFIX + "icon";
    public static final String IMAGE_URL = NOTIFICATION_PREFIX + "image";
    public static final String TAG = NOTIFICATION_PREFIX + "tag";
    public static final String COLOR = NOTIFICATION_PREFIX + "color";
    public static final String TICKER = NOTIFICATION_PREFIX + "ticker";
    public static final String LOCAL_ONLY = NOTIFICATION_PREFIX + "local_only";
    public static final String STICKY = NOTIFICATION_PREFIX + "sticky";
    public static final String NOTIFICATION_PRIORITY =
        NOTIFICATION_PREFIX + "notification_priority";
    public static final String DEFAULT_SOUND = NOTIFICATION_PREFIX + "default_sound";
    public static final String DEFAULT_VIBRATE_TIMINGS =
        NOTIFICATION_PREFIX + "default_vibrate_timings";
    public static final String DEFAULT_LIGHT_SETTINGS =
        NOTIFICATION_PREFIX + "default_light_settings";
    public static final String NOTIFICATION_COUNT = NOTIFICATION_PREFIX + "notification_count";
    public static final String VISIBILITY = NOTIFICATION_PREFIX + "visibility";
    public static final String VIBRATE_TIMINGS = NOTIFICATION_PREFIX + "vibrate_timings";
    public static final String LIGHT_SETTINGS = NOTIFICATION_PREFIX + "light_settings";
    public static final String EVENT_TIME = NOTIFICATION_PREFIX + "event_time";

    /**
     * KEY_SOUND_2: can be null, "default" or the NAME of the R.raw.NAME resource to play. This key
     * has been added in Urda. Before Urda we used "sound" = null / "default"
     */
    public static final String SOUND_2 = NOTIFICATION_PREFIX + "sound2";

    // TODO(dgiorgini): clean SOUND/SOUND_2. Remove old key and rename current one.

    // FOR THE SERVER:
    //  - if sound is not provided : don't send anything
    //  - if sound is provided : send "sound2" = provided-string
    //                           AND send "sound" = "default" for backward compatibility < Urda

    /** DEPRECATED: use SOUND_2. this is used for backward compatibility < Urda */
    public static final String SOUND = NOTIFICATION_PREFIX + "sound";

    public static final String CLICK_ACTION = NOTIFICATION_PREFIX + "click_action";

    /** Deep link into the app that will be opened on click */
    public static final String LINK = NOTIFICATION_PREFIX + "link";

    /** Android override for the deep link */
    public static final String LINK_ANDROID = NOTIFICATION_PREFIX + "link_android";

    /** Android notification channel id */
    public static final String CHANNEL = NOTIFICATION_PREFIX + "android_channel_id";

    /**
     * Activity Intent extra key that holds the analytics data (in the form of a bundle) attached to
     * a notification open event.
     */
    public static final String ANALYTICS_DATA = NOTIFICATION_PREFIX + "analytics_data";

    /**
     * For l10n of text parameters (e.g. title & body) a string resource can be specified instead of
     * a raw string. The name of that resource would be passed in the bundle under the key named:
     * <parameter> + suffix (e.g: _loc_key)
     */
    public static final String TEXT_RESOURCE_SUFFIX = "_loc_key";

    /**
     * For l10n of text parameters (e.g. title & body) a string containing the localization
     * parameters can be specified. This would be present in the bundle under the key named:
     * <parameter> + suffix (e.g: _loc_args)
     */
    public static final String TEXT_ARGS_SUFFIX = "_loc_args";

    // don't instantiate me.
    private MessageNotificationKeys() {}
  }

After you compose your test ADB command(s) and verify that they work, I recommend wrapping them in a shell script and committing it to the repo. This way, you won’t need to repeat this task again, and you’ll be able to share the test command with your teammates. You’ll also have the history of the command’s evolution if you ever change it in the future.
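A hypothetical sketch of such a wrapper script (the app ID, script name and the DRY_RUN convention are my inventions; adjust them for your project):

```shell
#!/usr/bin/env bash
# send_test_push.sh: wraps the ADB command from this article.
# APP_ID, TITLE and BODY are placeholders; override them as needed.
# DRY_RUN=1 (the default here) only prints the command; set DRY_RUN=0
# to actually send the broadcast to a connected device.
set -eu

APP_ID="${APP_ID:-com.yourapp.android}"
TITLE="${1:-Test title}"
BODY="${2:-Test message}"

# Build the broadcast command targeting the unprotected receiver
CMD="am broadcast -a com.google.android.c2dm.intent.RECEIVE -n ${APP_ID}/com.google.firebase.iid.FirebaseInstanceIdReceiver --es gcm.n.e 1 --es \"gcm.n.title\" \"${TITLE}\" --es \"gcm.n.body\" \"${BODY}\""

if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "$CMD"
else
  adb shell "$CMD"
fi
```

Teammates can then run `./send_test_push.sh "My title" "My body"` without remembering the extras syntax.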

Conclusion

That’s it, now you can test the FCM logic inside your app locally using ADB, without going through the FCM web console. This can spare you a lot of time, but don’t forget to test the end-to-end flow before the release.

The post Test Firebase Cloud Messaging in Android Using ADB appeared first on TechYourChance.

]]>
https://www.techyourchance.com/test-firebase-cloud-messaging-in-android-using-adb/feed/ 0
The Challenges of Android Development https://www.techyourchance.com/the-challenges-of-android-development/ https://www.techyourchance.com/the-challenges-of-android-development/#comments Wed, 07 Feb 2024 13:59:04 +0000 https://www.techyourchance.com/?p=12312 My summary of the fundamental and accidental challenges that make Android development so damn difficult.

The post The Challenges of Android Development appeared first on TechYourChance.

]]>
Android is one of the most challenging niches in high-level software development. Yet, many developers and managers miss this fact, which can lead to underestimation of a project’s scope and timeline. Therefore, in this article, I’ll list the main sources of Android’s complexity, so you can account for them.

The order of the sections corresponds to the contribution of each factor to the overall complexity, starting with the most prominent ones. I draw on my subjective experience here, so your mileage might vary.

Multiplicity of Complex Lifecycles

Backend devs have it very simple: the framework spins up a “handler” when a new request arrives, the handler processes the request and generates the response, and the framework sends the response back and then destroys the handler. There are variations on this theme but, overall, that’s the lifecycle backend devs deal with most of the time. There can also be components that outlive individual requests, but they, too, are very straightforward to implement and manage.

Frontend devs deal with more complex lifecycles than backend devs. React components, for example, go through construction, mounting, rendering and updating, and can also have “effects”. Orchestrating these lifecycles is a real challenge.

Now enter Android, with its many components that have unique lifecycles: Activity, Fragment, ViewModel, Service, BroadcastReceiver, etc. Some of these lifecycles are unbelievably complex. For example, here is a diagram of the Activity and Fragment lifecycles. Take a look, it’s hilarious, and keep in mind that this diagram is still missing some methods and doesn’t account for the differences between different Android versions. On top of that, some lifecycles are inter-related, thus making Android devs even more miserable.

All in all, developing Android applications requires a deep understanding of many complex lifecycles and their inter-dependencies. Android devs get a “Lifecycles Stockholm Syndrome” at some point and take this complexity for granted, but when a non-Android dev starts learning Android, they are usually very surprised by all that. I mean, what other framework would call for an 8-hour course just about lifecycles?

Inconvenient Distribution Channels

When there is a bug in a backend code, it can be fixed very quickly. Developers will implement the fix, testers will verify it, and a new version can be deployed immediately. That newly deployed version can then sit idly behind a load balancer until the switch is flipped on, at which point all new requests will be routed to it. If it’s the new version that introduced the bug, then the switch can be flipped back and the old version will resume processing the requests until the problem is fixed.

A similar approach can work with web frontends. In fact, it’s even simpler with frontends because they usually don’t store any persistent state that can get corrupted.

When there is a bug in an Android application, you don’t have any rollback mechanism. The only way to resolve the problem is to deploy a new version to users’ devices. The problem is that, unlike with the backend and frontend servers, you don’t have any control over these devices (except for remotely managed devices). So, the best you can do is to put a new version out there and hope that the affected users will update their apps.

Unfortunately, even “putting a new version out there” isn’t as simple as it might sound in some cases. If your app is distributed through Google Play, then you’ll face additional delays. Google Play deserves a section on its own, though, so we’ll discuss it later.

There is one notable exception to the above: WebView. This component can execute raw JavaScript and render websites, so it can be used as a workaround for dynamic delivery in some cases. However, it’s still a hack that increases the overall complexity of the app, and you wouldn’t use JavaScript to write an entire Android app (you’d probably use React Native instead, but I’m not experienced with that tech).

Google Play

Google Play is the most popular distribution channel for Android apps. It’s pretty much the only option for consumer-oriented products. Dealing with Google Play can be tricky, however.

The first complication is the sheer amount of time it can take for them to approve and roll out an app update. These delays become especially painful when you need to fix a critical bug in the app.

Another problem is that they have their own standards and requirements, some of which are kept secret, while others can be unclear. So, if you receive a warning from Google Play, you (or your company) won’t always know exactly what the problem is. Your communication is very likely to be answered by bots, and good luck trying to reach a human being at Google to clarify the situation. This state of affairs is especially unfortunate for indie and small developers who depend on Google Play for their entire income.

Fragmentation of Devices and Operating System Versions

Backend devs are free to choose their platform and operating system. Modern cloud providers let them control these parameters when they spin up servers. Furthermore, a tool like Docker can abstract out these details completely.

Frontend developers have to support multiple browsers and browser versions, so they deal with compatibility issues. However, in practice, their compatibility matrix isn’t huge, especially in light of the fact that most mainstream browsers use Chromium under the hood.

The Android ecosystem suffers from a severe compatibility problem. There are many different Android devices out there, running different versions of the Android OS. So, pretty much all mature applications are riddled with compatibility code. Furthermore, since OEMs can modify many aspects of Android before they flash it onto their devices, what works on one device isn’t guaranteed to work on another. The problems can range from minor stuff, like incorrect representation of colors, to major issues, like undocumented aspects of power management and mysterious crashes.

We call Android’s compatibility problem “fragmentation”, and it’s a major source of pain for many developers.

CPU, Memory and Battery Optimizations

As I mentioned previously, there are many different Android devices out there. Some of them are lower-end, budget devices, while others can be just very old. Therefore, Android apps can execute in very resource-constrained environments. Furthermore, in addition to powering mobile phones and tablets, Android can be found in many TVs, set-top boxes, payment terminals, and other types of special devices that usually have lower specs.

Some Android projects have the luxury of targeting just the richer countries, or specific types of high-end devices. In general, however, Android applications require much better optimizations than even their iOS counterparts. And, of course, unlike backend or frontend servers, you can’t just add resources to your application by visiting your cloud provider’s website.

Connectivity Loss

Most backend developers don’t deal with internet connectivity loss. If the server becomes unavailable, there isn’t much they can do about it from within the application. The situation can be trickier if they can’t reach another server that their app depends on, but, even then, returning an error response to the client will usually suffice.

Frontend devs don’t have to deal with connectivity loss either. Users are pretty accustomed to seeing the browser’s generic error page in that case.

Android developers must think about connectivity loss all the time. No internet connection is not an exceptional condition on mobile devices, but an expected state. Therefore, Android applications must handle connectivity loss gracefully. This aspect often leads to many bugs during development and in production, and proper handling of this state can introduce a surprising amount of complexity into the source code.

Offline Work

Offline work is when the application remains functional even if there is no internet connection. Well, assuming that the app requires the internet to begin with, of course.

Most applications don’t support offline work. However, there are many categories where users simply expect the app to work even when offline. Imagine, for example, that your favorite messenger would drop messages just because you’re not connected to the internet. That would be outrageous, right?

Offline work is a very challenging feature to implement. Even the simplest of cases, like backing up your notes to the cloud when the internet is available, can introduce a surprising amount of complexity into the project. More complex cases, like supporting offline collaboration (which requires conflict resolution), are very hard to get right.

Concurrency

You can get surprisingly far in backend development without dealing with manual concurrency. Some frameworks automatically invoke individual request handlers on standalone threads, so you can just execute all the required logic synchronously. Other frameworks, like the popular Node.js, abstract out concurrency almost completely and promote a “single-threaded” paradigm.

Frontend developers don’t need much manual concurrency either. They use promises (or async/await, which is syntactic sugar over promises).

Unfortunately, you can’t write any non-trivial Android app without dealing with concurrency. The moment you want to send your first network request, you’ll need a background thread. Sure, you can use a higher-level concurrency framework like Coroutines for that but, arguably, the learning curve of such frameworks is even steeper than that of the bare Thread class.
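To make the “bare Thread” baseline concrete, here is a minimal sketch in plain Java, with no Android dependencies. The class and method names are mine, and fetchFromNetwork is a stand-in for a real blocking HTTP call:

```java
// Sketch of the bare-Thread approach: run blocking work off the calling
// thread. On Android, network I/O on the main thread throws an exception,
// so even the simplest request forces you into this territory.
public class BackgroundFetch {

    // Stand-in for a real blocking network request.
    static String fetchFromNetwork() {
        return "response";
    }

    // Runs the blocking call on a worker thread and waits for the result.
    // A real app would deliver the result back to the UI thread via a
    // callback or Handler instead of joining.
    static String fetchOnWorkerThread() throws InterruptedException {
        final String[] result = new String[1];
        Thread worker = new Thread(() -> result[0] = fetchFromNetwork());
        worker.start();
        worker.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(fetchOnWorkerThread());
    }
}
```

Even this toy version hints at the real complexity: result hand-off, error propagation and cancellation are all left to you, which is exactly the gap that Coroutines and similar frameworks try to close.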

Inferior Development and Debug Tools

Web frontend dev tools are built into all major browsers: a network traffic inspector, storage inspector, style editor, etc. My favorite is the UI inspector, which allows you to click on any UI element and see how it’s declared, what properties it has, and the hierarchy of CSS rules applied to it. These features make fixing UI bugs a breeze.

These tools are beyond anything Android developers can dream of. For example, when debugging UI issues, Android devs have to build a debuggable version of the app and attach LayoutInspector to it. That takes a lot of time, and even then LayoutInspector is nothing compared to the web UI inspector I described earlier. Now we have Jetpack Compose, which boasts a preview feature, but the preview is very slow, often fails, and is nowhere close to the web UI inspector in terms of features.

On top of these inconveniences, network traffic inspection in Android is harder, automated testing is more challenging, visibility into production code is minimal and more.

In part, this discrepancy in tooling can probably be attributed to differences in programming languages, platforms, artifacts and distribution models. Still, the point stands: Android tooling is inferior to web frontend tooling. We probably can’t directly compare Android tooling to backend tooling, because backends don’t have a UI and backend artifacts never leave your own servers, but I’ll still claim that backend tooling is more convenient and more mature.

Google Dev Ecosystem Lock In

Android developers “live” in Google’s dev ecosystem. There are pros and cons to that, but I find the overall balance to be negative.

The positive is that we get pretty much all of our tools from a single authority. There is the current “best practice” and that’s what you should use.

Unfortunately, that’s also the problem, because you basically have to take what Google provides. The “best practices” change quickly and, often, for hardly justifiable reasons. Furthermore, Google’s standards of quality and maturity are pretty low in the context of Android dev experience. So, after each reinvention of the wheel, there is a period of instability. And don’t get me started on Google’s official issue tracker – I stopped submitting tickets to it because it’s just a black hole that sucks in a lot of community effort for very little output.

As an Android dev, I constantly chase new trends. I’d prefer to save time and just stick to good old, mature, time-tested tools but, given Google’s influence on the ecosystem, that’s impossible. Sooner or later, they push the latest wheel reinvention to the masses, which forces me to learn and adapt to it.

Conclusion

The combination of fundamental challenges, like dealing with connectivity loss and distribution mechanics, and accidental challenges, like going through Google’s reinvent the wheel -> deprecate -> reinvent the wheel cycles, makes Android development very challenging. On the positive side, after dealing with these challenges, Android developers are well-prepared to handle any other development task, because they are tough as boots.

In all seriousness, I like being an Android developer. It’s a very broad niche, with many different types of systems, that encompasses the largest userbase on the planet. It doesn’t get boring, for sure.

As usual, thank you for reading, and please leave your comments and questions below. Let me know if I missed any additional factors.

The post The Challenges of Android Development appeared first on TechYourChance.

]]>
https://www.techyourchance.com/the-challenges-of-android-development/feed/ 3