QuickBird Studios - Mobile App Solutions (https://quickbirdstudios.com/)

Why Await? Futures in Dart & Flutter
https://quickbirdstudios.com/blog/futures-flutter-dart/ | Tue, 27 May 2025


Why Await? Futures in Dart & Flutter

Futures in Dart might seem straightforward at first glance, but dig a little deeper, and you’ll find they can be more intricate than you initially thought. Ever wondered how these asynchronous operations work with Dart’s single-threaded event loop? Or perhaps you’ve stumbled upon some of the common pitfalls that cost you unnecessary hours of debugging? Fear not, because we’re about to go on a journey to unravel the mysteries of Futures in Dart and Flutter. We’ll take a look at the inner workings, see how they interact with the event loop, and highlight some of the issues you might encounter along the way.

Get ready to have a clearer vision of your future… just kidding… of Futures in Dart/Flutter.


What is a Future 🔮

Dart runs most of its code on a single-threaded event loop. Think of it as the main highway for your app’s tasks – rendering UI, handling user taps, and running your Dart logic. This single-lane model works great for a lot, but here’s the catch: if any piece of code takes too long to execute on this main thread, it’ll block everything. So, why not just throw every asynchronous call onto a separate thread?

Disclaimer: If you already feel familiar with how Futures work in general, don’t hesitate to jump ahead to the section where we explain how they work under the hood.

Why no threads 🧵?

While many languages rely on multi-threading for concurrency, Dart often prefers its efficient single-threaded event loop combined with asynchronous patterns like Futures for handling I/O-bound operations. Although Dart does support true parallelism for CPU-intensive tasks using Isolates, the event loop model often provides lower overhead and complexity for managing lots of concurrent I/O tasks – a common pattern in event-driven architectures. If you’re curious about why the single event loop pattern became so popular, there are plenty of articles out there that dive into the details.

Getting to know your Future(s)

At its core, a Future<T> in Dart is an object representing the eventual result of an asynchronous operation. Think of it not as the value itself, but as a promise or a placeholder for a result (or an error) that will become available at some point later.

The <T> part is a generic, specifying the type of value you expect the Future to produce upon successful completion. So, a Future<String> promises to eventually yield a String, and Future<void> signifies that the asynchronous operation completes but doesn’t produce a specific value. You can still wait for it to complete if one of your following tasks relies on it being finished.
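For illustration, here is how the two flavors might look in practice (fetchUsername and saveSettings are made-up example functions):

```dart
Future<String> fetchUsername() async {
  await Future.delayed(const Duration(milliseconds: 100));
  return 'quickbird';
}

// Completes without producing a value, but can still be awaited.
Future<void> saveSettings() async {
  await Future.delayed(const Duration(milliseconds: 50));
}

Future<void> main() async {
  final String name = await fetchUsername(); // Future<String> yields a String
  await saveSettings();                      // Future<void> only signals completion
  print('Settings saved for $name');
}
```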

A Future can have two states:

  1. Uncompleted: When you first receive a Future from an asynchronous call, it’s typically in this state. The operation has started, but the result (or error) is not yet available.
  2. Completed: Eventually, the operation finishes, and the Future transitions to a completed state. This completion can occur in two distinct ways:
    • Completed with a value: The operation finished successfully, and the Future now holds the resulting value of type <T>.
    • Completed with an error: The operation failed for some reason (e.g., network error, file not found, exception thrown), and the Future now holds an error object (often some kind of Exception) and potentially a StackTrace.

But how does this work under the hood with Dart’s event loop?

When an asynchronous operation (like network I/O handled by the system) finishes its work, it doesn’t immediately interrupt your currently running Dart code. Instead, it signals the Dart runtime, which then typically adds a task to the event queue. When this task is processed, Dart uses the microtask queue to schedule the actual completion of the Future (like running .then() callbacks or resuming code after an await).

The event loop is an invisible process that manages the execution of code events and callbacks. It’s a crucial part of the Dart runtime ensuring that all the calls in your Dart code run in the correct order. The Dart event loop works by continuously checking two queues, the event queue and the microtask queue.

Event queue

The event loop’s job is to handle external events like user interactions (mouse clicks, taps), I/O operations (network requests, file access), timers, and messages between Dart isolates. These events are added to the event queue.

Microtask queue

The microtask queue is for short, asynchronous internal actions that originate from your Dart code. Microtasks have a higher priority than events and are processed before the event loop looks at the event queue. Think of them as tasks that need to happen right after the current synchronous code finishes, but before the next event is processed.
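A small sketch makes the priority of the two queues visible: the Future() constructor schedules its callback on the event queue, while scheduleMicrotask puts one on the microtask queue, so the microtask runs first even though it was scheduled later:

```dart
import 'dart:async';

void main() {
  print('1. synchronous code starts');

  // Scheduled on the event queue.
  Future(() => print('4. event queue task'));

  // Scheduled on the microtask queue; drained before the next event.
  scheduleMicrotask(() => print('3. microtask'));

  print('2. synchronous code ends');
}
```

The printed order is 1, 2, 3, 4: all synchronous code runs first, then the microtask queue is drained, and only then is the event queue processed.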

Zones

For more advanced scenarios, especially around error handling in asynchronous code, Dart provides the concept of Zones. Zones allow you to create isolated execution contexts and handle errors that might occur within them without crashing the entire application.

When to use a Future and when a Thread 🔮 vs.🧵

Most of the time, you automatically know when to use a Future: the http package, for example, only returns Futures, so you are forced to use them. But when is the right time to start an Isolate?

The fundamental rule is: any operation that has the potential to take a non-trivial amount of time should be performed asynchronously. While “non-trivial” can be subjective, in the context of a smooth user interface aiming for 60 (or even 120) frames per second, you only have about 16 (or 8) milliseconds per frame for all work (building widgets, layout, painting, running Dart code). Any single operation taking longer than a few milliseconds on the main isolate can contribute to dropped frames (jank) or, in worse cases, a completely frozen UI.

When to use an Isolate

For CPU-bound tasks – operations primarily limited by processing speed rather than waiting for external resources – the correct approach is to use Isolates. Isolates are independent Dart execution contexts running on separate threads, each with its own memory heap and event loop. This allows true parallelism, ensuring heavy computations don’t block the main UI isolate.
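As a minimal sketch (using a naive Fibonacci as a stand-in for any CPU-bound computation), Isolate.run, available since Dart 2.19, spawns a short-lived isolate, runs the closure there, and delivers the result back as a Future:

```dart
import 'dart:isolate';

// A deliberately slow, CPU-bound function. It must be a top-level
// (or static) function, or a closure that only captures sendable state.
int fib(int n) => n < 2 ? n : fib(n - 1) + fib(n - 2);

Future<void> main() async {
  // Runs on a separate isolate; the main isolate stays responsive.
  final result = await Isolate.run(() => fib(35));
  print('fib(35) = $result');
}
```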

Join us as a Flutter Developer!

Pitfalls and Misconceptions 🪤

There are more misunderstandings and potential pitfalls around Futures than you might think. Let’s take a look at some of the most crucial ones to keep you safe in the future.

await vs then

  • await: Suspends execution at the await line, letting other tasks run. Once the Future completes, execution resumes. It offers clean, readable, and sequential async code.

  • .then(): Registers a callback that runs when the Future completes, without pausing the current function. Useful outside async functions, but nesting and error handling can get messy.

Recommendation: Use await for clearer, more maintainable async code. Use .then() only when necessary.
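Here is the same made-up operation written in both styles:

```dart
Future<String> fetchGreeting() =>
    Future.delayed(const Duration(milliseconds: 100), () => 'hello');

// Sequential style: reads top to bottom, errors go to try-catch.
Future<void> withAwait() async {
  final greeting = await fetchGreeting();
  print(greeting.toUpperCase());
}

// Callback style: the current function continues immediately.
void withThen() {
  fetchGreeting().then((greeting) => print(greeting.toUpperCase()));
}

Future<void> main() async {
  await withAwait();
  withThen();
}
```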

await is scoped

The await keyword is scoped to the async function it’s used in, meaning it only pauses execution within that specific function, not the entire program. When await is called, control is returned to the event loop, allowing other tasks to run while the Future completes. This scoped suspension keeps the rest of your app responsive and enables writing asynchronous code that looks and behaves like synchronous logic within that function.

Consider this example:

void main() {
  print('1. Start main');

  doAsyncTask();

  print('4. End main');
}

Future<void> doAsyncTask() async {
  print('2. Start async task');

  await Future.delayed(Duration(seconds: 2));
  print('3. End async task');
}

The output clearly shows that main continues to execute even while doAsyncTask is paused waiting for the Future.delayed() to complete:

1. Start main 
2. Start async task 
4. End main 
3. End async task

The alternative: FutureOr?

FutureOr<T> in Dart is a union type, meaning a function can return either a direct value of type T or a Future<T>. This provides flexibility for APIs, making it easier to override methods that might return results synchronously or asynchronously. The await keyword handles both cases smoothly.
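A typical use case is a lookup that can answer synchronously from a cache (the cache and function names here are illustrative):

```dart
import 'dart:async';

final _cache = <String, String>{};

// Returns a plain String on a cache hit, a Future<String> otherwise.
FutureOr<String> loadConfig(String key) {
  final cached = _cache[key];
  if (cached != null) return cached; // synchronous path

  return Future.delayed(const Duration(milliseconds: 50), () {
    return _cache[key] = 'value-for-$key'; // asynchronous path
  });
}

Future<void> main() async {
  // await handles both cases the same way.
  print(await loadConfig('theme')); // slow: hits the Future branch
  print(await loadConfig('theme')); // fast: returned synchronously
}
```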

A Future that never ends

A Future that never ends is an asynchronous operation that might never complete. While most asynchronous tasks are expected to eventually finish, there’s no guarantee within the Future contract itself that they will. This can happen if an operation gets stuck due to bugs, deadlocks, or waiting on a resource that never becomes available (like an unresponsive server without a timeout).

If you await a Future that never resolves, your async function will stay paused at that point forever. Similarly, if you use .then(), the callback will never be called. This can cause parts of your app to become unresponsive and lead to resource leaks if cleanup code (like in whenComplete() or a finally block) never gets to run.

Most of the Futures you encounter already have a built-in timeout and will return an error at some point. For other cases, you can use the timeout() method on a Future. It takes a Duration (and an optional onTimeout callback); if the original Future doesn’t complete within the given time, timeout() throws a TimeoutException (or invokes the onTimeout callback instead). This way, you can handle situations where an operation might hang and keep your app responsive. For more details, check out the Future.timeout() documentation.
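A sketch of guarding against a hanging operation; the never-completing Future is simulated with a Completer whose future is never completed:

```dart
import 'dart:async';

// Simulates e.g. an unresponsive server: this Future never completes.
Future<String> unresponsiveCall() => Completer<String>().future;

Future<void> main() async {
  try {
    final result =
        await unresponsiveCall().timeout(const Duration(seconds: 2));
    print(result);
  } on TimeoutException {
    print('Gave up after 2 seconds');
  }
}
```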

Catching Errors: Handling Future Failures 🫴

Proper error handling is crucial in asynchronous programming. Unhandled errors from Futures can cause your application to crash or silently fail, making it essential to catch and manage them effectively.

Awaited Futures (try-catch)

When using await, error handling is straightforward with a try-catch block. If the awaited Future completes with an error, it will throw that error, which can be caught by the surrounding catch block.

Future<void> handleAwaitError() async {
  try {
    var data = await potentiallyFailingOperation();
    print("Success: $data");
  } catch (e, s) { // Catch the error
    print("Caught error during await: $e");
    print("Stack trace: $s");
    // Handle the error...
  }
}

Future.then() Futures (.catchError)

When using .then(), you handle errors with the catchError() method. This method registers a callback to catch errors that occur either from the original Future or from exceptions thrown inside any preceding .then() callback in the chain.

void handleThenError() {
  potentiallyFailingOperation()
   .then((data) {
      print("Success: $data");
      // if (somethingIsWrong) throw Exception("Problem in .then");
    })
   .catchError((error, stackTrace) { // Catch errors from Future or .then()
      print("Caught error in chain: $error");
      // Handle the error...
    });
}

The Critical Pitfall: Unawaited Futures

A major source of bugs is when you call a function that returns a Future, but you neither await its result nor attach a catchError() handler. What happens if the Future completes with an error?

The error becomes an uncaught asynchronous error. Depending on the context (e.g., in debug vs. release mode, or a specific Zone configuration), this can either crash your application (common in release builds) or silently log the error to the console, hiding critical failures in your app logic.

Why does this happen? The error surfaces in the event loop, but there is no registered handler to catch it for that specific Future instance.

Rule: Always ensure that a Future’s potential error is handled. You can do this by:

  • Awaiting the Future inside a try-catch,

  • Using .catchError() to attach an error handler, or

  • Ensuring errors are managed internally within the async function if you intentionally use a “fire-and-forget” pattern (though this should be used cautiously).

By proactively handling errors, you prevent unforeseen crashes and ensure your app behaves predictably, even in the face of asynchronous failures.

Some neat Lints for your future

The Dart analyzer includes helpful lint rules. Ensure you have these enabled in your analysis_options.yaml file:

  • unawaited_futures: Warns when you call a function returning a Future and don’t do anything with the result (no await, .then(), .catchError(), or assignment to a variable). This is the primary rule for catching potentially unhandled Futures.
  • discarded_futures: A related rule, specifically warning when a Future is obtained in a context where its value cannot be used (like a void expression), often highlighting fire-and-forget calls that might need attention.

Important Caveat for unawaited(): The unawaited() helper (from dart:async) does not magically handle errors. It merely suppresses the lint. If the Future you pass to unawaited() completes with an error, that error will still be an uncaught asynchronous error unless it’s handled inside the async function, or you are consciously accepting the risk for specific, non-critical operations. Use unawaited() with caution and only when you are certain that not handling the result or potential error is acceptable or handled elsewhere.
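One way to combine unawaited() with errors handled inside the async function for an intentional fire-and-forget call (sendToBackend and the event name are hypothetical stand-ins):

```dart
import 'dart:async';

// Hypothetical backend call; here it always fails to show the handling.
Future<void> sendToBackend(String event) async {
  throw Exception('network down');
}

Future<void> logAnalyticsEvent(String name) async {
  // Errors are caught *inside* the function, so nothing can surface
  // later as an uncaught asynchronous error.
  try {
    await sendToBackend(name);
  } catch (e) {
    print('analytics failed: $e');
  }
}

void main() {
  // unawaited() (from dart:async) documents the fire-and-forget intent
  // and silences the unawaited_futures lint; it does not handle errors.
  unawaited(logAnalyticsEvent('button_pressed'));
}
```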

Conclusion: Embrace the Future!

Hopefully, this deep dive has shed some light on the intricacies of Futures in Dart and Flutter. Understanding how they work with the event loop, the nuances of async/await, and the importance of error handling will undoubtedly make you a more effective Flutter developer. So go forth and embrace the Future!

Another topic that seems obvious at first glance, but can be more complicated in detail, is class modifiers in Dart. You can learn more about them in this article.

Thanks for reading!

Are you looking for a job as a Flutter Developer?
Do you want to work with people that care about good software engineering?
Join our team in Munich

R.I.P. build_runner: A Deep Dive Into Macros in Dart & Flutter
https://quickbirdstudios.com/blog/macros-dart-flutter/ | Mon, 04 Nov 2024

Dart macros bring a whole new way to handle code generation for Flutter, offering a simpler, faster alternative to build_runner. No more slow builds or manual updates—macros generate code instantly during compilation, helping you save time and reduce repetitive code. In this guide, we’ll show you how to set up and use macros, build custom examples, and understand their pros and cons, so you can make the most of this powerful new feature in your Flutter projects.


R.I.P. build_runner: A Deep Dive Into Macros in Dart & Flutter

Dart macros are here to change the game for Flutter and Dart developers. If you’ve ever been frustrated by the complexities of using build_runner for code generation—like slow build times, constant manual regenerations, and synchronization headaches—macros offer a powerful, integrated alternative.

Unlike build_runner, which requires external dependencies and can feel cumbersome, Dart macros enable code generation directly at compile time, eliminating the need for a separate build step. This means faster builds, reduced boilerplate, and more time focused on writing the code that truly matters.


What do we need 🛠️

Macros are a built-in language feature in Dart, meaning all the core functionality is already available. However, there are several helpful extensions that can simplify development—we’ll cover those later.

Since the macros feature hasn’t yet reached the stable channel, we currently need to work within the beta channel or, for those willing to be on the cutting edge, the main channel of Flutter. The Flutter team has announced that the first macro feature (@JsonCodable) is expected to arrive in the stable channel later this year (2024), with support for custom macro development anticipated in Q1 2025.

Enable Macros

Just run flutter channel beta followed by flutter upgrade (use main instead of beta if you want the latest build from GitHub). Since macros are a language feature, you also need to raise your project’s minimum required Dart version: macros are supported since Dart 3.5.0-152. You can set this in your pubspec.yaml under environment, e.g. with this configuration:

environment:
  sdk: '>=3.5.0-152 <4.0.0'
  flutter: ">=2.0.0"

To keep the analyzer from complaining, we also need to enable the macros experiment. In your analysis_options.yaml, add:

analyzer:
  enable-experiment:
    - macros

With these steps, you’re all set to start developing macros!

What are Macros? 📸

Macros are a Dart feature that enables static meta-programming, allowing the language to analyze code and automatically generate additional functionality based on it. While similar to build_runner, macros run instantly during compilation rather than as a separate task, making them much faster.

As mentioned above there is already one macro that works out of the box called @JsonCodable. Simply add this annotation to a data class, and it will generate everything needed for JSON serialization and deserialization. In most IDEs, you can even jump directly to the generated code. For example, in VSCode, you’ll see a “Go to augmentation” option—click it to view the code created by the macro.

And yes it’s that lightning-fast. Unlike build_runner combined with JsonSerializable or Freezed, macros generate code instantly with no need to run extra commands in the terminal. 🤯

Here are some additional benefits of macros in Dart that make Flutter development easier:

  • Code Generation: Macros automatically create repetitive code, like getters, setters, and boilerplate methods, saving you time and reducing errors.
  • Goodbye to build_runner Tasks: Macros handle code generation at compile-time, eliminating the need for build_runner tasks, which speeds up development.
  • Compile-Time Optimizations: Macros add optimized code before runtime, which boosts performance without any added runtime overhead.
  • Improved Consistency: By automating patterns, macros bring consistency to your codebase, minimizing bugs from human error.
  • Custom Annotations: Macros let you create powerful, specific annotations that add functionality—like validation or logging—without cluttering your main code.

Ready to build your own macro? Let’s start with a simple example and then we dive into the details!

The toString Macro

Let’s start with an easy example. We want to create a macro that adds a custom toString method to a class. The custom toString should always output “Macros are awesome”.

Macros are created by adding the macro modifier to a class. In our case, we want to call our macro Awesome. Here is the complete class; afterwards, we’ll go into detail about what everything does.

import 'dart:async';

import 'package:macros/macros.dart';

final _dartCore = Uri.parse('dart:core');

macro class Awesome implements ClassDeclarationsMacro {
  const Awesome();

  @override
  Future<void> buildDeclarationsForClass(
    ClassDeclaration clazz,
    MemberDeclarationBuilder builder,
  ) async {
    // Resolve `print` from dart:core so the generated code can call it.
    final print = await builder.resolveIdentifier(_dartCore, 'print');

    builder.declareInType(
      DeclarationCode.fromParts([
        '@override\n',
        'String toString() {\n',
        '  ', print, '("Macros are awesome");\n',
        '  return "Macros are awesome";\n',
        '}',
      ]),
    );
  }
}

Now the only thing that’s left is to use our macro. To use a macro you need to annotate the class with the macro.

@Awesome()
class Bird {
  final String name;

  const Bird(this.name);
}

Let’s use our class now and see if it worked:

const bird = Bird('QuickBird');
bird.toString();

// Output: Macros are awesome

Wow, it worked 🎉. OK, this looks complicated! Let’s step through the code and check what each declaration does, starting with the first line (skipping the imports for now).

The Details 🔍

macro class Awesome implements ClassDeclarationsMacro

To create a macro, we need to mark the class with the macro modifier. In our case, we want to implement the macro at the class level, which is why we implement ClassDeclarationsMacro. There are also other interfaces to implement a macro at the function level, constructor level, and others. We’ll take a look at those later.

Most macros have two important methods that we usually have to override:
buildDeclarationsForClass lets you inspect a class’s structure (its fields, methods, and annotations) at compile time and generate new declarations that aren’t originally part of the class or its members.
buildDefinitionForClass is used to fill in the actual implementation of members (like method bodies) that have already been declared.

In our example, we just wanted to add a method that is independent of the rest of the class. Technically, we could have split it up: declaring the method name and parameters in the declarations phase, and adding the print statement later in the definitions phase. We decided to go the easier route for better understanding.

final print = await builder.resolveIdentifier(_dartCore, 'print');

Since we want to print something to the console, we also need the print function from the Dart standard library. Technically, we could write out the print statement as a string, but then we would also have to take care of the import statement. By calling resolveIdentifier, Dart takes care of that for us. So it’s always advised to use it when you refer to something that’s not directly part of the language, or when you want to use a library.

builder.declareInType(
  DeclarationCode.fromParts([
    '@override\n',
    'String toString() {\n',
    '  ', print, '("Macros are awesome");\n',
    '  return "Macros are awesome";\n',
    '}',
  ]),
);

To add the method within the class, we need to use declareInType on the MemberDeclarationBuilder. This requires an additional helper class, DeclarationCode, which simplifies the process by accepting a list of code parts. Although you could technically place everything in a single string, splitting the code into multiple elements improves readability significantly. It also takes care of formatting the code correctly!

And that’s it! At first glance, macro code might look a bit daunting, but it becomes easier with practice. There are also several useful extensions available to streamline macro development—just search for “macros” on pub.dev to find a variety of helpful packages.

 

As mentioned earlier, there are different types of macros. While some are designed to be applied at the class level, others can be used on functions or constructors. Let’s explore those next.


Types of Macros

ClassDeclarationsMacro

This type of macro works on class declarations. It gives you the ability to inspect and modify classes, adding new members (fields, methods, constructors), changing existing ones, or even generating entirely new classes based on the structure of the original class. This makes sense, e.g., when:

  • Generating copyWith, equals, and hashcode methods for data classes.
  • Adding serialization/deserialization logic (like the @JsonSerializable example).
  • Creating factory constructors or helper methods based on class fields.

FunctionDeclarationsMacro

FunctionDeclarationsMacro focuses on functions, enabling analysis and transformation of function signatures, parameters, and bodies. Examples of where this is useful include:

  • Automatically generating documentation for functions.
  • Adding logging or tracing code to functions.
  • Creating wrapper functions with modified behavior.

There are even more types of macros, such as VariableDeclarationsMacro, MixinDeclarationsMacro, and ConstructorDeclarationsMacro. For a comprehensive list, refer to Dart’s macros documentation. Notably, a single macro class can implement multiple macro interfaces, allowing it to operate across several phases.

Generated Code – the augment keyword

The augment keyword in Dart is used with macros, allowing developers to introduce code transformations or additions at compile time. Specifically, it indicates that a macro can modify or “augment” existing code, such as methods or classes. This provides a way to extend or enhance functionality while keeping the source code clean and concise. However, using augment outside of macros is not recommended. For general purposes, you should rely on traditional mechanisms like inheritance, mixins, and extensions instead.

Order matters!

You can also add multiple macros to a class, and they can depend on each other. They are executed from bottom to top.

@Macro2()
@Macro1()
class Order {}

In that case, @Macro1 is executed first, and then @Macro2 is executed.

Resource Access in Macros

Some macros may require loading resources, such as files. While this capability is supported, there are safeguards in place because macros run during analysis and are considered untrusted code. Therefore, macros are restricted from accessing resources outside the program’s designated scope.

Other limitations

Some other limitations of macros are:

  • All macro constructors must be marked as const.
  • Macro classes cannot be generated by other macros.
  • All macros must implement at least one of the macro interfaces.
  • Macros cannot be abstract.

This is the current state at the time of writing this article. Things might change in the future.

Conclusion

Macros in Dart promise to address some of the most tedious, repetitive tasks in Flutter and Dart development. With macros, we can finally move beyond the burdens of build_runner and JSON serialization/deserialization. This opens up exciting possibilities for implementing functionalities that streamline user experiences by concealing complexity, making advanced features like navigation, storage, and modularization more accessible.

However, macros also come with potential pitfalls. They can obscure functionality and lead to hidden behaviors that developers might not anticipate. Building macros is complex work that requires an understanding of various language elements—not the usual development task. Like with build_runner plugins, writing macros often involves detecting specific class elements and generating code, necessitating extensive string manipulation.

In short, macros hold promise for enhancing productivity but demand careful use to balance power with clarity.

Thanks for reading!


Swift Macros: Understanding Freestanding & Attached Macros
https://quickbirdstudios.com/blog/swift-macros/ | Mon, 19 Feb 2024

Swift Macros are here to simplify your code! This new feature lets you generate code at compile time, reducing boilerplate, saving you time, and making your projects cleaner. We take a look at freestanding and attached macros and walk through some examples.


Swift Macros: Understanding Freestanding & Attached Macros

Have you ever started using a library and been outraged by all the boilerplate code you need to write to make it work? Have you misused an API because it wasn’t obvious how to use it? Have you wished you could make some code so much easier to use, without all the manual work of writing complex code for all that repetitive logic? Well, there is a new Swift language feature to the rescue: Macros. We take a look at freestanding and attached macros and show you how they work.


Macros allow packages to expose code generation functionality that you can use in your code, just like any other function or type. Maybe you have already used the new @Observable macro for SwiftUI? Rather than having a type conform to ObservableObject and then needing to put @Published in front of every single property that needs to be observed, you can now simply annotate a type with @Observable and that’s it. There’s a lot more to figure out about Swift Macros – let’s find out.

Types of Macros

With macros, you have quite a few options to choose from. On a broader level, we can divide them into two categories: freestanding and attached. While attached macros are put in front of a declaration, freestanding macros are placed on their own. Both freestanding and attached macros have knowledge about their context though, so if you use them inside a certain context (say a declaration of a class or an extension), the macro can make use of that in both cases.

You can easily spot the difference between freestanding and attached macros when they are being used, since freestanding macros use a # before their name and attached macros need to be used with an @ prefix.

Freestanding Macros (#)

Freestanding macros are not attached to an existing declaration, which is why there are only two types of them: the expression macro and the declaration macro.

Expression Macros

As the name expression macro implies, an expression macro generates expressions. Therefore, they can be used like methods you call to get some return value. Rather than the macro simply being executed, though, it expands into another expression before the code is even compiled.

One example of an expression macro would be this macro, where a static string is checked at compile time to be a valid URL and if it doesn’t pass the test, it would result in a compile-time error rather than a crash.

let blogURL = #URL("https://quickbirdstudios.com/blog")

That code is expanded by the macro as this, so why wouldn’t we simply write that directly?

let blogURL = URL(string: "https://quickbirdstudios.com/blog")!

Well, while generating that code, the macro can already check the string and make sure that this expression won’t result in a nil value.

Declaration Macros

For a few more advanced use cases, where one would like to create declarations, such as adding properties to the enclosing type or creating new types altogether, a declaration macro needs to be used.

Since a freestanding declaration macro can create declarations on the global scope (i.e. without being nested in another type declaration), you will need to adhere to a specific naming scheme of the expanded declarations. You may, for example, choose to start all the expansions with a certain prefix or suffix. This naming scheme is required to be specified for the declaration of the macro.

One scenario where a freestanding declaration macro would be super helpful is creating union types. Since Swift doesn’t allow specifying that a function parameter could be any of multiple types (without creating protocol/type hierarchies), a union type could be generated using an enum with associated values of the different types. The cases could also be named according to the type of the associated value. For a union type of Int and Double, this macro could be used like this:

#Union<Int, Double>("Number")

This macro could expand into an enum like the following. There could even be computed properties to access the associated values without the need for pattern matching (i.e. case-let or switch-case constructs).

enum NumberUnion {
    case int(Int)
    case double(Double)
}
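The computed properties mentioned above could, for instance, look like this in the expanded enum (a sketch; the `intValue`/`doubleValue` property names are illustrative, not prescribed by any macro):

```swift
enum NumberUnion {
    case int(Int)
    case double(Double)

    // Access the associated value without case-let/switch-case constructs:
    var intValue: Int? {
        guard case let .int(value) = self else { return nil }
        return value
    }

    var doubleValue: Double? {
        guard case let .double(value) = self else { return nil }
        return value
    }
}
```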

Attached Macros (@)

Attached macros are always specified right before a declaration, be it a property, a type, an extension or even a protocol. Depending on the specific type of macro, different expansion options are available.

Accessor Macros

Accessor macros are attached to member property declarations of a type. In many cases, one can easily mistake accessor macros for property wrappers, since they are indistinguishable in how they are used. There is, however, one important difference: a property wrapper is a run-time concept, while a macro is expanded at compile time.

Similar to a property wrapper, an accessor macro can define the getters and setters of a property, i.e. the body of a computed property. It can also define willSet/didSet observers.

In the following example, we could make sure that a database update operation is performed whenever a property has been changed:

class DatabaseObject {
    @ObservedProperty var id: UUID
    @ObservedProperty var name: String
}

These properties could now be rewritten to this code:

class DatabaseObject {
    var id: UUID {
        didSet {
            Database.shared.update(self)
        }
    }

    var name: String {
        didSet {
            Database.shared.update(self)
        }
    }
}

Conformance Macros

Conformance macros can add properties and methods to a type to make sure it conforms to a protocol or superclass.

As shown in this example, you can also define a macro that requires parameters.

@JSON(["id": "_id", "companyName": "company"])
struct CompanyResponse: Codable {
    var id: String
    var companyName: String
}

Here, the macro could implement the Codable protocol by adding CodingKeys based on the provided dictionary.

struct CompanyResponse {
    var id: String
    var companyName: String
}

extension CompanyResponse: Codable {
    private enum CodingKeys: String, CodingKey {
        case id = "_id"
        case companyName = "company"
    }
}

Member Macros

To add new members such as properties and methods to a declaration, you can use member macros. As an example, one could create a macro to generate type-safe properties from a localizable file (similar to what SwiftGen or R.swift do):

@Localization("Localizable")
enum L18n {}

This macro could generate definitions like the following:

enum L18n {
    enum Welcome {
        static let title = NSLocalizedString("Welcome.title", comment: "")
        static let nextAction = NSLocalizedString("Welcome.nextAction", comment: "")
    }
}

Member Attribute Macros

A member attribute macro cannot add new members to an existing declaration. It can, however, change the attributes of existing members, e.g. by marking them @objc or @nonobjc or by adding property wrappers.

A database framework relying on dynamic Objective-C features could use this macro to make sure properties are correctly marked:

class MyDatabaseObject: NSManagedObject {
    @ManagedProperty var id: UUID?
}

The macro could then make sure to mark these properties both dynamic and @objc:

class MyDatabaseObject: NSManagedObject {
    @objc dynamic var id: UUID?
}

Peer Macros

Peer macros expand into new declarations at the same scope as the declaration they are attached to. For example, a macro might generate a protocol from the definitions in the attached context.

@Protocolize("MyProtocol")
class MyClass {
    var someProperty = String()
    func someMethod() {}
}

This macro could generate code like this:

protocol MyProtocol {
    var someProperty: String { get set }
    func someMethod()
}

Let’s create a macro!

Let’s build a small macro to get more familiar with what macros can do. In this example, we will build a macro to generate SwiftUI’s EnvironmentKey for us.

How is this useful? Let’s have a look at the code we would normally need to write:

extension EnvironmentValues {
    private enum TertiaryColorEnvironmentKey: EnvironmentKey {
        static var defaultValue: Color { .primary.opacity(0.5) }
    }

    var tertiaryColor: Color {
        get {
            self[TertiaryColorEnvironmentKey.self]
        }
        set {
            self[TertiaryColorEnvironmentKey.self] = newValue
        }
    }
}

Rather than that, we simply want to be able to write this:

extension EnvironmentValues {
    #GenerateEnvironmentKey<Color>("tertiaryColor", default: .primary.opacity(0.5))
}

While this macro might not eliminate a whole lot of boilerplate code, it does remove the need to create that EnvironmentKey enum manually.

What do we need to do to make this happen? First, we will discuss the project setup, then we will define how our macro is called, and finally, we will implement the macro.

Project setup

To create a new Swift package that will contain our macro, we will use the macro template in Xcode. Select File > New > Package... to start the creation process of a new Swift Package.

By choosing Multiplatform > Other > Swift Macro (as shown in the screenshot above), Xcode already sets up 4 targets for us:

Target | Explanation | Questions | Compiled for
<name> | Macro declaration | "How can macros be used?", "What parameters will need to be provided?", "What names can be specified by a macro?" | Client
<name>Macros | Macro expansions | "What code does a macro generate?" | Development computer
<name>Client | A client to use the macros | | Client
<name>Tests | A test suite for the macros | "How would I be able to check whether my expansion implementation is correct?" | Development computer

The <name> placeholder in the target names is replaced by the package name you enter during setup.

The macro expansions will be run on the host machine (i.e. the machine the code is compiled on), which is why the extra <name>Macros target is needed. The code it contains won’t be included in the executable; it only runs during compilation.

How is my macro being called?

Now that the project setup is done, we can dive into writing some code, right? Before we get into the depths of the abstract syntax tree, let’s first define how our macro is going to be called. We do this by adding a new declaration in the <name> target, i.e. in the target that a user would import into their code.

@freestanding(declaration, names: arbitrary)
public macro GenerateEnvironmentKey<T>(_ name: StaticString, `default` builder: @autoclosure @escaping () -> T) = #externalMacro(module: "TestMacros", type: "GenerateEnvironmentKeyMacro")

We first declare the category (freestanding vs. attached) of the macro, followed by the concrete type. Additionally, some macros are required to specify a certain naming scheme. Since the GenerateEnvironmentKey macro is exclusively going to be used inside extensions of EnvironmentValues, we can specify arbitrary names here. After the macro keyword, the signature looks just like any other function signature, right? Yes, but it’s not followed by a body; instead, it is assigned to the #externalMacro construct, which specifies the module containing the macro implementation and the type implementing the code expansion. In this example, we will use the <name>Macros target (called TestMacros above) and the type name GenerateEnvironmentKeyMacro.

Where does the macro implementation go?

As you might have noticed, the <name>Macros target doesn’t yet have a GenerateEnvironmentKeyMacro type, so let’s create that and make it conform to the DeclarationMacro protocol.

public struct GenerateEnvironmentKeyMacro: DeclarationMacro {
    public static func expansion(
        of node: some FreestandingMacroExpansionSyntax,
        in context: some MacroExpansionContext
    ) throws -> [DeclSyntax] {
        return []
    }
}

To make our macro conform to the DeclarationMacro protocol, we need to implement the expansion function, but before we get there, we still need to do one important but easy-to-miss step: We need to expose that macro to the outside. We do this by adding the type to the providingMacros property of the <name>Plugin struct.

@main
struct TestPlugin: CompilerPlugin {
    let providingMacros: [Macro.Type] = [
        GenerateEnvironmentKeyMacro.self,
    ]
}

With all this set up, we can finally get to implementing the actual expansion code.

Writing macro expansion code

Let’s reiterate how our macro is supposed to be called:

extension EnvironmentValues {
    #GenerateEnvironmentKey<Color>("tertiaryColor", default: .primary.opacity(0.5))
}

As you might have noticed, we don’t just get those parameters passed into the expansion function directly. We need to extract them from the node parameter, which provides us with the syntax elements making up the call site of the macro.

public static func expansion(
    of node: some FreestandingMacroExpansionSyntax,
    in context: some MacroExpansionContext
) throws -> [DeclSyntax] {
    // Firstly, we extract the name parameter as a string literal:
    guard let name = node.argumentList.first?.expression.as(StringLiteralExprSyntax.self)?.representedLiteralValue else {
        throw CocoaError(.featureUnsupported) // You might want to replace this error with a more descriptive replacement
    }

    // Secondly, we extract the default value's builder function:
    guard let builder = node.argumentList.dropFirst().first?.as(LabeledExprSyntax.self)?.expression else {
        throw CocoaError(.featureUnsupported) // You might want to replace this error with a more descriptive replacement
    }

    // Thirdly, we also need to know about the generic type being used:
    guard let genericArgument = node.genericArgumentClause?.arguments.first else {
        throw CocoaError(.featureUnsupported) // You might want to replace this error with a more descriptive replacement
    }

    return []
}

With all this code in place, we have access to our two parameters for the name and default value as well as the generic type.

– Warning: The macro makes the assumption that the generic type is specified explicitly. What if the macro is called as #GenerateEnvironmentKey("myNumber", 5) though?
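One way to make the failure case above friendlier is to throw an error with a clearer message instead of the generic CocoaError. The following is a sketch only: MacroError is a made-up name, not part of SwiftSyntax, and the snippet is meant to replace the third guard inside the expansion function shown earlier.

```swift
// Hypothetical error type; its description is surfaced as the
// diagnostic at the macro call site when the expansion throws.
struct MacroError: Error, CustomStringConvertible {
    let description: String
}

// Replacing the guard from the expansion function above:
guard let genericArgument = node.genericArgumentClause?.arguments.first else {
    throw MacroError(description: """
    #GenerateEnvironmentKey requires an explicit generic argument, \
    e.g. #GenerateEnvironmentKey<Int>("myNumber", default: 5)
    """)
}
```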

Let’s expand some code, then! We can make use of SwiftSyntax’s awesome string literal conversions:

public static func expansion(
    of node: some FreestandingMacroExpansionSyntax,
    in context: some MacroExpansionContext
) throws -> [DeclSyntax] {
    // .. the previous code setting name, builder and genericArgument variables.

    // Note: `capitalized` would lowercase the rest of the name
    // ("tertiaryColor" would become "Tertiarycolor"), so we only
    // uppercase the first letter:
    let keyName = name.prefix(1).uppercased() + String(name.dropFirst()) + "EnvironmentKey"
    return [
        """
        private enum \(raw: keyName): EnvironmentKey {
            static var defaultValue: \(genericArgument) { { \(builder) }() }
        }
        """,
        """
        var \(raw: name): \(genericArgument) {
            get { self[\(raw: keyName).self] }
            set { self[\(raw: keyName).self] = newValue }
        }
        """
    ]
}

That’s all that’s needed! We now generate both the EnvironmentKey conforming enum with the defaultValue generated as specified in the macro and add a property with the specified name using that key.

– Note: The code above makes heavy use of string literals with raw strings injected. To write safer code, you may want to check out Swift AST Explorer to see how the individual syntax nodes can be created more explicitly and without the use of raw string injection.

Trying out macros in the Client target

We can use the Client target to see whether our macro works correctly. Let’s just define a tertiaryColor environment key for our app’s color scheme:

extension EnvironmentValues {
    #GenerateEnvironmentKey<Color>("tertiaryColor", default: .primary.opacity(0.5))
}

Does this actually work though? Well, let’s check in a SwiftUI view whether we can access the tertiaryColor property:

struct MyView: View {
    @Environment(\.tertiaryColor) var tertiaryColor

    var body: some View {
        Text("MyView")
            .foregroundStyle(tertiaryColor)
    }
}

Works as expected!

– Warning: Keep in mind that the macro makes many assumptions about its usage context (e.g. that it is only used within EnvironmentValues extensions) or that there is an explicit generic argument clause, so you might want to adapt the code before making use of it.

Testing

As you might want to use the Client target for an actual executable and test more advanced edge cases, let’s check out the testing capabilities of Swift for macros. The SwiftSyntaxMacrosTestSupport module provides us with the assertMacroExpansion function to see whether our macro correctly expands. Let’s change the test in <name>Tests/<name>Tests.swift to actually call our macro:

import SwiftSyntaxMacros
import SwiftSyntaxMacrosTestSupport
import XCTest

#if canImport(TestMacros)
import TestMacros

let testMacros: [String: Macro.Type] = [
    "GenerateEnvironmentKey": GenerateEnvironmentKeyMacro.self,
]
#endif

final class MacroTests: XCTestCase {
    func testGenerateEnvKeyMacro() throws {
        #if canImport(TestMacros)
        assertMacroExpansion(
            """
            extension EnvironmentValues {
                #GenerateEnvironmentKey<Color>("tertiaryColor", default: .primary.opacity(0.5))
            }
            """,
            expandedSource: """
            extension EnvironmentValues {
                private enum TertiaryColorEnvironmentKey: EnvironmentKey {
                    static var defaultValue: Color { .primary.opacity(0.5) }
                }

                var tertiaryColor: Color {
                    get {
                        self[TertiaryColorEnvironmentKey.self]
                    }
                    set {
                        self[TertiaryColorEnvironmentKey.self] = newValue
                    }
                }
            }
            """,
            macros: testMacros
        )
        #else
        throw XCTSkip("macros are only supported when running tests for the host platform")
        #endif
    }
}

As you might have noticed, there are quite a few compiler directives (including #if). They are needed because the code relevant to expanding the macros can only be executed on the host platform (i.e. the machine the code is compiled on) and not as part of the rest of the Swift package’s code.

– Note: Now you have seen all that is needed to write your own macro. What do you think about them so far?

Discussion

As with every new concept added to a compiler or language, there are benefits to using it, but also limitations and disadvantages. Where do macros really make our lives easier, and where might they create more issues than they solve? Let’s see.

Benefits

First and foremost: by reducing boilerplate code and complicated, unintuitive patterns, macros can lead to a cleaner and smaller code base for your application. The complexity is moved from the macro user to the macro developer and can be implemented once in a generalized fashion rather than being handled over and over by each user. This is especially useful for libraries with rather strict patterns that can easily be misused. With macros, the library maintainer can control how the code is written and which modifications are even possible. One example: one might easily forget to wrap properties of an ObservableObject with the @Published property wrapper – with the @Observable macro, this cannot happen.

Code generation tools have so far been integrated into the build pipeline using shell scripts or plain executables. There is often a need for an external dependency management system such as RubyGems, manually placed binaries, or programs that need to be installed on the machine before you can even build (e.g. via Homebrew). With macros, we can integrate these tools into the dependency management of the application itself and therefore simplify the setup drastically. If not all of our tools support Swift macros though, we might just add yet another way our build pipeline can break and therefore introduce more complexity instead.

Limitations

Swift Macros are not necessarily beginner-friendly. To maintain a Swift macro, one needs extensive knowledge about the Swift AST (Abstract Syntax Tree) and the SwiftSyntax library. While tools like the Swift AST Explorer can help a developer quickly learn how Swift’s syntax is structured, it is easy to miss that one edge case that wasn’t considered while developing a macro. Testing helps a lot here, but there can always be an edge case or new language feature that slips through during development.

For a developer making use of macros, they seem to “just work like magic” – until they don’t. Then it can be hard to figure out what is causing code to no longer compile. Being able to dig through the generated, non-compiling code might help, but a broken macro can easily render a codebase unbuildable, too.

Adding on to this: macros aren’t very transparent. While you might be able to see the code generated by a macro dependency when you first compile it, are you going to check whether each subsequent version of that macro generates the same code? Of course, this issue exists with dependencies in general, but in contrast to regular source dependencies, a macro interacts with your own code in a more integrated way, potentially leading to more severe issues.

Adding Swift macros to an existing package requires at least one more target in the Swift package setup. In simple packages, that may just be a single extra target. More complex setups might want to think twice though: if multiple targets are intended to include macros, it can easily double the number of targets needed.

The Swift macro API is not necessarily modularity-friendly. A macro seems to be intended to expand one or more concrete points in your code. Assuming you have found an issue with a macro and would like to add just one feature to it, you will most likely have a hard time figuring out how to do this without changing the dependency’s code directly (e.g. by forking it).

Swift macros that generate code at the global scope (i.e. without being nested in a type) need to conform to a specific naming scheme. You may decide to always use a specific prefix or suffix for the definitions generated by a macro, but this requirement can still limit which code can be generated.

Conclusion

Swift Macros can be a great tool for libraries currently forcing users into repeating patterns or into writing unsafe code that could be checked at compile time. As with every new tool or feature, one shouldn’t overdo it, though. Macros come with their own limitations, and they can also introduce issues that wouldn’t be possible with “regular” dependencies, such as making your own code no longer compile.

Don’t be afraid to dive into this new fascinating feature of Swift though, a lot of the examples in the SwiftSyntax repository can help you get started!

 

The post Swift Macros: Understanding Freestanding & Attached Macros appeared first on QuickBird Studios.

]]>
Platform Channels are Dead! Objective-C/Swift Interop is Here! https://quickbirdstudios.com/blog/dart-swift-objective-c-interop/ Mon, 11 Dec 2023 15:43:49 +0000 https://quickbirdstudios.com/?p=17127 Dart 2.18 introduced Objective-C/Swift Interop. In this comprehensive guide, we explore how to seamlessly integrate native iOS code into your Flutter app using Dart FFI and C as an interoperability layer.

The post Platform Channels are Dead! Objective-C/Swift Interop is Here! appeared first on QuickBird Studios.

]]>
Flutter Native Code Comic

Platform Channels are Dead! Objective-C/Swift Interop is Here!

In Dart 2.18 the Dart Team introduced Objective-C/Swift Interop. It allows us to directly call native code on iOS from our Dart codebase by using Dart FFI and C as an interoperability layer. Does that mean we don’t have to rely on platform-channels to call native functions on iOS anymore? That’s what we’re going to explore in this article.


The Problem

One of the caveats of Flutter is that we need to learn about all the platforms we are trying to run our app on. To be a successful Flutter developer, it’s not enough to find your way around Flutter and Dart; from time to time you also have to touch the underlying native platforms, iOS and Android. Those platforms not only work differently but also use different programming languages: Swift/Objective-C for iOS and Kotlin/Java for Android. To implement something that heavily depends on device internals like battery, Bluetooth, etc., we always need to work on the native platform.

Most of the time there are already some packages that try to solve that for us, but the issue is they get easily outdated and you have to rely on someone to maintain them. Also, new versions of iOS or Android might introduce new functionality which means packages or your code gets outdated and you have to wait until someone finally implements the new functionality into their package.

Platform-Channels ✉️

Currently, we rely on platform channels to communicate with the native platforms, which requires us to write the native (iOS, Android) implementations on our own. Managing resources and error handling makes this even more complicated. Also, building platform channels creates a lot of repetitive boilerplate code. Pigeon already tries to solve the boilerplate issue with code generation, but the other issues remain.

Luckily, Google introduced a new way of calling native code on iOS: Objective-C/Swift Interop. There is also an early build of the same mechanism for Android called JNI. It’s still in an early alpha stage, but we will take a brief look at it here and a deeper look in an upcoming article. So feel free to subscribe to our newsletter to get informed when the new article is live!

What is Objective-C/Swift Interop? ⛓

Swift Interop allows us to call Swift/Object-C code directly from Dart code. Now you might think this is impossible since those are completely different languages!? Swift Interop heavily relies on Dart FFI (Foreign-Function-Interfaces) which was introduced to call native C-Code directly from Dart. Swift Interop uses the same mechanism. Since both languages have an interface to communicate directly with C-Code Dart can leverage that. Objective-C/Swift Interop will create C-Bindings from Swift/Objective-C code which then can be called using Dart FFI directly from Dart-Code.


Example 🛠️

Let’s start with an example that easily shows how everything works: we want to load the current battery state from the device. Swift Interop relies on code generation based on the Swift/Objective-C code, so we need to set up a few things in our project first.

Prerequisites

There are some dependencies that we need. First, we add ffigen and ffi to our project: dart pub add --dev ffigen and dart pub add ffi.
Since ffigen parses Objective-C header files, you also need to install LLVM on your system; you can find the instructions for your system here. LLVM is a compiler infrastructure; ffigen uses its libclang library to parse the headers.
Next, we need a configuration file that tells ffigen which file to generate the bindings for. Since we want to load the battery state, we need exactly that file/library.

Where do we get the battery state from?

If you have some familiarity with iOS, you probably know it comes from a class named UIDevice. An easy way to find out which file/class is relevant for the functionality you want to implement is to look it up in the documentation. The good thing is you don’t need to understand the code, just where it is located. If you develop apps for iOS, there is already an iOS SDK installed on your system.

How to call an iOS function?

Now that everything is set up, which code do we generate the bindings from? As an example, we decided to load the battery state of the phone. It’s a simple function and it depends on the device (it cannot be loaded directly from Flutter). Since we already know the function comes from a class named UIDevice, we need to locate it on our development machine. If you search for it, you will find it’s located in a file named UIDevice.h. We need the Objective-C header of the code to generate the bindings using FFI. Later, we will also look at an example of how to get there from Swift code.

If you are working on a Mac, you can find the file here (the location might differ a little bit on your machine): /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/iOSSupport/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h. It is located in the directory where the iOS SDK is installed. You can either copy the file into the project or just reference it directly.

To generate the Dart FFI code, we need to set up the configuration for ffigen. You can either do this in the pubspec.yaml or in a separate file.

ffigen:
  name: UIDevice
  description: Bindings for UIDevice.
  language: objc
  output: './lib/src/uidevice_bindings.dart'
  exclude-all-by-default: true
  objc-interfaces:
    include:
      - 'UIDevice'
  headers:
    entry-points:
      - '/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/iOSSupport/System/Library/Frameworks/UIKit.framework/Headers/UIDevice.h'

If you want to understand every available parameter, you can take a look at the ffigen documentation. ffigen will also generate the bindings for all necessary dependencies.

Let’s generate some code!

To generate the code, we only have to run ffigen with the following command.

dart run ffigen

If you declared your configuration in a separate file, you can point to a custom config file by adding --config config.yaml. If you used the configuration from above, there should be a uidevice_bindings.dart file in the lib/src folder now. You might have noticed that the generated file is quite big, in my case over 700k lines of code. What ffigen did was generate bindings to every method and class inside UIDevice.h plus its dependencies. But isn’t FFI there to generate/call C code? Yes and no. What ffigen did in the background was generate C header files from the Objective-C header files. From the C header files, the Dart bindings were generated; ffigen just hid the intermediate step from us.

700k lines of code, where do we start? Let’s head back to the iOS documentation to find out what we need. There is a property called batteryLevel which returns the current level of the battery. That’s what we need! If you search the generated file, you should find a getter that returns the battery level. The code in the file looks complicated, but we don’t need to worry about that.

How to use the bindings in Flutter

This is the last complicated step. We need to load the dylib file (similar to DLLs on Windows) into memory so we can access the battery state function. Normally you would do this with DynamicLibrary.open('path-to-lib'), but since we are developing for iOS, the lib is already bundled with our app, so we can just call DynamicLibrary.process() and it will automatically pick the correct dylib.
After that, we just have to instantiate the UIDevice class and can then access all the methods that UIDevice provides.

final lib = UIDevice(DynamicLibrary.process());
final device = UIDevice.new1(lib);

Then we can just call device.batteryLevel and get the current battery state of the device. But if you run this code, you might notice it does not work: batteryLevel is always -1. Let’s jump back to the documentation:

Battery level ranges from 0.0 (fully discharged) to 1.0 (100% charged). Before accessing this property, ensure that battery monitoring is enabled.

We need to enable battery monitoring first. So before reading batteryLevel, we have to call the setter device.batteryMonitoringEnabled = true and then load the battery level. If you run this on a simulator, it should return 1.0 (if you did not change the settings).

Here is the full code:

final lib = UIDevice(DynamicLibrary.process());
final device = UIDevice.new1(lib);
device.batteryMonitoringEnabled = true;

if (device.batteryMonitoringEnabled) {
    final batteryLevel = device.batteryLevel;
    log('Battery level: $batteryLevel');
}

How to generate Code from Swift

You might have noticed we only generated the code from Objective-C header files, but modern iOS development uses Swift. How do we generate the same thing from Swift code? Sadly, at the time of writing, it only works with a small workaround: an @objc annotation needs to be added to the Swift code. For some of the iOS Swift libraries, this is already done. For others, you can write a quick wrapper in Swift and annotate those classes yourself. You also have to make every method you want to use public, and the classes need to extend NSObject. After that, you are ready to generate the Objective-C header files by running this:

swiftc -c swift_api.swift \
    -module-name swift_module \
    -emit-objc-header-path swift_api.h \
    -emit-library -o libswiftapi.dylib

This will generate the Objective-C header file plus the dylib file. With that, you can return to the previous steps and generate the Dart bindings. For more complex use cases, you can also have a look at the Swift documentation.

Closed source-code

These techniques don’t work with code whose headers are not publicly available. This is especially the case for new SwiftUI components. The Dart team is working on a proper solution, and you can track the progress here.

Limitations

You already saw one big limitation with closed source code or missing Objective-C header files. But given the current progress, we can expect a good solution here soon!

Other than that, there are also some issues with multithreading, which can be important for some platform tasks. These limitations are due to the relationship between Dart isolates and OS threads, and the way Apple’s APIs handle multithreading:

  • Dart isolates are not the same thing as threads. Isolates run on threads but aren’t guaranteed to run on any particular thread, and the VM might change which thread an isolate is running on without warning. There is an open feature request to allow isolates to be pinned to specific threads.
  • While ffigen supports converting Dart functions to Objective-C blocks, most Apple APIs don’t make any guarantees about which thread a callback will run on.

The callback created in one isolate might be invoked on a thread running a different isolate, or no isolate at all. This will cause your app to crash. You can work around this limitation by writing some Objective-C code that intercepts your callback and forwards it over a Dart port to the correct isolate.

  • Most APIs involving UI interaction can only be called on the main thread, which Flutter calls the platform thread.

Directly calling some Apple APIs using the generated Dart bindings might be thread-unsafe. This could crash your app, or cause other unpredictable behavior. You can work around this limitation by writing some Objective-C code that dispatches your call to the main thread. For more information, see the Objective-C dispatch documentation.

You can safely interact with Objective-C code, as long as you keep these limitations in mind.

JNI is coming

Objective-C/Swift Interop obviously only works for iOS. To replace platform channels, we also need a solution for Android (and other platforms). For Android, there is already an alpha implementation of JNI available, which uses the same C interop mechanism as Swift Interop. At the time of this article, it’s still in an alpha stage. We will release an article on how to use it (especially in combination with Swift Interop) when it reaches beta.

The future looks bright ✨

At the moment, accessing native methods still takes some effort: we have to check the native documentation, find the dylib, and generate code. Another downside is that the generated code looks super complicated and inflates our line count. We think that in the future the Flutter/Dart team might ship those generated bindings with the Flutter framework… at least that’s what we are hoping for. That would eliminate all the effort we currently have to put in; we would only need to check the documentation and find the right methods.

Conclusion

Swift Interop is a new feature that makes it possible to call native functions directly from Dart. It’s still under heavy development, but it has the potential to revolutionize the way we develop Flutter apps and how we access native functions. There are still things to keep in mind:

  • Swift Interop is still in beta: This means that there may be bugs, and the API might change in future versions of Flutter
  • Swift Interop can be tricky to use correctly: If not used correctly, it can lead to memory leaks and other performance problems.
  • Hard-to-debug issues: If you encounter issues inside the generated Dart code it’s not easy to find out where the issue comes from, since the code is generated and uses FFI code.

For now, our advice is to use Swift Interop mostly for the small native functionalities that you’d like to implement in your app. We expect some breaking changes before Swift Interop reaches a stable release, so always keep in mind that with the next update you might have to adapt your code again.

Thanks for reading! 

Do you search for a job as a Flutter Developer?
Do you want to work with people that care about good software engineering?
Join our team in Munich

The post Platform Channels are Dead! Objective-C/Swift Interop is Here! appeared first on QuickBird Studios.

]]>
Swift Result Builders: Creating Custom DSLs for Binary Formatted Data https://quickbirdstudios.com/blog/swift-resultbuilder-data/ Thu, 03 Aug 2023 14:22:38 +0000 https://quickbirdstudios.com/?p=16596 SwiftUI has revolutionized how we build UI, introducing a more intuitive, declarative approach. Instead of prescribing a series of steps to reach an end goal, we describe the outcome and let the program determine the path

The post Swift Result Builders: Creating Custom DSLs for Binary Formatted Data appeared first on QuickBird Studios.

]]>

Swift Result Builders: Creating Custom DSLs for Binary Formatted Data

SwiftUI has revolutionized how we build UI, introducing a more intuitive, declarative approach. Instead of prescribing a series of steps to reach an end goal, we describe the outcome and let the program determine the path. But can we apply this principle to other areas, such as binary data, by using custom result builders? Like UI, data transformation often involves moving from a specific starting point to a desired end state. Just imagine if we could simply define the endpoints and let the program do the heavy lifting! In this article, we take a deep dive into result builders, the magic behind SwiftUI, and show how to apply the same principles to other areas by creating custom result builders to encode binary-formatted data.


With declarative frameworks taking over UI development (React, Flutter, Compose, SwiftUI, etc.), more and more developers are getting used to declarative programming. For developers coming from object-oriented, imperative, or functional paradigms, the declarative style can take quite some getting used to. Since UI development is an important field of mobile software development and many developers are now gaining experience with declarative programming, let’s take a look at whether it makes sense to use this style in even more parts of our apps. How can we make use of Swift’s result builder feature to make it easier to encode binary-formatted data, as used in communication via Bluetooth, the internet, or peer-to-peer networking?

Note: Alongside this article, we developed a library to easily encode and decode binary-formatted data in Swift – DataKit. With the help of modern Swift language features, it allows developers to easily specify the binary format of their message types using a declarative style. Feel free to check it out and let us know what you think!

Declarative Programming

First of all: What actually is declarative programming? In programming, there are often many different paths to solve a problem – many of these paths can be grouped into different paradigms, including the imperative, object-oriented, functional, or declarative programming paradigm.

Imperative

In imperative programming, your code will most likely look similar to a cooking recipe: a list of ingredients (variables) on top, followed by a set of instructions resulting in some output. The resulting code is often harder to reuse and sometimes harder to read; however, it is frequently used in the more performance-critical, low-level portions of a codebase.

Object-oriented

In object-oriented programming, you create different types/objects to solve certain aspects of a program. Each object has its own responsibilities. A programmer usually aims to reduce dependencies between the different objects and keep each object on its own generic to improve the reusability and clarity of the code.

Functional

Functional programming tries to avoid internal state by specifying instructions as mere transformations of their input, in a rather mathematical form. Rather than using state variables or objects with internal storage, the output is purely defined by the input itself, often relying heavily on recursion, which makes the code easy to reason about and prove correct. Some algorithms relying on complex internal state might, however, lack readability and require more boilerplate code.

Declarative

Using declarative programming, code no longer specifies how to achieve a certain solution but rather simply describes what the result should look like. How the actual solution is achieved is hidden away in different components that are composed to provide an intuitive and easy-to-read interface for developers. For example, in SwiftUI a developer only needs to specify the composition of a view from the current state, and SwiftUI takes over the task of automatically updating the user interface – so there is no need to manage long-lived view objects on your own and keep state and views in sync.

ResultBuilder

Since declarative programming is based on describing the desired outcome directly, we first need some mechanism to specify that desired outcome. One common pattern for building such a description is the builder pattern: an object or type whose methods construct a complex object (in our case, the description of the desired outcome) by composing different objects. In Swift, this pattern can be implemented using so-called result builders.

ViewBuilder

If you have worked with SwiftUI before, you most likely have already used a result builder called ViewBuilder. ViewBuilder is used to compose different views inside the body property of a view. For everyone not familiar, here is a short example:

struct AppIconView: View {
    let hasText: Bool

    @ViewBuilder var body: some View {
        VStack(spacing: 16) {
            Image("AppIcon")

            if hasText {
                Text("<App Name>")
            }
        }
    }
}

In this example, we can see a simple SwiftUI view. Many SwiftUI views are defined as a composition of other views inside their body properties. The @ViewBuilder annotation specifies that the ViewBuilder result builder is to be used to construct the result of the body property. Inside the body property, you can see that the AppIconView is composed of a VStack containing an image and possibly also a text (depending on the value of hasText).

Note: The @ViewBuilder annotation does not need to be added for SwiftUI views explicitly, since the View protocol already includes it. If you want to use a ViewBuilder for other properties, you need to specify it directly though.

Basic concept of ResultBuilder

For developers not used to result builders, the example above might not be intuitive. You simply create some objects and then throw them away directly? No, this is not what happens here. Instead, the Swift compiler will rewrite the method above into a representation like this:

var body: some View {
    ViewBuilder.buildBlock(
        VStack(
            spacing: 16,
            content: ViewBuilder.buildBlock(
                ViewBuilder.buildExpression(Image("AppIcon")),
                ViewBuilder.buildOptional(
                    hasText
                        ? ViewBuilder.buildExpression(Text("<App Name>"))
                        : nil
                )
            )
        )
    )
}

Essentially, result builders are a compiler feature that allows developers to avoid writing boilerplate code to compose different objects. Before we dive into what each of these methods means, let’s get a rough overview first.

As you can see in the example above, a result builder composes the resulting object by using multiple build functions. These functions can be clustered into three categories:

  • buildExpression functions transform the values that can be fed into the result builder into components. They are super useful for supporting many different input value types without unnecessarily overcomplicating the logic of a result builder.
  • buildFinalResult functions allow components to be transformed into a final result type after the composition is finished. They are not required for all result builders but might be useful if the component type used during composition should be different from the return type.
  • All other build functions allow components to be combined or transformed based on control flow. There are specific functions for if-statements, for-loops, etc.

This graphic shows how the different function categories are called by a result builder. The code specifies either components or expressions that are converted into components right away. These components are then merged or transformed based on control flow. To create the final return value, the resulting component may be converted as well.

Custom ResultBuilder: DataBuilder

Okay, enough theory – let’s build one! DataBuilder will allow us to easily encode binary formatted data. As an example, we imagine a weather station to regularly emit messages containing weather information that needs to be encoded using the new result builder. First, we will need to declare our result builder type:

@resultBuilder
enum DataBuilder {}

A result builder is marked with a @resultBuilder annotation. We use an enum without any cases to avoid instantiation of DataBuilder. The implementation doesn’t build anything yet though – let’s change that!

extension DataBuilder {
    static func buildBlock(_ components: Data...) -> Data {
        components.reduce(into: Data()) { $0.append(contentsOf: $1) }
    }
}

With this function, we are already able to compose different Data components into one. But what about values that are not of type Data?

Expressions

What about integer expressions for a start?

extension DataBuilder {
    static func buildExpression<I: FixedWidthInteger>(_ value: I) -> Data {
        withUnsafeBytes(of: value) { Data($0) }
    }
}

With this new function, DataBuilder is able to convert all fixed-width integer types (i.e. UInt8, UInt16, UInt32, UInt64, UInt, Int8, Int16, Int32, Int64 and Int) into Data components. Therefore, there is no need to specifically handle integer types in the other build functions used for control flow statements.

Weather Station

Now with a basic result builder in place, we can take a look at what we want to encode: Update messages for a weather station. We have the following description of the message format:

  • Each message starts with a byte with the value 0x02.
  • The following byte contains multiple feature flags:
    • bit 0 is set: The message contains temperature information
    • bit 1 is set: The message contains humidity information
    • bit 2 is set: Using °C instead of °F for the temperature
  • Temperature as a big-endian 32-bit floating-point number
  • Relative Humidity as UInt8 in the range of [0, 100]
  • CRC-32 with the default polynomial for the whole message (incl. 0x02 prefix).

Based on this specification we have already built the following types:

struct WeatherStationFeatures: OptionSet {
    var rawValue: UInt8

    static let hasTemperature = Self(rawValue: 1 << 0)
    static let hasHumidity = Self(rawValue: 1 << 1)
    static let usesMetricUnits = Self(rawValue: 1 << 2)
}

struct WeatherStationUpdate {

    var id: UInt16
    var features: WeatherStationFeatures
    var temperature: Measurement<UnitTemperature>
    var humidity: Double // Range: [0, 1]

}

As you can see, we have not only defined a WeatherStationUpdate type but also a WeatherStationFeatures option set. Why an option set? Because it allows us to easily check whether a certain flag is set (e.g. features.contains(.hasTemperature)), construct our features from an array literal ([.hasTemperature, .usesMetricUnits]), or use set semantics (features.intersection([.hasTemperature, .hasHumidity])). All of this functionality is provided to us by conforming to the OptionSet protocol. There is, however, one catch: our struct will only ever have that one rawValue. This limitation is totally fine in this case though.

To build our data from a WeatherStationUpdate, we will add a data property and add our values as needed:

extension WeatherStationUpdate {
    @DataBuilder var data: Data {
        UInt8(0x02)
        features.rawValue
    }
}

Okay, so the beginning of our message looks good, what about the temperature/humidity information?

Conditionals

Let’s improve DataBuilder to support if-statements!

extension DataBuilder {
    
    static func buildOptional(_ component: Data?) -> Data {
        component ?? Data()
    }

    static func buildEither(first component: Data) -> Data {
        component
    }

    static func buildEither(second component: Data) -> Data {
        component
    }

    static func buildLimitedAvailability(_ component: Data) -> Data {
        component
    }

}

Wait, what? Four different functions for a simple if-statement? That can’t be right?! They all have slightly different purposes, but we can easily introduce them at once since they are quite similar.

  •  buildOptional is called for simple if-statements without an else block
  •  buildEither(first:) is called for the first block of an if-statement with an else block, or for the first case of a switch-case-statement
  •  buildEither(second:) is called for the else block of an if-statement, or for the remaining cases of a switch-case-statement
  •  buildLimitedAvailability is used for statements that are wrapped inside an if #available(...) construct

Good, so now let’s add our temperature and humidity values to the data we are building:

extension WeatherStationUpdate {
    @DataBuilder var data: Data {
        UInt8(0x02)
        features.rawValue
        if features.contains(.hasTemperature) {
            if features.contains(.usesMetricUnits) {
                Float(temperature.converted(to: .celsius).value)
                    .bitPattern.bigEndian
            } else {
                Float(temperature.converted(to: .fahrenheit).value)
                    .bitPattern.bigEndian
            }
        }
        if features.contains(.hasHumidity) {
            UInt8(humidity * 100)
        }
    }
}

Note: bitPattern is a computed property on floating-point values that gives us an unsigned integer of the same size (i.e. Float16 -> UInt16, Float/Float32 -> UInt32, Double/Float64 -> UInt64). By specifying bigEndian, we can easily convert any integer into its big-endian representation, no matter the native endianness of the system. In our case, we want all the floating-point numbers to be big-endian encoded.
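A quick way to convince yourself of what bitPattern and bigEndian do is a standalone sketch like this (values chosen for illustration):

```swift
let one: Float = 1.0
// IEEE 754 single precision encodes 1.0 as 0x3F800000
assert(one.bitPattern == 0x3F80_0000)

// On little-endian CPUs (arm64, x86_64), bigEndian reverses the byte order,
// so the most significant byte 0x3F comes first in memory:
withUnsafeBytes(of: one.bitPattern.bigEndian) { bytes in
    assert(Array(bytes) == [0x3F, 0x80, 0x00, 0x00])
}
```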

With this change, we added both the temperature and the humidity information. As written in our protocol specification, we only encode the temperature using the Celsius scale when the usesMetricUnits flag is set. Further, we need to encode the humidity as a UInt8 value. We are ignoring possible conversion errors here for simplicity and assume that they have been handled before the instantiation of the WeatherStationUpdate object itself.

Final result

Looking at the specification, there is still one important thing missing: the CRC checksum at the end of the message. How do we implement that?! We could of course do something along these lines, right?

func appendCRC32(@DataBuilder to data: () -> Data) -> Data {
    var data = data()
    let crcValue = CRC32.default.calculate(for: data)
    withUnsafeBytes(of: crcValue) { data.append(contentsOf: $0) }
    return data
}

In this approach, we create a method that allows some data to be passed in. Thanks to the @DataBuilder annotation on the closure parameter, you can use the DataBuilder syntax to build the closure’s result. However, you would need another level of indentation when calling it, and it’s not really intuitive, right?

Can’t we somehow make it work to just specify that CRC as a component? How could that CRC get to our data? Well, we can make our result builder more powerful by introducing a custom component type rather than Data.

extension DataBuilder {
    struct Component {
        let append: (inout Data) -> Void
    }
}

With this component type, we simply provide a closure that allows us to modify the existing data as it was written by other components. This way, our CRC checksum can easily read the existing data and then append its own value to the data.

We will, however, need to rewrite our methods to fit the new component type. Since it is quite straightforward, here is the result:

@resultBuilder 
enum DataBuilder {

    struct Component {
        let append: (inout Data) -> Void
    }

    static func buildBlock(_ components: Component...) -> Component {
        Component { data in
            for component in components {
                component.append(&data)
            }
        }
    }

    static func buildExpression<I: FixedWidthInteger>(_ expression: I) -> Component {
        Component { data in
            withUnsafeBytes(of: expression) {
                data.append(contentsOf: $0)
            }
        }
    }

    static func buildOptional(_ component: Component?) -> Component {
        component ?? Component { _ in }
    }

    static func buildEither(first component: Component) -> Component {
        component
    }

    static func buildEither(second component: Component) -> Component {
        component
    }

    static func buildLimitedAvailability(_ component: Component) -> Component {
        component
    }

}

With these changes in place, we can start integrating the CRC now. In one of our previous articles, we have already implemented the CRC algorithm. You can find its implementation here.

To use this CRC algorithm in our result builder, let’s add another buildExpression function:

extension DataBuilder {
    static func buildExpression(_ crc: CRC32) -> Component {
        Component { data in
            let value = crc.calculate(for: data)
            withUnsafeBytes(of: value.bigEndian) {
                data.append(contentsOf: $0)
            }
        }
    }
}

Perfect – now, we can simply use it in our data property, right? Let’s try! Since we have migrated all of our build functions, the existing implementation should still work fine, right? Oh no, it’s no longer compiling…

extension WeatherStationUpdate {
    @DataBuilder var data: Data {
        UInt8(0x02)
        features.rawValue
        if features.contains(.hasTemperature) {
            if features.contains(.usesMetricUnits) {
                Float(temperature.converted(to: .celsius).value)
                    .bitPattern.bigEndian
            } else {
                Float(temperature.converted(to: .fahrenheit).value)
                    .bitPattern.bigEndian
            }
        }
        if features.contains(.hasHumidity) {
            UInt8(humidity * 100)
        }
        CRC32.default
    }
}

DataBuilder is now expecting us to return a DataBuilder.Component value. Since we want it to build Data objects, let’s make use of the buildFinalResult method of a result builder to build a Data from a Component in our DataBuilder.

extension DataBuilder {
    static func buildFinalResult(_ component: Component) -> Data {
        var data = Data()
        component.append(&data)
        return data
    }
}

With this new function, DataBuilder can easily build components using the Component type and then transform that Component in the last step to create the return value. We can now build the data from our WeatherStationUpdate objects using declarative programming! 🎉
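Putting it all together, here is a hypothetical update and the bytes it produces. This sketch assumes the WeatherStationUpdate, DataBuilder, and CRC32 types defined above; the trailing four bytes depend on the CRC32 implementation, so we only check the prefix:

```swift
import Foundation

let update = WeatherStationUpdate(
    id: 1,
    features: [.hasTemperature, .usesMetricUnits],
    temperature: Measurement(value: 21.5, unit: UnitTemperature.celsius),
    humidity: 0.0
)

let encoded = update.data
// Byte 0: the 0x02 prefix; byte 1: the feature flags (bits 0 and 2 set);
// bytes 2...5: 21.5 as a big-endian Float; bytes 6...9: the CRC-32.
assert(encoded[0] == 0x02)
assert(encoded[1] == 0b101)
```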

Note: Feel free to take a look at the enhanced implementation of DataBuilder (and so much more) in our new Swift library DataKit.

Declarative Programming: Is it really all that useful?

That last version of the data property looks pretty much as if someone just dumped the specification into pseudocode, right? But is it all that different from an imperative version like this?

extension WeatherStationUpdate {
    var data: Data {
        var data = Data()
        data.append(0x02)
        data.append(features.rawValue)
        if features.contains(.hasTemperature) {
            if features.contains(.usesMetricUnits) {
                data.append(Float(temperature.converted(to: .celsius).value).bitPattern.bigEndian)
            } else {
                data.append(Float(temperature.converted(to: .fahrenheit).value).bitPattern.bigEndian)
            }
        }
        if features.contains(.hasHumidity) {
            data.append(UInt8(humidity * 100))
        }
        data.append(CRC32.default.calculate(for: data))
        return data
    }
}

Note: This code snippet uses a custom-defined member function Data.append<I: FixedWidthInteger>(_: I). Here is the code snippet if you want to use it yourself:

extension Data {
    mutating func append<I: FixedWidthInteger>(_ value: I) {
        withUnsafeBytes(of: value) {
            append(contentsOf: $0)
        }
    }
}

For this version, we didn’t need to introduce the DataBuilder type or define all those build functions. Let’s take a deeper look at how imperative code stacks up against the declarative code using Swift result builders!

Readability

Swift result builders help reduce boilerplate code for complex objects that are composed of other objects. DataBuilder hides away all the custom data handling logic and allows us to simply specify what is supposed to be encoded, making the code cleaner and more focused on what’s really important.

Code Complexity

Result builders do bring quite a lot of code complexity into a code base. There are build functions to convert expressions into components, components into the final result, and all kinds of functions to allow different control-flow statements.

The verbose nature of result builders essentially limits their usage to the most common composition scenarios in our apps and most of the time, a simple, imperative function can be created in a less complex fashion than an entire result builder. In libraries, however, result builders could dramatically simplify the library interface.

Code Composition

Result builders nicely integrate nested compositions. In SwiftUI, nesting ViewBuilders two or three levels deep across VStack, HStack, etc. just feels natural. In imperative programming, however, you would most likely create a new builder object for each nesting level, each with a unique name, and you would need to keep track of which builder object is responsible for each final composed object.

Compiler Support

Result builders can more easily create unexpected compiler errors.

  • If a result builder function is declared in a different module as an extension and that module is currently not imported, the builder might fail with a completely unrelated error message, because the compiler can find no reasonable combination of functions that matches the given code.
  • Result builders do not necessarily exhaustively search through each possible build function to get to the expected result type. Especially for expressions with generic constraints, a generic type might not be inferred (even though it theoretically could be). Instead, each expression should pretty much already know its type by itself, without relying on complex generic constraint chains through multiple levels of builder functions.
  • There are scenarios where result builders cannot decide which build function to use, leading to a compilation error. It is sometimes necessary to annotate build functions with @_disfavoredOverload just to keep a more generic function from interfering with another, specialized build function.

Lack of Composition Limitations

There are some scenarios in which a result builder is only supposed to contain a certain number of components of a certain type. In these cases, it would often make more sense not to compose these components in a result builder but rather to provide them as parameters instead.

Code Comprehension

It is often quite simple to decipher what imperative code does: you can simply look up the methods being called and check their documentation or even their implementation. For result builders, the individual composition of build functions may, however, not be obvious. When there are multiple versions of buildBlock functions with different type constraints on the components, it is not entirely clear which one is called.

Learning Curve

Result builders and declarative programming in general, can present a considerable learning curve for a Swift developer who hasn’t worked with SwiftUI before or for developers transitioning from different platforms or programming languages.

Conclusion

Okay, so there are some upsides and some downsides to using result builders in Swift. When are they most useful, though, and when should you try to avoid them?

In general, result builders are quite a powerful tool to build domain-specific languages for composition scenarios that make up a large part of the functionality of a library. Sometimes, an object is composed of objects with the same or similar type and sometimes an object might simply be a collection of objects of the same type – or a mix of both, of course. This is where result builders really shine and make for great, easy-to-use code.

When you have additional requirements for the composition that result builders do not (yet) support, it makes sense to avoid them and use other proven techniques instead. As a rule of thumb: if you only use result builders so that your interface looks sleek and modern, but you don’t really make use of advanced result builder features or complex composition, a “traditional” parameter list is probably best.

Once result builders grow complex, with many different expression and component types involved, compiler support becomes quite limited, and you will need to test your result builder code even more thoroughly to not run into issues where certain code snippets no longer compile.

For more information regarding result builders, you might want to have a look at the swift-evolution proposal that initially introduced them. If you are looking for a library to encode binary-formatted data in Swift, feel free to check out DataKit.

The post Swift Result Builders: Creating Custom DSLs for Binary Formatted Data appeared first on QuickBird Studios.

]]>
Class Modifiers in Dart: Sealed, Interface, Base https://quickbirdstudios.com/blog/flutter-dart-class-modifiers/ Fri, 23 Jun 2023 11:29:43 +0000 https://quickbirdstudios.com/?p=16414 With Dart 3 the class modifiers sealed, interface, final and base were introduced into the Dart language. They also updated how existing ones work. In this article, we show you why this change was made and how those new modifiers work.

The post Class Modifiers in Dart: Sealed, Interface, Base appeared first on QuickBird Studios.

]]>

Class Modifiers in Dart: Sealed, Interface, Base

With the release of Dart 3, class modifiers finally got some changes that aim to make them more convenient and powerful. Dart introduced the new class modifiers sealed, interface, final, and base, but also made some changes to how existing ones behave. The modifications spawned some confusion among users, making class modifiers seem more complicated and harder to understand than before.

Are those changes a real improvement for Flutter and Dart? In this article, we will shed some light on this and explore the changes, the reasoning behind them, and use cases for the new modifiers.


The Problem

Before the Dart 3 update, the usage of class modifiers in Dart presented a few issues:

  • If you came from another OOP language, it was not obvious how to define an interface
  • All classes could be “mixed in” to other classes, even though a dedicated modifier (mixin) exists for exactly that
  • Sealed and final classes didn’t exist, which made exhaustive pattern matching (also a new feature in Dart 3) impossible

With Dart 3, these problems are addressed with the introduction of new modifiers and adjustments to the existing ones. To developers who have already explored the new Dart 3 class modifiers, they might seem more intricate than the previous ones. This post aims to clarify these new modifiers and provide practical use cases for them. Let’s begin by reviewing what class modifiers are and have a look at the old and new ones.

Class Modifiers 📦

Class modifiers are the keywords that you put in front of a class declaration to remove specific capabilities from it. Class modifiers define whether a type can be extended, mixed in, constructed, or implemented by another class.

class

If you don’t apply any modifier to a class it follows the default behavior. It can be constructed as a simple class and can be extended and implemented by any other classes.

abstract class

An abstract class does not require a concrete implementation. It cannot be constructed and can define abstract methods, which do not need an implementation either. If a non-abstract class extends an abstract class, it needs to implement the interface provided by the abstract class. Abstract classes cannot be “mixed in” as a mixin with the with keyword.

mixin / mixin class

A mixin declaration defines a mixin that can be “mixed in” to other classes. If you need a refresher on what mixins are, have a look at our article here. By default, mixins do not have the same abilities as a class; therefore, you can also combine the two and define a mixin class. This declaration defines a type that is usable as both a regular class and a mixin, with the same name and the same type. Any restrictions that apply to classes or mixins also apply to mixin classes:

  • Mixins can’t have extends or with clauses, so neither can a mixin class.
  • Classes can’t have an on clause, so neither can a mixin class.

base class 🆕

Base classes were introduced to enforce inheritance of a class or mixin’s implementation. Just use the base modifier in front of class or mixin. A base class disallows implementation outside of its own library. This guarantees:

  • The base class constructor is called whenever an instance of a subtype of the class is created.
  • All implemented private members exist in subtypes.
  • A newly implemented member in a base class does not break subtypes, since all subtypes inherit the new member.
    • This is true unless the subtype already declares a member with the same name and an incompatible signature.

Only the base modifier can appear before a mixin declaration. You must mark any class which implements or extends a base class as base, final, or sealed. This prevents outside libraries from breaking the base class guarantees.
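A small sketch (with hypothetical names) of what this looks like from within the defining library:

```dart
base class Vehicle {
  Vehicle() {
    // Guaranteed to run whenever any subtype instance is created.
    print('Vehicle created');
  }
}

// Every subtype must itself be marked base, final, or sealed:
base class Car extends Vehicle {}

// In a *different* library, `class Bike implements Vehicle {}`
// would be a compile-time error, because base disallows
// implementation outside the defining library.

void main() {
  Car(); // prints: Vehicle created
}
```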

interface class 🆕

By default, every class in Dart also implicitly creates an interface. If you were unaware of that, check out the documentation from the Dart team. Since that’s not obvious for newcomers, Dart 3 introduced the interface modifier. Libraries outside of the interface’s own defining library can implement the interface, but not extend it. This guarantees:

  • When one of the class’s instance methods calls another instance method on this, it will always invoke a known implementation of the method from the same library.
  • Other libraries can’t override methods that the interface class’s own methods might later call in unexpected ways. This reduces the fragile base class problem.

Most of the time you will use an abstract interface class, since that’s the behavior you would expect if you come from another programming language, because by default interface classes in Dart are still constructible.
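As a small illustration (the names are hypothetical), this is what the interface modifier allows and forbids from another library:

```dart
// file: logger.dart
interface class Logger {
  void log(String message) => print(message);
}

// file: main.dart (a different library)
import 'logger.dart';

// OK: implementing the interface is allowed everywhere.
class ConsoleLogger implements Logger {
  @override
  void log(String message) => print('[console] $message');
}

// Compile-time error: extending an interface class outside
// its defining library is not allowed.
// class BadLogger extends Logger {}
```

Declaring it as abstract interface class Logger would additionally prevent it from being constructed.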

final class 🆕

To close the type hierarchy, use the final modifier. This prevents subtyping from a class outside of the current library. Disallowing both inheritance and implementation prevents subtyping entirely. This guarantees:

  • You can safely add incremental changes to the API.
  • You can call instance methods knowing that they haven’t been overwritten in a third-party subclass.

Final classes can be extended or implemented within the same library. The final modifier encompasses the effects of base, and therefore any subclasses must also be marked base, final, or sealed.
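A short sketch (with hypothetical names) of how final closes the hierarchy for other libraries:

```dart
// file: config.dart
final class Config {
  final String apiUrl;
  Config(this.apiUrl);
}

// file: main.dart (a different library)
import 'config.dart';

// OK: constructing and using the class is fine.
final config = Config('https://example.com');

// Compile-time errors: neither extending nor implementing
// is allowed outside the defining library.
// class MyConfig extends Config {}
// class FakeConfig implements Config {}
```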

sealed class 🆕

To create a known, enumerable set of subtypes, use the sealed modifier. This allows you to create a switch over those subtypes that is statically ensured to be exhaustive.

The sealed modifier prevents a class from being extended or implemented outside its own library. Sealed classes are implicitly abstract.

  • They cannot be constructed
  • They can have factory constructors
  • They can define constructors for their subclasses to use

Subclasses of sealed classes are, however, not implicitly abstract. The compiler is aware of any possible direct subtypes because they can only exist in the same library. This allows the compiler to alert you when a switch does not exhaustively handle all possible subtypes in its cases.

Here is an example of how you can use the benefits of sealed classes in code:

sealed class Animal {}

class Bird extends Animal {}

class Fish extends Animal {}

void findAnimal(Animal animal) {
  switch (animal) {
    case Bird _:
      print('Bird');
      break;
    case Fish _:
      print('Fish');
      break;
  }
}

We can do an exhaustive switch over the class and don’t have to deal with a default case. We have the guarantee that at any time an Animal is either a Bird or a Fish.

Library 📚

Libraries are chunks of code and classes that you (or someone else) can put together and use inside your project. There are two ways to define a library.

  1. Create a Dart/Flutter package/plugin that can be imported via the pubspec.yaml. Your code will be bundled in there, and e.g. sealed classes cannot be extended outside of the original code.
  2. You can also define a library inside your own code by using the library keyword and splitting it across files with part / part of. Note that a barrel file that merely exports other files does not merge them into one library: each exported file remains its own library. Only classes inside the same library can extend the sealed class; classes from the outside cannot do this.
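One way to spread a single library over multiple files is part / part of; files joined like this share one library and can therefore extend a sealed class. A minimal sketch (the file names are our own):

```dart
// file: animal.dart
part 'bird.dart';

sealed class Animal {}

// file: bird.dart
part of 'animal.dart';

// OK: `part` files belong to the same library as animal.dart.
class Bird extends Animal {}
```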

Combination 🔗

Class modifiers cannot only be used individually but also in combination with one another. abstract base mixin class is a perfectly valid combination, and it means that the class cannot be constructed or implemented outside the library and can only be used there through inheritance or by mixing it in.
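As a hedged sketch (the Repository name is purely illustrative), this is what abstract base mixin class permits from outside the library:

```dart
// file: repository.dart
abstract base mixin class Repository {
  void save();
}

// file: main.dart (a different library)
import 'repository.dart';

// OK: inheriting is allowed; subtypes must stay base, final, or sealed.
base class UserRepository extends Repository {
  @override
  void save() => print('saving user');
}

// OK: mixing it in is allowed as well.
base class OrderStore with Repository {
  @override
  void save() => print('saving order');
}

// Compile-time errors:
// final repo = Repository();                 // abstract: no construction
// base class Mock implements Repository {}   // base: no outside implementation
```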

Adding the class modifiers interface, final, or sealed before mixin doesn’t make sense, since they would prevent the mixin from being used with the with keyword. That’s why it’s prohibited.

Redundancies ➰

Since some of the class modifiers share characteristics, they can be redundant, and you can drop some of them or collapse them into another one. Here is a list of some of those redundancies. The linter will help you with that and even shows a warning when using the wrong keywords together.

sealed & abstract -> Drop abstract.
interface & final -> Drop interface.
base & final -> Drop base.
interface & base -> Say final instead.
sealed & final -> Drop final.
sealed & base -> Drop base.
sealed & interface -> Drop interface.
abstract & mixin -> Drop abstract.
Just interface (Compiler will not allow) -> abstract class.


Complexity 🤯

Ok, that was a lot, and we don’t know what you are thinking, but to us this implementation of class modifiers looks overwhelming. It’s hard to wrap your head around all the possible combinations or even to find the correct list of class modifiers needed for your use case.

BUT because there are redundancies, it’s possible to put together a list of all possible class modifier combinations. There are currently 15 different ways class modifiers can be used alone or in combination. We focused on the abilities that the class/mixin modifiers give to or remove from a class. For extending and implementing, we assume the class is used outside of its defining library.

Don’t forget there is a difference if the class is used inside or outside a library.

| Declaration | Construct? | Extend? | Implement? | Mixin? | Exhaustive? |
|---|---|---|---|---|---|
| class | Yes | Yes | Yes | No | No |
| base class | Yes | Yes | No | No | No |
| interface class | Yes | No | Yes | No | No |
| final class | Yes | No | No | No | No |
| sealed class | No | No | No | No | Yes |
| abstract class | No | Yes | Yes | No | No |
| abstract base class | No | Yes | No | No | No |
| abstract interface class | No | No | Yes | No | No |
| abstract final class | No | No | No | No | No |
| mixin class | Yes | Yes | Yes | Yes | No |
| base mixin class | Yes | Yes | No | Yes | No |
| abstract mixin class | No | Yes | Yes | Yes | No |
| abstract base mixin class | No | Yes | No | Yes | No |
| mixin | No | No | Yes | Yes | No |
| base mixin | No | No | No | Yes | No |

(Thanks to the Dart team for creating that list)

Why? 🤔

Let’s get back to our initial question: Did the changes to class modifiers improve the language or did it just make the language more complicated?

In our opinion yes and no! Here’s why:

The new class modifiers help to make your code more concise, and especially if you are a package maintainer, they help to prevent misuse of your package. By introducing sealed classes, Dart now allows exhaustive pattern matching over class hierarchies. Class modifiers are so precise that almost any imaginable use case can be expressed, but that also comes at a cost.

All those class modifiers make the language more complicated and harder to understand. Newcomers to the Dart/Flutter platform in particular can feel overwhelmed, since there are so many different keywords that can be used in combination, and some don’t even behave the same as their counterparts in other languages. Even experienced Dart and Flutter developers can find these changes overwhelming at first glance.

Conclusion

In conclusion, class modifiers are a powerful tool that can be used to improve the readability, maintainability, and safety of your Dart code. By using class modifiers, you can make it clear to other developers what your intentions are for a particular class, prevent accidental modifications to classes, and prevent the creation of unexpected subclasses.

The Dart team offers a migration guide for plugin/package maintainers. Feel free to check it out if you need additional help after this article.

Did you enjoy this article? Tell us your opinion! And if you have any open questions, thoughts, or ideas, feel free to get in touch with us! Thanks for reading! 🙏

Do you search for a job as a Flutter Developer?
Do you want to work with people that care about good software engineering?
Join our team in Munich

The post Class Modifiers in Dart: Sealed, Interface, Base appeared first on QuickBird Studios.

Restricted TextFields In SwiftUI – A Reusable Implementation https://quickbirdstudios.com/blog/restricted-textfield-swiftui/ Mon, 17 Apr 2023 14:24:20 +0000 https://quickbirdstudios.com/?p=16125 SwiftUI may lack built-in restricted textfield functionality, but our in-depth guide is here to fill the gap! This guide helps you to implement reusable, versatile restricted input fields in SwiftUI.



Restricted TextFields In SwiftUI – A Reusable Implementation

Forms are hard to get right! When you don’t control user input in a TextField and solely rely on validation upon submission, it often leads to a subpar user experience, as users are forced to revisit and correct each TextField, one by one. Unfortunately, SwiftUI doesn’t inherently offer an easy, reusable solution for limiting input within TextFields, particularly for more complex use cases. This article is here to change that! In the following sections, we’ll provide you with a comprehensive guide on implementing a powerful and flexible restricted input system in SwiftUI that can effortlessly handle a variety of complex form scenarios.


Introduction

Let’s say we want to let the user configure an age between 18 and 30, a shirt size between M and XL, and a floating-point percentage between 0 and 100. Wouldn’t it be cool to let the user just enter (almost) correct values and directly prevent invalid input? And all that while having a clean view free of business logic like this? 😎

struct SettingsView: View {
    @ObservedObject private var viewModel = SettingsViewModel()
    var body: some View {
        List {
            RestrictedTextField(for: $viewModel.settings.age)
                .labelled(label: "Age:")
            
            RestrictedTextField(for: $viewModel.settings.shirtSize,
                                allowedInvalidInput: InputRestriction.allowedInvalidInput)
                .labelled(label: "Shirt size:")
            
            RestrictedTextField(for: $viewModel.settings.percentage)
                .labelled(label: "Percentage:")
        }
    }
}

A first approach ✨

As a first step, let’s model the settings with a simple struct holding our three desired values, using an unsigned integer, an enum, and a double as the underlying types. Additionally, we define default values that can be used initially.

// enums can be comparable by default since Swift 5.3 
enum ShirtSize: Comparable, CaseIterable {
    case s
    case m
    case l
    case xl
    case xxl
}
struct Settings {
    var age: UInt
    var shirtSize: ShirtSize
    var percentage: Double
}
extension Settings {
    static let `default` = Settings(age: 25, //Should be between 18 and 30
                                    shirtSize: .m, // Should be between m and xl
                                    percentage: 82.2) // Should be between 0.0 and 100.0
}

We can easily see that not just the values but also their restrictions should be encoded to stay generic. An easy approach for that is to store them as closed ranges next to the values:

struct Settings {
    var age: UInt
    let ageRestriction: ClosedRange<UInt>
    var shirtSize: ShirtSize
    let shirtSizeRestriction: ClosedRange<ShirtSize>
    var percentage: Double
    let percentageRestriction: ClosedRange<Double>
}
extension Settings {
    static let `default` = Settings(age: 25,
                                    ageRestriction: 18...30,
                                    shirtSize: .m,
                                    shirtSizeRestriction: .m ... .xl,
                                    percentage: 82.2,
                                    percentageRestriction: 0...100)
}

This is already quite nice, so let’s use it for our settings view: First, we create @State string properties, which we can use as bindings for the text fields’ input. Then we listen to the changes of these strings in onChange and update the settings in the view model accordingly.

class SettingsViewModel: ObservableObject {
  // Storing the settings persistently
  @Published public var settings: Settings = .default
}
struct SettingsView: View {

  @ObservedObject private var viewModel = SettingsViewModel()

  @State private var ageText: String = ""
  @State private var shirtSizeText: String = ""
  @State private var percentageText: String = ""
  var body: some View {
    List {
      TextField(
        viewModel.settings.ageRestriction.description,
        text: $ageText
      ).labelled(label: "Age:")
      TextField(
        viewModel.settings.shirtSizeRestriction.description,
        text: $shirtSizeText
      ).labelled(label: "Shirt size:")
      TextField(
        viewModel.settings.percentageRestriction.description,
        text: $percentageText
      ).labelled(label: "Percentage:")
    }
    .onChange(of: ageText) { value in
      let age = UInt(ageText).filter { age in
        viewModel.settings.ageRestriction.contains(age)
      }
      guard let age else { return }
      viewModel.settings.age = age
    }
    .onChange(of: shirtSizeText) { value in
      /* same as age, with a conversion of String to ShirtSize */
    }
    .onChange(of: percentageText) { value in
      /* same as age */
    }
  }
}

If you wonder where the filter on Optional comes from, look at this cool extension:

extension Optional {
    // Cool 😎
    func filter(condition: (Wrapped) -> Bool) -> Self {
        flatMap { condition($0) ? $0 : nil }
    }
}

The setting values are just simple examples. The age and the shirt size, for instance, could easily be represented by a picker view in this case. But imagine, for example, a range between 1 and 1,000,000: nobody would want to scroll through a picker there 😅

This solution is already acceptable if we are fine with letting users enter whatever they want and only storing the correct inputs. One important improvement would be to move the parsing logic out of the view, since business logic does not belong there. We could either move it into the view model, generating a little boilerplate code, or better use a ParseStrategy or a Formatter as described in this article.

Hint 📚: The article has a small bug in the RangeIntegerStrategy, which does not parse only integers in the specified range, but all of them. It would need to look like this:

struct RangeIntegerStrategy: ParseStrategy {
    let range: ClosedRange<UInt>
    func parse(_ value: String) throws -> UInt {
        guard let int = UInt(value), range.contains(int) else {
            throw ParseError()
        }
        return int
    }
}
private struct ParseError: Error {}

Additionally, we would probably want to indicate invalid inputs to the user, ending up with a solution like this:

struct SettingsView: View {
    @ObservedObject private var viewModel = SettingsViewModel()
    @State private var age: UInt?
    // ...
    var body: some View {
        List {
            TextField(viewModel.settings.ageRestriction.description,
                      value: $age,
                      format: .ranged(viewModel.settings.ageRestriction))
                .foregroundColor(age != nil ? .primary : .red)
                .labelled(label: "Age:")
                .onChange(of: age) { newValue in
                    guard let newValue else { return }
                    viewModel.settings.age = newValue
                }
            // ...
        }
    }
}

Restrictions ⛓️

Just as it is good practice to restrict a function as much as possible so that it takes only valid inputs (we described the reason in our NonEmptyList article), it would be nice to also directly restrict the user’s input. The first step is to use the correct keyboard layout: .numberPad for integers, .decimalPad for decimals, etc. But let’s not stop there: if a user enters, for example, a percentage over 100, why would we even want to display it? We could simply stick to the last valid input.

As a first convenience on our way there, we define the following struct to nicely hide the restriction logic:

struct Restricted<Value> where Value: Comparable {
    let value: Value
    let range: ClosedRange<Value>

    init?(_ value: Value, in range: ClosedRange<Value>) {
        guard range.contains(value) else {
            return nil
        }
        self.value = value
        self.range = range
    }
}

extension Restricted {
    static func clamped(_ value: Value, in range: ClosedRange<Value>) -> Restricted<Value> {
        var clampedValue: Value = value
        if value < range.lowerBound {
            clampedValue = range.lowerBound
        } else if range.upperBound < value {
            clampedValue = range.upperBound
        }
        return .init(clampedValue, in: range)!
    }
}

We store a comparable value and a range of the same type as the restriction. Furthermore, we provide two possibilities to create such a data type: a failable initializer that returns nil if the restriction is not met, and a factory function clamped that clamps the input if needed.

The restriction can easily be generalized from ranges to predicates. We will not use the power of this generalization in our example, but it can come in very handy in a lot of use cases (a simple example would be allowing only even numbers), so we do not want to withhold it from you:

// Generic restriction using a predicate
protocol Restriction {
    associatedtype Value
    func condition(value: Value) -> Bool
}
struct Restricted<Value: Comparable, RestrictionType: Restriction>
    where RestrictionType.Value == Value {
    let value: Value
    let restriction: RestrictionType
    init?(_ value: Value, _ restriction: RestrictionType) {
        guard restriction.condition(value: value) else { return nil }
        self.value = value
        self.restriction = restriction
    }
}
extension Restricted {
    // Would be cool if such copy functions could be auto-generated
    func copy(value: Value? = nil, restriction: RestrictionType? = nil) -> Self? {
        .init(value ?? self.value, restriction ?? self.restriction)
    }
}
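For instance, the even-number restriction mentioned above could look like this, reusing the generic types from the snippet (EvenRestriction is our own illustrative name):

```swift
// A restriction that only accepts even integers.
struct EvenRestriction: Restriction {
    func condition(value: Int) -> Bool {
        value.isMultiple(of: 2)
    }
}

let even = Restricted(42, EvenRestriction()) // succeeds
let odd = Restricted(7, EvenRestriction())   // nil, restriction not met
```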

We can then use the generic Restricted type to create an again specific version that uses ranges:

struct RangeRestriction<Value: Comparable>: Restriction {
    let range: ClosedRange<Value>
    func condition(value: Value) -> Bool {
        range.contains(value)
    }
}
// Same factory functions as before
extension Restricted where RestrictionType == RangeRestriction<Value> {
    init?(_ value: Value, in range: ClosedRange<Value>) {
        self.init(value, RangeRestriction(range: range))
    }
    static func clamped(_ value: Value, in range: ClosedRange<Value>) -> Self {
        let clampedValue: Value
        if value < range.lowerBound {
            clampedValue = range.lowerBound
        } else if range.upperBound < value {
            clampedValue = range.upperBound
        } else {
            clampedValue = value
        }
        return .init(clampedValue, in: range)!
    }
}
// Shorthand to make the type signature simpler
typealias RestrictedToRange<Value: Comparable> = Restricted<Value, RangeRestriction<Value>>

We can already refine our Settings data type to be more concise:

struct Settings {
    var age: RestrictedToRange<UInt>
    var shirtSize: RestrictedToRange<ShirtSize>
    var percentage: RestrictedToRange<Double>
}
extension Settings {
    static let `default` = Settings(age: .clamped(25, in: 18...30),
                                    shirtSize: .clamped(.m, in: .m ... .xl),
                                    percentage: .clamped(82.2, in: 0...100))
}

A cool thing about our generic Restricted implementation is that we could now easily defer the decision on the restriction type by giving the Settings struct some generics.


Evaluating the input ➗

We arrived at the core of our implementation: the parsing and evaluation of the input. For that, we create a type called InputRestriction, which evaluates an input string together with the last valid value to a new pair of valid value and string. To do so, it just needs to know which strings represent a valid value (the parse parameter) and how to convert a value back to a string (the toString parameter).

struct InputRestriction<Value: Equatable> {
    private let parse: (String) -> Value? 
    private let toString: (Value) -> String 
    
    struct Result {
        let value: Value
        let string: String

        fileprivate init(_ value: Value, _ string: String) {
            self.value = value
            self.string = string
        }
    }

    init(parse: @escaping (String) -> Value?, toString: @escaping (Value) -> String) {
        self.parse = parse
        self.toString = toString
    }

    func evaluate(value: Value, string: String? = nil) -> Result {
        // ...
    }

}
InputRestriction is basically just a bidirectional parser, such that we could easily create it from a FormatStyle or a Formatter. The evaluation checks whether the current input string can be parsed to a valid value. If yes, it is simply returned; otherwise, we keep the previous input string. Additionally, we need to add the possibility to clear the input field, since otherwise it would not be possible to enter all possible values.

func evaluate(value: Value, string: String? = nil) -> Result {
    guard let string else { return .init(value, toString(value)) }
    func handleInvalidInput() -> Result {
        let currentValueString = toString(value)
        // Making it possible to have an empty input field
        if currentValueString.hasPrefix(string) {
            return .init(value, "")
        } else {
            return .init(value, toString(value))
        }
    }
    if let newValue = parse(string) {
        return .init(newValue, toString(newValue))
    } else {
        return handleInvalidInput()
    }
}

More precisely, we currently make it impossible to enter some valid values: in our example we allow ages between 18 and 30, meaning “1” would be an invalid input, but without temporarily entering “1” we also cannot arrive at “18”. To overcome this issue, we extend the InputRestriction to allow certain invalid inputs:

init(parse: @escaping (String) -> Value?,
     toString: @escaping (Value) -> String,
     allowedInvalidInput: @escaping (String) -> AllowedInvalidInput? = { _ in nil })

Now we can easily create an InputRestriction for numbers like integers or doubles, using the fact that they are LosslessStringConvertible and ExpressibleByIntegerLiteral:

extension InputRestriction
    where Value: LosslessStringConvertible,
          Value: Comparable,
          Value: ExpressibleByIntegerLiteral {
    init(restricted: RestrictedToRange<Value>) {
        let range = restricted.restriction.range
        let parse: (String) -> Value? = {
            guard let value = Value.init($0), range.contains(value) else { return nil }
            return value
        }
        let allowedInvalidRange = 0..<range.lowerBound
        let allowedInvalidInput = { (string: String) in
            Value.init(string)
                 .filter { allowedInvalidRange.contains($0) }
                 .map(AllowedInvalidInput.init)
        }
       
        self.init(parse: parse,
                  toString: \.description,
                  allowedInvalidInput: allowedInvalidInput)
    }
}

Note: We just use LosslessStringConvertible for demonstration purposes. Since it is not localized, we recommend to use a localized alternative in production.

What a nice view! 🪟

We now put all the pieces together and build the actual SwiftUI view.

struct RestrictedView<Value: Equatable, Content: View>: View { ... }

The RestrictedView is generic over the value type that can be entered into it and the actual view type. In our case that will be a TextField, but other views like a TextEditor, Slider, or Picker would also be possible.

To create a RestrictedView, we expect a binding of the represented value, which can for example directly be backed by persistent storage. That way, every input would be stored immediately. Moreover, we also expect an InputRestriction and a closure that builds a view from the binding of the input string and an indication of its validity. The logic of the RestrictedView is quite simple: we evaluate every new input string using the InputRestriction and update the value and string bindings with its result.

struct RestrictedView<Value: Equatable, Content: View>: View {
    private let restriction: InputRestriction<Value>
    private let content: (Binding<String>, Bool) -> Content
    @Binding private var value: Value
    @State private var string: String
    @State private var isValid: Bool
    var body: some View {
        content($string, isValid).onChange(of: string, perform: update)
    }
    init(for value: Binding<Value>,
         by restriction: InputRestriction<Value>,
         @ViewBuilder content: @escaping (Binding<String>, Bool) -> Content) {
        self._value = value
        self.restriction = restriction
        self.content = content
        // Evaluate the initial value
        let initial = restriction.evaluate(value: value.wrappedValue)
        self._isValid = .init(initialValue: initial.isValid)
        self._string = .init(initialValue: initial.string)
    }
    private func update(_ string: String) {
        // Call the input restriction
        let result = restriction.evaluate(value: value, string: string)
        // Update state
        value = result.value
        self.string = result.string
        isValid = result.isValid
    }
}

A specialized version of the RestrictedView is the RestrictedTextField which gets its input through a TextField. For that, we simply create a TextField in the view builder closure of the RestrictedView:

struct RestrictedTextField<Value: Equatable, Modifier: ViewModifier>: View {
    private let value: Binding<Value>
    private let restriction: InputRestriction<Value>
    private let prompt: String
    private let onInput: (Bool) -> Modifier
    var body: some View {
        // Easily created using a closure
        RestrictedView(for: value, by: restriction) { $text, isValid in
            TextField(prompt, text: $text)
                .modifier(onInput(isValid))
        }
    }
    init(for value: Binding<Value>,
         by restriction: InputRestriction<Value>,
         prompt: String = "",
         // Modify the view based on valid/ invalid displayed input
         onInput: @escaping (Bool) -> Modifier) {
        self.value = value
        self.restriction = restriction
        self.prompt = prompt
        self.onInput = onInput
    }
}

To support a visual indication for invalid inputs, we can configure it using a ViewModifier. For instance, if we want to make the text red for invalid inputs, we can use a modifier like this:

struct DefaultInputViewModifier: ViewModifier {
    private let isValid: Bool
    func body(content: Content) -> some View {
        content.foregroundColor(isValid ? .primary : .red)
    }
    init(isValid: Bool) {
        self.isValid = isValid
    }
}

Victory 🥇

Finally, we arrived at our goal and can create concise, generic, restricted input views. We can use them for all kinds of values that can be entered as a string. The input can come from any standard SwiftUI view and even custom ones. And along the way, we saw a lot of cool Swift/SwiftUI concepts and features.

struct SettingsView: View {
    @ObservedObject private var viewModel = SettingsViewModel()
    var body: some View {
        List {
            RestrictedTextField(for: $viewModel.settings.age)
                .labelled(label: "Age:")
                .keyboardType(.numberPad)
            RestrictedTextField(for: $viewModel.settings.shirtSize,
                                allowedInvalidInput: InputRestriction.allowedInvalidInput)
                .labelled(label: "Shirt size:")
            RestrictedTextField(for: $viewModel.settings.percentage)
                .labelled(label: "Percentage:")
                .keyboardType(.decimalPad)
        }
    }
}

You can find our final code here.

Thanks for reading the article 🙏 If you enjoyed this article you might also want to checkout our article about Passkeys that were introduced in iOS 16.

The post Restricted TextFields In SwiftUI – A Reusable Implementation appeared first on QuickBird Studios.

End-to-End Encryption: A Modern Implementation Approach Using Shared Keys https://quickbirdstudios.com/blog/end-to-end-encryption-implementation-approach/ Tue, 07 Mar 2023 14:07:27 +0000 https://quickbirdstudios.com/?p=15661

The post End-to-End Encryption: A Modern Implementation Approach Using Shared Keys appeared first on QuickBird Studios.


End-to-End Encryption: A Modern Implementation Approach Using Shared Keys

End-to-end encryption, otherwise known as E2EE, is an incredibly important aspect of modern application security. It’s not only necessary to make sure that the data in our application is secure, but also to make sure that the data we send and receive is as well. Users rightfully expect that the data they are entrusting you with is kept safe with you and protected from anyone who shouldn’t see it as much as possible. End-to-end encryption is one of the best tools in our toolbox to make sure that this happens. In this article, we give you a head start on how to implement a modern way of end-to-end encryption into your app.


Introduction

End-to-end encryption is a broad topic with many different strategies and implementations, but generally speaking, we can narrow the topic down to some form of data that is encrypted at one location and is able to travel to an endpoint location without being decrypted or revealed in the process. The starting point is usually a user’s device and the endpoint is usually the device of that same user or someone they want to share data with. It’s important to note that when we say the data isn’t decrypted en route, we don’t just mean that it manages to get there without being decrypted; we also mean that the tools or keys to decrypt it never leave the devices that are able to do that decryption. Even if I got my hands on data that you had sent to someone else, it would never be more than gibberish to me.

What Kind of App Can Use End-to-end encryption?

Let’s look at an example to illustrate some of the important aspects of end-to-end encryption and walk through some of the problems that it solves and how we can make it work for us.

Let’s imagine that Quickbird is considering developing a shopping list app. It’s important to him that he builds a server component around it so that he can sync his shopping list and add to it from his computer at work and also see it when he gets to the grocery store on his phone or maybe even on his cellular connected watch.

But there’s a problem. He eats a lot of candy, and he would have to share it with his roommates in the birdhouse if they knew he had it. That wouldn’t be the worst thing, but he’d certainly prefer to keep it all for himself! As we all should know, if someone else has physical access to a device, it should not be considered secure, and in this case, his roommates live in the same house as his server, so we cannot consider it secure from them.

This means that the best way to make sure they can’t see his shopping list and when he chooses to buy candy is to make sure the server doesn’t know either. We have a great candidate app for implementing end-to-end encryption and keeping his candy – oops, I mean data – secure.

The app should look something like this:

(Screenshots: the shopping list and the shopping list detail view)

Problems to Solve

Quickbird does have a few considerations to make when he chooses his encryption strategy. As I said before, it’s important to him that he can still access his encrypted data from multiple devices rather than being locked into the device he created it on.

It’s also important to him that if he were to lose his device or somehow have a problem with it, he can still access his data. Additionally, he should be able to see the benefits that we would normally get from adding end-to-end encryption to an app, no matter what version of it is used. That means a partial list of the advantages he should see is:

  • Only he can read the data
  • Even if someone has access to the server, the information they can see will be useless to them and protect the user’s privacy, even from Quickbird, the developer.
  • All of his devices are supported (PC, Phone, Watch)
  • A way to recover his shopping list, in case he loses his devices
  • It should be convenient and easy to use.

Let’s start working through these problems one by one.

Pieces of the Process

Passcode

The first instinct that our feathery friend might have is to randomize a password or key of some kind to protect his user’s data. This would have the advantage of giving the recoverability feature that he desires. The user would only need to enter their information again on another device and then the key would be usable in both places. Unfortunately, as our article about passkeys points out, users are inherently bad at remembering and managing passwords, especially when that data is randomized and meaningless to them.

Knowing that, he can use something similar. Rather than generating an entirely randomized key, Quickbird can begin the process with a human-readable passcode, which can be treated like a password the user is familiar with, maybe even stored in a password manager.

One option here might be to use the user’s password or some other data linked to the account, but this kind of data is known to change, and humans are really very bad at randomization, even when we do our best.

The candy information will remain much safer if the passcode is generated with a proven standard. One such standard already exists in the form of BIP39.

BIP39 originated in the Bitcoin world and is the de facto standard for generating and managing encryption keys in cryptocurrency wallets. It generates randomized data which is then represented by a mnemonic phrase, which, for our purposes, means a human-readable string of words. For example, we might end up with one of these phrases:

narrow swing either holiday own rice nothing guitar fitness carpet public session object ankle kitchen
note fame mother rare uncle join delay toddler collect dove state siege series leaf sample
candy sweets toffee gum caramel marshmallow sugar desert bonbon honey delicious syrupy treat crunchy chocolate

It may not be as convenient as a password, but it is pretty difficult to guess and not the worst thing in the world to re-type if someone gets a new device and wants to set up their shopping list app on it. The phrase is created by generating cryptographically secure random bits (in this case, 128 bits), calculating a 4-bit checksum and appending it, giving us a total of 132 bits. The bits are then split into 12 segments of 11 bits each, and those values are used to choose words from a wordlist. This creates a much more random sequence of words (and characters) than the average person or bird roommate would manage on their own.

Master Key

The next step in the process will be to get a key from our passcode. This will be what we call the ‘Master Key’.

The master key has a few requirements. First of all, it needs to be deterministic: if Quickbird uses the same passcode on another device, it should generate the same master key. Beyond that, the key should never leave the device it is generated on. We can even take that a step further and avoid storing it on any device at all. There’s no reason, other than an imperceptible performance blip, to ever store the master key on the device rather than just generating it when we need it.

Shared Key

So, if the user has a consistent master key that can be generated on any device that they manually share the passcode with, they should actually be able to encrypt their data, send it to the server, and decrypt it when they receive it again, right? Well, that’s technically true, but there are a couple of not-so-user-friendly problems to deal with here.

One aspect of secure encryption is key rotation: re-randomizing and re-generating keys regularly, for instance to prevent someone who got access to the data from running a very long brute-force attack with all the time in the world. Right now, though, the key isn’t randomized. It’s based entirely on our passcode, which means that Quickbird, as a user, would need to be responsible for creating a new passcode and replacing it on all the devices he uses. To reiterate a point from earlier, human beings and birds are not wonderful at keeping ourselves digitally secure, and the odds are that Quickbird would be an unreliable mechanism for this.

So let’s introduce what we’ll call a shared key. This key will be totally randomized, meaning it can be re-randomized. It will be stored on Quickbird’s server and shared with every device a user logs in with, so that every device is always using the same key to encrypt and decrypt its data. Since it’ll be leaving the user’s device, Quickbird needs to make sure that it leaves encrypted and that nothing can decrypt it until it reaches its proper destination. That is where the master key comes in: its only purpose will be to encrypt and decrypt the shared key on a user’s device.

Now Quickbird has a key that can be decrypted, used to decrypt his data, regenerated, used to re-encrypt the data with the new version, and re-encrypted to be shared along with the data it protects.

Bringing It All Together

Encryption diagram

Let’s go back over what Quickbird needed to accomplish with his solution.

  • Only the user can access their information, even if someone intercepts it.
  • Even if someone has access to the server, the information they can see will be useless to them and protect the user’s privacy, even from Quickbird, as the developer.
  • Increased user trust as a result of their data safety.
  • Multiple device support.
  • A recovery pathway, as the result of a lost or stolen device.
  • Very little interference for the user, in spite of all of the advantages they’ve gained. All that is asked of them is to keep a record of their passcode and enter it in the event of a device change or other data issue.

Our blue mascot has now guaranteed that only the user has access to their information. If someone were to listen to network traffic to intercept the data, what they might receive would be the shared key, in an encrypted form and the encrypted data. They would be unable to decrypt the shared key and then could not use that to decrypt the data.

Similarly, if one of Quickbird’s roommates were to get access to his server, they would only see a jumble of encrypted data. It’s like staring at a lock with no key in sight.

We can now reasonably assume that Quickbird’s users will trust their data with him more, knowing that he has implemented a system to keep their data out of the hands of anyone besides themselves.

Those users can now use all of the devices they’d like, simply by logging in and entering the passcode they have access to. The same applies to a user who loses their device and wants a way to see their shopping (or candy) list. All of this adds very little overhead for the user to understand or work with, which means the bird has achieved simplicity for his users as well.

Word of warning

There are a few important things to remember here. First, it is nearly always useful to remember that you should almost never implement your own cryptography or try to create your own standard. The best rule is to always use a well-known and trusted library that has been extensively peer-reviewed. There is a lot of expertise, study, and research that goes into topics that you might assume are simple, like cryptographically secure randomization and mathematics.

Conclusion

With everything Quickbird has put in place, he should be able to sneak home candy to feed his sugar addiction, without his roommates knowing to look for it at all. His only concern can now be about them noticing his growing weight.

Like all encryption, none of this is a one-size-fits-all approach. It’s merely one strategy to accomplish a specific set of goals, even if that goal is as important as keeping his candy safe. If the topic is new to you, then this can serve as a starting point to begin delving into topics like symmetric vs asymmetric encryption, Diffie–Hellman key exchange, Bob and Alice, and the various types of attacks and vulnerabilities of different strategies.

If you are interested in other ways to secure your iOS app, check out our article about best security practices on iOS.

Did you enjoy this article? Tell us your opinion on Twitter! And if you have any open questions, thoughts, or ideas, feel free to get in touch with us! Thanks for reading! 🙏

The post End-to-End Encryption: A Modern Implementation Approach Using Shared Keys appeared first on QuickBird Studios.

]]>
Understanding Dart Memory: Weak References and Finalizers Demystified https://quickbirdstudios.com/blog/dart-weak-references-finalizers/ Tue, 03 Jan 2023 10:00:35 +0000 https://quickbirdstudios.com/?p=15172 Find out how weak references and finalizers can help you manage memory and improve the performance of your Dart applications. Explore examples and code samples that show you how to implement weak references and finalizers in Dart, and avoid common mistakes.

The post Understanding Dart Memory: Weak References and Finalizers Demystified appeared first on QuickBird Studios.

]]>

Understanding Dart Memory: Weak References and Finalizers Demystified

Have you ever had memory issues with your Flutter application, whether from processing huge amounts of data or because you somehow created a memory leak? These are common issues that we as developers face from time to time. To make our developer life easier, Dart has some tools to prevent us from making common mistakes: weak references and finalizers.

Understanding how these work and how they can be used can make your code more efficient and reduce unnecessary memory consumption. This article will explore weak references and finalizers in Dart. Using them helps you avoid common pitfalls and write more efficient and scalable code.


How does the Dart memory management work?

Dart uses a garbage collector to automatically manage memory. This means that when an object is no longer being used, the garbage collector will automatically free up the memory used by that object.

This is different from languages like C and C++, where the programmer is responsible for manually allocating and deallocating memory for objects.

To help the garbage collector, Dart has a concept of “ownership” of objects. When an object is created, it is owned by the code that created it. When the owner of an object goes out of scope, the object becomes eligible for garbage collection, unless it is being referenced by another object.

In general, it is not necessary for Dart programmers to worry about memory management, as the garbage collector takes care of it automatically. However, it is still possible to create memory leaks that can crash your app and we need to be aware of how to reduce the chance that they will occur or how to identify them in the first place.

How do memory leaks occur?

Memory leaks can happen in Dart when an object is still being referenced in memory but is no longer being used. This can happen, for example, when a reference to an object is stored in a variable and that variable is no longer needed, but the reference to the object is not removed. As a result, the object will continue to be retained in memory even though it is no longer needed. Memory leaks can also happen when an object has references to other objects, but those objects are no longer needed, resulting in a chain of unused objects that are still being retained in memory. Memory leaks can cause the program to consume more and more memory over time, eventually leading to poor performance or even crashing the app.

Example memory leak in Flutter

In this example, we just add a random number every second to a list and display it in a ListView.

class MyWidget extends StatefulWidget {
  @override
  _MyWidgetState createState() => _MyWidgetState();
}

class _MyWidgetState extends State<MyWidget> {
  List<int> _numbers = [];

  @override
  void initState() {
    super.initState();

    // Start a timer that generates a new number every second
    Timer.periodic(Duration(seconds: 1), (timer) {
      // Add the new number to the list
      _numbers.add(Random().nextInt(100));

      // Tell the widget to rebuild with the updated list of numbers
      setState(() {});
    });
  }

  @override
  Widget build(BuildContext context) {
    return ListView.builder(
      itemCount: _numbers.length,
      itemBuilder: (context, index) {
        return Text('Number: ${_numbers[index]}');
      },
    );
  }
}

Have you already spotted the issue? In this example, it is easy to investigate because you know something is wrong in the code, but normally you would just notice that your app gets slower or even crashes after some time. The app you are working on probably has thousands of lines of code, so where should we start?

Devtools to the rescue

Flutter has really nice development tools that make it easy to analyze and investigate the health of your application. It gives you a view not only of the widget tree and its properties, networking, and logging, but also of the device’s CPU and memory usage. We will focus on the memory dev tools here because that’s what we are interested in.

To open up the dev tools, just start your Flutter application in debug mode. How you open them differs depending on the IDE you use. Check out the instructions from the Flutter team.

Memory view dev tools

The view might differ on your side depending on which IDE you are using; the layout should still be the same. We will also not go into too much detail here, as the memory view is already well documented by the Flutter team.

How to locate the memory leak

If we just keep the app running without doing anything, we can already see that the used memory, especially the heap size, increases over time. This clearly indicates that something allocates memory that is never released (garbage collected).

What was the heap again?

The heap is a region of memory that is reserved for dynamic allocation of memory blocks at runtime. It is used to store variables and data structures that are created and destroyed dynamically during the execution of a program (e.g., objects).

If we track the allocations and filter them by created instances, we can easily see the reason for the memory leak. In this example, we see that the number of instances inside the _numbers list increases over time. The issue here can be easily spotted because this is a made-up example designed to make it obvious. There are more ways to investigate memory issues, and there are even packages out there to help find them.

In this example, the _MyWidgetState class has a List of numbers that is updated every second. Each time a new number is added to the list, the widget is rebuilt to display the updated list. However, since the periodic Timer is never cancelled, its callback keeps a strong reference to the _MyWidgetState instance and therefore to the _numbers list, so they will never be garbage collected, even if the widget is removed from the tree. This means that the memory used by the list will continue to grow over time, eventually leading to a memory leak.

There are multiple ways to solve this issue! One elegant solution is to somehow make the _numbers list garbage collectable. Dart has something built in to do that and it is called: WeakReference.

Weak references

In Dart, a weak reference is a reference to an object that does not prevent the object from being garbage collected. This means that if the only references to an object are weak references, the object can be garbage collected even if the references are still pointing to it. This can be useful in situations where an object is no longer needed but is still being referenced by other objects.

Weak references are created using the WeakReference class, which takes an object as a parameter in its constructor. For example, to create a weak reference to a List object, you could use the following code:

List<int> numbers = [1, 2, 3];

// Create a weak reference to the numbers list
WeakReference<List<int>> numbersWeakRef = WeakReference(numbers);

// Access the referenced list via `target`; it returns null
// once the list has been garbage collected.
print(numbersWeakRef.target?.length); // 3 while `numbers` is still reachable

It is important to note that the object referenced by a WeakReference can be garbage collected at any time, even if the WeakReference is still pointing to it. This means that you should always check that the object is not null before accessing it.

This is what the above example with Weak Reference looks like:

class MyWidget extends StatefulWidget {
  @override
  _MyWidgetState createState() => _MyWidgetState();
}

class _MyWidgetState extends State<MyWidget> {
  // We will use a weak reference to store the list,
  // so that it can be garbage collected if it is no longer needed.
  WeakReference<List<int>> _numbers = WeakReference(<int>[]);

  @override
  void initState() {
    super.initState();

    // Start a timer that generates a new number every second
    Timer.periodic(Duration(seconds: 1), (timer) {
      // Add the new number to the list, if it has not been collected yet
      _numbers.target?.add(Random().nextInt(100));

      // Tell the widget to rebuild with the updated list of numbers
      setState(() {});
    });
  }

  @override
  Widget build(BuildContext context) {
    // Re-create the list of numbers if it has been garbage collected
    if (_numbers.target == null) {
      _numbers = WeakReference(<int>[]);
    }

    final numbers = _numbers.target ?? <int>[];
    return ListView.builder(
      itemCount: numbers.length,
      itemBuilder: (context, index) {
        return Text('Number: ${numbers[index]}');
      },
    );
  }
}

In this modified version of the code, the _MyWidgetState class uses a WeakReference to store the _numbers list. This means the List can be garbage collected if it is no longer being used by the program. This avoids the potential memory leak, while still allowing the List to be updated and accessed by the widget.

Weak vs Strong reference

A strong reference is a normal reference to an object that prevents the object from being garbage collected. This means that as long as there is at least one strong reference to an object, the memory associated with that object will not be reclaimed by the garbage collector, even if the object is no longer in use.

On the other hand, a weak reference does not prevent the object from being garbage collected. This means that if there are only weak references to an object, the garbage collector is free to reclaim the memory associated with that object, even if the weak references are still present.

The main difference between strong and weak references is that strong references guarantee that the referenced object will not be garbage collected, while weak references do not. This means that using strong references can potentially cause memory leaks if the references are not properly managed, while weak references are less likely to cause memory leaks because they allow the garbage collector to reclaim the memory associated with an object when it is no longer in use. However, weak references also have some limitations, such as not being able to directly access the object they reference, which can make them more difficult to use in some cases.

Drawbacks of Weak References

Weak references easily solve some memory issues without a lot of work, but they also have some drawbacks. First of all, you need to identify potential memory issues during development to use them effectively, and they require annoying null checks and re-instantiations at runtime. So overusing them gets super verbose.

Even worse, they do not work for situations where you don’t want the GC to throw your references away. Think of a DB connection that would (worst case) have to be reopened every time you use it. This is super resource-expensive and makes your app slower instead of more performant. We somehow need a concept that keeps the reference around and ensures it is always correctly cleaned up and removed from memory when no longer used.

Join us as a Flutter Developer!

Finalizers

A finalizer is a function that is automatically called when an object is about to be destroyed. This allows you to perform any necessary cleanup before the object is removed from memory. Finalizers are particularly useful when dealing with resources that need to be released when an object is no longer in use. For example, when a file handle is no longer needed, it should be closed to prevent a resource leak.

Dart 2.17 solved this by introducing finalizers: a Finalizer can be attached to a Dart object to provide a callback that runs when the object is about to be garbage collected. It also includes a Finalizable marker interface for “tagging” objects that shouldn’t be finalized or discarded too early.

In Dart, finalizers are created using the Finalizer class. This class takes a single argument, which is a function that will be called when the object is destroyed. This function can be used to perform the necessary cleanup. For example, the following code defines a finalizer that closes a file handle when the object is destroyed:

static final Finalizer<DBConnection> _finalizer = Finalizer((connection) => connection.close());

In addition to being used for resource cleanup, finalizers can also be used to perform other tasks. For example, they can be used to log when an object is destroyed, or to call cleanup functions in other objects.

Here is a complete example of how to use finalizers to safely close the connection to a database:

class Database {
  // Keeps the finalizer itself reachable, otherwise, it might be disposed
  // before the finalizer callback gets a chance to run.
  static final Finalizer<DBConnection> _finalizer =
      Finalizer((connection) => connection.close());

  final DBConnection _connection;

  Database._fromConnection(this._connection);

  factory Database.connect() {
    // Wraps the connection in a nice user API,
    // *and* closes the connection if the user forgets to.
    final connection = DBConnection.connect();
    final wrapper = Database._fromConnection(connection);
    // Get finalizer callback when `wrapper` is no longer reachable.
    _finalizer.attach(wrapper, connection, detach: wrapper);
    return wrapper;
  }

  void close() {
    // User requested close.
    _connection.close();
    // Detach from a finalizer, no longer needed.
    _finalizer.detach(this);
  }

  // Some useful methods.
}

Finalizers are an important feature of Dart, as they provide a way to ensure that resources are properly released when an object is no longer needed. When used properly, they can help prevent memory leaks and other issues that can arise from not properly cleaning up resources.

Calling native code

A huge reason the finalizer interface was introduced was Dart FFI and calling native code. When deeply integrating with native platforms using Dart FFI, you sometimes need to align the cleanup of memory or other resources (ports, files, and so on) allocated by Dart and the native code.
Along with Finalizer, the NativeFinalizer class was added to Dart; it can be attached to a Dart object to provide a callback that runs when the object is about to be garbage collected. Together these allow for running cleanup code in both native and Dart code. For more details, see the description and example in the API documentation for NativeFinalizer.

Conclusion

In conclusion, weak references and finalizers are powerful features in Dart that can help you prevent memory leaks and optimize the performance of your applications. Understanding how these features work and when to use them allows you to avoid common pitfalls and write more efficient and maintainable code. Another essential concept that, in our opinion, is widely underused is mixins.

Did you enjoy this article? Tell us your opinion on Twitter! And if you have any open questions, thoughts or ideas, feel free to get in touch with us! Thanks for reading! 🙏

Do you search for a job as a Flutter Developer?
Do you want to work with people that care about good software engineering?
Join our team in Munich

 

The post Understanding Dart Memory: Weak References and Finalizers Demystified appeared first on QuickBird Studios.

]]>
The Complete Guide to iOS 16 Passkeys – App and Backend Implementation https://quickbirdstudios.com/blog/ios-passkeys/ Thu, 03 Nov 2022 14:29:49 +0000 https://dev2.quickbird.io/?p=7224 With iOS 16 Apple introduced a way to go passwordless called Passkeys. In this article, we are covering what needs to be done on the app side but also what needs to be implemented in the backend

The post The Complete Guide to iOS 16 Passkeys – App and Backend Implementation appeared first on QuickBird Studios.

]]>

The Complete Guide to iOS 16 Passkeys – App and Backend Implementation

Offering authentication always intimidates me, because it’s hard to do correctly and puts my app at risk. If I do mess it up, I know that I’m risking the security of any other service that the user may have used the same password for. It’s a lot of responsibility and incredibly important to get right. Apple’s newest update brought with it the means to revolutionize accessing accounts you have on the internet and keeping your data secure. We’re talking, of course, about passkeys.

In this article, we have a look at how to integrate passkeys in your app and what needs to be done in the backend, especially which API endpoints need to be implemented.

Introduction

Passkeys are Apple’s implementation of the WebAuthn standard. They do this as part of the FIDO Alliance, along with partners like Google and Microsoft.

Most applications today have some form of a backend, and many need to store user credentials. It can ease some fears knowing that if a bad actor gains access to your database there are no passwords that they can access. Hashed or not, those passwords represent an attack vector that someone could use to gain access to your users’ accounts, even on services you don’t own. Personally, if I can make sure I’m not holding onto a key that unlocks the user’s data on my service and possibly on others, I would prefer to do it. That means a lot less pressure and stress for me.

The simple fact is, people are bad at making passwords and randomization, even if they could always remember what they came up with. The WebAuthn protocol solves this by only storing a public key on the server for verification, while the private key lives securely on the user’s device and is never sent as part of authentication. The public key is useless without the private key and is of no interest to an attacker.

It will help to dive into how passkeys work and see an example implementation to show the basic process and explore what data is being held outside of the user’s control.

A note on security

Security, cryptography, authentication, and authorization are all difficult and complex topics. Please, do more research, refrain from writing custom cryptography, and stick to peer-reviewed and well-trusted libraries and frameworks. The code that will be demonstrated is not spec-complete (or even fully accurate) and should not be considered safe to use in production. It is for educational purposes and understanding only.

What can registering and signing in with passkeys look like?

At its simplest, if there’s no need to support traditional passwords as well, registering with a passkey might only require a TextField for the user to enter their username, a register button, and a login button. Maybe you can even imagine versions of this flow that randomly generate an ID for the user and don’t require the TextField at all, but for our purposes, we’ll assume we want to maintain a unique username chosen by the user.

In the sample app that we’ll be looking at today, we’ll use exactly this. The user can type in a username of their choice and tap the register button to attempt to register that username and save their passkey on the device.

 

From there, the user can fill in (or leave filled in) the same username and tap the login button.

 

 

Doing that will allow the user to log in and see the content specific to them. In this case, a screen that dynamically shows their username!

 

And if we look in the passwords section of the Settings app, we can see the saved key information along with the user’s other saved passwords.

What the process looks like

We need to request a challenge from the server. The challenge is a unique series of randomized bytes that the device can sign using its private key. The server makes a note of this challenge, in our case with the session. After receiving the challenge, the device signs it using the private key and returns the challenge used, the signature, and an assertion with an optional username. From there, the server can make sure the challenge in the response is the same one it sent, and it can verify the signature using the public key that was stored for that user. If the user exists, the signature matches, and the public key wasn’t changed in transit, it returns the same data it normally would in a password flow (such as an auth token) and the process is complete.

Assertion vs Attestation and what a relying party is

There was a word used in the last paragraph that you may not be familiar with. You may have caught it. One of the things people often get confused about is what exactly an assertion is and what an attestation is. They can be especially confusing because they sound so similar, and their purposes are also similar, but it is important to remember they are not the same.

An attestation is a collection of data that contains client data in JSON format, such as the request origin or the challenge that was issued. The attestation object contains the public key generated by the authenticator. It is generated using the attestation certificate on the device, which never changes. Because this attestation certificate is standard for the device type, it can be checked by the server if the auth flow should change based on a certain device type. Attestation is rarely meaningfully validated since we tend to trust that someone is who they say they are upon registration, and when adding another authenticator to an account, they will have already authenticated themselves.

An assertion is similar. It’s a collection of data that once again contains things like the challenge used and the origin. This time though, the authenticator uses a generated private key that matches the public key sent during attestation to sign a combination of authenticator data and a hash of the client data. The result is the signature that the server will verify with the public key.

A relying party is an application that requests interaction with the authenticator. A bit complicated? Well, for our purposes we can just use the word server, but don’t be confused if you see both terms after this point.

What technologies will be used and what do you need to know

Going forward into a basic implementation, we’ll assume some basic knowledge of JavaScript, how web servers work, running a Node server with Express, and, obviously, iOS apps in general. For the most part, the only external libraries we’ll be using will be either extremely common or cryptography-specific.

The same process can be achieved in any other language or framework, but we’ll use JavaScript due to its popularity and recognizability. In your own project, you’d be better served by one of the many other options that are more fully featured, more robust, and more secure. Ultimately, you are responsible for the code you use, though, so it’s important to understand the basics of what that code is doing.

Capabilities and webcredentials

The first thing we’ll need to do is make sure that when our app attempts to access and authenticate via our server, our server acknowledges that our app is approved to do so. To do this, we’ll create a document that looks very much like JSON and lists our app prefix id along with our bundle identifier. To find the app prefix ID, head to this page and log in with your Apple Developer account. From there, go to the app you are working on and look for the App ID Prefix in the upper right corner. Afterward, the apple-app-site-association file can be set up as follows. Note the lack of an extension when creating the file name.

{
  "webcredentials": {
    "apps": [
      "<App ID Prefix>.<Bundle Identifier>"
    ]
  }
}

// { ./public/.well-known/apple-app-site-association }

This file is placed in the public/.well-known folder which means we need this line in our express entry point to use the public folder for static files.

// { ./passkey.js }
app.use(express.static("public"));

Inside Xcode, we’ll also need to add the Associated Domains capability with the line webcredentials:<fully-qualified-domain>.

If this looks familiar to you, that’s probably because you’ve written something similar if you’ve ever implemented Sign in with Apple, universal links, or app clips for an iOS app before. That’s because it’s the same file. It also means if you already support one or more of them, the final version of your file may look different from the one above.

The Backend (API)

Alright, on to the good stuff. First, let’s start by looking at our API-endpoints.

The first two are the opening step of the process on the server side. Their main role is to create a shared challenge for the device to sign. They are split into two separate endpoints because having a dedicated register endpoint gives us a chance to fail early and loudly, before the key generation happens on the device, if the username the user is attempting to register is already taken. We store the challenge in the session to ensure it matches the one that comes back to us during the next step of the process.

/challenge endpoint

Our challenge endpoint is relatively simple.

// { ./routes.js }
/** 
 * Endpoint: /challenge
 *
 * Returns a generated challenge for authentication
 */
router.post("/challenge", (req, res) => {
	...
});

It clears the stored challenge for the session, creates the credential request for the client, and stores the newly randomized challenge to use as verification when the client responds.

// { ./routes.js }
req.session.challenge = null;

let credentialRequest = {
  status: "success",
  challenge: helpers.randomBase64URLBuffer(32),
  rp: {
    name: "passkey.allisonpoppe.dev",
  },
};

req.session.challenge = credentialRequest.challenge.toString();

res.json(credentialRequest);

/register endpoint

// { ./routes.js }
/**
 * Endpoint: /register
 *
 * Verifies username can be registered and generates challenge to use for attestation.
 */
router.post("/register", (req, res) => {
	...
});

We fail if the username isn’t included in the request or if it is already taken in the database. If the username exists in the database but hasn’t completed registration yet, it’s possible an error or something unexpected prevented it from finishing, so there’s no sense in keeping it around, and the record is removed.

// { ./routes.js }
// { ./routes.js }
if (!req.body || !req.body.username) {
  res.json({
    status: "failed",
    message: "Malformed Registration Request",
  });
  return;
}

let username = req.body.username;

if (db.userExistsWith(username)) {
  if (db.getUserWithUsername(username).registered) {
    res.json({
      status: "failed",
      message: "Username Already In Use",
    });
    return;
  } else {
    db.removeUserWithUsername(username);
  }
}
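The db helpers the routes rely on (userExistsWith, getUserWithUsername, removeUserWithUsername, and friends) could be backed by any store. A minimal in-memory sketch, with the storage details assumed rather than taken from the article, might look like this:

```javascript
// Minimal in-memory stand-in for the article's "database" — the helper
// names follow the routes above; everything else is an assumption.
const users = new Map();
let nextID = 1;

const db = {
  getNextID: () => nextID++,
  addUser: (user) => users.set(user.id, user),
  userExistsWith: (username) =>
    [...users.values()].some((u) => u.username === username),
  getUserWithUsername: (username) =>
    [...users.values()].find((u) => u.username === username),
  removeUserWithUsername: (username) => {
    const user = db.getUserWithUsername(username);
    if (user) users.delete(user.id);
  },
  getUserWithID: (id) => users.get(id),
  updateUserById: (id, user) => users.set(id, user),
};

class User {
  constructor(id, username) {
    this.id = id;
    this.username = username;
    this.registered = false; // flips to true once attestation succeeds
    this.authInfo = null;    // filled in with the public key at /finish
  }
}
```

A real deployment would obviously use persistent storage, but the shape of the data (a user row plus its stored authenticator info and a registered flag) stays the same.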

As in the /challenge endpoint, a credential request is generated, and the user is added to the database (unregistered by default) to ensure consistency and to prevent the client from answering the challenge as a different user.

// { ./routes.js }
req.session.challenge = null;
req.session.user = null;

let user = new User(db.getNextID(), username);
db.addUser(user);
let credentialRequest = {
  status: "success",
  challenge: helpers.randomBase64URLBuffer(32),
  rp: {
    name: "passkey.allisonpoppe.dev",
  },
  user: user,
};

We set up the information we’ll check for to guarantee we are handling the same user and process and then return the credential request to the device.

// { ./routes.js }
req.session.challenge = credentialRequest.challenge.toString();
req.session.user = user;

res.json(credentialRequest);

/finish endpoint

The other route of note is /finish. This route serves as the end of the authentication process on the server for both registration and logging in, and it is also our biggest one, to go with that extra responsibility. Both processes involve checking that the challenge hasn’t changed and that there isn’t anything out of place with the request we received. Afterward, based on whether we received an attestationObject (registration) or authenticatorData (login), we can determine what the request was for and branch from there. If it was for registration, our “database” is updated with the information we’ll need for future authentication requests. If it was a login request, we check that we can verify the device’s signature using the data we have stored about the user. Hopefully, everything works out and we have successfully registered or authenticated the user!

// { ./routes.js }
/**
 * Endpoint: /finish
 *
 * The second part of the registration/authentication process in which
 * the key is verified
 */
router.post("/finish", async (req, res) => {
	...
});

The first step is to validate that our session returns to us everything we expect it to, which helps us trust that we are dealing with the same user and acting on the same process. We begin by checking the user data itself.

// { ./routes.js }
if (!req.body || !req.body.clientDataJSON) {
  res.json({
    status: "failed",
    message: "Malformed Registration/Authentication Finish Request",
  });
  return;
}

let request = req.body;

let user;
if (request.userID) {
  user = db.getUserWithID(request.userID);
  if (!user) {
    res.json({
      status: "failed",
      message: "No unregistered user found. API error.",
    });
    return;
  }
} else if (req.session.user) {
  user = req.session.user;
} else {
  res.json({
    status: "failed",
    message: "No user found. API error.",
  });
  return;
}

Next, we validate the challenge and origin.

// { ./routes.js }
let clientData = JSON.parse(
  Buffer.from(request.clientDataJSON, "base64")
);
let oldChallenge = Buffer.from(clientData.challenge, "base64");

if (oldChallenge.toString() !== req.session.challenge) {
  res.json({
    status: "failed",
    message: "Returned challenge doesn't match issued challenge",
  });
  return;
}

if (clientData.origin !== "https://passkey.allisonpoppe.dev") {
  res.json({
    status: "failed",
    message: "Returned origin doesn't match expected origin",
  });
  return;
}
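The challenge round-trip being validated here can be illustrated end to end with placeholder values (the challenge string and payload below are made up for the demonstration):

```javascript
// Client side: the issued challenge is echoed back inside
// clientDataJSON, which itself arrives base64-encoded.
const issuedChallenge = "abc123"; // placeholder; the server stores this in the session
const clientData = {
  type: "webauthn.get", // "webauthn.create" during registration
  challenge: Buffer.from(issuedChallenge).toString("base64"),
  origin: "https://passkey.allisonpoppe.dev",
};
const clientDataJSON = Buffer.from(JSON.stringify(clientData)).toString("base64");

// Server side, mirroring the route: decode, parse, and compare the
// returned challenge against the one stored in the session.
const parsed = JSON.parse(Buffer.from(clientDataJSON, "base64"));
const returnedChallenge = Buffer.from(parsed.challenge, "base64").toString();
console.log(returnedChallenge === issuedChallenge); // true
```

Because the challenge travels inside the signed clientDataJSON, an attacker can’t swap in an old response without the mismatch being caught here.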

Our next step is to check if we’ve received an attestation object or authenticator data. If we’ve received an attestation object, it means the device is letting us know about it and giving us its information. In other words, registering.

// { ./routes.js }
let verified;
let authInfo;
if (request.attestationObject !== undefined) {
  authInfo = helpers.getAuthInfo(request);
  verified = true;
  user.authInfo = authInfo;
  user.registered = true;
  db.updateUserById(user.id, user);
} else...

If it has authenticator data, it is part of the login process and our job is to check that the signature we get back matches what it should be, given what we know about the device. We start by checking that there is already a registered username that matches the one that we are attempting to log in with.

...}
else if (request.authenticatorData !== undefined) {
  if (!db.getUserWithUsername(user.username).registered) {
    res.json({
      status: "failed",
      message: "User does not exist",
    });
    return;
  }

The most challenging chunk of code essentially just parses the data received as part of the login attempt, gets the public key from what we have stored for an authenticator, and verifies the signature using it.

// { ./routes.js }
let signature = Buffer.from(request.signature, "base64");
let retrievedPublicKey = Buffer.from(user.authInfo.publicKey, "base64");
let hashedCData = helpers.hash(request.clientDataJSON);
let toSign = Buffer.concat([
  Buffer.from(request.authenticatorData, "base64"),
  hashedCData,
]);

let pk = await subtle.importKey(
  "raw",
  retrievedPublicKey,
  {
    name: "ECDSA",
    namedCurve: "P-256",
  },
  false,
  ["verify"]
);

// subtle.verify returns a Promise, so it must be awaited before
// the result is checked below.
verified = await subtle.verify(
  { name: "ECDSA", hash: "SHA-256" },
  pk,
  signature,
  toSign
);
} else {
  res.json({
    status: "failed",
    message: "Attestation response type is unknown",
  });
  return;
}

Finally, we check whether the user was verified, make sure our session reflects that, and return the relevant information to the client.

// { ./routes.js }
if (verified) {
    req.session.loggedIn = true;
    req.session.user = user;
    res.json({ status: "success" });
    return;
  } else {
    res.json({
      status: "failed",
      message: "Can't authenticate signature",
    });
    return;
  }
});

The iOS app

One of the best aspects of implementing passkeys is that Apple does a large portion of the work for us. If you have written code that supports Sign in with Apple, you have probably seen very similar code. We’ll start with the login and register methods that are called when tapping the Login and Register buttons. The request to the API to get the challenge is pretty much the same for both methods, except that the registration request includes the username (as described above). After it returns the relevant information, we create an ASAuthorizationPlatformPublicKeyCredentialProvider (that’s a mouthful) and use it to instantiate an ASAuthorizationController instance. From there, we set the AuthManager, which conforms to the necessary protocols, as the delegate and perform the request for the system to create or sign the challenge.

login(…)

The signature of our login method looks like this.

func login(presentationAnchor: ASPresentationAnchor) async {
		...
}

We set the presentation anchor so the system views have a reference point for displaying themselves. Then we call our API to get the challenge.

self.presentationAnchor = presentationAnchor

let result = await apiRequest(address: Const.LOGIN_ADDRESS, httpMethod: "POST")

guard let result = result else {
    print("Login attempt failed")
    return
}

let challenge = Data(result.challenge!.utf8)

Once we have the challenge, we create a request for the system to give us an assertion and pass that to an ASAuthorizationController before calling performRequests() on it.

let platformProvider = ASAuthorizationPlatformPublicKeyCredentialProvider(relyingPartyIdentifier: result.rp!.name)
let assertionRequest = platformProvider.createCredentialAssertionRequest(challenge: challenge)
let authController = ASAuthorizationController(authorizationRequests: [assertionRequest])
authController.delegate = self
authController.presentationContextProvider = self
authController.performRequests()

register(…)

Our register method looks similar to the login one, with the addition of a String to represent the username that the user would like to register.

func register(presentationAnchor: ASPresentationAnchor, username: String) async {
 ...
}

This works similarly to the login request above, only we send the username we’d like to register and parse the ID it gives us back.

self.presentationAnchor = presentationAnchor     
let payload = APIPayload(username: username)
let result = await apiRequest(address: Const.REGISTER_ADDRESS, httpMethod: "POST", bodyData: payload)
guard let result = result else {
   print("Register attempt failed")
   return
}
        
let challenge = Data(result.challenge!.utf8)
let username = result.user!.username
let userId = Data(String(result.user!.id).utf8)
        
let platformProvider = ASAuthorizationPlatformPublicKeyCredentialProvider(relyingPartyIdentifier: result.rp!.name)
        
let platformKeyRequest = platformProvider.createCredentialRegistrationRequest(challenge: challenge, name: username, userID: userId)
let authController = ASAuthorizationController(authorizationRequests: [platformKeyRequest])
authController.delegate = self
authController.presentationContextProvider = self
authController.performRequests()

authorizationController(…)

Once the platform comes back with what we asked it for, it will call one of its delegate methods. If there was an error, we’ll see a call to authorizationController(controller:didCompleteWithError:), but hopefully the call will be to authorizationController(controller:didCompleteWithAuthorization:). Based on the credential that is passed into that method, we can determine what type of request it was and direct it to the appropriate “finish” method.

If the authorizationController returns with an error, authorizationController(controller:didCompleteWithError:) is called for us to handle.

func authorizationController(controller: ASAuthorizationController, didCompleteWithError error: Error) {
   print("Error AuthManager: \(error)")
}

In a successful case though, we check what type of credential was returned to see if it’s part of the login or registration flow, and we call the required method with the information it needs.

    func authorizationController(controller: ASAuthorizationController, didCompleteWithAuthorization authorization: ASAuthorization) {
        switch authorization.credential {
        case let credentialRegistration as ASAuthorizationPlatformPublicKeyCredentialRegistration:
            Task {
                await finishRegistration(credentials: credentialRegistration)
            }
        case let assertionResponse as ASAuthorizationPlatformPublicKeyCredentialAssertion:
            Task {
                await finishLogin(credentials: assertionResponse)
            }
        default:
            print("Unknown authorization type received in callback")
        }
    }

Finally, our “finish” methods. Using the data we got back from the delegate methods, we can parse the credentials and make a request to our API with the necessary information. Afterward, our implementation uses NotificationCenter to respond to any changes.

finishRegistration(…)

To finalize the registration flow, we call the finish endpoint with the information returned from our authController so the server will know about our device in the future.

func finishRegistration(credentials: ASAuthorizationPlatformPublicKeyCredentialRegistration) async {
    let attestationObject = credentials.rawAttestationObject!
    let clientDataJSON = credentials.rawClientDataJSON
    let credentialID = credentials.credentialID

    let payload = APIPayload(attestationObject: attestationObject.base64EncodedString(), clientDataJSON: clientDataJSON.base64EncodedString(), credentialID: credentialID.base64EncodedString())

    let result = await apiRequest(address: Const.FINALIZE_ADDRESS, httpMethod: "POST", bodyData: payload)

    if result != nil {
        // Notify of registration success
    }
}

finishLogin(…)

The login flow finishes by sending the client information, the userID, the signature, and the authenticator data so that the server can verify the signature and validate the request.

func finishLogin(credentials: ASAuthorizationPlatformPublicKeyCredentialAssertion) async {
    let clientDataJSON = credentials.rawClientDataJSON
    let authenticatorData = credentials.rawAuthenticatorData!
    let credentialID = credentials.credentialID
    let signature = credentials.signature!
    let userID = Int(String(data: credentials.userID!, encoding: String.Encoding.utf8)!)!

    let payload = APIPayload(
        clientDataJSON: clientDataJSON.base64EncodedString(),
        credentialID: credentialID.base64EncodedString(),
        authenticatorData: authenticatorData.base64EncodedString(),
        signature: signature.base64EncodedString(),
        userID: userID
    )

If the result contained an error or failed, it is handled by the apiRequest method, but if it comes back successfully, we use NotificationCenter to inform the relevant parts of our app.

        let result = await apiRequest(address: Const.FINALIZE_ADDRESS, httpMethod: "POST", bodyData: payload)
        
        if result != nil {
            NotificationCenter.default
                .post(name: NSNotification.Name("com.user.login.success"),
                      object: nil)
        }
    }

Conclusion

That’s been a quick, simplified tour of the highlights of a passkey implementation. To dig in further, you’ll likely find it helpful to see the code in context. It’s only a drop in the bucket compared to the full WebAuthn spec, but it should give you a high-level idea of what is happening and jump-start the learning process.

If you are interested in another new feature of iOS 16 that will make your life as a developer easier, check out our new article on RegExBuilder.

Did you enjoy this article? Tell us your opinion on Twitter! And if you have any open questions, thoughts or ideas, feel free to get in touch with us! Thanks for reading! 🙏

The post The Complete Guide to iOS 16 Passkeys – App and Backend Implementation appeared first on QuickBird Studios.
