Rambo Codes
Gui Rambo writes about his coding and reverse engineering adventures.
https://rambo.codes

Sherlocked before it was born: LightBuddy
https://rambo.codes/posts/2025-12-15-sherlocked-before-it-was-born-lightbuddy
Tue, 16 Dec 2025 15:00:00 -0300

A couple of months ago, I was a few minutes away from joining a video call in my office when I noticed I hadn’t set up my ring light yet.

I use a ring light for video calls because the lighting in my office comes mostly from the top. That lighting is perfectly fine for working, but it casts shadows under the eyes and chin that are exacerbated by the built-in camera in my Studio Display.

I don’t leave the ring light behind my desk all the time because it’s distracting and makes the office look messy. That means that I have to set it up every single time I want to look decent on a video call (#FirstWorldProblems).

So I thought to myself, “hey, I have a big device that’s essentially a programmable soft box right in front of my face, why don’t I use that?”.

The Prototype

It wasn’t the first time that I thought of my display as a lighting fixture for video calls. In the past, I’d sometimes open up about:blank in Safari and leave the window open so that it would illuminate my face during a video call.

Being a Mac developer, the natural next step was to open up Xcode and go “File > New Project”. In about 15 minutes, I had a little prototype called “RingLightBuddy” that displayed an ugly white HDR round rect around the edges of the screen. It would also mask it out when you moved the mouse over it so that it was still possible to interact with the computer when using the ring light.

I was actually really excited about the idea, and thought about turning it into a product, but after using it for a few minutes, I thought “meh, this is a stupid idea, never mind”, and just left it to rot in my projects folder.

This all happened in mid-July.

The Sherlocking

Fast-forward to mid-November: during the beta cycle for macOS 26.2, Apple added the new Edge Light feature, which was basically the same idea, integrated into macOS.

My initial reaction was disappointment with myself for not having moved forward with the app when I first had the idea and built the prototype, since I had a chance to launch the feature before Apple did.

After thinking about it for a while, my feelings changed. If I had shipped this app before Apple’s feature, I think it would have felt worse. By then I would have invested significant effort into shipping it, and it would have been much harder not to feel overshadowed.

What would’ve happened if that was the case is what the Apple community calls sherlocking, which is when Apple implements the functionality of an existing third-party app into one of its apps or operating systems.

There are various ways to react to a sherlocking event, but one of the more positive outcomes is validation: if Apple decides to implement a feature in its operating system, then it probably means that they’ve determined the feature to be appealing to a significant number of users.

Other than that, Apple’s implementation is often the most basic version of a feature, which works great for the majority of users. Third-party apps will include additional features and customization options that are not available in Apple’s implementation.

When Apple adds a feature, it increases awareness of that feature among users, some of whom might end up looking for third-party solutions that are more comprehensive than Apple’s.

As a result, this preemptive sherlocking of my abandoned idea actually made me want to ship the app.

LightBuddy

Today, I’m releasing LightBuddy. It’s an app that lives in your Mac’s Menu Bar and can render a ring light that shows up on your Mac’s display, making you look better in video calls.

It looks very similar to the built-in feature in macOS 26.2, with a few additions that many users will welcome:

No Apple Silicon or macOS 26.2 Requirement

Unlike Apple’s feature, LightBuddy works on Intel Macs. While some of its more advanced UI effects are limited to Apple Silicon, the ring light itself works on Intel machines. It supports macOS 14 Sonoma and later, so even if you haven’t upgraded — or can’t upgrade — to macOS Tahoe, you can still use it. And because it isn’t tied to the camera system, you can turn it on or off whenever you want.

No Apple Display Requirement

While most of my testing was done on Apple displays, such as the Studio Display and the built-in display in recent Mac laptops, LightBuddy doesn’t prevent you from using the ring light on any display.

As you might expect, the usefulness of the ring light will depend a lot on how good your display is and how bright it can get, but if you have a good third-party display — such as the LG UltraFine — it works quite well.

Multiple Display Support

The app allows you to have a ring light on every display connected to your Mac.

Of course it works best when the ring light is on the display that’s right in front of your face, but if you have, for example, a Studio Display in front of you and a MacBook Pro with its built-in display to your side, you can add a second light source and create interesting effects by using different color temperatures for each ring light.

HDR Support

If you have a display that supports HDR, LightBuddy offers the option to enable HDR for the ring light, allowing it to become brighter than the rest of your display. This enables lowering the brightness of the display whilst leaving just the ring light area at a higher brightness level.

As disclaimed in the app itself, I do not recommend using the HDR feature for extended periods of time, like several hours. I’ve been using it for 1 to 3-hour video calls, and both my Studio Display and the built-in display on my M2 Max MacBook Pro are perfectly fine.

Custom Colors

Most users will likely want to stick to the color temperature slider, but if you want to go nuts and have a green, pink, blue, or whatever color of ring light your heart desires, LightBuddy has that option.

Future Enhancements

I wanted to ship the app fairly soon while the subject of Edge Light is still top of mind, so I had to cut out some planned features from the 1.0.

Those include keyboard shortcuts and actions in Shortcuts for controlling ring lights, and automatic color temperature compensation to account for changes caused by True Tone and Night Shift on Apple displays.

How Well Does It Work?

When Apple first introduced the Edge Light feature, some people seemed skeptical of its effectiveness, perhaps those who’d never used the about:blank trick I mentioned.

The reality is that it works quite well in my experience, and so does LightBuddy.

Having a virtual ring light can really improve your look in video calls, especially when the ambient lighting is poor.

I had a friend take a FaceTime call with me to produce some marketing materials for the app.

He was using a pre-release build of LightBuddy on an M1 MacBook Air, which has a good display, but definitely not the best display out of the ones I’ve tested.

My favorite test from that call was the one where he turned off all the lights in the room. As you can see in the comparisons below, that’s where LightBuddy made the most difference, although it’s still an improvement in average and even in good lighting conditions.

Comparing the effectiveness of the ring light under various conditions

Development Tidbits

This is the nerdy section of the post if you’d like to know more about how the app was developed.

Core Animation 🖤

Those who watched my Liquid Glass talk at Swift Connection 2025 will know that I have a passion for Core Animation. It’s probably my favorite technology in Apple’s operating systems, and even though most developers never touch it directly these days, it still powers many things under the hood, and I still use it extensively.

So it won’t come as a surprise that the ring light in LightBuddy was implemented using Core Animation. It also happens to be an efficient way to implement such a feature, since layer updates are predictable, controllable, and most of the work happens in the macOS WindowServer process and the GPU.
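As an illustration of why Core Animation is a good fit here (a generic sketch, not LightBuddy’s actual implementation), an edge ring can be a single CAShapeLayer whose path contains two nested rounded rects, with the even-odd fill rule punching the hole in the middle:

```swift
import AppKit

// Generic sketch: a white rounded-rect "ring" hugging the edges of `bounds`.
// The even-odd fill rule fills only the area between the two rounded rects.
func makeRingLayer(bounds: CGRect, thickness: CGFloat) -> CAShapeLayer {
    let path = CGMutablePath()
    path.addRoundedRect(in: bounds, cornerWidth: 24, cornerHeight: 24)
    path.addRoundedRect(in: bounds.insetBy(dx: thickness, dy: thickness),
                        cornerWidth: 24, cornerHeight: 24)

    let ring = CAShapeLayer()
    ring.path = path
    ring.fillRule = .evenOdd
    ring.fillColor = CGColor(red: 1, green: 1, blue: 1, alpha: 1)
    return ring
}
```

Once a layer like this is hosted in a window, compositing happens in WindowServer and on the GPU; the app itself does no per-frame work unless a property changes.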

The ring light being a CALayer also means I can use my QuartzCapture and QuartzBuddy tools to inspect the layer hierarchy as it’s rendered by the app, which is really helpful for debugging issues (in case you’re wondering, those tools are not currently public).

Inspecting ring light layer hierarchy in QuartzBuddy

I wasn’t a fan of the way Apple implemented the masking of their Edge Light feature when the mouse moves over it. The basic faded mask they use just looks off for some reason.

Apple's Edge Light masked by mouse cursor in macOS Tahoe

For a better result, I decided to generate a mesh that distorts the ring light, making it look like it’s being repelled by the mouse cursor as it approaches. It also does a cute bounce when you move the cursor from one display into another (see below). This is unfortunately one of the features I had to limit to Apple Silicon Macs: Intel Macs have far too many GPU variations, so I couldn’t make sure it would work well on all of them.

Global Event Monitor vs. CPU

Keeping the mask up to date with the cursor position without consuming too much CPU power was actually one of the challenges during development, and I’m sure I can still optimize things further.

Mac developers are probably familiar with the addGlobalMonitorForEvents API, which allows an app to monitor certain types of events — such as the mouse cursor moving — even if the app is not currently active.

That was my first option when I initially implemented the cursor mask, but I noticed that it would always cause the app to consume around 3% of CPU (on my M2 Max MacBook Pro) when moving the cursor, even if the app did absolutely nothing in reaction to the event.
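For reference, the setup looked roughly like the sketch below; the key observation is that the handler body is irrelevant to the cost:

```swift
import AppKit

// Sketch of the initial approach: a global monitor receives every mouseMoved
// event from the window server, even while the app is in the background.
let monitor = NSEvent.addGlobalMonitorForEvents(matching: .mouseMoved) { _ in
    // Intentionally empty: the CPU cost comes from event delivery and
    // deserialization, not from anything done in this handler.
}
```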

Digging a bit deeper with Instruments, I realized that most of the CPU time was being consumed by a function in the guts of the SkyLight framework that deserializes event data received from the window server.

I think that’s because the event monitor will receive events as frequently as they come from the hardware, so if you have a mouse or trackpad that can produce many events per second, that’s what you’re going to get.

That makes sense if you’re developing a design app or any other tool that requires precise cursor movements. For the purposes of masking the ring light, I didn’t need that amount of precision.

Unfortunately, that NSEvent API has no mechanism for throttling events before they come from the window server, so that CPU cost from SkyLight is always there no matter what.

The solution I came up with was to use a CADisplayLink instead. Unlike the NSEvent API, CADisplayLink allows me to configure the frequency at which the display link callback is invoked, and if I don’t do anything in that callback, the CPU cost is effectively zero.
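A sketch of that setup, assuming the macOS 14 NSView.displayLink(target:selector:) API (the CursorMaskDriver name is mine, not the app’s):

```swift
import AppKit

// Poll the cursor at a capped rate instead of receiving every hardware event.
final class CursorMaskDriver: NSObject {
    private var link: CADisplayLink?

    func start(attachedTo view: NSView) {
        let link = view.displayLink(target: self, selector: #selector(tick))
        // Cap the callback frequency; mice and trackpads can report far faster.
        link.preferredFrameRateRange = CAFrameRateRange(minimum: 30, maximum: 60, preferred: 60)
        link.add(to: .main, forMode: .common)
        self.link = link
    }

    @objc private func tick(_ link: CADisplayLink) {
        let cursor = NSEvent.mouseLocation
        // Update the mask here only when the cursor has moved enough;
        // doing nothing in this callback costs effectively zero CPU.
        _ = cursor
    }
}
```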

To optimize things further, I only update the cursor mask when there’s enough movement between frames, and that threshold increases the closer to the center of the screen the cursor is. It’s like foveated rendering, but for mouse events 😛.
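The thresholding idea can be sketched like this; the struct name and tuning constants are hypothetical, but the logic is the one described above: require more movement before updating when the cursor is near the screen center.

```swift
import Foundation

// Hypothetical sketch: only rebuild the cursor mask when the cursor has moved
// far enough since the last update, requiring more movement the closer the
// cursor is to the screen center, where the edge mask matters least.
struct MaskThrottle {
    private var lastPoint: CGPoint?
    private let minDistance: CGFloat = 2   // assumed: update eagerly near the edges
    private let maxDistance: CGFloat = 24  // assumed: update lazily near the center

    mutating func shouldUpdate(cursor: CGPoint, screenCenter: CGPoint, screenRadius: CGFloat) -> Bool {
        guard let last = lastPoint else {
            lastPoint = cursor
            return true
        }
        // Centrality is 0 at the screen edge, approaching 1 at the center.
        let dx = cursor.x - screenCenter.x
        let dy = cursor.y - screenCenter.y
        let centrality = max(0, 1 - (dx * dx + dy * dy).squareRoot() / screenRadius)
        let threshold = minDistance + (maxDistance - minDistance) * centrality

        let mx = cursor.x - last.x
        let my = cursor.y - last.y
        guard (mx * mx + my * my).squareRoot() >= threshold else { return false }
        lastPoint = cursor
        return true
    }
}
```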

Custom Sliders

Had I implemented the ring light itself in SwiftUI, it would probably consume too much CPU for my — and most of my users’ — taste. That’s not to say I don’t use SwiftUI in the app at all: most of the rest of the UI is implemented in SwiftUI, including the custom sliders in the app’s control panel.

Those custom sliders could be a post on their own, but I can say they required quite a bit of bridging between SwiftUI and AppKit because I wanted them to be controllable by the scroll wheel or swipes on the Magic Trackpad, and as far as I know, there’s no SwiftUI-native way of hijacking such events.

When controlled with swipes, they also produce haptic feedback on supported devices. If you have a Magic Trackpad with haptic feedback, I suggest swiping on the sliders; I’m quite pleased with the tactile feeling I was able to achieve.

Cute Animations

I had a bit of fun in LightBuddy by adding some animations I’ve been playing with for a while in prototypes. One of them is very similar to the Liquid Glass transitions Apple uses in menus on iOS 26, which is the one I used for the app’s control panel in the Menu Bar.

Some people dislike this sort of UI animation, and I respect that, so the app has a toggle in its settings window to turn them off, and it also respects the system Reduce Motion preference. That being said, the transition is fully interruptible and interactive: you can start dragging sliders before it finishes, and clicking away always closes the panel.

Even then, the entire transition takes around 650ms to complete, and within the first 200ms you can already see enough of the controls to be able to start interacting with them, so it’s not going to slow you down.

Besides that, I also added a fun intro to the onboarding, and a similar effect is also used for the app’s about window. The about window is shown in the video below. As for the onboarding, grab yourself a copy of the app and see it live 😉

A Privacy Mechanism That Backfired
https://rambo.codes/posts/2025-05-12-a-privacy-mechanism-that-backfired
Mon, 12 May 2025 17:20:00 -0300

Some bugs are more interesting than others. Last time I mentioned how CVE-2025-24091 was one of my favorite iOS vulnerabilities so far. That was because I wasn’t yet allowed to disclose my actual favorite!

This post is about CVE-2025-31212, the most ironic vulnerability I’ve ever found, and here's why...

Transparency, Consent, and Control

As those who read my reports already know, Apple’s operating systems protect user data from unauthorized access by third parties via TCC (Transparency, Consent, and Control).

That’s the system responsible for the permission prompts you get when an app wants to access Contacts, Photos, Microphone, Camera, and so on. When Apple released iOS 13, it started requiring this consent for Bluetooth Low Energy access.

One of the reasons this access requires user consent is that apps with Bluetooth LE access may use nearby device information for fingerprinting and tracking purposes.

This is not obvious to most users, so in iOS 18 the permission prompt was updated to display information about how many Bluetooth devices the app will potentially be able to access. It also includes a mini-map showing a few of the user's Bluetooth devices.

Screenshot of a Bluetooth Low Energy permission prompt on iOS 18 showing a user's nearby devices.

The Vulnerability

Cutting to the chase, I wrote an app that can obtain information about nearby Bluetooth LE devices without a permission prompt. It achieves this by exploiting the very mechanism that was introduced in iOS 18 to raise awareness about the potential privacy implications of giving an app access to Bluetooth.

The issue lies in how data for the new permission prompt is processed by the system. CoreBluetooth has always relied on the client app requesting permission (via TCC) to access Bluetooth LE.

When the new prompt was introduced, this architecture remained, but it came with the side effect of the client app receiving data about nearby devices BEFORE it actually received consent from the user. That’s because the data was being used to assemble the permission request within the client app’s process.

That’s all transparent to the developer, who just uses the public CBCentralManager API. However, because the Bluetooth daemon (bluetoothd) was sending the data for the prompt to the app, the app could get access to that data via runtime shenanigans.

To implement the TCC prompt, CoreBluetooth followed these steps:

1 - Upon instantiation of a CBCentralManager, an XPC connection to com.apple.server.bluetooth.le.att.xpc is created

2 - The central manager sends XPC message 1 (check in)

3 - The daemon responds with XPC message 3, indicating to the central manager that TCC is required and providing information for the central manager to use in the next step

4 - The central manager uses tcc_server_message_request_authorization to communicate with the TCC daemon; the options for this TCC request are populated with information from the previous step, which the central manager sets on the request via tcc_message_options_set_client_dict

The problem was with XPC message 3 sent from bluetoothd to CoreBluetooth's CBCentralManager.

This message included not only the indication that TCC was required, but it also included the data that would be used to assemble the TCC prompt with additional details. These included the number of nearby devices, as well as information about some of those devices:

<dictionary> {
    "kCBMsgId" => <int64>: 3
    "kCBMsgArgs" => <dictionary> {
        "kCBMsgArgTCCLELocalizedCenterLabel" => "More than 50 devices found"
        "kCBMsgArgRequiresTCC" => <bool>: true
        "kCBMsgArgTCCLEDevicesAroundDetails" => <array> {
            0: <dictionary> {
                "seenMinutesAgo" => "0.001230"
                "mapLabelCalloutTitleKey" => "Desk L"
                "mapLabelCalloutPercentageValue" => "100"
                "bucketName" => "MyAppleDevices"
                "rssiValue" => "-54"
            }
            1: <dictionary> {
                "seenMinutesAgo" => "0.000014"
                "mapLabelCalloutTitleKey" => "Guilherme’s Apple Watch"
                "mapLabelCalloutPercentageValue" => "100"
                "bucketName" => "MyAppleDevices"
                "rssiValue" => "-54"
            }
            2: <dictionary> {
                "seenMinutesAgo" => "23.935236"
                "mapLabelCalloutTitleKey" => "FIBARO Single Switch "
                "mapLabelCalloutPercentageValue" => "50"
                "bucketName" => "OtherSanitizedLexical"
                "rssiValue" => "-67"
            }
            3: <dictionary> {
                "seenMinutesAgo" => "23.823255"
                "mapLabelCalloutTitleKey" => "Aqara Smart Door Lock U100 430F"
                "mapLabelCalloutPercentageValue" => "25"
                "bucketName" => "OtherSanitizedLexical"
                "rssiValue" => "-78"
            }
        }
    }
}

As seen in the example above, the kCBMsgArgTCCLEDevicesAroundDetails array included up to 4 dictionaries containing information about the user’s devices. This message was received by the app before the user had given it consent for Bluetooth LE access.

In fact, an app wouldn’t even need to use CoreBluetooth to get access to this information, nor include the otherwise mandatory NSBluetoothAlwaysUsageDescription privacy key in its Info.plist.

So what the app shown in the video above does is to run this CoreBluetooth handshake process every 5 seconds, without ever getting to the point where a prompt is shown to the user. Because the list of devices returned varies with device proximity, many unique devices can be identified that way.

Device data is then collected and displayed in a list, but it could just as well be aggregated and sent to a server to help build a tracking profile for the user: which devices they own, the real names of people used as device labels, and so on.

Bonus: Tampering With The Prompt

Because the information displayed in the Bluetooth LE TCC prompt was ultimately controlled by the process requesting authorization, a malicious app would be able to tamper with the information to make it seem more benign to the user by hiding the details added for privacy awareness:

This second app seen above creates a subclass of CBCentralManager. The subclass overrides the private performTCCCheck: method, replacing the kCBMsgArgTCCLELocalizedCenterLabel with a message that makes it look like the app was vetted by Apple and encouraging the user to tap the "Allow" button. It also removes all items from the kCBMsgArgTCCLEDevicesAroundDetails array so that the map that's shown in the TCC prompt is empty.

This would neutralize the effects of the enhanced TCC prompt by completely removing the privacy implication details.

Timeline

  • March 5, 2025: initial report sent to Apple
  • May 1, 2025: got a message from Apple informing me that mitigation was in progress; latest iOS 18.5 beta already had the fix implemented
  • May 12, 2025: bug assigned CVE-2025-31212, addressed in iOS/iPadOS 18.5 and aligned releases
  • Bug bounty is still pending

Mitigation

The solution to this issue was to adopt the same architecture used on iOS by other permission prompts that display information.

Prompts such as the one to access the user’s location rely on extensions that run in the background and receive data directly from the daemon that’s accessing the protected information.

The daemon responsible for the data will send it via TCC directly to the corresponding extension, ensuring that the client app never receives the data for the permission prompt.

In the case of the Bluetooth Low Energy permission, assembling the prompt is now handled by the CoreLocationNumberedMapCalloutPromptPlugin extension.

Cracking The Dave & Buster’s Anomaly
https://rambo.codes/posts/2025-05-12-cracking-the-dave-and-busters-anomaly
Mon, 12 May 2025 11:50:00 -0300

I was listening to an episode of one of my favorite podcasts this weekend. The show is called Search Engine, and every episode tries to answer a question that can’t be easily answered through an actual search engine (or even AI).

This episode grabbed my attention because it was about an iOS bug, and a really weird one.

The bug is that, if you try to send an audio message using the Messages app to someone who’s also using the Messages app, and that message happens to include the name “Dave and Buster’s”, the message will never be received.

In case you’re wondering, “Dave and Buster’s” is the name of a sports bar and restaurant in the United States.

At the time I’m writing this post, this bug is still happening, so you should be able to reproduce it. I reproduced it using two iPhones running iOS 18.5 RC. As long as your audio message contains the phrase “Dave and Buster’s”, the recipient will only see the “dot dot dot” animation for several seconds, and it will then eventually disappear. They will never get the audio message.

You can also watch the video below showing the bug in action:

Search Engine’s Investigation

The bug sounded so outlandish when I first heard about it in the podcast that I decided to pause the episode, do my own investigation, then continue listening later. I didn’t want my investigation to be biased by whatever Search Engine was able to find out.

If you’d rather not have the episode spoiled, I strongly recommend pausing your read here, listening to the episode, then coming back later. PJ Vogt is a much better storyteller than I am, you will not regret listening to the episode!

Here’s a picture of Yoshi, my corgi, serving as a buffer between the paragraph above and the spoilers below…

A brown and white corgi is lying curled up on a gray dog bed, looking up with a relaxed but slightly sad expression. The floor is tiled, and the background includes a wall and a black metal gate.

I was honestly kind of hoping that Search Engine wouldn’t have a definitive answer for the origin of the bug because it would make my investigation more interesting.

They ended up figuring out the general reason why the name “Dave and Buster’s” breaks iMessage, with help from Alex Stamos. But as someone who speaks fluent iPhone, I wasn’t completely satisfied with the explanation.

To be clear, this is not a critique of the podcast. It is not a technical show, and there’s only so much depth you can go into on a podcast without it becoming incredibly boring, especially for those who are not that interested in the details.

My Investigation

The first thing I wanted to figure out was whether the problem occurred with the sender or the recipient. I had a hunch that the issue would be on the recipient side, since on the sender side the message would appear just fine, but the recipient would just see the “typing” animation until it eventually timed out.

So I plugged the recipient device into my Mac and captured the logs right after the device received the problematic message.

Screenshot of the Console app on macOS showing various logs from an iOS device

As you can see, there’s an error being reported by MessagesBlastDoorService:

BlastDoor contained an explosion with error: 
BlastDoor.Explosion(
  domain: "com.apple.BlastDoor.TextMessage.Message", 
  errorType: "XHTMLParseFailure", 
  keyPath: nil
)

Shortly after that, there’s a similar message from imagent, basically replicating the issue reported by the BlastDoor service.

As soon as I saw the XHTMLParseFailure error, I immediately knew what was happening. If you’re more technically inclined, you may have figured it out as well, perhaps even before seeing that error message.

Something else you may have noticed is that when you send an audio message using the Messages app, the message includes a transcription of the audio. If you happen to pronounce the name “Dave and Buster’s” as someone would normally pronounce it, almost like it’s a single word, the transcription engine on iOS will recognize¹ the brand name and correctly write it as “Dave & Buster’s” (with an ampersand).

You see, HTML is a markup language: it uses tags to indicate to a browser or some other program how to interpret the contents of a document. XHTML is a stricter version of HTML that’s based on XML and allows custom tags, as long as the document is compatible with the XML standard.

The ampersand symbol has special meaning in HTML/XHTML. It’s used to indicate special characters that would normally be interpreted as code. For example, a paragraph in HTML is represented with <p>. But what if you want to actually have the less than or greater than sign in your text?

To achieve that, you use something called an HTML entity. For the less than sign <, you’d write &lt; instead. The same is true for the ampersand symbol itself. Because it has special meaning in HTML, if you actually want to display an ampersand symbol, you must write it as &amp;. This practice is often referred to as “escaping”.
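As a concrete illustration, here’s a minimal sketch (my own helper, not Apple’s code) of the escaping a well-behaved producer of XHTML must apply before embedding user text:

```swift
import Foundation

// Minimal sketch of the escaping step the transcription pipeline skipped.
// The ampersand must be handled first, otherwise the entities produced for
// "<" and ">" would themselves get double-escaped.
func escapeXML(_ text: String) -> String {
    text
        .replacingOccurrences(of: "&", with: "&amp;")
        .replacingOccurrences(of: "<", with: "&lt;")
        .replacingOccurrences(of: ">", with: "&gt;")
}

// escapeXML("Dave & Buster's") → "Dave &amp; Buster's"
```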

I dug a bit deeper using a debugger and managed to capture the XHTML code that the recipient device was attempting to parse when it received the “Dave & Buster’s” message, which you can see below (formatting done by me):

<html>
<body>
    <FILE 
    name="Audio Message.caf"
    width="0"
    height="0"
    datasize="7206"
    uti-type="com.apple.coreaudio-format"
    inline-attachment="ia-0"
    audio-transcription="Dave & Buster's"
    />
</body>
</html>

The audio-transcription attribute includes the full transcription from the audio, and it clearly has the unescaped ampersand symbol.

TL;DR

MessagesBlastDoorService uses MBDXMLParserContext (via MBDHTMLToSuperParserContext) to parse XHTML for the audio message. Ampersands have special meaning in XML/HTML and must be escaped, so the correct way to represent the transcription in HTML would have been "Dave &amp; Buster's". Apple's transcription system is not doing that, causing the parser to attempt to detect a special code after the ampersand, and since there's no valid special code nor semicolon terminating what it thinks is an HTML entity, it detects an error and stops parsing the content.

Since BlastDoor was designed to thwart hacking attempts, which frequently rely on faulty data parsing, it immediately stops what it's doing and just fails. That’s what causes the message to get stuck in the “dot dot dot” state, which eventually times out, and the message just disappears.

Is This a Security Vulnerability?

On the surface, this does sound like it could be used to “hack” someone’s iPhone via a bad audio message transcription, but in reality what this bug demonstrates is that Apple’s BlastDoor mechanism is working as designed.

Many bad parsers would probably accept the incorrectly-formatted XHTML, but that sort of leniency when parsing data formats is often what ends up causing security issues.

By being pedantic about the formatting, BlastDoor is protecting the recipient from an exploit that would abuse that type of issue.

¹ In case you’re wondering, the issue will also be triggered by any other recognized brand name that contains an ampersand, such as “M&M’s”. The way speech recognition works on iOS is by first doing a rough pass based on the user’s utterances, then running it through essentially an autocorrect mechanism, which will recognize brand names and correct them to their “official” spelling.
How a Single Line Of Code Could Brick Your iPhone
https://rambo.codes/posts/2025-04-24-how-a-single-line-of-code-could-brick-your-iphone
Sat, 26 Apr 2025 11:00:00 -0300

This is the story of how I found one of my favorite iOS vulnerabilities so far. It’s one of my favorites because of how simple it was to implement an exploit for it. There’s also the fact that it uses a legacy public API that’s still relied upon by many components of Apple’s operating systems, and that many developers have never heard of.

Darwin Notifications

Most iOS developers are likely used to NSNotificationCenter, and most Mac developers are also likely used to NSDistributedNotificationCenter. The former only works within a single process, the latter allows simple notifications to be exchanged between processes, with the option to include a string with additional data to be transmitted alongside the notification.

Darwin notifications are even simpler, as they’re a part of the CoreOS layer. They provide a low-level mechanism for simple message exchange between processes on Apple’s operating systems. Instead of objects or strings, each notification may have a state associated with it, which is a UInt64, and typically is only used to indicate a boolean true or false by specifying 0 or 1.

A simple use case for the API would be for a process that just wants to notify other processes about a given event, in which case it can call the notify_post function, which takes a string that’s usually a reverse DNS value like com.apple.springboard.toggleLockScreen.

Processes interested in receiving such a notification can register by using the notify_register_dispatch function, which will invoke a block on a given queue any time another process posts the notification with the specified name.

A process that wants to post a Darwin notification with a state first has to register a handle for it by calling the notify_register_check function, which takes the name of the notification and a pointer to an Int32. The function returns a token through that pointer, and the token can then be passed to notify_set_state along with a UInt64 value for the state.

Via the same notify_register_check mechanism, a process that wants to get the state of a notification can call notify_get_state to get its current state. This allows Darwin notifications to be used for certain types of events, but also hold some state that any process on the system can query at any given time.
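Putting those pieces together, here’s a minimal Swift sketch of the API; the com.example.* notification names are placeholders, and on Apple platforms the C functions come in via the notify module (<notify.h>):

```swift
import Foundation
import notify

var token: Int32 = 0

// Receiving: any process can register, from within its sandbox, and the
// handler runs on the chosen queue whenever any process posts this name.
notify_register_dispatch("com.example.demo.event", &token, DispatchQueue.main) { _ in
    print("com.example.demo.event was posted")
}

// Sending: equally unprivileged, with no sender verification.
notify_post("com.example.demo.event")

// State: register a check token, then set or query a UInt64 state value.
var stateToken: Int32 = 0
notify_register_check("com.example.demo.state", &stateToken)
notify_set_state(stateToken, 1)

var state: UInt64 = 0
notify_get_state(stateToken, &state) // reads back the most recently set state
```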

The Vulnerability

Any process on Apple’s operating systems — including iOS — can register to be notified about any Darwin notification, from within its sandbox, without the need for special entitlements. This makes sense given that some system frameworks used by third-party apps rely on Darwin notifications for important functionality.

Given that the amount of data transferred through them is very limited, Darwin notifications are not a significant risk for sensitive data leaks, even though the API is public, and sandboxed apps can register for notifications.

However, just as any process on the system can register to receive Darwin notifications, the same is true for sending them.

To summarize, Darwin notifications:

  • Require no special privileges for receiving
  • Require no special privileges for sending
  • Are available as public API
  • Have no mechanism for verifying the sender

Considering these properties, I began to wonder if there were places on iOS using Darwin notifications for powerful operations that could potentially be exploited as a denial-of-service attack from within a sandboxed app.

You’re reading this blog post, so I’ve already spoiled it: the answer was “yes”.

Proof of Concept: EvilNotify

With that question in mind, I grabbed a fresh copy of the iOS root filesystem — one of the early iOS 18 betas at the time, I think — and began looking for processes that used notify_register_dispatch and notify_check.

I quickly found a bunch of them, and made a test app called “EvilNotify” that I could use for testing.

Unfortunately, I no longer have a vulnerable device I could use to record a proper on-device video, but the iOS Simulator demo above shows most of what the proof of concept was able to do. Some of them don’t work in the Simulator, so I couldn’t demo them in the video.

You can see a hint at the end of the video of what the ultimate denial of service was, but let me mention all the other things it was capable of doing. Keep in mind, all of them would affect the entire system, even if the user force-quit the app.

  • Cause the “liquid detection” icon to show up in the status bar
  • Trigger the Display Port connection status to show up in the Dynamic Island
  • Block system-wide gestures for pulling down Control Center, Notification Center, and Lock Screen
  • Force the system to disregard Wi-Fi and use the cellular connection instead
  • Lock the screen
  • Trigger a “data transfer in progress” UI that prevented the device from being used until the user cancelled it
  • Simulate the device entering and leaving Find My’s “Lost Mode”, triggering an Apple ID password dialog prompt to re-enable Apple Pay
  • Trigger device entering a “restore in progress” mode

“Restore in Progress”

Since I was looking for a denial-of-service attack, this last one seemed to be the most promising, as there was no way out of it other than by tapping the “Restart” button, which would always cause the device to reboot.

It was also quite neat, since it consisted of a single line of code:

notify_post("com.apple.MobileSync.BackupAgent.RestoreStarted")

That’s it! That single line of code was enough to make the device enter “Restore in Progress”. The operation would inevitably fail after a timeout since the device was not actually being restored, for which the only remedy was tapping the “Restart” button, which would then reboot the device.

Looking into the binaries, SpringBoard was observing that notification to trigger the UI. The notification is triggered when the device is being restored from a local backup via a connected computer, but as established before, any process could send the notification and trick the system into entering that mode.

Denial of Service: VeryEvilNotify

Now that I had a Darwin notification with the potential of becoming a denial of service, I just had to figure out a way to trigger it repeatedly across device reboots.

At first, this sounded quite tricky, since apps on iOS have very limited opportunities for background processing, and quite a few APIs with side effects are prevented from working when an app is not in the foreground. The latter turned out not to be a problem: I verified that notify_post worked even when the app was not in the foreground.

As for being able to post the notification again and again as the device rebooted multiple times, I wasn’t so sure, but I had a hunch that an app extension would be the most likely to succeed.

Some types of third-party app extensions may run before first unlock on iOS devices, so I decided to try a type of app extension I’m quite familiar with, and created a widget extension, in a new app that I called “VeryEvilNotify”.

Widget extensions are periodically woken up in the background by iOS. They have a limited amount of time for generating snapshots and timelines, which the system then displays in various places, including the Lock Screen, Home Screen, Notification Center, and Control Center.

Because of how widespread the use of widgets is on the system, when a new app that includes a widget extension is installed and launched, the system is very eager to execute its widget extension. That gets an app’s widgets ready for the user to pick and add to the various supported placements.

A widget extension is ultimately just a process that can run code, so I added the aforementioned line of code to my widget extension. I had configured the extension to include every possible type of widget, to make it as likely as possible that iOS would execute it quickly.

There’s a problem though: widget extensions produce placeholders, snapshots, and timelines, which are then cached by the system in order to preserve resources. These extensions are not running in the background all the time, and even if the extension requests very frequent updates, the system will enforce a time budget and delay updates if the extension attempts to request them too frequently.

To circumvent that, I decided to try making my widget extension always crash shortly after running the notify_post function, which I did by calling Swift’s fatalError() function in every extension point method of its TimelineProvider.
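In code, the crashing provider looked something like this sketch (the type names are illustrative, not the exact ones from my PoC):

```swift
import WidgetKit
import SwiftUI

struct CrashEntry: TimelineEntry {
    let date: Date
}

// Every extension point crashes immediately, so the system never receives
// a placeholder, snapshot, or timeline to cache, and keeps re-launching
// the extension instead.
struct CrashingProvider: TimelineProvider {
    func placeholder(in context: Context) -> CrashEntry {
        fatalError()
    }

    func getSnapshot(in context: Context, completion: @escaping (CrashEntry) -> Void) {
        fatalError()
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<CrashEntry>) -> Void) {
        fatalError()
    }
}
```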

The call to notify_post was made as part of the entry point of the extension, before handing off execution to the extension runtime:

import WidgetKit
import SwiftUI
import notify

struct VeryEvilWidgetBundle: WidgetBundle {
    var body: some Widget {
        VeryEvilWidget()
        if #available(iOS 18, *) {
            VeryEvilWidgetControl()
        }
    }
}

/// Override extension entry point to ensure the exploit code is always run whenever
/// our extension gets woken up by the system.
@main
struct VeryEvilWidgetEntryPoint {
    static func main() {
        notify_post("com.apple.MobileSync.BackupAgent.RestoreStarted")

        VeryEvilWidgetBundle.main()
    }
}

With that widget extension in place, as soon as I installed the VeryEvilNotify app on my security research device, the “Restore in Progress” UI was shown, then failed with a prompt to restart the system.

After restarting, as soon as SpringBoard was initialized, the extension would be woken up by the system, since it had failed to produce any widget entries before, which would then start the process all over again.

The result is a device that’s soft-bricked, requiring a device erase and restore from backup. I suspect that if the app ended up in the backup and the device was restored from it, the bug would eventually be triggered again, making it even more effective as a denial of service.

My theory was that iOS would have some sort of retry mechanism when a widget extension crashes, which would obviously have some sort of throttling mechanism. I still think that’s true, but something about the timing of the extension crashing and the restore starting then failing probably prevented such a mechanism from working.

Satisfied with my proof of concept, I reported the issue to Apple.

Timeline

Below is a summarized timeline of events for this vulnerability report. There were additional status updates via automated messages from Apple’s security reports system that I have not included for brevity.

  • June 26, 2024: initial report sent to Apple
  • September 27, 2024: got a message from Apple informing me that mitigation was in progress
  • January 28, 2025: issue flagged as resolved and bounty eligibility confirmed
  • March 11, 2025: bug assigned CVE-2025-24091, addressed in iOS/iPadOS 18.3
  • Bug bounty amount: US$17,500

Even though the CVE has already been assigned and Apple has provided a link where the advisory and credit are supposed to be published, that hasn’t happened yet. I’ve been informed that it will be published soon, but you can read the advisory below in case it hasn’t been published yet by the time this post goes out.

Apple has assigned CVE-2025-24091 to this issue. CVEs are unique IDs used to uniquely identify vulnerabilities. The following describes the impact and description of this issue. Impact - An app may be able to cause a denial-of-service. Description - An app could impersonate system notifications. Sensitive notifications now require restricted entitlements.

Notice how the advisory mentions that “sensitive notifications now require restricted entitlements”, hinting at what the mitigation was. You can read more about that in the following section.

Mitigation

As mentioned by Apple in the advisory, sending sensitive Darwin notifications now requires the sending process to possess restricted entitlements. It’s not a single entitlement that just allows posting any sensitive notification, but a prefix entitlement in the form of com.apple.private.darwin-notification.restrict-post.<notification>.

From what I could gather from a brief look into the disassembly, what causes a notification to be “restricted” is the prefix com.apple.private.restrict-post. in the name of the notification.

For example, the com.apple.MobileBackup.BackupAgent.RestoreStarted notification is now posted as com.apple.private.restrict-post.MobileBackup.BackupAgent.RestoreStarted, which causes notifyd to verify that the posting process has the com.apple.private.darwin-notification.restrict-post.MobileBackup.BackupAgent.RestoreStarted entitlement before it allows the notification to be posted.

Processes observing the notification will also be using its new name with the com.apple.private.restrict-post prefix, thus preventing any random unentitled app or process from posting a notification that can have serious side effects on the system.
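To make the renaming scheme concrete, here's a small helper reconstructing the mapping as I understand it from the disassembly; treat the exact shapes as an educated guess rather than confirmed behavior:

```swift
import Foundation

// Hypothetical helper reconstructing the naming scheme described above.
func restrictedVariants(of name: String) -> (notification: String, entitlement: String) {
    // Drop the "com.apple." prefix before applying the restricted prefixes.
    let suffix = name.replacingOccurrences(of: "com.apple.", with: "")
    return (
        notification: "com.apple.private.restrict-post." + suffix,
        entitlement: "com.apple.private.darwin-notification.restrict-post." + suffix
    )
}

let variants = restrictedVariants(of: "com.apple.MobileBackup.BackupAgent.RestoreStarted")
// variants.notification == "com.apple.private.restrict-post.MobileBackup.BackupAgent.RestoreStarted"
// variants.entitlement  == "com.apple.private.darwin-notification.restrict-post.MobileBackup.BackupAgent.RestoreStarted"
```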

I didn’t have the opportunity to bisect numerous older iOS releases to find the exact version where this mechanism was introduced, but thanks to ipsw-diffs, it appears that the entitlement first showed up in iOS 18.2 build 22C5125e, AKA iOS 18.2 beta 2.

The first adopters were backupd, BackupAgent2, and UserEventAgent, all gaining entitlements related to notifying the system about device restores, mitigating the most egregious exploit presented in my proof of concept.

Throughout the various iOS 18 betas and releases, more and more processes began adopting the new entitlement for restricted notifications, and with the release of iOS 18.3, all issues demonstrated in my PoC were addressed.

macOS Security Bugs Exposed Safari History and Device Location to Unauthorized Apps
https://rambo.codes/posts/2023-04-04-macos-security-bugs-exposed-safari-history-and-device-location-to-unauthorized-apps
Tue, 4 Apr 2023 15:00:00 -0300

If you've been reading my previous posts about security vulnerabilities that I discovered on Apple's operating systems, you've probably noticed a pattern of bugs being caused by improper validation of clients by XPC services. So it's probably not a surprise that the latest CVEs also fall into that category, but the first one that I'm going to talk about has a slight twist.

CVE-2023-23506/28192: Expectation vs. Reality

Broken assumptions are the cause of many security issues found in shipping software. The developers behind a given piece of software expect a certain behavior from the operating system and surrounding environment, and those expectations feed into their decision making. When the expectations are broken, things start to fall apart, and that includes security.

An assumption I’m sure many Mac developers have had for a long time has to do with how XPC services on macOS are isolated to a given namespace.

There are basically three types of XPC service on macOS:

  • Global privileged service: a global mach service that runs with a higher level of privilege than the logged in user account, usually running with root privileges; accessible from any process that can look up mach services
  • Global service: a global mach service that runs with the same (or lower) privilege than the logged in user account; accessible from any process that can look up mach services
  • Local service: a service that’s constrained to a given context, usually an app bundle; accessible from processes spawned from within the same context, such as the app’s main executable, in the case of an app

Note: this is not an official categorization by Apple; it's just how I like to view them.

Let’s focus on the local service type. That's the type of XPC service you usually find bundled within an application, in the Contents/XPCServices directory. It may also be embedded in other types of bundles, such as frameworks, where executables linked against that framework will be able to access its bundled XPC service.

Local XPC services don’t tend to have a lot of privilege, they’re usually responsible for small tasks that must be isolated from the app or some other client. However, some local XPC services Apple ships with their operating systems are used for tasks that can be considered privileged, because they include entitlements that grant access to sensitive data regular apps can’t access.

That is also the case for many global mach services in Apple’s OSes, but those tend to have very strong and effective authentication for connecting clients, preventing them from being used by malicious processes trying to exfiltrate user data.

However, when it comes to these local XPC services, the assumption that their scope is limited — both in terms of functionality and in terms of which processes can even look them up to initiate a connection in the first place — means that not all local XPC services on macOS have strong authentication for clients.

So I started to wonder: what would happen if that assumption were to be broken? 🤔

It didn’t take me long to find out that it could in fact be broken, and using a feature that’s played a part in many security vulnerabilities in the past: symlinks.

When creating a connection to a bundled XPC service in the Contents/XPCServices directory, an app will use the initWithServiceName: initializer on NSXPCConnection. The system will then look up a matching service bundle within the client's own bundle, spawning the XPC service process and completing the connection. If a corresponding XPC service bundle can't be found in the namespace, the connection will be invalidated.
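In Swift, that connection setup looks roughly like this (the service name and protocol are hypothetical):

```swift
import Foundation

// Protocol vended by the bundled XPC service (hypothetical).
@objc protocol HelperProtocol {
    func doWork(reply: @escaping (String) -> Void)
}

// Looks up a service bundle named "com.example.MyApp.Helper" inside the
// app's own Contents/XPCServices directory, spawning it if needed.
let connection = NSXPCConnection(serviceName: "com.example.MyApp.Helper")
connection.remoteObjectInterface = NSXPCInterface(with: HelperProtocol.self)
connection.resume()

if let proxy = connection.remoteObjectProxy as? HelperProtocol {
    proxy.doWork { result in
        print(result)
    }
}
```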

But if I want to connect to some local XPC service that’s not a part of my app bundle, but is installed on the macOS filesystem somewhere, what can I do?

Well, it turns out you could just symlink another bundle's Contents/XPCServices directory into your own app's Contents/XPCServices, and launchd would happily follow that symlink, allowing your app to look up and connect to a local XPC service embedded in a completely unrelated bundle.

After figuring this out, I wanted to find a fun practical example on macOS where a local XPC service was not doing proper validation of connecting clients. I found quite a few, but most of them didn't have more privilege than any other regular app. Then I actually made a little tool that would scan my macOS installation for local XPC bundles, collecting the list of entitlements for each one in a searchable fashion, so that I could focus my testing on services that included interesting capabilities.

I quickly found one.

TimeZoneService.xpc

That service grabbed my attention because of its name, and because it included the com.apple.locationd.effective_bundle entitlement. That gives a process the ability to access the device's location via CoreLocation without having the access attributed to the process itself, but to a separate bundle that's only used for that purpose.

This service is embedded in /System/Library/PreferencePanes/DateAndTime.prefPane/Contents/Resources/TimeZone.prefPane, which is responsible for the time zone features in System Settings > General > Date & Time.

One of the things this service handles is the "Set timezone automatically using your current location" option. When enabled, the preference pane uses the bundled XPC service in order to obtain the current device location. Because the location request goes through TimeZoneService and it has the effective bundle entitlement, what the location icon in the Menu Bar shows is just "Setting Time Zone".

That is, if the option to show that icon is enabled at all. By default, macOS won’t show the location icon when a “system service” is accessing the device’s location, so a malicious process that exploited TimeZoneService would likely go completely undetected by the user.

For my proof of concept, all I had to do was symlink /System/Library/PreferencePanes/DateAndTime.prefPane/Contents/Resources/TimeZone.prefPane/Contents/XPCServices into my app’s Contents/XPCServices, which I did as a run script build phase in Xcode. I then implemented a simple client that would open the XPC connection and call the location-gathering method exposed by TimeZoneService in its interface. The result was a sandboxed app that could access the device's location without the user's permission or knowledge. You can see it in action in the video below:

Timeline:

  • November 1, 2022: initial report sent to Apple
  • January 23, 2023: macOS 13.2 released with a fix in libxpc (CVE-2023-23506)
  • March 27, 2023: macOS 13.3 released with a fix in TimeZoneService (CVE-2023-28192)
  • The fix was also deployed on macOS 12.6.4 and macOS 11.7.5
  • Bug Bounty amount: US$23,500

📣 Audit Your XPC Bundles

Apple fixed the issue by preventing this symlink trick from working. It'll still work when the app is launched from Xcode, but not on regular user app launches. TimeZoneService also received an update and will now validate connections against an entitlement, mitigating the issue completely.

However, it’s possible that the underlying trick of using a symlink to access another bundle’s local XPC services was not the only way to achieve this. What this means is that developers of Mac apps that bundle local XPC services should audit their apps to make sure that malicious processes can’t take advantage of this trick to access sensitive user data or compromise app functionality.

The best way to do it is to get the audit token for the process on the other end of the connection then validate it against certain criteria, such as app identifier or development team ID. For apps targeting macOS Ventura and later, there’s new API to facilitate that.
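On the service side, a sketch of that validation using the macOS 13 code-signing requirement API might look like this (the bundle identifier and team ID are placeholders):

```swift
import Foundation

@objc protocol HelperProtocol {
    func doWork(reply: @escaping (String) -> Void)
}

final class Helper: NSObject, HelperProtocol {
    func doWork(reply: @escaping (String) -> Void) { reply("ok") }
}

final class ServiceDelegate: NSObject, NSXPCListenerDelegate {
    func listener(_ listener: NSXPCListener,
                  shouldAcceptNewConnection connection: NSXPCConnection) -> Bool {
        // macOS 13+: reject any client whose code signature doesn't satisfy
        // this requirement before any message is delivered.
        if #available(macOS 13.0, *) {
            connection.setCodeSigningRequirement(
                "anchor apple generic and identifier \"com.example.MyApp\" " +
                "and certificate leaf[subject.OU] = \"TEAMID1234\""
            )
        }
        connection.exportedInterface = NSXPCInterface(with: HelperProtocol.self)
        connection.exportedObject = Helper()
        connection.resume()
        return true
    }
}
```

On older systems, the equivalent is to extract the connection's audit token and validate the client's code signature against it manually before accepting the connection.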

CVE-2023-23510: Safari History Access

This vulnerability is a more “classic” one where a global system service is not validating connecting clients appropriately, exposing private user data.

The browsing history database that's created and maintained by Safari can contain very sensitive data about a user's browsing habits, including search queries, sensitive private URLs, and URLs to file sharing services containing confidential information that's secured by the obscurity of the URL itself.

Because of that, the directory where Safari stores its history database is one of the locations that System Integrity Protection (SIP) guards, preventing any random process on the system from accessing the database.

Directories with that protection are considered so sensitive that not even root can read the contents of the directory where Safari stores the user's browsing history. If you have SIP enabled on your Mac (you should), you can verify this by running sudo ls ~/Library/Safari in Terminal:

Screenshot of a Terminal window on macOS showing the invocation sudo ls ~/Library/Safari with an operation not permitted error as the result

As you can see, a simple attempt to list the contents of Safari's directory in the user's Library fails with "Operation not permitted".

But of course Safari itself needs to access that directory, so how is that achieved?

If your answer was “a private entitlement”, you are correct!

Safari uses the com.apple.Safari.History agent, which has the com.apple.private.security.storage.Safari entitlement granting it access to the Safari folder in the user's Library. That agent vends an XPC interface that Safari can use to access and manipulate the user's browsing history.

You can probably see where this is going…

Safari’s history agent was not validating client processes that connected to it, which meant that any process running on the system could access the user’s Safari browsing history.

Here’s the proof of concept in action:

This issue also affected Safari Technology Preview, which has its own com.apple.Safari.History agent. Interestingly, the version of the agent that ships with Safari Technology Preview is a local XPC bundle, not a global mach service like the one used by regular Safari, so that one required the use of CVE-2023-23506 in order to be exploited.

Timeline:

  • November 7, 2022: initial report sent to Apple
  • January 23, 2023: macOS 13.2 released with a fix (CVE-2023-23510)
  • Bug Bounty amount: US$12,000
SiriSpy - iOS bug allowed apps to eavesdrop on your conversations with Siri
https://rambo.codes/posts/2022-10-25-sirispy-ios-bug-allowed-apps-to-eavesdrop
Wed, 26 Oct 2022 13:00:00 -0300

TL;DR: Any app with access to Bluetooth could record your conversations with Siri and audio from the iOS keyboard dictation feature when using AirPods or Beats headsets. This would happen without the app requesting microphone access permission and without the app leaving any trace that it was listening to the microphone.

Access to Sensitive Data on Apple's Platforms

One of the biggest myths when it comes to security and privacy on mobile devices is the old saying that Facebook is using your device's microphone to listen to everything you say, in order to sell more targeted ads. There's never been any evidence of that, and iPhones have very strong security measures in place to prevent such a thing.

This section might be too basic for folks who are already familiar with how this stuff works under the hood, feel free to skip it.

The system that protects you from unfettered access to your sensitive data on Apple's operating systems is TCC (Transparency, Consent, and Control), which is directly responsible for most of the permission prompts you see when an app asks to access your location, calendar, microphone, camera, etc.

Access to system resources is mediated with the use of daemons, which are system processes that run in the background, many times with elevated privileges when compared to regular apps. Apps can then request information from those daemons, effectively opening a little door from their sandbox to the outside world.

Those doors are usually very tightly controlled on Apple's platforms with the use of code signing and entitlements. Out of the box, modern Apple devices will only run apps with a code signature that's been approved by Apple. You can think of a code signature of an app as the equivalent of a government-issued ID, where the government is Apple. Entitlements are like licenses, little bits of information that have also been verified by Apple and can give apps access to system resources that are normally not accessible.

All of these protections can be quite effective. However, their effectiveness relies heavily on how well Apple's engineers have implemented them in the system daemons, and sometimes unforeseen workarounds can result in a situation where the door has been very well shut and secured, but the window has been left wide open.

AirPods and Siri

Since the introduction of the H1 chip with the AirPods (2nd generation), users can trigger "Hey, Siri" with AirPods, and talk to the assistant without much effort and then receive a reply in the form of "here's what I found on the web...". One thing you may or may not have noticed if you've used Siri with modern AirPods is that there's no disruption to audio quality when you're talking to Siri, even though you're using the microphone in the AirPods to do so. This is very different from when you're using it for a video conference, for example, where you'll notice a significant drop in the output audio quality.

I always wondered why that was the case. The drop in output quality when using the microphone is a physical limitation of the Bluetooth standards used by AirPods and other similar headsets, so how talking to Siri had been implemented on AirPods without disrupting audio quality had always been a bit of a mystery to me. I never put much thought or effort into figuring it out, though.

As part of my work developing AirBuddy, I'm constantly testing various aspects of AirPods and other Apple and Beats headsets in order to develop new features, troubleshoot issues, or just learn more about how these devices work under the hood.

I'm a fan of creating tools that make my job easier, so a while back I wrote a little command-line tool that I call bleutil, which can be used to interact with Bluetooth Low Energy devices on macOS. I use it all the time to debug what's going on with my AirPods by looking at the advertisement packets they're sending out.

Screenshot of a Terminal window on macOS showing the invocation bleutil scan --mfg-prefix 4c0012 and a long list of timestamps, UUIDs, RSSI levels, MAC addresses and hex bytes

While working on a new feature for this tool, which can be used to connect to a Bluetooth LE device and query its GATT¹ database, I decided to add the ability to subscribe to notifications to a service's characteristics using this tool, which would then stream a hexadecimal representation of the values over time to the Terminal window.

¹ If you're not familiar with Bluetooth Low Energy terminology, GATT stands for "Generic Attribute Profile". It's a standard adopted by Bluetooth LE devices that allows them to send data back and forth using services and characteristics. You can think of services as folders on a file system, where each service can have a bunch of characteristics within it, which are like files.

I had never looked into the services and characteristics present on AirPods and similar devices because most of the information I use to power AirBuddy's features comes from advertisements or Bluetooth Classic, which don't require me to connect to the devices over Bluetooth LE and interact with the GATT database.

Naturally, while testing this new feature I was working on, I was wearing my AirPods. I noticed that the AirPods included a service with the UUID 9bd708d7-64c7-4e9f-9ded-f6b6c4551967, and with characteristics that supported notifications². I ran my tool against my AirPods and left it running for a while, but no events came through.

² In Bluetooth LE GATT, when a characteristic supports notifications (or indications), it means that other devices can subscribe to be notified when the data stored by that characteristic changes, without having to be constantly asking (polling) for the current data. It's essential for real-time communication between devices.
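With CoreBluetooth, subscribing to notifications on that service's characteristics looks roughly like this sketch (it assumes the peripheral is already connected and its services discovered; the delegate name is made up):

```swift
import CoreBluetooth

// The service UUID observed on the AirPods.
let serviceUUID = CBUUID(string: "9BD708D7-64C7-4E9F-9DED-F6B6C4551967")

final class GATTListener: NSObject, CBPeripheralDelegate {
    func peripheral(_ peripheral: CBPeripheral,
                    didDiscoverCharacteristicsFor service: CBService,
                    error: Error?) {
        guard service.uuid == serviceUUID else { return }
        for characteristic in service.characteristics ?? []
            where characteristic.properties.contains(.notify) {
            // Ask the peripheral to push value changes to us.
            peripheral.setNotifyValue(true, for: characteristic)
        }
    }

    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        // Dump incoming values as hex, the way bleutil streams them.
        if let value = characteristic.value {
            print(value.map { String(format: "%02x", $0) }.joined())
        }
    }
}
```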

Digging a bit into it, I learned that 9bd708d7-64c7-4e9f-9ded-f6b6c4551967 is the DoAP service, a service used for Siri and Dictation support.

I decided to test it again. This time, while my tool was running and waiting for events to come from the AirPods, I invoked Siri while wearing them. As soon as I did that, a firehose of hex bytes started to stream down my Terminal window. Not only that, but as I spoke to Siri through my AirPods, I noticed that the bytes would change rapidly, and would settle down as I went silent again. Could it be that I was looking at audio data? 😨

You can watch a reproduction in the video below:

As it turns out, I was in fact looking at audio data coming from the AirPods. My first thought was "oh, so that's how they do it, this is cool". My second thought was "oh, no!".

I always have mixed feelings when I discover something like this: a mix of excitement for having found a cool new thing to investigate and learn from, and disappointment/concern that this issue has been there in the wild, sometimes for years.

Finding out that I could get audio from AirPods without asking for permission to use the microphone on macOS was the first step.

The second step was checking Apple's other platforms to see if they were also affected. So I wrote a little app that I could run on iPhone, iPad, Apple Watch, and the Apple TV, then tried it out on devices running both the shipping version of iOS 15 and the latest iOS 16 beta at the time (this happened in late August).

The third step was figuring out what the audio data was. I was definitely seeing a bunch of bytes coming in, but who knows, maybe they were encrypted or something. The seemingly direct correlation between me speaking to Siri and the bytes changing had already made me think they weren't, but I had to confirm it.

Decoding DoAP Audio

I know a little bit about how digital audio works, but it's definitely an area I've had very little experience in throughout my career in software development, limiting myself to using high level APIs such as Apple's AVFoundation whenever I have to deal with audio or video.

The first thing I tried was to grab the hex dump from my Terminal window, paste it into HexFiend, then use the "open raw data" option in tools such as Audacity and Adobe Audition, trying various combinations of sample rate, bit depth, endianness, etc.

I did notice with some combinations of parameters that the garbled mess I was hearing did vaguely match the loudness of what I had said during the recording, which again told me the data was likely unencrypted.

In hindsight, I should've realized that it wouldn't make any sense for the audio being sent over Bluetooth LE to be uncompressed, given the bandwidth constraints of the technology. Now all I had to do was figure out which codec was being used, then I'd be able to decode the audio and play it back.

After looking through some of the system components responsible for this feature, I noticed that Opus was referenced quite a bit. Looking at the website for the Opus codec:

Opus is unmatched for interactive speech [...]

Well, sounds a lot like the sort of thing you'd use for talking to digital assistants.

So I compiled the Opus library for all of Apple's platforms, then wrote a little app that would connect to the AirPods and keep the connection open in the background, listening to notifications and audio data.

It sounds simple, but the paragraph above comprises several hours of work – almost a full day – after which I had this:

Here's a summary of what the app does:

  • Asks for Bluetooth permission³
  • Finds a connected Bluetooth LE device that has the DoAP service
  • Subscribes to its characteristics to be notified of when streaming starts and stops, and when audio data comes in
  • When streaming starts, creates a new wav file, then feeds the Opus packets coming from the AirPods into a decoder, which then writes the uncompressed audio to the file
  • Once streaming stops, closes the wav file, then sends a local push notification to demonstrate that the app has successfully recorded the user in the background
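The decoding step in that pipeline can be sketched as follows. This assumes libopus has been compiled for the platform and bridged into Swift as described above, and the stream parameters (sample rate, channel count) shown here are my guesses rather than confirmed values:

```swift
import Foundation

var opusError: Int32 = 0

// 16 kHz mono is a typical configuration for speech; the real stream's
// parameters had to be determined experimentally.
let decoder = opus_decoder_create(16000, 1, &opusError)

/// Decodes a single Opus packet received from the DoAP characteristic
/// into 16-bit PCM samples ready to be written to a wav file.
func decode(_ packet: Data) -> [Int16] {
    let maxFrameSize: Int32 = 5760 // maximum Opus frame size at 48 kHz
    var pcm = [Int16](repeating: 0, count: Int(maxFrameSize))
    let decodedSamples = packet.withUnsafeBytes { buffer -> Int32 in
        opus_decode(decoder,
                    buffer.bindMemory(to: UInt8.self).baseAddress,
                    Int32(packet.count),
                    &pcm,
                    maxFrameSize,
                    0) // no forward error correction
    }
    return decodedSamples > 0 ? Array(pcm[0..<Int(decodedSamples)]) : []
}
```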

In a real-world exploit scenario, an app that already has Bluetooth permission for some other reason could be doing this without any indication to the user that it's going on, because there's no request to access the microphone, and the indication in Control Center only lists "Siri & Dictation", not the app that was bypassing the microphone permission by talking directly to the AirPods over Bluetooth LE.

³ Yes, even though this exploit bypasses the microphone permission, it still needs access to Bluetooth, so that permission is not bypassed. However, most users would not expect that giving an app access to Bluetooth could also give it access to their conversations with Siri and audio from dictation. And, as you'll see in the following paragraphs, I was also able to find a way around the Bluetooth permission on macOS.

Full TCC Bypass on macOS

In the course of figuring out how things work for my report on the vulnerability described above, I had to investigate how Apple's operating systems communicate with the AirPods, which led me to discover another issue.

The system process responsible for handling the DoAP protocol on Apple's platforms is BTLEServerAgent (or BTLEServer, depending on the platform). This agent or daemon provides an interface over the mach service com.apple.BTLEAudioController.xpc, which other processes on the system can use to request audio from the AirPods DoAP service.

There are hundreds (if not thousands) of mach services exposed by system agents and daemons on Apple's operating systems, but sandboxing restrictions and entitlement requirements prevent most apps from talking to them.

For services that are exposed to third-party apps, system daemons usually check for a specific entitlement before allowing an app to send requests to them, or put up a TCC prompt on the app's behalf, only allowing the communication to go through once the user has approved it.

You can probably see where this is going: BTLEServerAgent did not have any entitlement checks or TCC prompts in place for its com.apple.BTLEAudioController.xpc service, so any process on the system could connect to it, send requests, and receive audio frames from AirPods. This exploit would only work on macOS, because the more restricted sandbox of iOS prevents apps from accessing most global mach services directly.

So at least on macOS, apps would be able to record your conversations with Siri or dictation audio without any permission prompts at all. Even worse, this particular exploit would also allow the app to request DoAP audio on-demand, bypassing the need to wait for the user to talk to Siri or use dictation.

Here's a demo of this in action:

Once again, these issues show that no matter how private and secure Apple's products and software may be, there's always more work to be done.

Timeline

  • August 26, 2022: I discovered the issues and reported them to Apple’s security team
  • August 29, 2022: I got a reply confirming that they were investigating
  • October 24, 2022: iOS 16.1 and remaining Apple operating systems updated with the fix (CVE-2022-32946)
  • October 25, 2022: after reaching out, I was told I'd be receiving a US$7000 bug bounty payment for reporting these issues (see update below)

November 9, 2022 - Update: The original version of this article mentioned a bug bounty payment of US$7000. However, this was due to an issue with the way Apple's security team had communicated about the bounty. They broke the two vulnerabilities down into separate CVEs, one of which was awarded a bounty of US$7000, while the other was awarded US$22,500, bringing the total bounty payment for the bugs described in this report to US$29,500. Apple's security team apologized for the confusion, and has since released a new web platform for bug submissions, which should make this a lot better going forward.

Update: Mitigations

When I first published this writeup, I didn't include details about the mitigations Apple has put in place for these issues because, to be honest, they're not that interesting. Since a few folks have asked for details, here they are.

The main issue – direct access to AirPods DoAP over BLE GATT – was addressed by restricting access to the service. Even though AirPods, iPhones, Macs, etc. are standard Bluetooth devices, Apple has a system in place to limit which services third-party apps can access, so they just added the DoAP service to that deny list.

For the second issue – talking to BTLEServerAgent on macOS – the system agent now correctly checks that the calling process has the com.apple.bluetooth.system entitlement before allowing communication to continue. This is the same entitlement that also opens up access to those "forbidden" GATT services.

Now, if an app attempts to talk to the agent without the appropriate entitlement, it closes the connection, then logs a passive-aggressive message to the console:

Not an entitled process. Good bye.
Creating custom extension points for Mac apps with ExtensionKit
https://rambo.codes/posts/2022-06-27-creating-custom-extension-points-with-extensionkit
Mon, 27 Jun 2022 17:00:00 -0300

This year's WWDC introduced many new APIs, two of which caught my attention: ExtensionFoundation and ExtensionKit.

Screenshot of the TextTransformer sample app

We've been able to develop extensions for Apple's apps and operating systems for a while, but Apple never offered a native way for third-party apps to provide custom extension points that other apps can take advantage of.

With ExtensionFoundation and ExtensionKit on macOS, now we can.

However, Apple's documentation lacks crucial information on how to use these new APIs (FB10140097), and there were no WWDC sessions or sample code available in the weeks following the keynote.

Thanks to some trial and error, and some help from other developers, I was able to put together some sample code demonstrating how one can use ExtensionFoundation/ExtensionKit to define custom extension points for their Mac apps.

This post is basically a replica of the readme that I wrote for the project. It's not a tutorial, but it's a decent guide for folks looking for how to define custom extension points for their Mac apps.

I recommend browsing through the sample code while reading to get a more complete picture of how things are implemented.

What the ExtensionFoundation and ExtensionKit frameworks provide

First of all, it's important to set clear expectations. ExtensionFoundation and ExtensionKit provide the underlying discovery, management, and declaration mechanism for your extensions. They do not provide the actual communication protocol that your app will be using to talk to its extensions.

What you get is a communication channel (via NSXPCConnection) that you can then use to send messages back and forth between your app and its extensions. If you're already used to XPC on macOS, then you're going to find it familiar. It's very similar to talking to a custom XPC service, agent, or daemon.

Declaring a custom extension point

The main thing that's not explained in Apple's documentation at the time of writing is how apps are supposed to declare their own extension points to the system.

Extension points are identifiers (usually in reverse-DNS format) that apps providing extension points expose to apps that want to create extensions for those extension points.

In order to declare a custom ExtensionKit extension point for your Mac app, you have to include an .appextensionpoint file (or multiple files, one per extension point) in your app's bundle, under the Extensions folder.

This sample app has a codes.rambo.experiment.TextTransformer.extension.appextensionpoint file with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>codes.rambo.experiment.TextTransformer.extension</key>
    <dict>
        <key>EXPresentsUserInterface</key>
        <false/>
    </dict>
</dict>
</plist>

It's a simple property list file listing the app's extension point identifier (codes.rambo.experiment.TextTransformer.extension) and, for this particular extension point, that extensions of this type do not present any user interface (EXPresentsUserInterface set to false).

The host app target is then configured with an additional Copy Files build phase in Xcode that copies the .appextensionpoint file into the ExtensionKit Extensions destination.
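Since the declaration is just a property list, the same structure can also be produced programmatically with PropertyListSerialization – a quick sketch of my own, just to make the shape of the file explicit (the sample project simply checks the file in rather than generating it):

```swift
import Foundation

// Mirrors the .appextensionpoint structure shown above: the extension point
// identifier maps to a dictionary of attributes for that extension point.
let extensionPoint: [String: Any] = [
    "codes.rambo.experiment.TextTransformer.extension": [
        "EXPresentsUserInterface": false
    ]
]

// Serialize as an XML property list, matching the file's on-disk format.
let plistData = try PropertyListSerialization.data(
    fromPropertyList: extensionPoint,
    format: .xml,
    options: 0
)
let xml = String(data: plistData, encoding: .utf8)!
print(xml)
```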

Implementing an extension for a custom extension point

This sample code also includes an ExtensionProvider app, which demonstrates how apps can implement extensions for custom extension points.

The ExtensionProvider target itself doesn't do much; it only serves as a parent for the Uppercase target, an extension that implements the custom extension point above.

To create a target for a custom extension point, you can pick the "Generic Extension" template from Xcode's new target dialog.

Generic Extension target template screenshot

The Uppercase target declares support for the custom extension point in its Info.plist by setting the corresponding identifier for the EXAppExtensionAttributes.EXExtensionPointIdentifier property:

<key>EXExtensionPointIdentifier</key>
<string>codes.rambo.experiment.TextTransformer.extension</string>

Defining the API for extensions

Apps that wish to provide custom extension points that other developers can write extensions for are likely going to be providing some sort of library or SDK that clients can use.

This sample code emulates this in the form of TextTransformerSDK, a Swift package that defines a TextTransformExtension protocol, which looks like this:

/// Protocol implemented by text transform extensions.
///
/// You create a struct conforming to this protocol and implement the ``transform(_:)`` method
/// in order to perform the custom text transformation that your extension provides to the app.
public protocol TextTransformExtension: AppExtension {
    
    /// Transform the input string according to your extension's behavior
    /// - Parameter input: The text entered by the user in TextTransformer.
    /// - Returns: The output that should be shown to the user, or `nil` if the transformation failed.
    func transform(_ input: String) async -> String?
    
}

Apps that wish to provide extensions can then implement a type that conforms to the given extension protocol, like the Uppercase extension does:

import TextTransformerSDK

/// Sample extension that transforms the input into its uppercase representation.
@main
struct Uppercase: TextTransformExtension {
    typealias Configuration = TextTransformExtensionConfiguration<Uppercase>
    
    var configuration: Configuration { Configuration(self) }
    
    func transform(_ input: String) async -> String? {
        input.uppercased()
    }
    
    init() { }
}

Enabling extensions

Even with all of the above correctly set up, if you try to use the API to find extensions for your custom extension point, you're likely going to get zero results.

That's because every new extension for your extension point is disabled by default, and the system requires user interaction in order to enable the use of a newly discovered extension within your app.

In order to present the user with a UI that will let them enable/disable extensions for your app's custom extension point, you can use EXAppExtensionBrowserViewController, which this sample app presents on first launch if it detects that no extensions are enabled, or when the user clicks the "Manage Extensions" button.

UI to manage extensions for TextTransformer

After extensions are enabled, they will then be returned in the async sequence you can subscribe to with AppExtensionIdentity.matching.

Communication between the app and its extensions

This is a bonus section, since the communication between an app and its extensions is implemented through XPC, which is outside the scope of this sample app.

The way I chose to implement the simple protocol used by TextTransformer was to define a TextTransformerXPCProtocol that gets exposed over the NSXPCConnection.

Here's the protocol itself:

@_spi(TextTransformerSPI)
@objc public protocol TextTransformerXPCProtocol: NSObjectProtocol {
    func transform(input: String, reply: @escaping (String?) -> Void)
}

Note that even though the protocol is declared as public, I don't want clients of my fictional SDK to have to worry about its existence, since the entire XPC communication is abstracted away. However, I need to be able to expose this protocol to the TextTransformer app itself, hence the use of @_spi, which has a scary underscore in front of it, but is the perfect solution for this need (exposing a piece of API only to a specific client that "knows" about it).

I then implemented a TextTransformerExtensionXPCServer class that is used as the exportedObject for the NSXPCConnection from the extension side:

@objc final class TextTransformerExtensionXPCServer: NSObject, TextTransformerXPCProtocol {
    
    let implementation: any TextTransformExtension
    
    init(with implementation: some TextTransformExtension) {
        self.implementation = implementation
    }
    
    func transform(input: String, reply: @escaping (String?) -> Void) {
        Task {
            let result = await implementation.transform(input)
            await MainActor.run { reply(result) }
        }
    }
    
}

The glue is implemented in TextTransformExtensionConfiguration, which is the configuration type associated with the TextTransformExtension protocol.

From the point of view of an extension implementing the TextTransformExtension protocol, all they have to do is return an instance of TextTransformExtensionConfiguration for the configuration property, as seen in the Uppercase implementation above.

extension TextTransformExtensionConfiguration {
    /// You don't call this method, it is implemented by TextTransformerSDK and used internally by ExtensionKit.
    public func accept(connection: NSXPCConnection) -> Bool {
        connection.exportedInterface = NSXPCInterface(with: TextTransformerXPCProtocol.self)
        connection.exportedObject = server
        
        connection.resume()
        
        return true
    }
}

The counterpart for this is TextTransformerExtensionXPCClient, which is implemented in the TextTransformer app target itself, since it's not something that extension implementers have to use.

When a request is made to perform a transformation to the text, TextTransformExtensionHost creates a new AppExtensionProcess from the extension that the user has selected in the UI.

From that process, it then instantiates a TextTransformerExtensionXPCClient, which grabs a new NSXPCConnection handle from the AppExtensionProcess and configures it to use the TextTransformerXPCProtocol:

// TextTransformerExtensionXPCClient.swift

public func runOperation(with input: String) async throws -> String {
    // ...
    
    let connection = try process.makeXPCConnection()
    connection.remoteObjectInterface = NSXPCInterface(with: TextTransformerXPCProtocol.self)
    
    connection.resume()
    
    // ...
}

The rest is pretty much just grabbing the remote object proxy (an instance of something that implements TextTransformerXPCProtocol) and calling the transform method, which will cause the method to be called on the instance of TextTransformerExtensionXPCServer that's running in the app extension, which in turn calls the transform method defined by the TextTransformExtension protocol.

If you're not familiar with XPC, this may all seem really alien, but it's not as complicated as it sounds.
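In fact, the whole round trip can be emulated in plain Swift without any XPC at all. In this sketch, hypothetical stand-in types (Server, TextTransform) play the roles of TextTransformerExtensionXPCServer and the remote object proxy, showing how the reply-based protocol bridges to the extension's async transform method:

```swift
import Foundation

// Stand-in for the SDK's extension protocol (hypothetical, for illustration).
protocol TextTransform {
    func transform(_ input: String) async -> String?
}

struct Uppercase: TextTransform {
    func transform(_ input: String) async -> String? { input.uppercased() }
}

// Mirrors the XPC server's role: bridges the completion-handler style used
// over the connection to the async method the extension implements.
final class Server {
    let implementation: any TextTransform
    init(_ implementation: any TextTransform) { self.implementation = implementation }

    func transform(input: String, reply: @escaping (String?) -> Void) {
        Task { reply(await self.implementation.transform(input)) }
    }
}

// The "client" side: calls the reply-based API and waits for the result,
// the way the remote object proxy would over a real NSXPCConnection.
let server = Server(Uppercase())
let semaphore = DispatchSemaphore(value: 0)
var result: String?
server.transform(input: "hello") { output in
    result = output
    semaphore.signal()
}
semaphore.wait()
print(result ?? "nil") // prints "HELLO"
```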

UI extensions

With ExtensionKit, Mac apps can also define extension points that support extensions presenting their own user interface.

In this sample project, I've added another extension point, called codes.rambo.experiment.TextTransformer.uiextension, by following the same approach of adding the .appextensionpoint file, this time setting the EXPresentsUserInterface property to true:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>codes.rambo.experiment.TextTransformer.uiextension</key>
    <dict>
        <key>EXPresentsUserInterface</key>
        <true/>
    </dict>
</dict>
</plist>

For this example, I decided to use the UI extension point to give extensions the ability to provide configuration options for the text transformations that they perform.

I've included another sample extension in the ExtensionProvider app called Shuffle, which shuffles the input string, randomizing it. This Shuffle extension offers an option to also uppercase the shuffled string, and this option can be toggled in a configuration UI provided by the extension:

Popover for configuring the Shuffle extension

The thing to keep in mind here is that the toggle in that popover is not created by the TextTransformer app. What you're seeing is a "portal" into the Shuffle app extension itself, which creates that view, controls what happens to it, and responds to its events.

The TextTransformer SDK provides a new protocol that extensions can conform to if they wish to provide a custom configuration UI:

/// Protocol implemented by text transform extensions that also provide a view for configuration options.
///
/// You create a struct conforming to this protocol and implement the ``transform(_:)`` method
/// in order to perform the custom text transformation that your extension provides to the app, just like the non-ui variant (``TextTransformExtension``).
///
/// Extensions also implement the ``body`` property, providing a scene with the user interface to configure
/// settings specific to the functionality of this extension.
public protocol TextTransformUIExtension: TextTransformExtension where Configuration == AppExtensionSceneConfiguration {
    
    associatedtype Body: TextTransformUIExtensionScene
    var body: Body { get }
    
}

Notice that TextTransformUIExtension inherits from TextTransformExtension, since the UI extension will be both providing the configuration UI, as well as performing the text transformation itself. This was just how I decided to do it for this sample project, but you may want to design the API differently depending on your extension point's needs. For example, I could have named this other extension point something like "TextTransformConfigurationExtension", which would be used to configure an extension's options, but wouldn't actually be providing any text transformation functionality.

Implementing the Shuffle extension now looks like this:

import SwiftUI
import TextTransformerSDK

/// Sample extension that shuffles the input string and provides an "options" scene
/// with UI to toggle between also uppercasing the string when doing the shuffle.
@main
struct Shuffle: TextTransformUIExtension {
    init() { }
    
    var body: some TextTransformUIExtensionScene {
        TextTransformUIExtensionOptionsScene {
            ShuffleOptions()
        }
    }
    
    func transform(_ input: String) async -> String? {
        // ...
    }
}

struct ShuffleOptions: View {
    @AppStorage(Shuffle.uppercaseEnabledKey)
    private var uppercaseEnabled = false
    
    var body: some View {
        Toggle("Also Uppercase", isOn: $uppercaseEnabled)
    }
}

Thanks to @main and the fact that the base AppExtension protocol implements a static func main() for us, this looks pretty much like a standard SwiftUI app entry point.

The implementation of TextTransformUIExtensionOptionsScene can be seen below. It basically wraps the view provided by the extension's body property in a PrimitiveAppExtensionScene, which comes from ExtensionKit. It uses a custom wrapper view type that ensures the contents are contained within a Form using the .grouped style, and enforces a minimum frame size and padding, which is one way to keep things consistent between extensions in a real app.

/// Protocol implemented by scenes that can be used in `TextTransformUIExtension.body`.
public protocol TextTransformUIExtensionScene: AppExtensionScene {}

/// A concrete implementation of `TextTransformUIExtensionScene` that provides a form where the user can configure options for a given extension.
/// You return an instance of this scene type from the `TextTransformUIExtension.body` property.
///
/// The content of the scene is where you create your user interface, using SwiftUI.
/// Do not use any property wrappers that invalidate the view hierarchy directly in your extension, wrap your UI in a custom view type
/// and add any property wrappers to the view.
public struct TextTransformUIExtensionOptionsScene<Content>: TextTransformUIExtensionScene where Content: View {
    
    public init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }
    
    private let content: () -> Content
    
    public var body: some AppExtensionScene {
        PrimitiveAppExtensionScene(id: TextTransformUIExtensionOptionsSceneID) {
            TextTransformUIExtensionOptionsContainer(content: content)
        } onConnection: { connection in
            connection.resume()
            
            return true
        }
    }
}

The id provided to the PrimitiveAppExtensionScene is a custom string. Your SDK can provide multiple types of scenes that extensions can use, and each scene type is identified by this string. Each scene can also have its own XPC connection, which could use a different protocol from the main connection that you've seen before. For this simple example, I just resume the connection and return true, without performing any communication over that channel.

In TextTransformer itself, I've extended the TextTransformExtensionHost so that it can provide a SwiftUI view for a given extension's options scene:

extension TextTransformExtensionHost {
    
    /// Returns a SwiftUI view for the extension's "options" scene.
    /// - Parameter appExtension: The extension to get the options scene for (must be a UI extension).
    /// - Returns: A SwiftUI view that renders and controls the extension's options UI.
    func optionsView(for appExtension: TextTransformExtensionInfo) -> some View {
        return TextTransformerUIExtensionHostViewWrapper(appExtension: appExtension)
    }
    
}

To actually display the scene, ExtensionKit provides EXHostViewController, which I have wrapped in an NSViewControllerRepresentable to make it easy to use from TextTransformer's UI, which is all implemented in SwiftUI.

Here's how the EXHostViewController is being configured in TextTransformer:

// TextTransformExtensionHost.swift

// ...

// TextTransformerExtensionUIController

let identity: AppExtensionIdentity

init(with appExtension: TextTransformExtensionInfo) {
    // ...
}

// ...

private lazy var host: EXHostViewController = {
    let c = EXHostViewController()
    c.configuration = EXHostViewController.Configuration(appExtension: identity, sceneID: TextTransformUIExtensionOptionsSceneID)
    c.delegate = self
    c.placeholderView = NSHostingView(rootView: TextTransformUIExtensionPlaceholderView())
    return c
}()

Pretty simple. The instance of EXHostViewController is embedded into TextTransformerExtensionUIController, which is in turn wrapped in an NSViewControllerRepresentable so that the app can display it in a SwiftUI popover.

The delegate for EXHostViewController has callbacks for XPC connection errors, and it also has a callback to configure the NSXPCConnection, which in my example is not doing anything other than resuming it and returning true.

Comments and use cases

Plug-ins for software have been around for a really long time. Traditional ways of implementing plug-ins on macOS would typically involve the host app loading an untrusted bundle of code into its own address space, which can have serious impacts on performance, security, and reliability.

When Apple began introducing extensions into its operating systems back in the iOS 8 days, the approach was quite different. Extensions are completely separate processes that run within their own sandbox and can't mess with the memory of the process that they're loaded into.

The result is a system that protects user privacy and leads to a more reliable experience overall, since in general a poorly behaving extension can't crash the app that's trying to use it (but that largely depends on how the extension hosting is implemented).

Use cases for custom extension points on Mac apps include any idea that involves external developers augmenting our apps with code running at native speeds, with access to the full macOS SDK, while at the same time isolating that code from our apps, protecting the trust that users have in them.

If you have a Mac app that currently provides other ways for developers to write scripts or extensions for it, or if you have an idea for a type of app that could benefit from third-party extensions, I'd consider implementing them with ExtensionKit.

Huge thanks to @mattmassicotte for the help

How a macOS bug could have allowed for a serious phishing attack against users
https://rambo.codes/posts/2022-03-15-how-a-macos-bug-could-have-allowed-for-a-serious-phishing-attack-against-users
Mon, 14 Mar 2022 15:00:00 -0300

Phishing attacks are a very common threat in our digital lives. So much so that many companies try to trick their own employees with fake phishing attacks, in order to assess how well they can identify whether a given message is genuine.

If you don’t know what a phishing attack is, here’s what Wikipedia has to say about it:

Phishing is a type of social engineering where an attacker sends a fraudulent (e.g., spoofed, fake, or otherwise deceptive) message designed to trick a person into revealing sensitive information to the attacker or to deploy malicious software on the victim’s infrastructure like ransomware.

The issue on macOS

One example of a phishing attack would be someone sending you an email pretending to be Apple, saying that there’s some sort of problem with your Apple ID and that you need to act quickly to sort it out. Of course most savvy users would notice such an attempt and simply ignore the email or log in to their account manually to check if everything was ok.

There are many signs of a phishing attack one could check for in that example, such as whether the sender of the email is really Apple and whether the link points to an official Apple website.

However, what would you do if you suddenly received a notification on macOS telling you to “Verify your Apple ID information”, then upon launching System Preferences, saw an alert like the one below?

An alert is shown within the System Preferences app telling the user that they must verify their Apple ID
Apologies for not including Retina-quality assets in this post.

Check out this video to see the whole thing in action:

I’d say most users would accept the default “Verify Now” action, which then launches a form where you fill in your Apple ID email and password.

The only problem is that such a scary-looking alert, right within System Preferences, could be sent by ANY app running on your Mac. The malicious scenario would involve an app that looks like a simple, regular app (it could even be a sandboxed app) sending such a notification, which would then open up an Apple ID login panel that when submitted sent your email and password to the bad actor. The notification could even be made to look like it came from the System Preferences app itself, making it much more believable.

“But what about two-factor authentication?”, you might be asking. A sophisticated attacker could devise a system to try to authenticate your credentials on a machine under their control, cause a two-factor code to appear on your device, and convince you to type this code into the panel. This would send that code to the attacker, who would then gain access to pretty much all the data in your Apple account.

Additionally, it is possible to extract some information from an Apple ID just by authenticating with the email and password, without two-factor authentication.

The technical details

The phishing attack described here relies on a mechanism on macOS (which is also present on iOS) called CoreFollowUp. You know those annoying “Verify your Apple ID” or “Finish Setting Up Your Device” messages you get all the time? Those are being posted via CoreFollowUp.

CoreFollowUp is used by several components of both macOS and iOS, and they communicate with it through a daemon called followupd. The problem was that the daemon failed to validate connections made to it on macOS, which meant that any process that could look up its Mach service (including sandboxed apps) would be able to send it commands, including ones that would trigger that scary dialog within System Preferences.

A partial fix, released in macOS 11.3, introduced an allow list for the URL that's launched when the user clicks the "Verify Now" button in the example shown above. By ensuring that only Apple-approved URLs could be used, it prevented the attack from being useful for much beyond sending unsolicited notifications without user permission.

A more complete fix was released in macOS 12.3, preventing any random binary on the system from talking to the daemon and registering notifications, regardless of the destination URL. Apple addressed it by introducing a new entitlement: com.apple.private.followup. As of macOS 12.3, when a process attempts to connect to the CoreFollowUp daemon, it will validate the connection by checking if the connecting process has that entitlement in its code signature, denying communication if it doesn’t.

This was an interesting attack vector, and it shows how important it is for Apple to protect powerful daemons on the system against any random process that could be trying to do bad things.

It’s not just an Apple problem though: many third-party apps on macOS ship with their own privileged daemons that accept connections over XPC. It’s important that such services use some form of authentication of the calling process, in order to prevent potentially malicious use and exfiltration of user data. If you are a developer and would like to learn more about the subject, there’s an excellent series of posts by theevilbit.

Timeline:

  • December 23, 2020: I discovered the issue and reported it to Apple’s security team
  • January 11, 2021: I got a reply confirming that they were investigating it
  • February 15, 2021: Another reply stating that it would be fixed in a future update
  • April 26, 2021: macOS 11.3 released with a partial fix
  • August 31, 2021: I received a US$5000 bug bounty payment from Apple
  • March 14, 2022: macOS 12.3 released with a complete fix (CVE-2022-22660)

As you can tell, it took Apple quite a long time to fully address this issue, which is far from ideal.

P.S. About Apple’s Bug Bounty Program

Note: the information below describes how it worked in my specific situation; it might vary according to where you’re located (I’m in Brazil).

This was my first time ever participating in Apple’s bug bounty program. One thing to note about the Apple program specifically has to do with how you get paid: through an Apple Developer account.

In order to receive the payment, you must have an Apple Developer account with the paid applications agreement properly signed and banking set up. If you don’t have a developer account yet, you have to sign up for one and Apple will refund the $99 fee with your bounty payment.

This account requirement does not apply to submitting the initial bug report and communicating with Apple’s security team, it only becomes a requirement once it gets to the point of getting paid for it. Anyone can submit security vulnerability reports by emailing [email protected], a process that is detailed here.

The email that you use to submit the original bug report must be associated with that developer account, otherwise they can’t pay you. In my case, I sent the report from an email address that is part of my Apple ID, which is in turn part of my developer account. However, it was not my primary Apple ID email, so it took a bit of back and forth before they understood that the address was already part of my developer account by virtue of being part of my Apple ID.

So if you’re already a member of the Apple Developer program, I strongly recommend submitting your security vulnerability reports through the primary Apple ID email that’s associated with that account (the “account holder” email); that’s what I’ll be doing from now on.

Regardless of this minor bump in the process, I did receive my payment about a week after replying with the developer account information.

All in all, this was a mostly positive experience. The main thing I don’t like about how Apple does it is that they don’t communicate very well throughout the process, and it can take them a really long time to completely address an issue.

MaskerAid by Casey Liss
https://rambo.codes/posts/2021-03-02-maskeraid-by-casey-liss
Thu, 3 Mar 2022 12:00:00 -0300

My friend, podcaster, and fellow indie developer Casey Liss has just released his latest app, MaskerAid.

Rather than trying to explain the app to you, here's Casey himself:

In short, MaskerAid allows you to quickly and easily add emoji to images.

Plus, thanks to the magic of ✨ machine learning ✨, MaskerAid will automatically place emoji over any faces it detects.

MaskerAid is free to try but you may only add 🙂 to images. There is a one-time $3 in-app purchase to unlock the rest of the emoji.

I don’t normally write about apps here on my personal blog, but I decided to mention MaskerAid because I think it’s a perfect example of what indie software is about.

Again, Casey sums it up really well:

MaskerAid is designed to be a very particular kind of app: do one thing, do it well, and do it quickly.

MaskerAid screenshots

Many people come to me for advice on how to create a successful indie app. I don't know if MaskerAid is going to be a hit or not, but it's a really good demonstration of the types of apps that I like to see coming from indie developers, and the types of apps that people who aspire to become indie developers — or to have a commercial side-project — should try to do first.

The idea for the core feature in MaskerAid might be simple, but many developers who come up with such an idea will simply dismiss it, thinking there’s already other software out there that can achieve the same results.

That might be true, but a piece of software dedicated to performing a specific task well, without roadblocks like ads, subscriptions, or notifications, is surprisingly rare these days, and being an indie developer gives us the freedom to launch products such as these.

Can I do the same thing that MaskerAid does with Photoshop or some other advanced image editing tool? Yes. Will it be as quick, easy, and fun? No.

So go get MaskerAid and be sure to make that in-app purchase to support indie app development.

Encoding and decoding references to other types with Codable
https://rambo.codes/posts/2022-01-04-encoding-and-decoding-references-to-other-types-with-codable
Tue, 4 Jan 2022 15:00:00 -0300

I’ve been working on a new app in my spare time using the new Swift Playgrounds 4 for iPad. As mentioned in my previous post, this app is document-based. I’ve chosen JSON as the underlying data format for my app’s documents because I (and possibly future users) would like to be able to keep the app’s documents in version control, and dealing with merge conflicts and diffs of binary files is the worst.

While working on expanding a feature of the app, I realized that I would need a collection of models that’s user-configurable at the document level, and that each element in one of the document’s children would be assigned a model from that collection by storing that model’s identifier in one of its properties.

To make things less abstract, think about the model for a “blog” document where each post is assigned a given category, but the definition for the categories is stored at the document level. You may choose to nest posts within the given category, so your Blog model would have a categories: [Category] property and each Category would have a posts: [Post] property. However, due to reasons that are not relevant to this post, that wouldn’t be ideal in my case.

Another option would be for each Post to simply have a category: Category property that’s set to the category for that post. This has another problem though: the JSON representation of the document on disk would contain several duplicates of the same data, since the canonical representation of a category is defined at the document-level, but each post has a copy of it within its own JSON representation.

The solution then is to, instead of storing posts within a given category or storing the full category model within the post, store just a reference to the category within the post model.

This could look similar to the code below:

struct Category: Identifiable, Hashable, Codable {
    let id: Int
    var title: String
}

struct Post: Identifiable, Hashable, Codable {
    let id: Int
    var title: String
    var categoryID: Category.ID
}

struct Blog: Hashable, Codable {
    var categories: [Category]
    var posts: [Post]
}

However, this would require us to manually look up the Category corresponding to the post’s categoryID at runtime, making call sites aware of this implementation detail in our data model. Since Post doesn’t know which Blog it belongs to, it wouldn’t be trivial to just add a computed property that looks up the category based on the ID, given that categories are stored in Blog.
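Concretely, every call site would end up needing a lookup along these lines. The category(of:) helper and the condensed type definitions below are just for illustration:

```swift
// Types from above, condensed so this snippet stands on its own:
struct Category: Identifiable { let id: Int; var title: String }
struct Post: Identifiable { let id: Int; var title: String; var categoryID: Category.ID }
struct Blog { var categories: [Category]; var posts: [Post] }

// The manual resolution that leaks the implementation detail to call sites
// (category(of:) is a hypothetical helper, not from the post):
extension Blog {
    func category(of post: Post) -> Category? {
        categories.first { $0.id == post.categoryID }
    }
}
```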

Ideally, we’d want all of our codebase outside of the core document model to be completely unaware of this, such that the Post model would look like the version below:

struct Post: Identifiable, Hashable, Codable {
    let id: Int
    var title: String
    var category: Category
}

One way to implement this would be to customize the encoding/decoding pipeline manually, but that adds overhead whenever anything about the Post model changes, since we’d have to keep its Encodable and Decodable conformances up to date by hand.

Property Wrappers to the rescue

The solution I came up with was to use a custom property wrapper that encapsulates the property we want to store as a reference by its id. Here’s the type declaration for that property wrapper:

@propertyWrapper
struct CodableReference<T>: Hashable where T: Identifiable, T.ID: Codable & Hashable

And here’s what it looks like to declare a property that should be stored as a reference:

struct Post: Identifiable, Hashable, Codable {
    let id: Int
    var title: String
    @CodableReference var category: Category
}

The CodableReference property wrapper implements Codable and takes care of encoding only the id of the wrapped value, then resolving that id back into the full model during decoding.

There is a missing piece of the puzzle though: how do we figure out which Category corresponds to any given category ID when decoding Post?

The solution I came up with was to take advantage of the userInfo property in JSONDecoder in order to provide the collection of models that the property wrapper can use while decoding.

So let’s introduce a protocol to the mix, which I called ReferenceEncodable:

protocol ReferenceEncodable: Identifiable {
    static var referenceStorageKey: CodingUserInfoKey { get }
}

Types that can be encoded/decoded by their ID will implement this protocol. Its only requirement can be satisfied with a default implementation that builds a key from the name of the type:

extension ReferenceEncodable {
    static var referenceStorageKey: CodingUserInfoKey {
        CodingUserInfoKey(rawValue: String(describing: Self.self))!
    }
}

Let’s also update the CodableReference property wrapper to use this new protocol:

@propertyWrapper
struct CodableReference<T>: Hashable where T: ReferenceEncodable, T.ID: Codable & Hashable

Now we have a way to instruct the CodableReference property wrapper as to where it should be looking for the collection of models that it can use to resolve its wrappedValue during decoding.

Here’s what the models end up looking like with this new solution:

struct Category: ReferenceEncodable, Hashable, Codable {
    let id: Int
    var title: String
}

struct Post: Identifiable, Hashable, Codable {
    let id: Int
    var title: String
    @CodableReference var category: Category
}

struct Blog: Hashable, Codable {
    var categories: [Category]
    var posts: [Post]
}

In order to decode Blog, we have to customize the JSONDecoder instance so that CodableReference knows where to look for a given post’s category:

let data: Data = // JSON data for a `Blog` model
let decoder = JSONDecoder()

let categories: [Category] = // somehow decode just the array of categories from the data (such as by using a custom type with just that property)

decoder.userInfo[Category.referenceStorageKey] = categories

let blog = try decoder.decode(Blog.self, from: data)

// Each post in blog.posts now has a `category` matching the category with the ID that was encoded

So this solution does make the decoding part a bit more complicated, since we now have to decode the referenced collections separately and provide them in the correct userInfo keys, but I think the tradeoff is worth it considering the benefits.
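To make that tradeoff concrete, here’s a minimal sketch of how the whole pipeline could fit together. This is my own illustration rather than the exact implementation, and the CategoriesOnly helper type is a made-up name:

```swift
import Foundation

protocol ReferenceEncodable: Identifiable {
    static var referenceStorageKey: CodingUserInfoKey { get }
}

extension ReferenceEncodable {
    static var referenceStorageKey: CodingUserInfoKey {
        CodingUserInfoKey(rawValue: String(describing: Self.self))!
    }
}

@propertyWrapper
struct CodableReference<T>: Codable, Hashable
    where T: ReferenceEncodable & Hashable, T.ID: Codable & Hashable {

    var wrappedValue: T

    init(wrappedValue: T) { self.wrappedValue = wrappedValue }

    /// Encodes only the wrapped value's ID.
    func encode(to encoder: Encoder) throws {
        var container = encoder.singleValueContainer()
        try container.encode(wrappedValue.id)
    }

    /// Decodes the ID, then resolves it against the collection that the
    /// caller placed in the decoder's userInfo dictionary.
    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        let id = try container.decode(T.ID.self)
        guard let models = decoder.userInfo[T.referenceStorageKey] as? [T],
              let value = models.first(where: { $0.id == id }) else {
            throw DecodingError.dataCorruptedError(
                in: container,
                debugDescription: "No \(T.self) found with ID \(id)")
        }
        self.wrappedValue = value
    }
}

// The models from the post, repeated so the snippet is self-contained:
struct Category: ReferenceEncodable, Hashable, Codable {
    let id: Int
    var title: String
}

struct Post: Identifiable, Hashable, Codable {
    let id: Int
    var title: String
    @CodableReference var category: Category
}

struct Blog: Hashable, Codable {
    var categories: [Category]
    var posts: [Post]
}

// First pass: decode only the categories (CategoriesOnly is a hypothetical
// helper type with just that one property).
private struct CategoriesOnly: Decodable {
    var categories: [Category]
}

func decodeBlog(from data: Data) throws -> Blog {
    let decoder = JSONDecoder()
    let categories = try decoder.decode(CategoriesOnly.self, from: data).categories
    decoder.userInfo[Category.referenceStorageKey] = categories
    return try decoder.decode(Blog.self, from: data)
}
```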

Here’s what the encoded JSON looks like, with the category property on Post containing just the ID of the category:

{
  "posts" : [
    {
      "id" : 0,
      "title" : "How to encode stuff",
      "category" : 0
    },
    {
      "id" : 1,
      "title" : "How to decode stuff",
      "category" : 0
    },
    {
      "id" : 2,
      "title" : "My Paper",
      "category" : 1
    }
  ],
  "categories" : [
    {
      "id" : 0,
      "title" : "Tutorials"
    },
    {
      "id" : 1,
      "title" : "Papers"
    }
  ]
}

You can find the complete implementation of this property wrapper on Gist.

This was a fun exploration of how to make my file format’s JSON representation more efficient, while keeping the usage of my models simple at runtime. It’s definitely not the right solution for all situations, but it solved the problem for my use case, so I hope it comes in handy for others.

As always, feel free to reach out on Twitter with questions or feedback.

A document-based app in Swift Playgrounds for iPad
https://rambo.codes/posts/2021-12-28-a-document-based-app-in-swift-playgrounds-for-ipad
Tue, 28 Dec 2021 14:00:00 -0300

It’s been just a couple of weeks since Apple introduced the new Swift Playgrounds 4 for iPad, which now enables full app creation and publishing directly from an iPad, but many people are already making some really interesting projects with the app.

Of course, bringing app creation to a new platform that’s much more limited than macOS means that Apple had to prioritize which app creation features to port over, and decide which things they didn’t want to do “the Xcode way” just because that’s how it’s always been done.

This doesn’t mean that Swift Playgrounds can’t eventually become the “Xcode for iPad” that many people want, but it does mean that old-school developers like myself ¹ have to adapt to this new way of working if we want to take advantage of the new app. You may also choose to just not use Swift Playgrounds at all, and that’s fine.

I’ve been working on a little app that I’ve been wanting for myself for quite a while, and the release of the new Swift Playgrounds right before the holidays was the perfect excuse for me to dust off the iPad Pro and finally use it for something creative.

The experience so far has been really enjoyable. The performance of the code editor and autocompletion on my A12Z-powered iPad Pro is just fantastic, way better than Xcode on my M1 Macs (Xcode has improved quite a bit, but it’s still not as good as the Playgrounds app on iPad). I’ve even learned some new iPadOS multitasking tricks now that I’m actually using the iPad for a few hours at a time, instead of picking it up once a week for a couple of minutes just to test something out.

After I had developed a good chunk of my app’s functionality and UI, I realized that it would make for a more streamlined workflow if the app was document-based, instead of a shoebox² type app. I don’t have lots of experience with making document-based apps for iOS, and even less experience with making document-based apps in SwiftUI, so I used Apple’s sample code as a guide.

One of the key aspects of creating a document-based app is declaring your app’s custom file type, or the standard file types that your app can handle. In order for LaunchServices to be able to know that your app handles a given file type, you have to declare support in your app’s Info.plist. Here’s what such a declaration might look like in Xcode’s Info.plist editor:

Declaring an app's document types in Xcode

That’s simple enough. However, I couldn’t find a way to declare a custom document type for my app in Swift Playgrounds. I thought that maybe Apple had moved that into the Capabilities editor, but didn’t find anything there.

Something I tried was to use a standard Uniform Type Identifier in code instead of declaring it in my app’s Info.plist. In my case, I was encoding my app’s document with a JSONEncoder anyway, so I just used the .json UTI in code, and it did work.

However, not everything worked. The main thing that wasn’t working as expected was the auto-save functionality you get when you create a document-based app in SwiftUI. My document was being saved, but in very unpredictable ways, and it would simply lose my edits quite frequently, especially if I opened the document, made a bunch of edits in quick succession, then closed it.

I was almost giving up when I remembered this excellent article by Aaron Sky where he unpacks the extensions that Apple has added to Swift Package Manager in order to support the creation of iOS apps. Here’s the extension that declares the .iOSApplication product type:

extension PackageDescription.Product {
  public static func iOSApplication(
      name: String,
      targets: [String],
      bundleIdentifier: String? = nil,
      teamIdentifier: String? = nil,
      displayVersion: String? = nil,
      bundleVersion: String? = nil,
      iconAssetName: String? = nil,
      accentColorAssetName: String? = nil,
      supportedDeviceFamilies: [PackageDescription.ProductSetting.IOSAppInfo.DeviceFamily],
      supportedInterfaceOrientations: [PackageDescription.ProductSetting.IOSAppInfo.InterfaceOrientation],
      capabilities: [PackageDescription.ProductSetting.IOSAppInfo.Capability] = [],
      additionalInfoPlistContentFilePath: String? = nil
    ) -> PackageDescription.Product
}

Notice the additionalInfoPlistContentFilePath in there? Sounds like exactly what I need.

The only problem is that it’s not exposed anywhere within the Swift Playgrounds app, so I had to use my Mac in order to edit the Package.swift file. It’s worth noting that the file itself has a warning at the top telling you that you shouldn’t edit it manually, so consider what I’m going to show here a temporary hack until Apple adds native support for including custom Info.plist content, or some other way to declare an app’s documents that doesn’t require messing with the Info.plist file directly.

So here’s what I had to do:

  • Add a MoreInfo.plist file to my package’s root directory, with the Info.plist contents I’d like Swift PM to merge into the final Info.plist file during the build process
  • Set it as the additionalInfoPlistContentFilePath in my Package.swift
  • Add MoreInfo.plist to my Swift Package target’s exclude array in order to silence a warning
  • On the Mac, clean the build folder to make sure that the new settings are picked up
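Putting those steps together, the edited Package.swift ends up looking something like the sketch below. All identifiers are placeholders, and since this manifest format isn’t meant to be hand-edited, treat it as an approximation rather than a template:

```swift
// swift-tools-version: 5.5
// Sketch of a Swift Playgrounds app manifest (names and bundle identifier
// are placeholders). The key part is pointing
// additionalInfoPlistContentFilePath at the extra plist file.
import PackageDescription
import AppleProductTypes

let package = Package(
    name: "MyDocumentApp",
    products: [
        .iOSApplication(
            name: "MyDocumentApp",
            targets: ["AppModule"],
            bundleIdentifier: "com.example.MyDocumentApp",
            supportedDeviceFamilies: [.pad, .phone],
            supportedInterfaceOrientations: [.portrait, .landscapeRight, .landscapeLeft],
            additionalInfoPlistContentFilePath: "MoreInfo.plist"
        )
    ],
    targets: [
        .executableTarget(
            name: "AppModule",
            path: ".",
            exclude: ["MoreInfo.plist"] // silences the unhandled-file warning
        )
    ]
)
```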

While you’re there, might as well add the ITSAppUsesNonExemptEncryption key with a value of NO so that you don’t have to go into App Store Connect in order to release every new internal TestFlight beta (assuming your app fits the criteria, of course).

Definitely not very straightforward, but the fact that this capability is there gives me hope that Apple plans on exposing this functionality in the future. I filed FB9824788 requesting this feature.

If you’d like to see an example, here’s a repo on my Github with a document-based Swift Playgrounds app that uses this technique.

Again, this is a hack, and it will probably break every now and then such as when you add new package dependencies to your app, so be careful.

Another big caveat I’ve noticed: when uploading the app to App Store Connect through Swift Playgrounds on iPad, Playgrounds will overwrite your Package.swift configuration, so the additional Info.plist data won’t be there. If you decide to use this hack in your app and upload it to TestFlight, you’ll have to do that from Xcode on the Mac.

One more thing®: the built-in previews in Playgrounds stop working when your app has a DocumentGroup as its root scene. To work around this and keep using previews, you can create a separate scene just for previews: a regular WindowGroup that displays whatever the root view of your document editor UI is.

Update: According to someone who works in developer tools at Apple, editing additionalInfoPlistContentFilePath is the right way to customize things such as supported document types, but support for that in the current version of Swift Playgrounds is still a work in progress, hence why it'll sometimes remove that property when the package manifest changes. I have filed FB9824864 for this specific issue.

Even though this is a risky hack, I'm using it in the app that I've been developing. Worst case scenario, I can just port it over to Xcode and publish it from there if this breaks.

I hope you found this post useful. If you've been working on a cool new app using Swift Playgrounds, let me know.

PS: This is probably my last post of the year, so Happy New Year!

¹ Older folks who know me might be laughing at me calling myself “old-school” given that I’m not even 30 years old yet. I get it, but keep in mind that old-school in tech is different from old-school in real life. I learned the basics of programming in DOS with batch scripts and Pascal, and started to learn Mac development in Objective-C back when Interface Builder was still a separate app from Xcode, so I think I can call that old-school.

² This is the term that Apple used to employ in the Human Interface Guidelines when describing Mac apps that manage their files for you, apps such as iPhoto and iTunes, as opposed to document-based apps such as Pages, Numbers, or TextEdit

Using CloudKit for content hosting and feature flags
https://rambo.codes/posts/2021-12-06-using-cloudkit-for-content-hosting-and-feature-flags
Mon, 6 Dec 2021 12:00:00 -0300

The most common application of CloudKit by far is to store private user data with the goal of keeping their devices in sync. This is mostly what my CloudKit 101 post is focused on, as well as explaining the basic concepts of how CloudKit works and the best practices around that type of data synchronization.

In that post, I did mention that you can use the public database offered by CloudKit for some interesting applications, such as storing app configuration and feature switches on CloudKit, making it possible to change things about your app without the need to release an update, and also without the need to use a third-party library or service such as Firebase.

This post is a continuation of that topic, detailing how exactly you can do such things with CloudKit, and it also includes a case study of how I use it for feature switches and content hosting in one of my apps, ChibiStudio.

CloudKit refresher: the public, shared and private databases

If you haven’t read my CloudKit 101 article and you’re not familiar with using the framework in general, I strongly recommend starting with that one, since it’ll make it a lot easier to understand the concepts explained here.

Regardless, let’s do a quick refresher on the three distinct database types that CloudKit has to offer:

Private Database

This is where the app stores private user data, such as the items that they have in a todo list app. Only the owner can access this data from a device that’s logged in to their iCloud account — you as the developer can’t read it, not even from the iCloud Console.

Data stored in the private database counts against the user’s iCloud account storage quota.

Shared Database

The shared database is where shared records are stored. These are records two or more users of your app have shared with each other such that they can collaborate remotely, like shared notes in the Notes app.

Public Database

This is the one we’ll be focusing on throughout this article. It’s a database that’s unique per iCloud container. Every authenticated user of your app can (by default) read, write, and create records in the public database.

As you might imagine, due to its public nature, usage of the public database can have some implications that we should be aware of when using it for things such as content hosting or feature flags, so that we don’t end up in an embarrassing situation such as having a hacker delete all of our app’s data, kinda like what happened to Shortcuts a while back (seriously).

Public Database Questions

Besides the subject of the aforementioned security risk when using the public database, which I’ll address shortly, people often ask me the same few questions when I mention that I use CloudKit for content hosting and similar applications, and I’ll try to go through them briefly:

What if the person who’s using my app doesn’t have an iCloud account?

By default, the contents of a record in the public database can be read by anyone, even if they don’t have an iCloud account. So if you plan on using the public CloudKit database for content hosting, feature flags, or other similar applications, there’s no need to worry about users who don’t have iCloud enabled, since all of the operations will be read-only.

But what if I decide to implement a web or Android app that can access the same content?

If you’d like to consume the same content that you’re hosting on the public CloudKit database from an Android app or from a web app, you can. You can use the CloudKit Web Services API, which lets you do pretty much everything that can be done through the CloudKit framework over HTTP.

Don’t believe me? Click the button below and the box will be filled with a JSON blob for the current feature flags configuration in ChibiStudio, fetched by your browser directly from CloudKit with just a few lines of plain JavaScript code.

Live example: fetch from the public database over HTTP


The code for this example is available on Gist.
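For a rough idea of what such a request looks like from native code, here’s a sketch that builds a records/query request against the public database. The container ID, record type, and API token below are placeholders, and the exact request shape is specified by Apple’s CloudKit Web Services reference, so double-check it there:

```swift
import Foundation

// Sketch of a CloudKit Web Services records/query request against the
// public database. Container ID, record type, and token are placeholders.
func makeQueryRequest() -> URLRequest {
    let base = "https://api.apple-cloudkit.com/database/1"
    let container = "iCloud.com.example.MyApp" // placeholder container ID
    let token = "YOUR_CKAPI_TOKEN"             // created in the CloudKit Console
    let url = URL(string: "\(base)/\(container)/production/public/records/query?ckAPIToken=\(token)")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    // Query all records of a given type (AppFeatures is the example from
    // this post; substitute your own record type).
    request.httpBody = try? JSONSerialization.data(
        withJSONObject: ["query": ["recordType": "AppFeatures"]])
    return request
}
```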

What if my app becomes the next TikTok, won't it cost me a fortune?

This is a very interesting question. I briefly touched on this subject during my CloudKit 101 post, where I said that you as the developer are not going to pay for CloudKit, period. Back when I wrote that article, Apple still had a little widget on the website showing that for an app with 10 million active users, you had 1 petabyte of asset storage, 20 terabytes of database store, and some other things, completely for free.

They never said what would happen if you went over that limit, and they've since removed that from the website entirely. I also know of at least one app that has gone over the limit and so far the developer hasn't received a call from Eddy Cue. So I'm even more confident in saying that you are not going to have to pay Apple for your CloudKit usage.

With those questions out of the way, let’s look at how we can protect the public database from potential abuse by employing a feature of CloudKit that most developers are not familiar with.

Using CloudKit Security Roles

When using the public database for content hosting or feature flags, you don’t want any random iCloud user to have the rights to publish or edit content on your behalf, or to change the feature flags that control your app’s behavior for all of your users; that would be really bad.

That’s where CloudKit’s security roles come in. You can think of them as Unix access control groups for record types. They allow you to restrict the types of operations that users belonging to a given group (security role) can perform on any given record type, and you can then assign security roles to specific users.

Every CloudKit container comes with three default security roles: World, iCloud, and Creator.

The World role means “everyone”, including users of your app who are not signed in to iCloud. This role has read access to every record type on the public database, but can’t write or create records. You can remove the read permission for a given record type from World in order to restrict access to only those users who have an iCloud account, but you can't add the write permission to this group.

The iCloud role means “authenticated iCloud users”, that is, users who are signed in to iCloud on their device while using your app. By default, users in this role can create records of any type, but they can’t read or write records.

You might be wondering what the point is of users having the create permission, but not the read or write one. That becomes clearer once you learn about the third and final default security role in every CloudKit container, which is the Creator role.

The creator role means “the user who has created this record”. So it’s possible to allow any authenticated user to create a record of a given type in the public database, but not read or write to any record other than the ones that they have created themselves. This can be useful for social media type apps that want to allow users to have public data that anyone can see, but that only the creator can modify. You could imagine having a Post record type where the creator has write permission, but other security roles only have the read permission, so that all users can view posts from other users, but only the owner of a given post can edit its contents or delete it.

But the most interesting thing about security roles for content hosting and similar applications is that we can define our own security roles and assign them to specific users.

Taking the example from my app ChibiStudio, we have two record types that are used for content hosting and feature flags: the PublishedChibi record type, which is used for the curated collection of chibis that can appear in the app’s widget, and the AppFeatures record type, which is used for feature flags.

Those records live in the public database so that all app users can access that content, but they can only be edited by me. To achieve that, I have created a new security role called “Admin”. I then granted that security role create, read, and write permission on the record types that I want only a subset of iCloud users to be able to edit.

The Admin security role in the CloudKit Console

Just adding the permissions to the new Admin role doesn't protect the records from being written by any authenticated user, so I also had to remove the create and write permissions from the iCloud security role for the same record types. Now, only users that belong to the Admin security role will be able to create or edit records of those types.

Remember that the World role is always read-only, and the Creator role is irrelevant in this case since no users other than those in the Admin role will be able to create those types of records.

The iCloud security role in the CloudKit Console

The end result is that in order to publish content for the widget or change the app’s feature flags, the user that’s authenticated in the app must have the Admin security role assigned to them. It's also possible to do those edits from the CloudKit Console, of course.

Speaking of the CloudKit Console, that’s where you can manage security roles. With a container selected, you can select “Security Roles” in the sidebar, and then you can edit the permissions for each security role or create new ones. These changes have to be done in the development environment and then promoted to production, just like with any other change you do to your CloudKit schema.

But how do you assign a security role to a user? To do that, you first have to know the record ID for the User record representing the user you’d like to assign the security role to. That can be done in your app with the fetchUserRecordID API. Note that this ID will be different between the development and production environments. You can run your app from Xcode with the production environment by following the tip I gave in the CloudKit 101 post (in the “Environments” section).
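As a sketch, fetching and printing that record name could look like the snippet below (wrapped in a hypothetical helper function; at runtime it requires a signed-in iCloud account and the CloudKit entitlement):

```swift
import CloudKit

// Fetches and prints the current user's record name so it can be looked up
// in the CloudKit Console when assigning security roles. Remember that the
// record name differs between the development and production environments.
func printCurrentUserRecordName() {
    CKContainer.default().fetchUserRecordID { recordID, error in
        if let recordID = recordID {
            print("User record name:", recordID.recordName)
        } else if let error = error {
            print("Failed to fetch user record ID:", error)
        }
    }
}
```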

A user in the CloudKit Console has the Admin role assigned to them

You can then go to the CloudKit Console and query the User record type in the public database for the record name that you got from running fetchUserRecordID in your app, select that user record and check the box for the security role you’d like to assign. It’s a bit of a convoluted process, but the good thing is that it only has to be done once for each user that you want to assign special permissions to.

The risks of not implementing security roles

Some might be wondering if the correct application of security roles in CloudKit is worth the effort given that you as the developer control what your app does in relation to CloudKit. After all, if you don’t ship code in your app that allows users to modify your feature flags or hosted content, then you must be safe, right?

Well, not quite. Jailbreaking iPhones is still common practice by security researchers and other types of hackers, and once you have code injected into an app, you can call CloudKit APIs and do whatever you want.

Not only that, but with Macs running iOS apps natively and more and more apps offering Catalyst versions for macOS, it’s only getting more likely that someone might want to poke around your public database by attaching a debugger to your app. It's also possible to use any CloudKit container on macOS by disabling SIP, setting a couple of boot arguments and signing a binary with fake entitlements for another app's container.

So I would say that if you’re planning on using the public CloudKit database in a way where not every iCloud user is welcome to create or edit records, then you must be sure to implement the correct security roles in order to prevent bad actors from modifying your app’s content or features.

Case Study: ChibiStudio

I use CloudKit in ChibiStudio for regular user data sync, but there are a few “unusual” applications of the service, including the two examples given in this article which are content hosting and feature flags.

Feature flags

When I developed the feature flags system, I opted to have a single record of the AppFeatures type where each field in the record is an integer representing a feature flag state (0 for off, 1 for on). If I were to start over, I’d probably have individual records for each feature flag, since I think that’s more flexible.

Having each feature represented by an individual record would allow for things such as targeting features to a specific locale, OS version, device type, etc. However, our use of feature flags in ChibiStudio is extremely simple, so I didn’t feel the need to implement anything too fancy; I just wanted a way to roll out features with the ability to turn them off in case anything went wrong.

The way I implemented this in the app was to introduce a FeatureSwitch type like the one below:

typealias FeatureSwitchIdentifier = String

/// Defines a feature that can be switched on or off.
struct FeatureSwitch: Hashable, Codable {
    
    /// A unique identifier for this feature.
    let identifier: FeatureSwitchIdentifier
    
    /// Whether this feature is enabled by default (before the feature state is known).
    let isEnabledByDefault: Bool
    
#if DEBUG
    /// A debug display name for the feature (never shown to the end user).
    let displayName: String?
#endif
    
}

#if DEBUG
extension FeatureSwitch {
    
    /// A key that can be used to store an override for this feature.
    var overrideKey: String {
        "featureOverride_\(self.identifier)"
    }
    
}
#endif

The displayName and overrideKey properties are only available in debug and internal builds. I distribute internal builds on TestFlight; those are built with a copy of the release configuration that adds the -D DEBUG flag, so debug-only features end up in the TestFlight build that I and the person who works on the app with me can use.

There’s also a FeatureSwitchProvider protocol that gets its implementation injected wherever there’s a need to know about the state of feature switches in the app. Having a protocol allows me to mock things out for tests or SwiftUI previews, and will also be handy in case I decide to use something other than CloudKit for my feature switches in the future.
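The article doesn’t show the protocol itself, but a minimal sketch, assuming a single lookup method matching the isFeatureEnabled call shown later, might look like this:

```swift
/// Sketch of the provider protocol described above; the exact shape is an
/// assumption, not ChibiStudio's actual code.
protocol FeatureSwitchProvider {
    /// Returns the current state of the given feature, falling back to
    /// `isEnabledByDefault` when the remote state isn't known yet.
    func isFeatureEnabled(_ feature: FeatureSwitch) -> Bool
}
```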

To define the feature flags themselves, I have a Swift file where I extend FeatureSwitch defining the current flags that the app supports:

extension FeatureSwitchIdentifier {
    
    static let chibiVersionCheck = "chibiVersionCheck"
    static let tipJar = "tipJar"
    
    // ...
    
}

extension FeatureSwitch {
    
    static let chibiVersionCheck = FeatureSwitch(
        identifier: .chibiVersionCheck,
        isEnabledByDefault: true,
        displayName: "Chibi Version Check"
    )
    
    static let tipJar = FeatureSwitch(
        identifier: .tipJar,
        isEnabledByDefault: true,
        displayName: "Tip Jar"
    )
    
    // ...
    
}

Checking if a given feature is currently enabled looks like this:

if provider.isFeatureEnabled(.tipJar) {
    // Show the tip jar button
}

Every feature must have a default enabled/disabled fallback value in case its state is checked before the app has had a chance to download the current state from CloudKit. I haven’t yet run into a situation where a feature is checked early enough in the app’s lifecycle for this to become a problem.

To actually change the feature flags in production, I use the CloudKit Console. I haven’t bothered implementing a custom panel for this in internal builds of the app, or as a separate tool, since it’s not something I have to do very often.

Internal builds do include a section in the app’s settings that lets the user override the state of any feature flag, but that override is only applied locally. That’s why every FeatureSwitch has a displayName and an overrideKey: the displayName is used in the debug UI, and the overrideKey is the UserDefaults key used to override the feature’s state when it’s toggled in that UI.
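Putting those pieces together, resolving a feature’s effective state might look something like this sketch. The class name and the fetchedStates cache are assumptions; only the override and default-fallback behavior follow the description above:

```swift
import Foundation

// Hypothetical sketch built on the article's FeatureSwitch type.
final class CloudFeatureSwitchProvider {

    /// States parsed from the latest AppFeatures record, keyed by identifier.
    private var fetchedStates: [FeatureSwitchIdentifier: Bool] = [:]

    func isFeatureEnabled(_ feature: FeatureSwitch) -> Bool {
        #if DEBUG
        // A local override set from the internal settings UI wins.
        if UserDefaults.standard.object(forKey: feature.overrideKey) != nil {
            return UserDefaults.standard.bool(forKey: feature.overrideKey)
        }
        #endif
        // Fall back to the bundled default until the CloudKit state is known.
        return fetchedStates[feature.identifier] ?? feature.isEnabledByDefault
    }
}
```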

Feature switches internal UI in ChibiStudio

To actually fetch the feature state from CloudKit, a simple query operation is performed shortly after each app launch:

func fetchLatestFeatureStateRecord(with completion: @escaping (Result<CKRecord, Error>) -> Void) {
    let query = CKQuery(recordType: .appFeatures, predicate: NSPredicate(value: true))
    
    query.sortDescriptors = [NSSortDescriptor(key: "modificationDate", ascending: false)]
    
    let operation = CKQueryOperation(query: query)
    
    operation.resultsLimit = 1
    operation.queuePriority = .veryHigh
    
    operation.recordFetchedBlock = { record in
        completion(.success(record))
    }
    
    operation.queryCompletionBlock = { [weak self] _, error in
        // Error handling or completion(.failure(error)) if not recoverable
    }
    
    container.publicCloudDatabase.add(operation)
}

The resulting CKRecord is then parsed to get the state of each feature flag, which is kept in memory for when a flag is checked. The app also stores the latest feature state in UserDefaults to persist it between sessions.
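The parsing step can be as simple as walking the record’s keys. This is a sketch under the assumption that each field is stored as an Int64 (0 = off, 1 = on) keyed by the feature’s identifier; the UserDefaults key is a made-up example:

```swift
import CloudKit

// Sketch: turn the AppFeatures record into an identifier -> enabled map.
func parseFeatureStates(from record: CKRecord) -> [FeatureSwitchIdentifier: Bool] {
    var states = [FeatureSwitchIdentifier: Bool]()

    for key in record.allKeys() {
        guard let value = record[key] as? Int64 else { continue }
        states[key] = (value == 1)
    }

    // Persist the latest known state so it survives between sessions.
    // The key name here is hypothetical.
    UserDefaults.standard.set(states, forKey: "latestFeatureStates")

    return states
}
```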

Curated collection of chibis for the widget

When iOS 14 introduced Home Screen widgets, I wasn’t sure there would be enough demand to justify the effort of implementing widgets for ChibiStudio. Seeing the explosion of widgets on the internet, and how people were using them to personalize their Home Screens with things they like, changed my mind.

I expected that not all users would have a particularly vast collection of chibis of their own creation, and I didn’t want “your library” and “random chibi” (a randomly generated chibi) to be the only options for the widget’s contents.

The ChibiStudio widget and its collections

Throughout the development of the app, we have created thousands of chibis ourselves. We also publish what we call the “Chibi of the Week” every Friday, and have been doing so for quite a while, so there’s a large collection of chibis that could be offered in the widget.

Bundling all of that content within the app would be impractical, especially considering how much work I put into making it smaller so that I could create an App Clip for it. I also wanted to be able to publish new chibis to that widget collection over time and to be able to target content for special occasions such as Halloween or Christmas.

The solution was once again CloudKit. The PublishedChibi record type that I mentioned earlier represents a chibi that’s part of the widget collection. Here’s what that record’s schema looks like:

The PublishedChibi record type in the CloudKit Console

As you can see, the main field is entityData, which stores the binary vector data for the chibi itself. Another important field is publishedAt, which allows us to send a chibi to the collection while determining a date for it to start showing up in the widget. There are also fields for targeting based on app version and for limiting the chibi to specific locales.

To fetch a random chibi from the collection, a CuratedCollectionsManager class is used. There is no way to ask CloudKit for a “random” record, so it fetches a batch of PublishedChibi records using a pseudo-random sort descriptor, shuffles the array, and then caches the data for the record it has chosen to display. I have some ideas on how this randomness could be improved, but the system works just fine as-is.
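The article doesn’t show randomSortDescriptors(), so here’s one assumed approach: CloudKit has no “random” ordering, but sorting a system field in a randomly picked direction at least varies which slice of the collection the limited query returns, and the shuffle does the rest.

```swift
import CloudKit

// Hypothetical sketch; not ChibiStudio's actual implementation.
func randomSortDescriptors() -> [NSSortDescriptor] {
    // `creationDate` is a system field present on every CKRecord.
    [NSSortDescriptor(key: "creationDate", ascending: Bool.random())]
}
```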

Here’s an approximation of what that looks like in practice:

func fetchRandomChibiForWidget(with completion: @escaping (Result<PublishedChibi, Error>) -> Void) {
    let predicate = NSPredicate(format: "publishedAt <= %@", Date() as CVarArg)
    
    let query = CKQuery(recordType: .publishedChibi, predicate: predicate)
    
    query.sortDescriptors = randomSortDescriptors()
    
    let operation = CKQueryOperation(query: query)
    
    operation.database = container.publicCloudDatabase
    operation.qualityOfService = .userInteractive
    operation.resultsLimit = 30
    
    var candidates = [PublishedChibi]()
    
    operation.recordFetchedBlock = { record in
        do {
            let chibi = try PublishedChibi(record: record)
            candidates.append(chibi)
        } catch {
            // Error handling...
        }
    }
    
    operation.queryCompletionBlock = { _, error in
        guard error == nil else {
            // Error handling or completion(.failure(error))
            return
        }
        
        let filteredCandidates = candidates.filter {
            $0.isValidInCurrentEnvironment // Checks for expiration, locale, min app version, etc
        }
        
        guard let chibi = filteredCandidates.shuffled().first else {
            // Couldn't find any valid candidates, calls completion(.failure(...))
            return
        }
        
        let cacheURL = URL.publishingStorageURLForItem(with: chibi.id)
        
        do {
            // `localAssetURL` stands in for wherever the downloaded
            // chibi asset file lives (hypothetical property).
            try FileManager.default.copyItem(at: chibi.localAssetURL, to: cacheURL)
            
            completion(.success(chibi))
        } catch {
            // Couldn't save to cache, calls completion(.failure(...))
        }
    }
    
    cloudOperationQueue.addOperation(operation)
}

I was a bit scared about including this CloudKit fetch in the widget pipeline, since it involves quite a bit of networking through CloudKit and a bunch of file I/O, not to mention the work that goes into rendering the chibi’s image from the downloaded vector data.

It turned out to be perfectly fine in the end, and the widget works quite well. An option to make this more robust would be to use background fetching or background tasks in the app itself to pre-warm the curated collections content, storing a small subset of the collection on-device so that the widget could simply pick from that.

So, as you can see, CloudKit can also be used in app extensions such as widgets, but you do have to keep the limitations of those environments in mind.

Finally, to actually publish a chibi to the widget collection, internal builds of ChibiStudio include a “Publish for Widget” option when you tap and hold one of the chibis in the library. The little form was written in SwiftUI and lets the publisher pick a start and end date, a minimum app version, and locales.

The internal UI that enables the publication of chibis

Thanks to CloudKit security roles, even if a regular user of the app were to get access to an internal build that includes this feature, they wouldn’t be able to publish one of their chibis to the collection because their iCloud User record wouldn’t have the Admin role assigned to it.

I hope this article has provided some inspiration for uses of CloudKit beyond the common case of syncing private user data. I still have more to write about CloudKit, so be sure to add this blog to your RSS reader and follow me on Twitter for more.

]]>
https://rambo.codes/posts/2021-02-16-programming-the-esp8266-on-an-m1-macProgramming the ESP8266 on an M1 Machttps://rambo.codes/posts/2021-02-16-programming-the-esp8266-on-an-m1-macTue, 16 Feb 2021 14:30:00 -0300Programming microcontrollers is something I’ve always liked to do; there’s something very satisfying about writing code that controls things in “real life” instead of just pixels on a screen. Recently, I decided it would be a fun side project to turn a cheap air humidifier into a HomeKit accessory. I started out with an Arduino board to test things out, but then people reminded me of the ESP32 and ESP8266 microcontrollers, which integrate BLE and WiFi and can run the HomeKit Accessory Protocol (HAP) natively.

To make things easier for me, I decided to program my boards with the Arduino Pro IDE, which I’m already familiar with, so I had to install the libraries for the ESP8266. That’s where I hit a problem: I’ve been using the new M1 MacBook Air as my work computer for a while, and unfortunately the ESP support for the Arduino IDE doesn’t work out of the box on it. It wasn’t hard to figure out a workaround, so I decided to write it up here, both as a future reference for myself and as a helpful resource for others.

Obligatory disclaimer: this is not a tutorial. I’m just laying out the steps that worked for me, but I am by no means an expert on the subject, so if this doesn’t work for you, it’s very unlikely that I’ll be able to help you out any further.

The first step is to install Homebrew, if you don’t have it yet. Homebrew has recently been updated with support for M1 Macs.

Then, use Homebrew to install Python 3, which is required by the scripts in the ESP support for the Arduino IDE:

brew install python3

Use pip3 from the new python3 that’s just been installed to install the pyserial library:

pip3 install pyserial

Make sure you have added the ESP8266 URL to “additional boards manager URLs” in the Arduino IDE settings (this works for both the Arduino Pro IDE as well as the regular one):

https://arduino.esp8266.com/stable/package_esp8266com_index.json

After installing the ESP8266 boards, if you try to upload your sketch to a board, you’re likely going to run into an error:

pyserial or esptool directories not found next to this upload.py tool.
Upload error: Error: 2 UNKNOWN: uploading error: uploading error: exit status 1

For some reason, the upload tool included with the ESP8266 package seems to include its own version of Python, which for whatever reason doesn’t work in my particular environment. But since I have Python 3 installed via Homebrew, I can tweak the package to use that version instead.

To do that, open ~/Library/Arduino15/packages/esp8266/hardware/esp8266/2.7.4/platform.txt in a text editor and replace all occurrences of {runtime.tools.python3.path} with the path to your own Python install (for Homebrew on an M1 Mac, that’s /opt/homebrew/bin). Restart the Arduino IDE and everything should now work.
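If you prefer the command line, the same replacement can be done with sed. The demo below runs against a temporary file containing one illustrative line; point PLATFORM_TXT at the real platform.txt (the .bak copy acts as a backup) to apply it for real:

```shell
# Demo on a temp file; the line's contents are illustrative, not the
# actual platform.txt contents.
PLATFORM_TXT="$(mktemp)"
printf 'tools.esptool.cmd=%s\n' '{runtime.tools.python3.path}/python3' > "$PLATFORM_TXT"

# Replace every occurrence of the placeholder with the Homebrew bin path.
sed -i.bak 's|{runtime.tools.python3.path}|/opt/homebrew/bin|g' "$PLATFORM_TXT"

cat "$PLATFORM_TXT"
# prints: tools.esptool.cmd=/opt/homebrew/bin/python3
```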

I don’t feel particularly good about this; it seems I should instead be changing whatever sets runtime.tools.python3.path so that it points to my own version of Python. But I don’t know enough about how the environment works in the Arduino IDE, and being able to flash my projects to the board was more important than learning that.

If you happen to know of a better solution, feel free to reach out on Twitter.

]]>
https://rambo.codes/posts/2021-01-08-distributing-mac-apps-outside-the-app-storeDistributing Mac apps outside the App Store, a quick start guidehttps://rambo.codes/posts/2021-01-08-distributing-mac-apps-outside-the-app-storeFri, 8 Jan 2021 10:00:00 -0300The Mac has always been very different from its close relative, iOS, especially when it comes to what a user is or is not allowed to run on their system. Even with the introduction of Apple Silicon, Apple has made it very clear that the Mac is still the Mac, and is still hackable, even when running on the new architecture.

What this means for us developers is that, when targeting the Mac platform, we have choices: we can distribute our apps independently, outside the Mac App Store, through the Mac App Store exclusively, or through both at the same time.

This article is my brain dump on the subject. It is meant to be a guide on the things that you’ll need to know about when distributing a Mac app outside the App Store, rather than a how-to tutorial. My hope is that having everything listed here will help demystify the process for beginners, and the descriptions of my own process will be useful as starting points.

App Store x Direct: pros and cons

All of these choices come with their pros and cons, and depending on which type of Mac app you’re making, you might not be able to have it in the Mac App Store to begin with. An example of that is my app AirBuddy which, in order to provide deep integration with Apple’s wireless devices, needs to run a system agent and use some private APIs, which would never be allowed in the App Store. The same goes for many other types of apps which simply wouldn’t work with the restrictions of the Mac’s sandbox.

For those who do have that choice, I’ve compiled a list of what I believe to be the pros and cons between shipping through the Mac App Store or shipping directly.

Mac App Store pros

  • Apple handles the distribution, billing and licensing for you
  • The app is easier to find and install for most users
  • Potential to get featured by Apple and reach more customers
  • Can use features such as Sign in With Apple which are not available for apps distributed outside the Mac App Store

Mac App Store cons

  • Have to pay a 15% or 30% cut of all sales to Apple, depending on how much you make in a year across all apps
  • Every single update, no matter how minor, has to go through App Review and has the potential of being rejected for arbitrary and random reasons
  • Can’t unlock the full potential of macOS because of the strict sandboxing requirements
  • Can’t do paid upgrades

Direct distribution pros

  • Ship updates whenever you want, no need to wait for them to be reviewed and no fear of random rejections
  • Unlock the full potential of macOS with system extensions, daemons, no sandboxing, private API, and more
  • Keep a higher percentage of your sales
  • Do paid upgrades or other business models which are not allowed in the App Store
  • Live without the constant fear that your app will suddenly become a problem for Apple and be threatened with removal from the App Store

Direct distribution cons

  • Have to handle licensing, distribution, and updates (it’s not that hard, you’ll see)
  • Not as easy to do consumable or non-consumable in-app purchases (no StoreKit)
  • Can’t use some Apple services such as Sign in With Apple (others such as CloudKit still work just fine)

A note on Catalyst and SwiftUI

With the introduction of Catalyst, we’re now seeing many new Mac apps being released, since it’s a lot easier to take an existing iPad app and turn it into a Mac app. Apps ported to macOS through Catalyst are not required to be released in the App Store, even if their counterpart on iOS is.

Additionally, there is currently no TestFlight for macOS (one of my wishes for 2021), so if you’d like to distribute beta builds of a Catalyst app, you’ll have to do that outside the Mac App Store, and it is not that different from distributing a production app.

A lot of what I’m presenting here will also apply for Catalyst apps — they’re Mac apps, after all — but some might require additional hacking in order to work around the fact that Apple doesn’t want you to use the entirety of AppKit directly from within a Catalyst app. With a bit of work though, you can make a Catalyst app very Mac-capable, including support for AppleScript and other features.

For SwiftUI apps targeting the Mac, there should be no major differences with the distribution process, since you can use all features of the macOS API in a SwiftUI app without requiring a lot of hacking like it does for Catalyst apps.

Distribution

Distribution of an app involves two parts: actually uploading, storing and serving the app binary and its updates somewhere, and also producing the right package that will work for your users.

Hosting

The first major step with getting your Mac app in the hands of users without the App Store is to figure out how to distribute its binary. No App Store means that you’ll have to host your app’s binaries and updates somewhere on the internet and provide a link for your users to download it.

There are several ways you can go about this. For an open-source app, you can use GitHub releases and even host your app’s update feed in the GitHub repo. That’s how I distribute the WWDC app for macOS.

For my commercial apps, I’ve been using Backblaze B2 to store the app binaries, delta updates, and update feed, proxying all requests through Cloudflare so that I can have a custom domain for downloads and updates and also add filtering, caching, and server-side logic if needed.

B2 is an extremely affordable provider (I rarely pay over US$1 in a month). Most Mac apps are not that large in size, so even if your app is downloaded a lot, it’s unlikely that you’ll end up having to pay a lot of money for storage/bandwidth. Another popular option is using Amazon S3 buckets, but their control panel gives me nightmares so I prefer to use B2 which is a lot simpler (and less expensive).

I haven’t automated the publishing step for my app releases yet, so to upload a new release I just use Transmit as a client for my B2 buckets. Speaking of which, before we even get to upload a release to whatever provider we’ve chosen, there’s a very important step: producing the right file to put out there.

Notarization and packaging

When exporting an archived app from within Xcode, we get two main options for distribution: App Store Connect and Developer ID. To distribute apps without the App Store, you’re going to be using Developer ID.

The same developer account you use for distributing apps to the Mac App Store can be used to sign your apps for Developer ID distribution. The certificate itself is different, but Xcode will auto-generate and install one for you during the process of exporting the archive if you haven’t done so yet.

Since macOS Catalina, all apps distributed directly to users must be notarized by Apple, otherwise they won’t launch by default. This process uploads your app to Apple, which will then run automated malware checks and “staple” your binary with a special signature that will allow it to run. This is not App Review, it’s an automated check to prevent malware from being distributed through this method, and it is also a way for Apple to flag a single binary for malware, instead of a developer’s entire account, should it become compromised at some point.

Whether or not you notarize the binary directly from within the Xcode organizer will depend on which packaging method you’ll be using to distribute your app. We can’t just upload a .app directory to a server and let users download that, we have to turn it into a flat file. The simple way to do that is to just zip the app and distribute it as a zip file, but I’ve found through experience that distributing the app as a DMG file reduces support requests by quite a bit.

You’ve probably seen DMGs before when downloading Mac apps. They’re disk images that are mounted by macOS when double-clicked in Finder, and they can also provide some artwork instructing the user to drag the app into their Applications folder. This makes it easy for a user to figure out what to do, and it also reduces the chances that a given user will be running your app from their Downloads folder or some other random place like that.

If you’re going to be distributing your app as a DMG, you should just export it using the Developer ID option in Xcode, without notarization, then notarize the DMG itself. There’s no option in Xcode to export a DMG, so you’ll have to use a third-party tool. The one I like to use is create-dmg. I’ve also created and open-sourced dmgdist, a tool that automates the process of creating, uploading and stapling the DMG so that you can get the image ready to be distributed by running a single command.

To distribute the app as a zip file, the process is simpler: pick the upload option from Xcode after selecting “Developer ID” and it’ll produce a notarized version of your app, which you can then zip up and distribute directly.

App updates

Another aspect of the App Store is that it also handles app updates. Whenever we upload a new version to App Store Connect and it gets approved, users receive the update in the App Store. For apps distributed directly, we need to replicate that somehow.

The best way to do that — and the most common — is to use Sparkle. It’s been around for many many years and is pretty much the official way to distribute updates for Mac apps distributed outside the Mac App Store.

Sparkle is currently living a double life of sorts. You can either use the “legacy” version of Sparkle or use a more modern “v2” branch which includes many improvements such as the ability to update sandboxed apps. I still use the “legacy” version because it’s the one that I’m familiar with and I find that integrating the more modern version is still a bit more complicated. If it ain’t broke, don’t fix it.

The process of generating an app update usually goes as follows: ensure that with every update you increase the app’s version (of course), produce the package as described before (Sparkle can handle zips, DMGs, and installer packages), then use the generate_appcast tool to update the feed. After doing that, upload the deltas, the package for the new version, and the updated appcast feed to your hosting method of choice, at which point users will start seeing the new version when they check for updates within the app.
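For reference, a single entry in the appcast feed that generate_appcast maintains looks roughly like this. The app name, URL, versions, and length are all placeholders, and the signature attribute the tool adds is omitted here:

```xml
<rss version="2.0" xmlns:sparkle="http://www.andymatuschak.org/xml-namespaces/sparkle">
  <channel>
    <title>MyApp Changelog</title>
    <item>
      <title>Version 2.0.1</title>
      <pubDate>Fri, 08 Jan 2021 10:00:00 -0300</pubDate>
      <enclosure
        url="https://example.com/releases/MyApp-2.0.1.dmg"
        sparkle:version="201"
        sparkle:shortVersionString="2.0.1"
        length="1234567"
        type="application/octet-stream"/>
    </item>
  </channel>
</rss>
```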

It may sound complicated, and it definitely has a learning curve to it, but after you get things set up and working, it’s a really smooth process (way better than dealing with App Store Connect, if you ask me).

Making money outside the Mac App Store

If you’re looking to distribute your Mac app outside the App Store, chances are that at some point you’d like to make some money from it. Just as with the App Store, there are lots of different business models you could adopt, but by far the most common for apps distributed directly to customers is the good old paid upfront model: the user pays to download the app, registers it using a license key and gets updates for free, at least for a certain period of time.

Another business model that’s common for apps distributed outside the App Store is the subscription model, where users pay a certain amount monthly or yearly to keep using the app. Picking a business model could be an entire guide (or series of guides) by itself, so I’m not going to help you with that. I’ll assume paid upfront — which is the model that I use for my apps — for the remainder of this section.

In order to get paid for your product, you’ll need a storefront that users can visit to learn about it and (hopefully) purchase it. A good option for beginners is Gumroad, which offers a storefront page, payment processing, hosting, and licensing. When I first released AirBuddy back in January 2019, I used Gumroad, and it served me very well, with tens of thousands of copies sold during that year.

However, Gumroad was not initially designed to sell software, so it lacks some of the flexibility that other providers give you. With the release of my new app FusionCast and also AirBuddy 2.0, I moved over to Paddle, which is now handling payment and licensing for my apps. Another provider that’s often used is FastSpring; I don’t have a lot of personal experience with them, but many Mac apps use their service as well.

Another option is to simply use a payment provider, something like Stripe, then handle all of the fulfillment and licensing yourself. That way you get ultimate flexibility, although it is more work and will likely require you to hire additional providers (to send emails, for example).

I’d say that if you’re looking to make some money on the side by selling a Mac app outside the Mac App Store, Gumroad is the best option for you, since they handle pretty much everything and you won’t even have to create a website for your app. However, if you’re selling apps as a company or as your main source of income, a more professional solution such as Paddle will have fewer limitations and offer more flexibility.

Licensing, copy protection and piracy

A concern you might have about distributing Mac apps directly is piracy: anyone can get your app’s binary and run it without necessarily paying for a license, unless you employ some sort of copy protection.

While that is true, I’ve found that it is not worth it to developers — especially indies — to spend any significant amount of time working on copy protection. Yes, a few people out there are going to steal your work, but those are people who wouldn’t have paid for your app anyway, and any time you spend worrying about it or coding in some super advanced DRM into your app is time away from fixing bugs and developing new features. Additionally, this type of practice tends to end up punishing legitimate users more often than it stops piracy (just look at the numerous examples from the game industry).

The first version of AirBuddy didn’t even include any sort of copy protection, not even a simple registration form for the user to enter their license key. I did find a few pirated copies of the app available online (some of them with malware included, of course), but I saw no evidence that a large percentage of users were pirating the app, and my numbers didn’t reflect that either. In version 2, I’m using the Paddle SDK for registration during the app’s onboarding, but that’s all I’m doing.

Apps distributed through the Mac App Store are not automatically protected from piracy either: you have to manually check the App Store receipt to make sure the copy is legit. Most receipt verification code is trivial to crack, so an app distributed through the Mac App Store is no more protected from piracy than an app that’s distributed directly.

Marketing

I’m including this section mostly to make a point: there’s no major difference in marketing between distributing your Mac app directly or distributing it through the Mac App Store. These days, simply releasing an app in the App Store means almost nothing, since it’s very unlikely that users will just organically discover a brand new app without any external input.

What you might be able to skip when distributing through the App Store is having a website for your app, since the App Store page can then be used as the main storefront for it, but even then I’d say most apps can benefit from having a dedicated landing page that’s not just the App Store product page.

Marketing apps could be yet another guide on its own, but in general you’ll want to use every channel that’s available for you, especially if you already have a following (Twitter, Instagram, TikTok, etc). Sending your app (including a free license) to websites and people who cover Mac apps can also be a great way to get it out there. You can also do paid advertising on social media, podcasts, and publications.

That’s it!

I hope you found this guide useful. If you have any questions or comments, feel free to reach out on Twitter.

]]>
https://rambo.codes/posts/2020-09-08-creating-configurable-widgets-for-macos-big-surCreating configurable widgets for Big Sur (UPDATED)https://rambo.codes/posts/2020-09-08-creating-configurable-widgets-for-macos-big-surTue, 8 Sep 2020 09:00:00 -0300Update (September 29, 2020): With Xcode 12.2 beta 2, Apple has added an official template to create macOS Intents extensions.

One of the most exciting additions to iOS 14, WidgetKit is also available on macOS Big Sur, where it replaces the legacy “Today extensions”. Widgets created with WidgetKit can be configured by the user in a system UI that is driven by an intent definition set up in Xcode.

I’m not really a fan of Apple’s choice of using intents as a way to configure widgets, but I must admit that intent definitions are quite powerful and flexible. When it came time to create a Big Sur widget for AirBuddy 2.0 (in beta at the time of writing), I wanted to let the user pick which devices should show up there, given that the space is fairly limited.

In order for that to work, I would have to provide dynamic values for the widget configuration UI, which the docs say should be done through an intents extension. That’s where I ran into a bit of an issue.

All of the documentation with regards to configuring widgets was written specifically for iOS — or Mac Catalyst — apps, but AirBuddy is an AppKit Mac app. To provide dynamic options for a configurable widget, the app that hosts the widget must also include an intents extension, but if you go into Xcode’s File > New > Target menu, there is no intents extension template for the Mac. If you try to create a target using the template from the iOS tab, it won’t let you embed it in your Mac app.

At first, I thought this was maybe something that simply wasn’t supported on the Mac, but then I noticed that many of the built-in apps (such as Reminders) include dynamic configuration options for their new widgets. Looking at the bundle for the Reminders app on Big Sur, I could see that it includes an intents extension plug-in, and it’s not a Catalyst app, so it’s definitely possible.

Then I remembered that it’s possible to create custom templates for Xcode, something that I used previously for one of my articles. So I took the iOS intents extension template from Xcode, duplicated it, did some tweaks, and it actually worked!

Screenshot showing a widget in macOS Big Sur with its configuration menu

You can download the custom template here. To enable it, just copy the xctemplate folder into ~/Library/Developer/Xcode/Templates/Custom, then restart Xcode. After doing that, you’ll be able to select “Mac Intents Extension” from the “Other” tab when creating a new target.
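The copy step is just a couple of shell commands. The sketch below demonstrates them inside a temporary directory with a placeholder template folder; in practice the source is the downloaded .xctemplate folder and the destination is the real ~/Library/Developer/Xcode/Templates/Custom path:

```shell
# Demo inside a temp directory; SRC and DEST are placeholders.
WORK="$(mktemp -d)"

SRC="$WORK/Mac Intents Extension.xctemplate"   # stand-in for the download
mkdir -p "$SRC"

DEST="$WORK/Library/Developer/Xcode/Templates/Custom"
mkdir -p "$DEST"

# Copy the whole .xctemplate folder into the custom templates directory.
cp -R "$SRC" "$DEST/"
ls "$DEST"
```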

Screenshot of Xcode’s new target sheet with the “Mac Intents Extension” template selected

There is a sample project on my GitHub which includes a configurable widget with dynamic options provided by an intents extension.

Gotchas

The above solution does work, but there are some things to keep in mind. First, you must enable sandboxing for the widget target. It’s not enabled by default from the template, but the widget won’t work if it’s not sandboxed.

Something else worth pointing out is that I don’t know if this would be an issue when submitting the app to the Mac App Store. I suspect it wouldn’t, but since I haven’t submitted a Mac app to the store with this setup, I can’t say for sure.

And just in case there's someone from Apple reading this, check out FB8651868 😇.

I hope you’ve found this article useful. If you have anything to say, feel free to reach out on Twitter.

Turning the ChibiStudio canvas into an App Clip for iOS 14
https://rambo.codes/posts/2020-08-29-turning-the-chibistudio-canvas-into-an-app-clip
Sat, 29 Aug 2020 10:00:00 -0300

One of my favorite new things announced during this year’s WWDC was App Clips. They allow developers to offer a small experience from their app to users, without the need to install the entire app from the App Store.

App Clips can be activated through a variety of methods: NFC tags, QR codes, special App Clip codes from Apple, or through a simple link somewhere, like your app’s website.

When I saw the announcement, I immediately had the idea to create an App Clip for my app, ChibiStudio. It’s a great way to show users what the app has to offer before they actually commit to downloading the full app.

Two iPhones side by side showing the ChibiStudio canvas with a grid of items below and a chibi being edited above

ChibiStudio is an app that allows users to create their own chibis by selecting from a variety of items, similar to other avatar creation apps or even Memoji. I knew that would be the experience we’d like to turn into an App Clip.

But putting the creation experience from ChibiStudio into an App Clip proved to be a significant challenge. One of our goals with ChibiStudio has always been to make as much of the app usable offline as possible, which means we ship all of the assets required for chibi creation within the app bundle itself; we don’t download them after the fact.

App Clips, though, are limited to 10MB in size after going through app thinning, so the main challenge with creating the chibi editing experience for the App Clip was finding ways to reduce the size of the app as much as possible.

Check out the result below and read on for the technical details:

Making the inventory fit

The first thing we had to do was to reduce the size of the item inventory itself. The full app has thousands of items split into a little over 20 item packs, some of which are premium, which means they’re unlocked by in-app purchases.

Since App Clips can’t have in-app purchases, all of the premium packs were removed. But even with all of them gone, the free packs alone were about 50MB in size. I asked my artist to pick just two packs and remove a bunch of items from them, anything that wouldn’t hurt the experience too much.

The tricky part was that he wouldn’t be able to tell exactly how much smaller the packs were getting, since they’re compiled before being added to the app, and that compilation step is only done by me. So after receiving the two reduced packs from him, I compiled them, but they alone were over 12MB in size. Ouch!

Here I must explain a little bit about how the packs are compiled. An SQLite database is exported which contains metadata about all packs and items, but that’s very small (less than 100KB). The actual assets are compiled into asset catalogs and contained in bundles (a bundle for each pack). Each item in a pack is composed of two assets in the catalog: a data asset containing the compressed vector data (Core Animation archive) for the item — used in the canvas — and an image asset containing a preview image for that item — which is used in the item drawer.

Reducing vector data size with LZMA compression

The first thing I did was to change the compression used for the vector data of the items. I was using zlib (gzip) since day one of the app, but I had a look at Apple’s Compression framework and noticed that the LZMA algorithm promised a high-compression ratio, so I decided to try that out.

When compiling all of the packs for the main app, using the LZMA algorithm reduced the full size of the inventory from 127.6MB to 78.5MB, without impacting runtime performance in any noticeable way. That’s great, since it reduces the size of the main app by a lot, but even then, the size of the reduced packs for the App Clip was still a little over 8MB by itself, and that’s not counting all of the code and assets that would have to go into it 😬.
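As a sketch of the kind of change involved (the surrounding pipeline and function names are my assumption, but NSData’s compressed(using:) and decompressed(using:) are the Foundation APIs for this on iOS 13 and later):

```swift
import Foundation

// Compressing an item's vector data (the Core Animation archive) with LZMA
// instead of zlib. Function names are illustrative, not ChibiStudio's code.
func compressVectorData(_ data: Data) throws -> Data {
    try (data as NSData).compressed(using: .lzma) as Data
}

// And decompressing it at runtime, when the canvas needs the item:
func decompressVectorData(_ data: Data) throws -> Data {
    try (data as NSData).decompressed(using: .lzma) as Data
}
```

Since LZMA trades slower compression for a better ratio and fast decompression, it fits this use case well: the expensive encoding happens once at pack-compile time, while devices only ever decode.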

Rendering item previews on the fly

After looking at the compiled asset catalogs, I noticed that most of their file size was due to the preview images, so I decided to see what would happen if I just removed them. Initially, the item drawer was just showing a bunch of blank squircles, which makes sense since there were no preview thumbnails to be used, but my idea was to instead generate the thumbnails at runtime.

I had tried that out before, but it was way back in the iOS 10 days when devices were much slower. It turns out that it’s perfectly viable to render the previews for items on the fly. I’m still not sure if I’m going to stop including the pre-baked previews in the main app, but at least for the App Clip, I’m rendering those at runtime.

For those who are curious about how it’s done in practice, there’s nothing special about it. For every item that needs to be rendered, the cell requests the image from the inventory, which owns a custom serial dispatch queue where work items perform the rendering using UIGraphicsImageRenderer. From the point of view of the cell, it looks very similar to downloading an image from the internet, so the same caveats about doing asynchronous work with recyclable cells apply.

The resulting image for each item is cached in memory using NSCache, but I could probably also cache it using file storage so that it would persist between different runs of the app.
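A stripped-down sketch of that flow (my assumption of the shape, not ChibiStudio’s actual code): the render closure stands in for the UIGraphicsImageRenderer work, so the queue-plus-cache logic is visible on its own:

```swift
import Foundation
import Dispatch

// Sketch of an on-the-fly preview pipeline: a serial queue runs the expensive
// rendering work, and NSCache memoizes the results in memory. In a real app
// the `render` closure would wrap UIGraphicsImageRenderer; here it's injected
// so the sketch stays UIKit-free.
final class PreviewProvider<Image: AnyObject> {
    private let queue = DispatchQueue(label: "codes.rambo.preview-rendering")
    private let cache = NSCache<NSString, Image>()
    private let render: (String) -> Image

    init(render: @escaping (String) -> Image) {
        self.render = render
    }

    func image(for itemIdentifier: String, completion: @escaping (Image) -> Void) {
        // Cache hit: return immediately, no rendering work needed.
        if let cached = cache.object(forKey: itemIdentifier as NSString) {
            completion(cached)
            return
        }
        // Cache miss: enqueue a work item on the serial rendering queue.
        queue.async {
            let image = self.render(itemIdentifier)
            self.cache.setObject(image, forKey: itemIdentifier as NSString)
            // A real app would hop back to the main queue before touching UI.
            completion(image)
        }
    }
}
```

Because the completion is asynchronous on a cache miss, the cell must check that it’s still displaying the same item when the image arrives, exactly like the image-download caveat mentioned above.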

Checking the thinned App Clip size locally

As much as I would like to tell you that all of the work described above was enough to make the whole chibi canvas experience fit in under 10MB, unfortunately I can’t, because that was only half the battle.

After doing all of that work and actually creating the App Clip experience, which was quite easy because I was mainly leveraging existing work from shared frameworks within the app, I decided to check how I was doing in terms of App Clip size.

The 10MB limit applies after app thinning, so to test that out locally we need to archive and export the app for Ad Hoc distribution. When exporting, Xcode will actually offer a choice between the app and the App Clip, so I picked the App Clip and selected “All compatible device variants” in the “App Thinning” dropdown.

Xcode sheet offering to select between an App Clip and the app itself and a separate sheet where all compatible device variants is selected in the app thinning dropdown

Looking at the resulting IPA file — the one with no UUID next to it, which is universal — it was 9.5MB, just under the 10MB limit. So I thought I was ready to upload it to TestFlight (narrator: “he was wrong”).

After uploading to TestFlight and waiting a while for it to process, I got this lovely e-mail from App Store Connect:

An e-mail message from App Store Connect saying that the build was invalid because the app clip was over the 10MB limit after thinning

Turns out, when talking about the 10MB limit, they’re not referring to the size of the compressed IPA file, but the size of the actual .app package inside of it. So in order to verify that locally, we need to unzip the IPA file. After doing that for the build, this is what I saw:

A screenshot from Finder showing the dot app file which is over 13MB in size

So even though my IPA package was under 10MB, the App Clip itself after being extracted was over 13MB; I still had some work to do.

Making the code fit

We tend to underestimate how large code itself can get, especially when working with Swift. ChibiStudio looks simple on the surface, but it’s actually quite complex with lots of features, many of which are not related to the experience we wanted to provide in the App Clip.

The app is already broken down into multiple frameworks, which is how I like to work since it makes it easier to deal with app extensions and make sure things are organized properly, avoiding coupling where it’s not appropriate.

Unfortunately, over time, most of the app ended up being implemented inside the CutenessUI framework, which is the framework that implements most of the user interface of the app, including basic components, most of the theming, and the chibi canvas itself, which is what we want in the App Clip.

Another side-effect of implementing a large portion of the app in that framework was that it also depends on other frameworks, and at least one of them will not be needed in the App Clip.

Below you can see a diagram of the app’s components, before the changes I made to make the App Clip smaller:

App architecture diagram showing the different frameworks that make up the app, including CutenessUI which is very large

As you can see, CutenessUI had lots of things below it, and it was itself quite large. In order to reduce its scope for the App Clip, I decided to move the canvas to a separate framework, which I called CutenessCanvas. This new framework couldn’t depend on CutenessUI, but it needed some base UI code from it, so I moved that base UI code to yet another framework, which I called CutenessUIFoundation. Doing that also removed the need to include the Bengoshi framework (analytics and in-app purchasing code) from the App Clip, which doesn’t need that functionality.

This is what the modules look like, now with the App Clip and the UI frameworks split into multiple modules:

App architecture diagram showing the different frameworks that make up the app, now with the app clip also being shown and with CutenessUI broken up into CutenessUIFoundation and CutenessCanvas

When comparing the previous build with the CutenessUI framework, which was 3.8MB in size, the CutenessUIFoundation and CutenessCanvas frameworks in the new build add up to just above 2MB in size.

I still had to do some more work to reduce the size even more, which I accomplished by removing even more items from the reduced packs, and also finding some old unused code which was still being compiled into the core and UI frameworks.

After all of that work, I finally had a version of the App Clip which was just under 10MB — it ended up at 9.5MB. I’m sure I can still find some unused code in there somewhere and come up with other clever ways to reduce its size even more, but I’m happy with how it’s working so far.

As much as the work described in this post might sound like a chore, I actually had a lot of fun during this entire process. Making great things with constrained resources can be a fun challenge, so don’t give up if your initial attempts at making an App Clip prove difficult due to the 10MB limitation.

This post focused on making my particular app experience fit in an App Clip, but there’s a lot more to App Clips than just making them be under 10MB. One of them is testing, and for that I highly recommend this post from Kushagra.

And finally, if you’d like to try out the App Clip that I made for ChibiStudio, you can find it on TestFlight.

The big Facebook crash of 2020 and the problem of third-party SDK creep
https://rambo.codes/posts/2020-05-07-the-big-facebook-crash
Thu, 7 May 2020 09:00:00 -0300

UPDATE JUL 10, 2020: It has come to my attention that implementing the Facebook login flow without using their SDK is against their terms of service for third-party applications, further confirming that Facebook is more interested in gathering data about an app's users than it is in providing a useful service (shocker).

NOTE: I know we’re not living in the best time of our lives right now. If you're the type of person who worries about their privacy and security online, this post may cause you anxiety.

May 6th, 2020. It was 8 pm, I had just finished work for the day and was about to order dinner from one of my favorite places when the app I use for that simply wouldn’t launch. It crashed on launch every single time.

I happen to have a friend who works on that app’s engineering team, so I sent them a video of the issue just to make sure it was on their radar, when I was told it was actually a problem with the Facebook SDK and that it was crashing many other iOS apps as well. So I tried launching TikTok, and it crashed. Spotify, crashed. The app I work as a contractor for, crashed. It was bad.

The issue was caused by some bad data being sent by Facebook’s server to their SDK, which caused code in the SDK to crash, which in turn brought down the app that was running the SDK. Since this happened during the initialization of the SDK — something that occurs right after launching the app — the apps simply became unusable. You can read more about it here.

I did find a workaround that allowed me to order dinner though. Since the crash was caused by data sent by Facebook’s servers, I blocked the facebook.com domain (and all of its subdomains) on my network using Pi-Hole. I wasn’t going to starve because of Facebook.

The big SDK problem

You know how people are saying these days that it’s dangerous how companies like Apple and Google control their ecosystems, to the point of accusing them of monopoly? I’m not going to dismiss that completely here, but I think we have a much bigger problem that’s been lurking in our apps for several years, unnoticed: third-party SDK creep.

It’s quite possible that every single app you use on any particular day is running code from Facebook, Google and other data-gathering and data-mining companies. Because of the way this code is integrated — by linking to a dynamic library at build time — it means these companies can effectively control those apps, or worse, access all of the data those apps have access to.

We saw a demonstration of this power yesterday: it was as if Facebook had an “app kill switch” that they activated, and it brought down many people’s favorite iOS apps — Apple’s appocalypse video never felt so real. Of course it was a bug and not something done intentionally, but it highlights the point that they do have control over apps that include their code.

Even if you don’t sign in with Facebook in a particular app, the app will run Facebook’s code in the background just for having the SDK included. You don’t need a Facebook account for it to track you either, they can track people very well without one.

The technical solution

There are some technical workarounds which could be applied to this problem. It’s clear that many people want to use Facebook as a login method, so “just remove it” is not as easy as it might seem. The same thing goes for “Just implement Sign in with Apple”. Whenever someone starts a phrase about programming with “just”, any senior developer’s eyebrows rise.

The first solution would be for developers to implement features such as Facebook login without employing their SDK. Facebook even offers documentation on how to do that. Implementing the login flow without using their SDK would ensure that only users who actually log in with Facebook get their data processed by Facebook.
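On iOS 12 and later, one way to implement that is ASWebAuthenticationSession, which drives a standard OAuth redirect flow in a system-provided browser session with no Facebook code in the binary. A sketch (the client ID, redirect scheme, and query parameters are placeholders; see Facebook’s manual login flow documentation for the real values):

```swift
import AuthenticationServices

// Sketch of an SDK-free Facebook login using the system browser session.
// The client ID, redirect scheme and query parameters are placeholders.
final class FacebookLoginController {

    // Keep a strong reference to the session while it's running.
    private var session: ASWebAuthenticationSession?

    func startLogin() {
        var components = URLComponents(string: "https://www.facebook.com/v7.0/dialog/oauth")!
        components.queryItems = [
            URLQueryItem(name: "client_id", value: "YOUR_APP_ID"),
            URLQueryItem(name: "redirect_uri", value: "myapp://auth")
        ]

        session = ASWebAuthenticationSession(
            url: components.url!,
            callbackURLScheme: "myapp"
        ) { callbackURL, error in
            // Parse the code or token out of callbackURL and exchange it
            // with your own server; only users who actually log in with
            // Facebook ever have data sent to Facebook.
        }
        // On iOS 13+, also set session?.presentationContextProvider here.
        session?.start()
    }
}
```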

The other solution would be some form of sandboxing that isolates this type of SDK from the main app code. Apple’s operating systems already have and use XPC extensively — and iOS supports extensions — but iOS still doesn’t expose that functionality to third-party developers.

Having analytics SDKs isolated from the main app code would prevent those SDKs from slurping user data without the app explicitly sending it to them. It could also be used to implement some form of permission dialog: imagine launching an app and seeing a prompt that reads “This app would like to send your location to Facebook. Is that ok?”.

It’s not a matter of developers just being careful with what SDK code they call. Yesterday’s problem would happen to an app even if it just included the SDK, even if it never called any methods on it.

The cultural solution

Many people rush to blame engineers for these types of problems. “Of course it’s the engineers’ fault: they included the SDK after all, didn’t they?”.

Even though it was technically an engineer who programmed the SDK into their company’s app, those types of decisions are usually top-down. Someone over at marketing decides they want better analytics on their Facebook campaigns, they talk to the product people and the product people just order that from the engineers.

And that’s where the problem is. This type of decision must go through engineering — and preferably someone from infosec as well — the marketing department shouldn’t have the power to mandate that random third-party code be added to an app.

Unfortunately, I don’t see this changing any time soon, so I believe the solution will have to be technical.

Let’s see what WWDC brings this year.

Implementing mouse pointer interactions on iPad
https://rambo.codes/posts/2020-03-19-implementing-mouse-interactions-on-ipad
Tue, 24 Mar 2020 18:00:00 -0300

The new iPad Pro is here! It features a brand-new LiDAR sensor, and there’s also a cool new Magic Keyboard with a built-in trackpad coming in May. But the best thing about this latest Apple launch for developers is the new and improved mouse and trackpad support in iPadOS 13.4, which works on every iPad model that can run that version.

Mouse interaction on iPad doesn’t work the same way it does on a Mac: the cursor can morph into UI elements, working similarly to the focus engine on tvOS. So if you want to make sure your app works great with pointing devices on iPad, there’s some work you have to do. Since there are new APIs for developers to adopt mouse interactions, I decided to write up this quick, simple guide to help you get started with that.

First of all, make sure your test device is running the iPadOS 13.4 GM (called “beta 6” in the developer portal). Also make sure you have downloaded the Xcode 11.4 GM, which includes the new APIs in the iOS SDK. If you’re reading this after March 24, 2020, then the final public releases of iPadOS 13.4 and Xcode 11.4 should already be out.

UIHoverGestureRecognizer

This subclass of UIGestureRecognizer has been available since iOS 13.0, and was designed to be used on iPad apps running on the Mac via Catalyst. It triggers its .began state when the mouse cursor enters the view, and its .ended state when it exits the view.

Using UIHoverGestureRecognizer, you can create simple hover effects, such as scaling up a UI element when the mouse is over it. The first step is to add the gesture recognizer to some view, like you’d do with any other gesture recognizer:

let gesture = UIHoverGestureRecognizer(target: self, action: #selector(viewHoverChanged))
targetView.addGestureRecognizer(gesture)

Then you need to implement the method that’s called when the gesture’s state changes, and animate your view to the desired appearance:

@objc private func viewHoverChanged(_ gesture: UIHoverGestureRecognizer) {
    UIView.animate(withDuration: 0.3, delay: 0, options: [.allowUserInteraction], animations: {
        switch gesture.state {
        case .began, .changed:
            self.targetView.layer.transform = CATransform3DMakeScale(1.1, 1.1, 1)
        case .ended:
            self.targetView.layer.transform = CATransform3DIdentity
        default: break
        }
    }, completion: nil)
}

Notice how the example above uses the longer version of the animate method, which includes the option .allowUserInteraction. It's very important that you include that option when animating during a hover gesture, otherwise your animations can end up cancelling click events.

Here’s what that looks like:

Simple hover gesture

That’s it! With a simple gesture recognizer you can make your UI elements come to life when the cursor is over them on iPadOS. If you’ve already ported your iPad app to the Mac with Catalyst, make sure you’re including your UIHoverGestureRecognizer code when compiling for iOS as well, so that your users on iPad can benefit from the same hover effects you have on the Mac.

That’s a very simple way to improve mouse support on your iPad app. But as I mentioned before, the system has special treatment for some UI elements. Try it: move the pointer over icons in SpringBoard and notice how the cursor snaps to each icon and has a cool parallax effect when you move it. The same happens for bar button items.

The good news is that you can implement the same effect in your apps, using UIPointerInteraction.

UIPointerInteraction

If you want to go even further with pointer support in your app, you have to use UIPointerInteraction. It implements the UIInteraction protocol, a protocol I’m a huge fan of, since it can encapsulate complex interactions in a very simple API (an article specifically about UIInteraction is in the works, stay tuned).

Basic pointer interaction

We can start simple and just add a UIPointerInteraction to some view, like so:

let interaction = UIPointerInteraction(delegate: nil)
targetView.addInteraction(interaction)

Just by doing that, you get the effect of the mouse cursor transforming into the shape of the view (in this case, a rounded rectangle):

Custom pointer interaction

It's important to note that if your view's area is too large, the parallax effect won't be applied.

Advanced pointer interaction

If you’d like to customize the pointer interaction further, you have to implement the UIPointerInteractionDelegate protocol. It lets you customize the region of the view that triggers the pointer interaction, what your view looks like while the interaction is happening, and even the shape of the cursor.

Let’s say you have a view that’s shaped like a star, and it has a property, starPath, which is the UIBezierPath representing the star being drawn. In your implementation of UIPointerInteractionDelegate, you can customize the shape of the cursor by implementing the following method:

func pointerInteraction(_ interaction: UIPointerInteraction, styleFor region: UIPointerRegion) -> UIPointerStyle? {
	let params = UIPreviewParameters()
	params.visiblePath = starView.starPath
	
	let preview = UITargetedPreview(view: starView, parameters: params)

	return UIPointerStyle(effect: .automatic(preview), shape: .path(starView.starPath))
}

By doing that, the cursor will transition into the star shape of the view. The same can be done for any type of shape, so that the cursor disappears neatly into your view.

Two other delegate methods that are worth mentioning are pointerInteraction:willEnterRegion:animator: and pointerInteraction:willExitRegion:animator:. They can be used to perform custom animations alongside the cursor transition animation.

If you’d like to change your view’s background color while the interaction is happening, you could implement these delegate methods like so:

func pointerInteraction(_ interaction: UIPointerInteraction, willEnter region: UIPointerRegion, animator: UIPointerInteractionAnimating) {
	animator.addAnimations {
		self.starView.backgroundColor = .systemPink
	}
}

func pointerInteraction(_ interaction: UIPointerInteraction, willExit region: UIPointerRegion, animator: UIPointerInteractionAnimating) {
	animator.addAnimations {
		self.starView.backgroundColor = .systemYellow
	}
}

By adding property changes to the animator, the system ensures the changes animate alongside the cursor transition.

This was a very quick overview on how to implement custom pointer support on iPad for your apps. To learn more, check out Apple’s official documentation.

Writing command line interfaces for iOS apps
https://rambo.codes/posts/2020-03-01-writing-command-line-interfaces-for-ios-apps
Sun, 1 Mar 2020 18:00:00 -0300

Writing automated tests like unit, integration, or UI tests can be a great way to have reproducible steps that ensure an app is working the way we expect it to. But there are some circumstances where automated testing just doesn’t cut it.

Some behaviors of mobile apps have subtleties that can only be assessed by holding a device in your hand — just like your users do — and seeing how things behave. When something is not working as expected, repeatedly testing things by hand or going back to a known initial state for debugging can be a pain.

There are countless ways to go about creating a better environment for debugging and iteration while working on iOS apps, such as using launch arguments, environment variables, or having an internal settings or debug menu inside the app itself where you can tweak things. I believe every shipping app should include those, since they improve the development process significantly.

ChibiStudio’s internal settings

But even with all of those options available, I still think there’s room for one more: a command line interface. Yes, you read it correctly: I wrote a command line interface for my iOS app.

chibictl

Since the app is called ChibiStudio, I decided to call this command line tool chibictl (it’s pronounced “chee bee cee tee el”, Erica).

Watch the video below for a look at some of the things this tool can do (so far) and read on for the technical bits on how it works.

How is it made?

We can’t write or run command line tools on regular iOS devices (yet, FB7555034). Besides, the whole point of having a command line interface for an iOS app would be to run it from your Mac so you don’t need to interact with the device itself.

Thus, there needs to be a way to send data back and forth between a Mac and iOS devices (or the Simulator). There’s probably some way to do it over the wired Lightning connection, and we could also spin up a socket or HTTP server on the device, but I decided to use the MultipeerConnectivity framework.

The framework allows iOS, Mac and tvOS — no watchOS yet, sorry — devices to talk to other nearby devices over WiFi, peer-to-peer WiFi or Bluetooth. The cool thing about it is that the underlying communication is abstracted for you, so there’s no need to worry about the low-level networking bits.

Even though MultipeerConnectivity provides a somewhat high-level API, in my previous experiences using it, I noticed that there tends to be quite a bit of boilerplate involved in getting it all up and running. For that reason, I decided to create the MultipeerKit library, which lets you set up communication between devices very easily, like so:

// Create a transceiver (make sure you store it somewhere, like a property)
let transceiver = MultipeerTransceiver()

// Start it up!
transceiver.resume()

// Configure message receivers
transceiver.receive(SomeCodableThing.self) { payload in
	print("Got my thing! \(payload)")
}

// Broadcast message to peers
let payload = SomeEncodableThing()
transceiver.broadcast(payload)

My favorite thing about it is that it allows you to send and receive anything that conforms to the Codable protocol, registering a specific closure to be called for each type of entity you want to transfer between peers.

Practical example: app side

Listr

The code snippets below are from the Listr sample app. It’s a very simple to-do app written in SwiftUI, which includes a CLI tool for interacting with its data.

Since it’s easy to exchange data between a Mac and iOS devices or the Simulator using MultipeerKit, I decided to represent each possible command the CLI can send as a struct.

Here’s an example, showing the struct that represents the “add item” command:

struct AddItemCommand: Hashable, Codable {
    let title: String
}

This is a very simple command with just a single property — the title of the list item to be added. There are some commands which don’t even require any arguments, but are defined as structs conforming to the Codable protocol just so that I can receive them with MultipeerTransceiver.

An example of such a command that takes no input is the ListItemsCommand:

struct ListItemsCommand: Hashable, Codable { }

In order to respond to these commands being sent by the command line tool, I’ve implemented a CLIReceiver, which registers a MultipeerTransceiver with the service type listrctl (the name of the CLI tool).

With the transceiver in place, I can then register handlers for each command the app supports:

final class CLIReceiver {

    // ...
    
    func start() {
        transceiver.receive(AddItemCommand.self, using: response(handleAddItem))
        transceiver.receive(ListItemsCommand.self, using: response(handleListItems))
        transceiver.receive(DumpDatabaseCommand.self, using: response(handleDumpDatabase))
        transceiver.receive(ReplaceDatabaseCommand.self, using: response(handleReplaceDatabase))

        transceiver.resume()
    }

}

Notice how each command handling function is wrapped by a response function. That’s how the receiver can send data back to the command line tool. In order to do that, I defined another model, CLIResponse, which is actually an enum:

enum CLIResponse: Hashable {
    case message(String)
    case data(Data)
}

Why an enum? I didn’t want each command to have a specific response type, since that would complicate the implementation significantly. What I figured out was that I really only needed two types of responses: a message describing what happened or returning human-readable content for the CLI user, or some binary data that the CLI will then write to a file. So that’s why I decided to go with an enum with associated values: String or Data.

Making this enum conform to the Codable protocol requires custom init(from decoder: Decoder) and encode(to encoder: Encoder) implementations, which are quite simple, but I won’t include them here for brevity — check the sample app for the full implementation.
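For reference, here’s one way that conformance could look. This is my own sketch; the implementation in the sample app may differ in details:

```swift
import Foundation

enum CLIResponse: Hashable {
    case message(String)
    case data(Data)
}

// Sketch of a custom Codable conformance for an enum with associated values:
// encode whichever case we have under its own key, and on decode, pick the
// case based on which key is present.
extension CLIResponse: Codable {
    private enum CodingKeys: String, CodingKey {
        case message
        case data
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        if let message = try container.decodeIfPresent(String.self, forKey: .message) {
            self = .message(message)
        } else {
            self = .data(try container.decode(Data.self, forKey: .data))
        }
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        switch self {
        case .message(let message):
            try container.encode(message, forKey: .message)
        case .data(let data):
            try container.encode(data, forKey: .data)
        }
    }
}
```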

Back to that response wrapper, this is what it looks like:

private func response<T: Codable>(_ handler: @escaping (T) -> CLIResponse) -> (T) -> Void {
    return { [weak self] (command: T) in
        let result = handler(command)

        self?.transceiver.broadcast(result)
    }
}

All it does is call the handler function that was passed in, which returns a CLIResponse. It then broadcasts that response to all connected peers — in this case, just the device running the command line tool. This was the best way I found to implement handlers as functions that take the specific command struct as input and return a CLIResponse as an output, leaving the multipeer communication to the receiver itself, rather than having to repeat the broadcast call everywhere.

How each command handler itself is implemented depends entirely on the app. Here’s an example of how I’ve implemented the “list items” command handler:

private func handleListItems(_ command: ListItemsCommand) -> CLIResponse {
    let list = app.store.items.map { item in
        "\(item.done ? "[✓]" : "[ ]" ) \(item.title)"
    }.joined(separator: "\n")

    return .message(list)
}

In that context, app is just a computed property that returns the AppDelegate, which in turn has a property of its own that’s the data store used by the app. Accessing the app delegate like this is not necessarily a good practice in app code, but since this is an internal, debug-only feature, I didn’t think it was that big an issue.

Speaking of internal and debug-only, all code related to the CLI tool that’s included in the app is between #if DEBUG and #endif, to ensure it’s never included in the app when being uploaded to the App Store. Something else I do in some of my apps is to have a separate Internal configuration, which allows me to upload internal versions of the app — with all debugging tools included — to TestFlight, for internal testers.

Practical example: CLI side

That’s an overview of how the command receiver was implemented in the app. Now let’s see how the command line tool itself can be implemented. The first step is to add a new target to the Xcode project, selecting macOS > Command Line Tool from the new target sheet when prompted. I called the CLI listrctl.

Creating listrctl target

All command models must be included as part of both the iOS app target and the CLI target, since both will need to use them. Also, MultipeerKit must be included in the CLI’s “Frameworks & Libraries”.

Since this is a command line tool, one of its tasks will be to parse arguments passed to it. Luckily, Apple has recently released the ArgumentParser library, which greatly simplifies that task, so I decided to use it for the command line tool.

Just like the app has a CLIReceiver, the command line tool has a CLITransmitter — technically, they’re both transceivers, since they can both send and receive data. The transmitter is responsible for sending the commands to nearby iOS devices or Simulator instances, and receiving the CLIResponse structs sent by the iOS app.

There’s a problem, though: I’m writing a command line tool, which is inherently synchronous — it runs until it gets results back, then stops — but at the same time I’m dealing with MultipeerConnectivity through MultipeerKit, a highly asynchronous process.

That means the transmitter has to wait until it sees a connected device, send the command, wait for a reply to come back, then finally show the results to the CLI user and terminate its process.

Here’s how I’m achieving that:

final class CLITransmitter {

    static let current = CLITransmitter()

    static let serviceType = "listrctl"

    private lazy var transceiver: MultipeerTransceiver = {
        var config = MultipeerConfiguration.default

        config.serviceType = Self.serviceType

        return MultipeerTransceiver(configuration: config)
    }()

    var outputPath: String!

    func start() {
        transceiver.receive(CLIResponse.self) { [weak self] command in
            guard let self = self else { return }

            switch command {
            case .message(let message):
                print(message)
            case .data(let data):
                self.handleDataReceived(data)
            }

            exit(0)
        }

        transceiver.resume()
    }

    private func handleDataReceived(_ data: Data) {
        try! data.write(to: URL(fileURLWithPath: outputPath))
        outputPath = nil
    }

    private let queue = DispatchQueue(label: "CLITransmitter")

    private func requirementsMet(with peers: [Peer]) -> Bool {
        !peers.filter({ $0.isConnected }).isEmpty
    }

    func send<T: Encodable>(_ command: T) {
        queue.async {
            let sema = DispatchSemaphore(value: 0)

            self.transceiver.availablePeersDidChange = { peers in
                guard self.requirementsMet(with: peers) else { return }

                sema.signal()
            }

            _ = sema.wait(timeout: .now() + 20)

            DispatchQueue.main.async {
                self.transceiver.broadcast(command)
            }
        }

        CFRunLoopRun()
    }

}

As you can see in the start method, the transmitter registers a receiver for the CLIResponse type. When a response comes in, if it’s a String, it’s printed to the console; if it’s Data, it’s written to the filesystem location set by the outputPath property. After that’s done, the process is terminated successfully by calling exit(0).

The send method is the most interesting one. It immediately dispatches to a separate queue, then sets up a semaphore to wait on that queue until a device that meets the criteria is found. In the sample app, any remote device that’s currently connected to the machine running the CLI meets the criteria. Once the criteria are met, it broadcasts the command to all connected devices. It’s worth noting that MultipeerKit automatically establishes connections to nearby peers by default, so I don’t have to do any of that manually.

The CFRunLoopRun() call after that just keeps the main runloop active, which is necessary for all of the multipeer machinery to do its job.
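Stripped of the multipeer specifics, the blocking behavior in send boils down to a small, reusable pattern. The sketch below is a Foundation-only reduction of it; the names are illustrative, not from the sample app.

```swift
import Foundation
import Dispatch

// Park the current queue on a semaphore until an asynchronous event fires,
// with a timeout as a safety net in case no peer ever connects.
func waitForEvent(timeout: TimeInterval,
                  onReady register: (@escaping () -> Void) -> Void) -> Bool {
    let sema = DispatchSemaphore(value: 0)

    // The asynchronous side (availablePeersDidChange, in the real code)
    // calls the closure we hand it once its condition is satisfied.
    register { sema.signal() }

    return sema.wait(timeout: .now() + timeout) == .success
}

// Simulate a peer appearing 100ms later on another queue.
let connected = waitForEvent(timeout: 5) { ready in
    DispatchQueue.global().asyncAfter(deadline: .now() + .milliseconds(100), execute: ready)
}
```

If the event never fires, the wait returns after the timeout instead of hanging forever, which is why the real transmitter uses `.now() + 20` as its deadline.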

The main.swift file in listrctl is where commands are defined, using the API provided by ArgumentParser. Here’s an excerpt:

CLITransmitter.current.start()

struct ListrCTL: ParsableCommand {
    
    static let configuration = CommandConfiguration(
        commandName: "listrctl",
        abstract: "Interfaces with Listr running on a device or simulator.",
        subcommands: [
            Item.self,
            Store.self
    ])

    // ...

    struct Store: ParsableCommand {

        static let configuration = CommandConfiguration(
            commandName: "store",
            abstract: "Manipulate the data store.",
            subcommands: [
                Dump.self,
                Replace.self
            ]
        )

        struct Dump: ParsableCommand {

            static let configuration = CommandConfiguration(
                commandName: "dump",
                abstract: "Dumps the contents of the store as a property list."
            )

            @Argument(help: "The output path.")
            var path: String

            func run() throws {
                CLITransmitter.current.outputPath = path

                send(DumpDatabaseCommand())
            }

        }

        // ...
    }
}

ListrCTL.main()

This part of the CLI almost feels declarative, since it’s just defining the commands that can be invoked, and they defer all major work to CLITransmitter.

I can call send from within the command’s run method because of this extension I made:

extension ParsableCommand {
    func send<T: Encodable>(_ command: T) {
        CLITransmitter.current.send(command)
    }
}

Here's the result:

Conclusion

That’s it! I know writing command line interfaces for iOS apps can seem like a crazy idea, but I encourage you to try it out and see how it can improve your development and testing workflow. Try out the sample app and explore the code in more detail.

CloudKit 101
https://rambo.codes/posts/2020-02-25-cloudkit-101
Tue, 25 Feb 2020 02:00:00 -0300

Note: this article is a revision of the article I wrote back in 2017. If you’d like to listen to an informal conversation about CloudKit, check out iPhreaks episode #226.

Apple introduced CloudKit in 2014. Since then, it has received many improvements, like the ability to use it outside Apple's platforms and to use it on apps distributed outside the App Store on the Mac.

Even though CloudKit is widely used by Apple in first-party apps and by some developers, I believe it has the potential to become even more popular as a backend and sync solution for apps on Apple’s platforms, and that’s why I decided to write this introductory article.

Is CloudKit safe to use?

Unfortunately, whenever talking about Apple and the cloud, the question “is this safe to use in production?” arises. It's true that Apple has had issues with cloud services before, but the good news is this doesn't apply to CloudKit.

You don't have to take my word for it. Do you use Apple Notes? Photos? iCloud Drive? Activity sharing on Apple Watch? These are all powered by CloudKit and they've been working just fine for me.

The thing to be aware of is that, even though CloudKit is a server-side technology, it’s still driven by the client — your app — so the success of your sync largely depends on how well the client is implemented. Luckily, with the introduction of NSPersistentCloudKitContainer last year, the most common use case — syncing private user data across devices — is mostly taken care of for you.

That said, it can still be beneficial to know the ins and outs of CloudKit if you wish to implement your own sync solution or use it for other purposes. If that’s the case, read on.

Should I use CloudKit?

Even if you're convinced CloudKit is good and works well, that doesn't mean it's the best solution for your particular problem: there are applications where CloudKit is the best fit and others where it's not.

Where to use CloudKit

These are the situations for which CloudKit is best suited:

Sync private user data between devices

This is perhaps the most obvious use for CloudKit: syncing your users' data across their devices.

Example: a note taking app where the user can create and read notes on any device associated with their iCloud account.

Alternatives: Realm Mobile Platform, Firebase, iCloud KVS, iCloud Documents, custom web app.

Store application-specific data

By using the public database on CloudKit, you can store data that's specific to your app (or a set of apps you own). Let's say for instance you have an e-commerce app and you want to change the colors of the app during Black Friday. You could store color codes in CloudKit's public database and load them every time the app is launched.

The public database on CloudKit is a good place to store remote app configuration in general. In my app ChibiStudio, I use it to configure feature switches so that features can be rolled out or rolled back after a build has been released for customers.

Alternatives: Realm Mobile Platform, Firebase, custom web app.

Sync user data between multiple apps from the same developer

If you have a suite of apps, they can all share the same CloudKit container so users can have access to their data for all of your apps on every device they have associated with their iCloud account.

Alternatives: Realm Mobile Platform, custom web app.

Use iCloud as an authentication mechanism

You can use CloudKit just to get the user's unique identifier and use it as an authentication method for your service. These days you can also use Sign in with Apple, but that’s a little bit more involved and less transparent to the user than simply getting access to their anonymous user identifier.

Send notifications

You can use CloudKit to send notifications, eliminating the need to use a 3rd party service or custom web server.

Alternatives: Firebase, custom web app.

Where NOT to use CloudKit

Now that we've seen some applications for which CloudKit is best suited, let's look at some examples where it's not.

Store and sync documents

If your app works primarily with documents, CloudKit is probably not the best tool for the job. In this case you'd be better off using Google Drive, DropBox, iCloud Drive or other similar services. You can store large files in CloudKit, but it might not be the best solution if you have a document-based app.

Examples: text editors like Pages, image editors like Pixelmator and Sketch.

Sync user preferences

To store simple user preferences or very small amounts of data, use iCloud KVS (NSUbiquitousKeyValueStore).

Example: your app has a simple boolean preference to show or hide a toolbar and you want this preference to sync across a user's devices. You can do this with CloudKit, but it'd be overkill.

Why not just use an alternate service like Firebase?

CloudKit is a first-party technology that comes pre-installed on all devices, doesn't require separate authentication beyond the user's iCloud account, has powerful features (as you'll see in the rest of this article) and has a great chance of continuing to exist and be supported for the foreseeable future. These are the main reasons why I think CloudKit is a better option than most 3rd party services.

How much does it cost?

This is a very common question when talking about CloudKit, and one whose answer is frequently misinterpreted by developers.

As said by Craig Federighi when introducing CloudKit back in 2014:

CloudKit is free… with limits

But what does that mean?

Simply put: you're not going to be paying for CloudKit. Period.

What Apple has done is create a system that prevents abuse. As long as you make "regular" use of the service, it'll always be free.

From WWDC 2014, session 231:

We don’t want to prevent legitimate use. We just don’t want anyone abusing CloudKit.

But wait, there's more! The private database counts against your user's iCloud quota, so if you're only using the private database in your app, you'll never even start to consume your app's quota.

This does mean that if your user is out of space in their iCloud, you’ll receive an error back from the server when trying to save something to their private database. The best way to handle it is to just let the user know about what’s going on.

But even if you are using the public database — which counts against your app's quota — you have very generous limits and they scale with the number of users of your app.

Simulation of an app's public database quota for 10 million users:

CloudKit in practice

With the introductory stuff out of the way, let's start coding. I'll be explaining various concepts about CloudKit with small code snippets showing how to use them in practice.

Enabling CloudKit for your target

Before you can use CloudKit, you have to enable it for your app in Xcode's Signing & Capabilities panel.

When you do that, Xcode talks to Apple's servers to update the app's provisioning profile and create its default container.

You can also click the + button in Containers and create a custom container so that you can share it with your other apps or between your app and its extensions. Containers should be named using the reverse-DNS notation, in my example iCloud.codes.rambo.CloudKitchenSink20.

Notice that when CloudKit is enabled, Xcode automatically enables push notifications. That's because CloudKit's subscriptions are powered by the Apple Push Notification Service.

Environments

CloudKit has two environments: development and production. By default, when building the app in Xcode with the debug configuration, the development environment is used. When distributing the app through TestFlight or the App Store, the production environment is used.

It can be useful sometimes to use the production environment in a debug build for testing purposes. You can achieve that by manually editing your app’s entitlements file and setting the com.apple.developer.icloud-container-environment key to Production.
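For reference, the relevant entitlements entry is a plain plist key/value pair. The excerpt below shows just that key; the rest of your entitlements file stays as Xcode generated it.

```xml
<!-- Forces this build to use the Production CloudKit environment.
     Remember to revert this before going back to regular development. -->
<key>com.apple.developer.icloud-container-environment</key>
<string>Production</string>
```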

Keep in mind that even though CloudKit's databases are schema-less — you don't have to define your schema by hand — this is only true in the development environment. After you deploy your app, you can only change your schema by changing it in the development environment first and then publishing the changes to the production environment. Deploying a development schema to production is done in the CloudKit Dashboard.

Container

A container is nothing more than a little box where you put all of your users' data. The most common configuration is to have a single container per app, but you can have an app that uses multiple containers and you can also use the same container across multiple apps.

Containers are represented by instances of CKContainer.

Accessing the default container

To access your app's default container, you use the default static property in CKContainer.

let container = CKContainer.default()

Accessing a custom container

If you’ve created a custom container using the + button in the capabilities tab, you’ll have to specify its identifier when initializing an instance of CKContainer.

let containerIdentifier = "iCloud.codes.rambo.CloudKitchenSink20"
let secondContainer = CKContainer(identifier: containerIdentifier)

I highly recommend you always create a custom container so that it’s easier for you to share your CloudKit databases between your app and extensions, or between your app on multiple platforms — iOS, watchOS and macOS, for example.

Remember that if you're using the default container, you can always just use the default static property on CKContainer. I'll be using the default container in the rest of the snippets for brevity.

Database

A database is where you're going to be storing your users' data. Databases are represented by objects of the type CKDatabase.

You don’t create databases in CloudKit; every container already comes with three databases:

Private Database

This is the database where you'll be storing your users’ private data. Only the user can access this data through a device authenticated with their iCloud account. You as the developer can't access data in a user’s private database.

During development, if you use the same Apple ID for your developer account as the one you use on your device, you can access your own private data for debugging, using the CloudKit dashboard. If that’s not possible, you can sign in to iCloud in an iOS Simulator with your developer account for testing.

To access the private database, you use the privateCloudDatabase property of CKContainer.

Public database

This is the database where you'll be storing global app data relevant to all users of the app. This data can be created by you using the CloudKit dashboard or a custom CMS, or it can be data generated by your users that should be visible to other users.

Even though the database is public, it's possible to restrict access to its records by using security roles, so that only the user who created a certain record can access it, or to make all data read-only except for certain “admin” accounts, which have permission to insert, update and delete information on the public database.

To access the public database, you use the publicCloudDatabase property of CKContainer.

Shared database

Back in iOS 10 and macOS Sierra, Apple added sharing to CloudKit. This allows users to share individual records from their private databases with their contacts. The shared database is used to store these records, but you don’t have to interact with it directly.

I won’t be covering sharing in this article, but you can learn more here.

Zone

A zone is like a directory where you can save records. All databases on CloudKit have a Default Zone. You can use the default zone to store your records, but it’s also possible to create custom zones, either in the CloudKit dashboard or in code. Only the private database can have custom zones — they are not supported in the public database.

Some of CloudKit's features, such as saving related records in batches and sharing, can only be used with custom zones. Because of that, I highly recommend that you have at least one custom zone where you save all your users’ records. Zones are represented by objects of the type CKRecordZone.

Record

Records are objects of the type CKRecord and can be considered the model object of CloudKit. A CKRecord is basically a dictionary where the keys become fields on the database's tables.

Supported data types

Although CKRecord is basically a dictionary, this doesn't mean you can store any type of data in CloudKit. Here are the types you can use as values in CKRecord:

  • String:  Apple recommends String for small amounts of text
  • NSNumber:  Swift's numeric types are automatically bridged for you
  • Data:  you could use Data to store custom objects serialized with NSCoding
  • Date:  dates and times can be stored in CloudKit directly
  • CLLocation:  very useful for location-based apps
  • CKAsset:  used to store big files (photos and videos, for instance)
  • CKReference:  a reference that points to another record

Besides all of the types above, any key in CKRecord can contain an array of any of the supported types, provided that it only contains elements of the same type.

Creating a Record

Let’s say we're creating an app where the user can register recipes. We'd probably have a “Recipe” model, which can be represented as a CKRecord with the recordType set to Recipe.

To create a Recipe record, initialize a CKRecord:

let record = CKRecord(recordType: "Recipe")

With that object created, all you have to do is set its properties, which can be done with a simple subscript:

record["title"] = "Spaghetti Carbonara"
record["ingredients"] = ["Spaghetti", "Guanciale", "Eggs"]

Improving the code with a custom subscript

Notice the stringly typed subscript in the example above. You can improve it by creating a custom subscript on CKRecord for keys specific to the Recipe record type:

extension Recipe {
    enum RecordKey: String {
        case title
        case subtitle
        case ingredients
        case instructions
        case image
    }
}


extension CKRecord {
    subscript(key: Recipe.RecordKey) -> Any? {
        get {
            return self[key.rawValue]
        }
        set {
            self[key.rawValue] = newValue as? CKRecordValue
        }
    }
}

Now, to change the values in records, the code will look a lot cleaner (and be a lot safer):

record[.title] = "Spaghetti Carbonara"
record[.ingredients] = ["Spaghetti", "Guanciale", "Eggs"]

Remember that with this custom subscript we're still limited to the data types supported by CloudKit. If you try to set a key to an unsupported type, the field will be nil.

Another important detail: CKRecord is NOT a value type, so when you pass objects of the type CKRecord around, you're passing them by reference. This means that if you have a property that is a CKRecord and you add a didSet observer to it, it will not be executed when one of its values is changed.

This detail is not that important, since when working with CloudKit in your app, you should be converting between your own model objects — which can be structs — and CKRecord. This conversion can be automated with something like my CloudKitCodable or code generation, but you may prefer to do it manually so that you can fully control the encoding and decoding process.

The CloudKit Dashboard

Now that you know how to create records on CloudKit, it'd be cool to have some means of knowing what's going on at the server when records are saved.

Apple created a tool for this, the CloudKit Dashboard. In the dashboard it’s possible to browse all of your containers, databases, record types and more.

After you create a record in your own private database, you can go to the dashboard and query for records of that type, allowing you to see all data about the record you’ve just created:

If you see an error when trying to query a record type in the dashboard, it probably means you haven’t added an index to the recordName. To do that, you can select “Schema” in the top menu, then in the “Record Types” menu select the “Indexes” option, and add an index so that recordName is QUERYABLE.

Notice that, apart from the data that’s been explicitly added to the record, CloudKit adds some metadata automatically:

  • Record Name:  the unique identifier for the record, used to locate records on the database. CloudKit can generate an ID automatically, but I strongly recommend creating a custom record name with your own ID that matches what you have in your local storage. In the example app I made for this article, that’s what I’m doing.
  • Created:  the date/time of creation. Can be accessed using the creationDate property of CKRecord.
  • Created By:  the ID of the user who created the record. Can be accessed using the creatorUserRecordID property of CKRecord.
  • Modified:  the date/time of the last modification. Can be accessed using the modificationDate property of CKRecord.
  • Modified By:  the ID of the user who made the last modification to the record. Can be accessed using the lastModifiedUserRecordID property of CKRecord.

User records

Every database on CloudKit comes with a record type called User by default. User records represent each user of your app and, by default, contain only the user's unique identifier. This identifier is unique per container: a single user will have the same identifier across zones and databases within the same container, but if you use multiple containers in your app, the same user will have a different identifier in each one.

We can do quite a lot with this user record:

  • Know whether the user is logged in to iCloud
  • Get the user record from the container
  • Get the user's full name
  • Get the identifiers for the user's contacts who have corresponding records on the same container (i.e. “friends using the app”)
  • Update it with data that's useful to your app
  • Be notified of changes in the user's iCloud account's status

Checking if the user is logged in to iCloud

There are many situations where it can be important to know if the user is logged in to iCloud on the current device to decide whether a certain feature of the app should be enabled or even prevent the user from doing anything if not logged in.

Notice: if you decide to prevent the user from using your app when an iCloud account is not available, make sure to include a very detailed explanation in your app's review notes for Apple. If you can't explain why your app needs authentication to work, your app may be rejected.

To get the status of the user's iCloud account, use the accountStatus method from CKContainer:

CKContainer.default().accountStatus { status, error in
    if let error = error {
      // some error occurred (probably a failed connection, try again)
    } else {
        switch status {
        case .available:
          // the user is logged in
        case .noAccount:
          // the user is NOT logged in
        case .couldNotDetermine:
          // for some reason, the status could not be determined (try again)
        case .restricted:
          // iCloud settings are restricted by parental controls or a configuration profile
        @unknown default:
          // ...
        }
    }
}

Fetching the user record

To fetch the user record, you need to get its ID first. You can use the fetchUserRecordID method from CKContainer to do this.

CKContainer.default().fetchUserRecordID { recordID, error in
    guard let recordID = recordID, error == nil else {
        // error handling magic
        return
    }
    
    print("Got user record ID \(recordID.recordName).")
}

Now that you have the record ID for the user record, you can use the fetch method from CKDatabase to get the actual user record:

// `recordID` is the record ID returned from CKContainer.fetchUserRecordID
CKContainer.default().publicCloudDatabase.fetch(withRecordID: recordID) { record, error in
    guard let record = record, error == nil else {
        // show off your error handling skills
        return
    }

    print("The user record is: \(record)")
}

In the example above, I'm using publicCloudDatabase, but I could be using privateCloudDatabase. Which database you use will depend on your application.

You can actually have two separate records for the same user: one in the public database and another one in the private database. Both will have the same identifier, but the data they contain can be different. You can use the public record to store information such as an avatar and nickname and the private record to store e-mail, address and other sensitive data.

Getting the user's full name

To get a user's full name from iCloud, you need to ask for permission by using the requestApplicationPermission method from CKContainer, with the option .userDiscoverability. There will be an alert asking the user for permission.

After getting the user's permission, the discoverUserIdentity method from CKContainer can be called to get the user's identity. This identity will contain the user's full name as a PersonNameComponents value, which can be formatted using a PersonNameComponentsFormatter.

// `recordID` is the record ID returned from CKContainer.fetchUserRecordID
CKContainer.default().requestApplicationPermission(.userDiscoverability) { status, error in
    guard status == .granted, error == nil else {
        // error handling voodoo
        return
    }

    CKContainer.default().discoverUserIdentity(withUserRecordID: recordID) { identity, error in
        guard let components = identity?.nameComponents, error == nil else {
            // more error handling magic
            return
        }

        DispatchQueue.main.async {
            let fullName = PersonNameComponentsFormatter().string(from: components)
            print("The user's full name is \(fullName)")
        }
    }
}

Discovering user contacts who use the app

To get a list of records for the user's friends using the same app, you can use the discoverAllIdentities method from CKContainer. This will return identities for people in the user’s address book who have your app installed and have given it permission to access their identity.

CKContainer.default().discoverAllIdentities { identities, error in
    guard let identities = identities, error == nil else {
        // awesome error handling
        return
    }

    print("User has \(identities.count) contact(s) using the app:")
    print("\(identities)")
}

Adding extra information to the user record

Let's say you’d like to list records with the name and avatar of the user who created them. Unfortunately, Apple doesn't offer a method to get the user's iCloud avatar, but this feature can be implemented by adding a custom field to the user record and letting the user manually upload an image. The same technique can be used to include other information in the user’s record.

If you’d like to add something like an avatar to a user’s record, or any large asset to any other type of record, you have to use CKAsset, which is an object used to store large files on CloudKit. Apple recommends that any field that's larger than a few kilobytes be represented by a CKAsset.

Working with CKAsset is really simple: you initialize it with a URL to a local file you want to upload to CloudKit and add the CKAsset as a value to one of a record's keys.

When that record gets saved or retrieved, CloudKit will take care of uploading or downloading it on your behalf, populating the fileURL property of the asset with the URL to a local file your app can read. For a production app, make sure you check the type of file that’s being uploaded and if it’s an image, scale it down and compress it so if the user selects a huge image file it doesn't consume too much bandwidth and storage space.

Let’s say you have a button on the interface that opens a UIImagePickerController so the user can select a picture from the library to use as an avatar. The snippet below shows what happens after the user selects an image:

private func updateUserRecord(_ userRecord: CKRecord, with avatarURL: URL) {
    userRecord["avatar"] = CKAsset(fileURL: avatarURL)

    CKContainer.default().publicCloudDatabase.save(userRecord) { _, error in
        guard error == nil else {
            // top-notch error handling
            return
        }

        print("Successfully updated user record with new avatar")
    }
}

In the snippet above, avatarURL is a URL to a local file. If you try to initialize a CKAsset with a remote URL, an exception will be thrown and your app will crash.

The save method is used both to create and update records. When you pass it an existing record, CloudKit will update the keys that have been changed since the record was last saved. Using that method can be convenient, but as you’ll see later in this article, the recommended way to interact with CloudKit is by using operations.

As you can see, it's easy to add a new custom field to the user record and upload files to CloudKit.

Observing changes to the user’s iCloud account status

Something that can happen while your app is running is the user can open up iCloud preferences and change the logged in iCloud account, or just log out. Or maybe the user is initially logged out, then logs in while your app is in the background.

If your app changes depending on the logged in user, you must update its state to reflect such changes.

To be notified of changes to the iCloud account's status, all you have to do is register an observer for the .CKAccountChanged notification:

NotificationCenter.default.addObserver(self, 
                                       selector: #selector(userAccountChanged), 
                                       name: .CKAccountChanged, 
                                       object: nil)

When you detect that the current logged in user has changed, you probably want to remove all private data you have cached locally and fetch the data for the new user.
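A handler for that notification might look something like the sketch below, as a method on whatever object registered the observer. resetLocalCache and fetchDataForCurrentUser are hypothetical placeholders for your app's own logic:

```swift
import CloudKit

// Sketch of a .CKAccountChanged handler: re-check the account status and
// react. `resetLocalCache` and `fetchDataForCurrentUser` are placeholders.
@objc func userAccountChanged(_ note: Notification) {
    CKContainer.default().accountStatus { status, error in
        DispatchQueue.main.async {
            switch status {
            case .available:
                // A (possibly different) user is now logged in: start fresh.
                self.resetLocalCache()
                self.fetchDataForCurrentUser()
            default:
                // No account available: clear any private cached data.
                self.resetLocalCache()
            }
        }
    }
}
```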

Queries

Now that you’ve learned the basics of how to store data in CloudKit, it’s time to learn how to retrieve that data. The simplest way to fetch a record from CloudKit is by using a record ID, like you’ve seen before when I showed how to fetch the user record.

More advanced queries which filter based on other keys will require the use of the CKQuery class. With CKQuery, a predicate can be specified to filter records. If you’re not familiar with NSPredicate, I recommend reading the docs. It's a very powerful class that's used a lot with Apple's APIs.

To run a query on CloudKit, you use the CKQueryOperation class. Performing queries is one of many things in CloudKit that uses operations.

Fetching all records of a specific type

This is the most basic one. To run a query to get all recipe records from the database, the first step is to construct a query with the record type and a predicate. Since all records should be retrieved, a predicate with the value true can be used.

let predicate = NSPredicate(value: true)
let query = CKQuery(recordType: "Recipe", predicate: predicate)
let operation = CKQueryOperation(query: query)

The operation accepts two closures: recordFetchedBlock, called multiple times during its execution, when a new record is fetched, and queryCompletionBlock, called after the operation is finished, and which may include an error.

var recipeRecords: [CKRecord] = []

operation.recordFetchedBlock = { record in
    recipeRecords.append(record)
}

operation.queryCompletionBlock = { cursor, error in
    // recipeRecords now contains all records fetched during the lifetime of the operation
    print(recipeRecords)
}

There are two parameters in the completion callback that deserve mention: cursor is an object of type CKQueryOperation.Cursor that can be present at the end of the operation in case there are more results to be downloaded. If your query returns a very large number of records, you'll have to run multiple operations to fetch all of them, passing the cursor from the last operation to each subsequent operation.
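One way to follow the cursor is a small recursive helper that keeps creating operations until CloudKit stops returning one. This is only a sketch: fetchAllRecipes is a made-up name and error handling is elided for brevity:

```swift
import CloudKit

// Fetches every "Recipe" record by chaining operations through the cursor.
func fetchAllRecipes(cursor: CKQueryOperation.Cursor? = nil,
                     fetched: [CKRecord] = [],
                     completion: @escaping ([CKRecord]) -> Void) {
    let operation: CKQueryOperation

    if let cursor = cursor {
        // Continue from where the previous operation stopped.
        operation = CKQueryOperation(cursor: cursor)
    } else {
        let query = CKQuery(recordType: "Recipe", predicate: NSPredicate(value: true))
        operation = CKQueryOperation(query: query)
    }

    var records = fetched

    operation.recordFetchedBlock = { records.append($0) }

    operation.queryCompletionBlock = { cursor, error in
        if let cursor = cursor, error == nil {
            // More results available: run another operation with the cursor.
            fetchAllRecipes(cursor: cursor, fetched: records, completion: completion)
        } else {
            completion(records)
        }
    }

    CKContainer.default().publicCloudDatabase.add(operation)
}
```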

The error parameter is also very important — through it you'll know whether an error occurred and what's the nature of the error. Some errors returned by CloudKit are recoverable, which means you shouldn't just throw an alert to the user the first time an error occurs. Depending on the error, CloudKit will even tell you how many seconds to wait before retrying the operation — and you should definitely follow that recommendation.

Finally, after configuring the operation, you execute it by adding it to the database:

CKContainer.default().publicCloudDatabase.add(operation)

In a real-world scenario, you may prefer to create your own DispatchQueue to add CloudKit operations to, so that you can control and observe operations a little bit better:

private let cloudQueue = DispatchQueue(label: "SyncEngine.Cloud", qos: .userInitiated)

private lazy var cloudOperationQueue: OperationQueue = {
    let q = OperationQueue()

    q.underlyingQueue = cloudQueue
    q.name = "SyncEngine.Cloud"
    q.maxConcurrentOperationCount = 1

    return q
}()

You may also want to set the qualityOfService in the operation to .userInitiated to make sure it executes in a reasonable time.

Performing a textual search

Another very common type of query is the textual query. Users may want to search recipes by title. Fortunately, CloudKit deals very well with this and you can construct a simple predicate to take care of it:

let predicate = NSPredicate(format: "self contains %@", title)
let query = CKQuery(recordType: "Recipe", predicate: predicate)

The predicate self contains %@ means "look for this value in every key that contains text".

Performing a search based on geographical coordinates

CloudKit supports queries based on location. Suppose you have an app that stores points of interest with a location property. You can use a device’s location to search for points of interest within a 500km radius, using a predicate such as the one below:

NSPredicate(format: "distanceToLocation:fromLocation:(location, %@) < %f", currentLocation, radius)

In the snippet above, location refers to the key in the record, currentLocation is a CLLocation value with the device’s current location and radius is a Float with the radius (in km) to be used when doing the search.

Performing queries on the database gives you complete flexibility to get relevant information in your app, but CloudKit has something even nicer than this: it’s possible to create persistent queries that run every time a record on the database is updated and notify the app via push notifications. These persistent queries are called subscriptions.

Subscriptions

Remember I talked about sending notifications using CloudKit? That's what subscriptions enable.

Through subscriptions, you can register to be notified every time some change happens on the database. When a new record is inserted, for example, CloudKit will send the app a push notification. These notifications can be just content-available notifications (silent), or regular notifications that show alerts and/or badge the app's icon.

Using silent notifications, you can keep your app up-to-date with the latest data every time a change is made to the database, because this type of notification gives your app the opportunity to perform a background fetch.
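Handling that silent push could look something like the sketch below in the app delegate. fetchRemoteChanges is a placeholder for your own sync logic:

```swift
import CloudKit
import UIKit

// Sketch: check that the push actually came from CloudKit, then fetch.
func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                 fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    guard let dict = userInfo as? [String: NSObject],
          CKNotification(fromRemoteNotificationDictionary: dict) != nil else {
        completionHandler(.noData)
        return
    }

    // Placeholder: kick off a fetch of remote changes, then report back
    // to the system so it can wind down the background session.
    fetchRemoteChanges { changed in
        completionHandler(changed ? .newData : .noData)
    }
}
```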

Creating a subscription

To create a subscription that sends notifications, you first need to get the user's permission to send notifications and register the app for remote notifications:

UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound]) { authorized, error in
    guard error == nil, authorized else { 
        // not authorized...
        return
    }
	
    // subscription can be created now \o/
}

UIApplication.shared.registerForRemoteNotifications()

If you only want to use silent notifications, it’s not necessary to ask for permission; you can just call registerForRemoteNotifications.

You can now create a subscription with CloudKit. The subscription is an object of type CKSubscription:

let subscription = CKQuerySubscription(recordType: "Recipe",
                                        predicate: NSPredicate(value: true),
                                          options: [.firesOnRecordCreation])

In the CKQuerySubscription, recordType is the type of record you want to be notified about, and predicate defines the query to be executed to determine whether a notification will be fired. As I mentioned earlier, subscriptions are like persistent queries that run on the server after each update to the database; it's through this parameter that you determine which query that will be.

Remember the location-based query I showed earlier? You could register a subscription using that same predicate, making the user get a notification based on a record being created with its location within a 500km radius of the device’s location.

options is a list defining the circumstances in which the notification will be fired. You can get notified when a record is created, updated, or deleted, or any combination of the three.

Configuring the notification itself

With the subscription created, you have to define what the notification for this subscription will look like. To accomplish this, you set up a CKNotificationInfo:

let info = CKNotificationInfo()
info.alertLocalizationKey = "nearby_poi_alert"
info.alertLocalizationArgs = ["title"]
info.soundName = "default"
info.desiredKeys = ["title"]
subscription.notificationInfo = info

alertLocalizationKey is a key in the app's Localizable.strings file to be used as the format for the alert. This parameter is necessary when you want to include data from the record in the alert. In this example, I'm including the title of the point of interest, so the localization key would look like this:

“nearby_poi_alert” = “%@ has been added, check it out!”;

alertLocalizationArgs contains the keys from the record that should be used to populate the placeholders in the text. I'm using the title key in the example above.

desiredKeys are the keys from the record that should be sent with the notification.

Saving the subscription

Now that you have created and configured the subscription, you just have to save it like any other record:

container.publicCloudDatabase.save(subscription) { [weak self] savedSubscription, error in
    guard let savedSubscription = savedSubscription, error == nil else {
        // awesome error handling
        return
    }
	
    // subscription saved successfully
    // (probably want to save the subscriptionID in user defaults or something)
}

Remember that the subscription must be saved on the database for which you want to be notified.

Custom zones and change tokens

What you’ve seen above are the basics of how CloudKit works and how to save and retrieve data. When you’re writing an app to sync private user data, you probably won’t be using queries or query subscriptions to fetch the data, and you probably won’t be using just the save method to store records either.

The basic workflow for syncing private user data is as follows:

  • Ensure there’s an iCloud account available
  • Check if the custom zone has been created and create it if needed
  • Check if the record zone subscription has been created and create if needed
  • Upload local data that’s not uploaded yet
  • Fetch remote changes

That’s the initial setup. After that’s done, you should observe changes to the local data and upload them to CloudKit, and observe remote changes and fetch them when your app gets a silent push notification from CloudKit.

To fetch remote changes, you’ll have to store the last change token you received from the server. The change token is like a pointer in a timeline you can use to tell CloudKit to fetch what’s changed since the last time a particular client has had its local data updated from the server.
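Since CKServerChangeToken conforms to NSSecureCoding, one way to persist it, assuming UserDefaults as the store, is to archive it to Data. The property below is a sketch you'd put on your sync engine; the key name is arbitrary:

```swift
import CloudKit

// Persists the last server change token in UserDefaults by archiving it.
var privateChangeToken: CKServerChangeToken? {
    get {
        guard let data = UserDefaults.standard.data(forKey: "privateChangeToken") else {
            return nil
        }
        return try? NSKeyedUnarchiver.unarchivedObject(ofClass: CKServerChangeToken.self,
                                                       from: data)
    }
    set {
        guard let token = newValue else {
            UserDefaults.standard.removeObject(forKey: "privateChangeToken")
            return
        }
        let data = try? NSKeyedArchiver.archivedData(withRootObject: token,
                                                     requiringSecureCoding: true)
        UserDefaults.standard.set(data, forKey: "privateChangeToken")
    }
}
```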

Here’s how you can fetch remote changes for a custom zone. The custom zone ID can be defined by you and stored as a constant in code. The change token should be stored by your app, usually in user defaults.

let customZoneID = CKRecordZone.ID(zoneName: "KitchenZone", ownerName: CKCurrentUserDefaultName) // CKCurrentUserDefaultName is a "magic" constant provided by CloudKit which represents the currently logged in user

let operation = CKFetchRecordZoneChangesOperation()

let token: CKServerChangeToken? = privateChangeToken

let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration(
    previousServerChangeToken: token,
    resultsLimit: nil,
    desiredKeys: nil
)

operation.configurationsByRecordZoneID = [customZoneID: config]

operation.recordZoneIDs = [customZoneID]
operation.fetchAllChanges = true

operation.recordZoneChangeTokensUpdatedBlock = { _, changeToken, _ in
    // Store changeToken to be used in subsequent fetches.
}

operation.recordZoneFetchCompletionBlock = { [weak self] _, _, _, _, error in
    // Handle error if needed
}

operation.recordChangedBlock = { record in
    // Parse record and store it locally for later use
}

operation.recordWithIDWasDeletedBlock = { recordID, _ in
    // Delete the local entity represented by the record ID
}

operation.qualityOfService = .userInitiated
operation.database = privateDatabase

cloudOperationQueue.addOperation(operation)

Architecting for sync

If you’re working in an app that syncs private user data to CloudKit, you should strive for an offline-first architecture, so that a person can use your app normally even when there’s no internet connection available.

In order to achieve this, you should probably architect your code in such a way that sync is abstracted away from what your user touches. This means not firing CloudKit operations based on button taps in view controllers, for instance.

The exact shape of this will depend on which local storage solution you’re using. I have experience implementing CloudKit syncing with both Realm and CoreData, and in the sample app I provide with this article, I’ve used a simple plist file on disk to store the data locally.

The key is that your sync code should observe your local store for new records, updated records and deleted records. When changes are detected, they should be uploaded to CloudKit using a CKModifyRecordsOperation.
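A minimal sketch of that upload step might look like this; the upload function, and how recordsToSave gets built from your local store, are assumptions for illustration:

```swift
import CloudKit

// Uploads pending local changes with CKModifyRecordsOperation.
func upload(_ recordsToSave: [CKRecord], deleting recordIDsToDelete: [CKRecord.ID]) {
    let operation = CKModifyRecordsOperation(recordsToSave: recordsToSave,
                                             recordIDsToDelete: recordIDsToDelete)

    // .ifServerRecordUnchanged surfaces conflicts as serverRecordChanged
    // errors instead of silently overwriting the server's copy.
    operation.savePolicy = .ifServerRecordUnchanged

    operation.modifyRecordsCompletionBlock = { saved, deleted, error in
        if let error = error {
            // Retry or resolve conflicts here.
            print("Upload failed: \(error)")
        } else {
            // Mark the corresponding local changes as synced.
        }
    }

    operation.qualityOfService = .userInitiated
    CKContainer.default().privateCloudDatabase.add(operation)
}
```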

Error handling

It's very important to keep an eye out for errors when dealing with CloudKit. Many developers just print errors or show alerts for the user when an error occurs, but that's not always the best solution.

The first thing you have to check is whether the error is recoverable. There are two very common cases that can cause a recoverable error to occur on CloudKit.

Temporary error / timeout / bad internet / rate limit

Sometimes there can be a little glitch with the connection or Apple's servers that causes temporary errors. Your app may also be calling CloudKit too frequently, in which case the server will refuse some requests to avoid excessive load. In those cases, the error returned from CloudKit will be of the type CKError, which contains the retryAfterSeconds property. If this property is not nil, use its value as a delay before trying the failed operation again.

On my projects, I always have a helper function that looks like this:

// Declared in an extension on Error (hence the `self as? CKError` cast below); requires importing os.log.
/// Retries a CloudKit operation if the error suggests it
///
/// - Parameters:
///   - log: The logger to use for logging information about the error handling, uses the default one if not set
///   - block: The block that will execute the operation later if it can be retried
/// - Returns: Whether or not it was possible to retry the operation
@discardableResult func retryCloudKitOperationIfPossible(_ log: OSLog? = nil, with block: @escaping () -> Void) -> Bool {
    let effectiveLog: OSLog = log ?? .default

    guard let effectiveError = self as? CKError else { return false }

    guard let retryDelay: Double = effectiveError.retryAfterSeconds else {
        os_log("Error is not recoverable", log: effectiveLog, type: .error)
        return false
    }

    os_log("Error is recoverable. Will retry after %{public}f seconds", log: effectiveLog, type: .error, retryDelay)

    DispatchQueue.main.asyncAfter(deadline: .now() + retryDelay) {
        block()
    }

    return true
}

This method helps deal with recoverable CloudKit errors. It's called on the error returned from a CloudKit operation, takes a block that will retry the operation, and returns whether a retry was scheduled.

Conflict resolution

Another common error is a conflict between two database changes. The user may have modified a record on a device while offline and then made another, conflicting change using another device. In this case, trying to save the record may result in an error of the type serverRecordChanged.

The userInfo property for this error will contain the original record before the modifications, the current record on the server and the current record on the client. It's up to your app to decide what to do with this information to resolve the conflict. Some apps show a panel for the user to choose which record to keep, some merge the content of the two records automatically, some just keep the most recently modified record.

It’s probably a good idea to have a helper that can delegate conflict resolution to your model, through a closure:

/// Uses the `resolver` closure to resolve a conflict, returning the conflict-free record
///
/// - Parameter resolver: A closure that will receive the client record as the first param and the server record as the second param.
/// This closure is responsible for handling the conflict and returning the conflict-free record.
/// - Returns: The conflict-free record returned by `resolver`
func resolveConflict(with resolver: (CKRecord, CKRecord) -> CKRecord?) -> CKRecord? {
    guard let effectiveError = self as? CKError else {
        os_log("resolveConflict called on an error that was not a CKError. The error was %{public}@",
               log: .default,
               type: .fault,
               String(describing: self))
        return nil
    }

    guard effectiveError.code == .serverRecordChanged else {
        os_log("resolveConflict called on a CKError that was not a serverRecordChanged error. The error was %{public}@",
               log: .default,
               type: .fault,
               String(describing: effectiveError))
        return nil
    }

    guard let clientRecord = effectiveError.userInfo[CKRecordChangedErrorClientRecordKey] as? CKRecord else {
        os_log("Failed to obtain client record from serverRecordChanged error. The error was %{public}@",
               log: .default,
               type: .fault,
               String(describing: effectiveError))
        return nil
    }

    guard let serverRecord = effectiveError.userInfo[CKRecordChangedErrorServerRecordKey] as? CKRecord else {
        os_log("Failed to obtain server record from serverRecordChanged error. The error was %{public}@",
               log: .default,
               type: .fault,
               String(describing: effectiveError))
        return nil
    }

    return resolver(clientRecord, serverRecord)
}

Conclusion

That's it! I hope this article was useful for you to get a better understanding of CloudKit and I hope this inspired you to use it for your next project.

Check out this article’s companion project on my Github.

Further reading

MVC: Many View Controllers
https://rambo.codes/posts/2020-02-20-mvc-with-sugar
Thu, 20 Feb 2020 15:00:00 -0300

This article is basically the script from the talk I presented at dotSwift in Paris. If you prefer, you can watch the video.

INTRO

Today I want to talk about MVC. Model View Controller is the architecture iOS developers love to hate. I will be focusing most of this talk on the “C” part and what leads to single controller files getting out of hand.

But before I do, let’s address the elephant in the room: SwiftUI. Apple announced SwiftUI last year, which got the developer community understandably excited. I love SwiftUI, and I’ve been using it more and more for my internal projects, but I also need to keep supporting old versions of iOS - at least iOS 12 - with my apps, and SwiftUI still has lots of rough edges. So, regardless of whether you believe SwiftUI is the future or not, you probably still need to be able to work in UIKit land for the foreseeable future. Besides, the way I like to split view controllers can probably be adapted to work in SwiftUI, even though it doesn’t have view controllers.

STORY TIME

I want to tell you my friend Elliot’s story. They’re quite young in the iOS development world and have been working on an app for their business for the past few months, using MVC. But now they’re facing some problems because view controllers are starting to become really massive, screens are coupled to other screens - which makes changing flows complicated - and they are having trouble writing good unit tests. Elliot believes the problem is their app’s architecture, so they decided to start searching for “the perfect architecture”.

Even though Elliot loves testing, they aren’t really a TDD developer - they’re an MDD developer. They search the web until they find a blog post on Medium that seems to address their problems. So they read some company’s post talking about this amazing new architecture called “PIE”: Presenter Interactor Entity, where everything in the app is represented as an Entity that can be manipulated by an Interactor and shown on screen by a Presenter. It’s composable, testable, functional, protocol-oriented… all the buzz words. There’s no mention in that blog post on whether such company is actually using the architecture, but they guarantee it’ll solve all of Elliot’s problems.

So naturally, Elliot rewrites most of their iOS app in this shiny new architecture. They also spend most of their time while they wait for their code to compile talking about app architecture with people on Twitter and on iOS development Slack groups.

A month later, WWDC happens, and Apple announces a bunch of new frameworks and features, but Elliot is still busy rewriting their app to use the PIE architecture, so they don’t have time to implement any of the new things.

September comes and Apple announces a new iPhone with TWO NOTCHES instead of one, which obviously changes the way things are presented on screen, but the framework Elliot uses to implement the PIE architecture will take a while to implement support for that.

Fast forward to the end of the year. Elliot is finally done with that rewrite, they’re quite happy about their architecture now, but they still haven’t shipped the app, and it still doesn’t support the latest devices and iOS features. Then, Elliot finds out that the company that published the PIE architecture abandoned it in favor of another one, and won’t be maintaining their framework anymore.

This is of course a fictional story about “Medium Driven Development”, but you know developers who work this way. By the way, the PIE architecture was made up by my iOS Architecture Generator.

Real artists ship

I love this quote, and I think many developers have forgotten about it. I’ve always been very focused on user experience as a developer: if something is not adding to the user experience, it’s probably not worth doing. I’ve found through experience that I’m the most productive and can iterate more quickly, thus providing a better user experience, when I use the MVC architecture… with some sugar on top.

Let’s talk about this sugar.

Not every view controller is created equal, I like to separate what Apple calls a ViewController into four categories:

  • Containers
  • Generic controllers
  • View controllers
  • Flow controllers

I’ll be giving you an example of each type of view controller and how it can be used, you can check out a Swift Playground I made with the full example for each view controller on my Github.

CONTAINERS

Pop quiz: how many view controllers are on this screen?

I know that many developers have a single View Controller for every screenful of content, but actually the answer is 8. This is one of the things you can do to make your app more composable and decoupled: use child view controllers.

The way you do it is very simple:

let child = MyViewController()

addChild(child)

child.view.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(child.view)

NSLayoutConstraint.activate([
    child.view.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    child.view.trailingAnchor.constraint(equalTo: view.trailingAnchor),
    child.view.topAnchor.constraint(equalTo: view.topAnchor),
    child.view.bottomAnchor.constraint(equalTo: view.bottomAnchor)
])

child.didMove(toParent: self)

You can make it even simpler with an extension on UIViewController:

public extension UIViewController {
    func install(_ child: UIViewController) {
        addChild(child)

        child.view.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(child.view)

        NSLayoutConstraint.activate([
            child.view.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            child.view.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            child.view.topAnchor.constraint(equalTo: view.topAnchor),
            child.view.bottomAnchor.constraint(equalTo: view.bottomAnchor)
        ])

        child.didMove(toParent: self)
    }
}

It’s a very simple, but powerful technique. UIKit itself has lots of container view controllers, such as UINavigationController and UIPageViewController, but we can also create our own.

Let’s say you have a screen in your app where you need to show four discrete states: loading, loaded, empty and error. If you do all of them in a single view controller, it already becomes massive. What you can do is create your own, custom container view controller, that can be switched between those different states.

public final class StateViewController: UIViewController {

    public enum State {
        case loading(message: String)
        case content(controller: UIViewController)
        case error(message: String)
        case empty(message: String)
    }

    public var state: State = .loading(message: "Loading") {
        didSet {
            applyState()
        }
    }
	
    // ...
	
}

This way, you have a container view controller you can use for every time you need that behavior, saving you from having that code duplicated all over the place by using a reusable container.
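For completeness, here's one possible shape for applyState, using the install(_:) extension shown earlier. This is a sketch: MessageViewController is a hypothetical controller that just displays a label for the message-based states.

```swift
// Inside StateViewController: swap children when the state changes.
private var currentChild: UIViewController?

private func applyState() {
    // Remove the previous child, if any.
    currentChild?.willMove(toParent: nil)
    currentChild?.view.removeFromSuperview()
    currentChild?.removeFromParent()

    let child: UIViewController

    switch state {
    case .content(let controller):
        child = controller
    case .loading(let message), .error(let message), .empty(let message):
        // Hypothetical controller that just shows a centered label.
        child = MessageViewController(message: message)
    }

    install(child)
    currentChild = child
}
```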

GENERIC CONTROLLERS

When I say generic in this case I really do mean Generic<T>. We use generics in our model code all the time, so why not apply the same technique to our visual code? This is a very good way to provide common functionality to different parts of your app. In one of the apps I work on, we have generic table view and collection view controllers that let you create lists of dynamic content very easily.

The first step is to create a container collection view cell that can be populated with any view. This by itself is already kinda useful since you can now define your cell content as a regular UIView, which can be used outside a collection view, but can also be put in a collection using the container cell.

public final class ContainerCollectionViewCell<V: UIView>: UICollectionViewCell {

    public lazy var view: V = {
        return V()
    }()

    public override init(frame: CGRect) {
        super.init(frame: frame)

        view.translatesAutoresizingMaskIntoConstraints = false
        contentView.addSubview(view)

        NSLayoutConstraint.activate([
            view.leadingAnchor.constraint(equalTo: contentView.leadingAnchor),
            view.trailingAnchor.constraint(equalTo: contentView.trailingAnchor),
            view.topAnchor.constraint(equalTo: contentView.topAnchor),
            view.bottomAnchor.constraint(equalTo: contentView.bottomAnchor)
        ])
    }

    // Required by UIView subclasses that define a custom initializer.
    public required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

}

Something else we need to cover is how to update the view that’s going to be inside the collection with its contents. There are lots of ways you could do it, but in my example I’m using a simple closure that takes the view as a parameter and modifies it - this closure is called when the cell is created or recycled.

class GenericCollectionViewController<V: UIView, C: ContainerCollectionViewCell<V>>: UICollectionViewController {

    init(viewType: V.Type) {
        // makeDefaultLayout() is a helper elided for brevity.
        super.init(collectionViewLayout: makeDefaultLayout())
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    var numberOfItems: () -> Int = { 0 } {
        didSet {
            collectionView?.reloadData()
        }
    }

    var configureView: (IndexPath, V) -> () = { _, _ in } {
        didSet {
            collectionView?.reloadData()
        }
    }

    var didSelectView: (IndexPath, V) -> () = { _, _ in }
	
    // ...
}

That’s just an example, but you can leverage generics to create some very powerful view controllers that can be reused in several places inside your app, through containment.

VIEW CONTROLLERS

This is probably the type of view controller you’re most familiar with. It’s the controller that manages a view, populating it with data (usually from a model) and responding to events.

But when I implement my view controllers, I avoid giving them too much responsibility. That’s why when I say “view controller” in this case I mean a view controller that simply lays out some views on screen and sets them up in the right way.

Maybe it also responds to some simple view events, but it usually doesn’t do anything with them directly, but relays that responsibility to a parent, either through a protocol or by using closures.

The point here is that these should be dumb. The dumber your view controllers are, the more ready you’re going to be when you want to transition to a different UI framework, such as SwiftUI.
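As an example of how dumb such a controller can be, here's a sketch of a list controller that only displays data and relays selection through a closure. The names are illustrative:

```swift
import UIKit

// A "dumb" view controller: shows strings, relays taps, decides nothing.
final class RegionListViewController: UITableViewController {

    var regions: [String] = [] {
        didSet { tableView.reloadData() }
    }

    // The parent (e.g. a flow controller) decides what selection means.
    var didSelectRegion: (String) -> Void = { _ in }

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        regions.count
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Sketch only: a real implementation would dequeue reusable cells.
        let cell = UITableViewCell(style: .default, reuseIdentifier: nil)
        cell.textLabel?.text = regions[indexPath.row]
        return cell
    }

    override func tableView(_ tableView: UITableView,
                            didSelectRowAt indexPath: IndexPath) {
        didSelectRegion(regions[indexPath.row])
    }
}
```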

Now, to address what’s in my opinion the worst problem with “traditional MVC”, which is the tight coupling between different flows in an app, I’d like to introduce the last type of controller.

FLOW CONTROLLERS

You can think of flow controllers as coordinators. They drive the flow of what’s happening with their children, so they can also be considered a type of container.

Many people use the coordinator pattern to handle things such as navigating between different screens. This approach is similar, but my coordinators are called “flow controllers” and they inherit from UIViewController. By inheriting from UIViewController we avoid fighting with UIKit and can take advantage of the responder chain and lifecycle events.

For instance, using this approach we don’t need to worry about what happens when a modal is closed. If you use something that’s not a UIViewController to coordinate a modal flow, you’re responsible for disposing of that after that flow is finished, but by using UIViewController, we can let UIKit take care of that for us.

So let’s say we have a screen in our app that downloads a list of geographical regions from an API, shows the regions to the user and then lets them select a region to see more information about it.

Instead of having a “regions controller” that is responsible for loading the regions, displaying them and reacting to selection to present the “detail controller”, we’ll have a “regions flow controller” that does the loading, then populates the “regions controller” when the data arrives. This flow controller will also react to selection by pushing a detail controller.

Since this controller manages that entire flow, it will not be inside a UINavigationController, but instead it will own a navigation controller that’s added as its child and push/pop view controllers inside of it.

With this approach, we managed to create a reusable flow, with view controllers that are completely decoupled from each other and can be used in different flows as needed. This also ensures changes to the flow are easy to make: if we want to present the detail as a modal or add another step in between, we only have to change the flow controller, its children are completely unaware of their environment.
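Here's a rough sketch of what that regions flow controller could look like, using the install(_:) extension from earlier. RegionsViewController, RegionDetailViewController, Region and RegionsAPI are hypothetical stand-ins for the real types:

```swift
import UIKit

// Owns the navigation controller, loads data, and drives the flow;
// its children stay unaware of each other.
final class RegionsFlowController: UIViewController {

    private let navigation = UINavigationController()
    private let listController = RegionsViewController()

    override func viewDidLoad() {
        super.viewDidLoad()

        // The flow controller embeds its own navigation controller.
        install(navigation)
        navigation.viewControllers = [listController]

        // React to selection by pushing the detail screen.
        listController.didSelectRegion = { [weak self] region in
            self?.showDetail(for: region)
        }

        // Load the data, then populate the (dumb) list controller.
        RegionsAPI.fetchRegions { [weak self] regions in
            self?.listController.regions = regions
        }
    }

    private func showDetail(for region: Region) {
        let detail = RegionDetailViewController(region: region)
        navigation.pushViewController(detail, animated: true)
    }
}
```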

What’s cool about this approach is that we can encapsulate extremely complex user flows - such as a checkout process in an e-commerce app - inside a flow controller, which can be presented or embedded in different ways to achieve the result we want.

With this, we have a reasonable way to structure our app without fighting with UIKit.

A FEW MORE THINGS

I hope you learned something about the way you can improve view controllers with the examples I gave. Now that we talked a lot about view controllers, I would like to focus on another common issue developers face when using MVC, which is more common with younger developers.

This issue happens when the developer thinks they must fit everything into either M, V or C, forgetting that they are allowed to create other types of constructs.

One that I’m a fan of is the view model, popularized by the MVVM architecture. I do use them a little differently, though. My view models are just like models, but they are tailored to a specific type of view or view controller.

They are usually created from a model, but they transform and format model data into something that’s more usable to a view or view controller. For example, a Post model in a blogging app would have a publishedAt property that’s a Date, but a PostViewModel would have the publishedAt date already formatted, ready to be displayed on screen. It’s something I do to prevent duplication of that type of code and also to keep views and view controllers as dumb as possible. It’s also great for testing.
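A tiny example of that idea, with illustrative names:

```swift
import Foundation

// The model: raw data, including an unformatted Date.
struct Post {
    let title: String
    let publishedAt: Date
}

// The view model: the same data, pre-formatted for display.
struct PostViewModel {
    let title: String
    let publishedAt: String

    init(post: Post, formatter: DateFormatter = .postDateFormatter) {
        title = post.title
        // Formatting lives here, not in the view or view controller,
        // so it's written once and is easy to unit test.
        publishedAt = formatter.string(from: post.publishedAt)
    }
}

extension DateFormatter {
    static let postDateFormatter: DateFormatter = {
        let f = DateFormatter()
        f.dateStyle = .medium
        f.timeStyle = .none
        return f
    }()
}
```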

So, no matter what you call them, try to create model-type classes that are tailored to your views and controllers, instead of moving models all over the place and duplicating untestable transformation and formatting code.

That’s yet another example, but there are many more types of entities you can leverage to make your apps more modular, testable and fun to work on. In fact, if I were to try and make a name for the architecture I use, it would probably be composed of way more than just three or four letters. And that’s why I prefer to just call it MVC.

Common pitfalls when using Keychain Sharing on iOS
https://rambo.codes/posts/2020-01-16-common-pitfalls-when-using-keychain-sharing-on-ios
Thu, 16 Jan 2020 11:00:00 -0300

DISCLAIMER: this is not a tutorial on how to use the Keychain or Keychain Sharing.

I am a huge fan of app extensions, since they allow us to expose our app’s functionality to other parts of the system without requiring the user to necessarily launch the app if they want to get something done quickly.

Eventually, when you’re working on an app that requires some sort of authentication or stores any type of sensitive data, you’ll have to use the Keychain. If you want to be able to access those Keychain items in your app extensions, you are going to have to enable the “Keychain Sharing” capability in your project’s code signing settings.

Seems simple enough, just check a box and be done. Right? Right? Well, not quite.

That’s a problem with the “Signing & Capabilities” pane in Xcode: it is very easy to add and edit capabilities, but there’s very little information about how to actually use those capabilities in your code. I feel like every capability should come with a “Learn More” link that takes you to extensive documentation about what it does and some sample code, but I digress.

adding the keychain capability

Here are some of the common issues I’ve found while working with Keychain Sharing and their respective solutions.

Group name incorrect

When you add the Keychain Sharing capability, you can add different group names to the list, and a default one is also created for you. You might be tempted to just copy that name verbatim and paste it in your code, but oftentimes when you do that, it just doesn’t work.

What I found is that for some weird reason, you have to include the team ID prefix in code when referring to a Keychain Sharing group. Just in case you don’t know what I’m talking about, that’s the code you see below your name in the developer portal.

A simple way to check what the “full name” of your keychain group is would be to run jtool on the final binary. I often use it to make sure my entitlements are the way I expect them to be.

checking entitlements with jtool

So in my example, instead of using codes.rambo.KeychainDemo in code, I’d use 8C7439RJLG.codes.rambo.KeychainDemo.
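In code, that full name goes into the query’s kSecAttrAccessGroup attribute. Here’s a sketch of what a save to the shared group could look like — the service and account names are placeholders, and the group name reuses the example above:

```swift
import Foundation
import Security

// Sketch: saving an item to a shared keychain group. Note the team ID
// prefix in kSecAttrAccessGroup; service/account values are made up.
let query: [CFString: Any] = [
    kSecClass: kSecClassGenericPassword,
    kSecAttrService: "codes.rambo.KeychainDemo",
    kSecAttrAccount: "demo-account",
    kSecAttrAccessGroup: "8C7439RJLG.codes.rambo.KeychainDemo",
    kSecValueData: Data("secret".utf8)
]

let status = SecItemAdd(query as CFDictionary, nil)
// errSecMissingEntitlement here usually means the group name is wrong
// or the capability wasn't added to this target.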

Simulator

I’m not a fan of the iOS Simulator, unless we’re talking about quick tests on different device sizes or taking screenshots for the App Store. There are just way too many features that don’t work properly in the Simulator, or that behave differently from real-world devices. One of them is Keychain Sharing. So if you’re using Keychain Sharing, forget about using the Simulator.

Confusion

I’ve made this mistake before, so I thought it’d be a good idea to include a warning about it here. There is a separate, unrelated type of entitlement which is called “App Groups”.

An App Group lets you share preferences and other things with extensions and other apps from the same developer, but it does not add any keychain-related functionality. So be careful not to confuse the two.

Keychain items that won’t go away

When you delete an app that uses Keychain Sharing from a device, if there’s another app that includes that group, the items won’t be deleted. Even after you delete the last app from the group, they won’t be deleted immediately.

So if you need to test with a fresh keychain, I suggest adding a debug flag to your app that allows you to wipe the keychain group during development.
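One way to implement such a flag is a debug-only helper that deletes every generic password item in the shared group. This is a hypothetical sketch (the function name and group name are from my earlier example, not a real API):

```swift
import Security

// Hypothetical debug helper: wipes all generic password items in the
// shared keychain group so tests start from a fresh keychain.
func wipeSharedKeychainGroup() {
    let query: [CFString: Any] = [
        kSecClass: kSecClassGenericPassword,
        kSecAttrAccessGroup: "8C7439RJLG.codes.rambo.KeychainDemo"
    ]

    let status = SecItemDelete(query as CFDictionary)
    // errSecItemNotFound just means the group was already empty.
    precondition(status == errSecSuccess || status == errSecItemNotFound)
}
```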

Capabilities are configured per-target

If you want to share your keychain items between an app and an extension, don’t forget that your app extension is a separate target from your main app. That means you need to go into your project settings, select your extension target and add the same Keychain Sharing group capability to the extension, otherwise it won’t have access to the shared items.

Oh, by the way, only executable targets such as apps and app extensions have entitlements, so even if you’re implementing your keychain code in a separate target, such as a framework, the capability needs to be added to the targets that are linking against that framework.

Debugging tip

Frequently when trying to use a system API, you’re going to come across error messages that are not very helpful since they don’t say much about the underlying problem that’s going on.

A simple debugging trick that’s helped me countless times in those situations is to use Console. Launch the Console app on your Mac, select your iOS device on the sidebar, enable info and debug messages in the “Action” menu and use the search bar to filter for a keyword related to what you’re trying to debug, such as “keychain”.

Below is an example of what happens if an app extension tries to save an item to the Keychain specifying a group that it doesn’t have the entitlement for. As you can see, the message is very clear about what’s wrong:

debugging with Console

This was just a brain dump of the pitfalls I’ve found while working with Keychain Sharing on iOS. If you have anything to add, feel free to reach out to me on Twitter.

App identifier (added on 01/16/20)

Patrick shared an excellent insight about a quirk with the Keychain Sharing entitlement which doesn't happen with any other type of capability:

If you're not sure about what he means by "app-specific ID", he's talking about the way app IDs used to work before there was a single team ID prefix shared by all apps from the same development team. I'm not sure when this changed, but a long time ago each app would have its own bundle ID prefix. So if you are the developer of one of these "legacy" apps, you won't be able to use keychain sharing between separate apps because the app ID prefix will be different between them.

Sharing between app groups (added on 01/16/20)

As noted by @KhaosT, it is now possible to use an App Group to also share keychain items. This is explained on this document by Apple. As explained in the document, you should still use a Keychain Sharing group if you only intend to share keychain items.

You can use SwiftUI today
https://rambo.codes/posts/2020-01-03-you-can-use-swiftui-today
Fri, 3 Jan 2020 23:50:00 -0300

Whenever Apple introduces a new technology for developers, there’s always the question: “Can I use this thing right now for my projects?”. The answer many times ends up being the good, old, and boring “it depends”, but more often than not it leans toward “probably better to wait for a few OS versions”.

With SwiftUI, it’s the same. While there are some brave developers using it to create production apps - especially on the Apple Watch - most developers seem to agree that it’s kind of like using Swift when it first came out.

I’ve been asked the question many times, and I tend to agree that going all-in on SwiftUI right now for an app you plan on publishing to the App Store is a little bit risky. The technology is very new, it has quite a few rough edges, and is bound to change significantly in the coming months/years.

But at the same time, just like I found it important for me to learn Swift when it was first introduced - even though my main projects were all still on Objective-C - I also find it important for every iOS developer to at least familiarize themselves with SwiftUI.

The question then is: given I don’t want to publish anything to my users that’s done with SwiftUI since I still have to support iOS 12 and I don’t want to risk getting something so new into production, how do I go about having enough experience with it to feel comfortable with the technology?

That type of comfort to me only comes when I create something “for real”. That’s how the unofficial WWDC app for macOS was born - it was an app I made to learn Swift. Following tutorials and making little test projects is great to learn the basics, but there’s a level of knowledge you can only get when you really get your hands dirty with the technology and start creating things with it.

I found a way to become comfortable with SwiftUI that I would like to share here since I think it might be useful to many developers. We often have our own internal tools that we built ourselves and use for different reasons. I have quite a few of them.

One example is the app that’s used to create content for ChibiStudio. It’s a Mac app that my illustrator and I use when creating the packs that are available and sold in the app. ChibiStudio also includes a “Learn” tab with content published by us, which is served as a static JSON file generated by another little app I made. I have another one that I use to test push notifications, one to generate JWT tokens for testing and quite a few others.

images of two of my internal tools

If you don’t have the habit of creating your own internal tools to make your job easier, I highly recommend it. They don’t have to be perfect, they just have to fulfill your own needs, and you should make them with no expectation that you’re ever going to make them available to anyone else.

So the trick I found for myself to get comfortable with SwiftUI was to start making all of my internal tools using the technology, instead of using AppKit or UIKit, which I’m already familiar with. This not only makes me more comfortable with SwiftUI, but it also makes the task of developing and maintaining these tools more fun, since I’m learning while making them.

It’s also a completely safe way to do it since these tools are never meant to ship to “regular users”, so I can afford to have some weird UI glitches here and there and not have a perfectly designed app. Of course, nothing beats actually releasing something to real users in the App Store, but this is the closest I could get without actually doing it, and because I’m doing it, I feel like when the time comes to adopt SwiftUI for user-facing projects, I’m going to be ready for it.

So yes, you can, in fact, start using SwiftUI today.

Quick tip: clearing your app’s launch screen cache on iOS
https://rambo.codes/posts/2019-12-09-clearing-your-apps-launch-screen-cache-on-ios
Mon, 9 Dec 2019 15:00:00 -0300

Every time I’ve had to change something in the launch screen on any of my iOS apps, I’ve faced an issue: the system caches launch images and is really bad at clearing said cache, even after the app has been deleted.

Sometimes I’d change the launch screen storyboard, delete the app and re-launch, and it would show the new storyboard, but any images referenced in the storyboard wouldn’t show up, making the launch screen appear broken.

Today, I did some digging into my app’s container and noticed that in the Library folder there’s a folder named SplashBoard, which is where the launch screen caches are stored.

So all you have to do to completely clear your app’s launch screen cache is run this code inside your app (which I've conveniently packaged into an extension of UIApplication):

import UIKit

public extension UIApplication {

    func clearLaunchScreenCache() {
        do {
            try FileManager.default.removeItem(atPath: NSHomeDirectory()+"/Library/SplashBoard")
        } catch {
            print("Failed to delete launch screen cache: \(error)")
        }
    }

}

Note: tested on iOS 13 only

You can put it in your app’s initialization code behind an argument flag that you enable during launch screen development, then leave it disabled when you’re not working on your launch screen.
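For example, something like this in your app delegate’s launch code (the “-ClearLaunchScreenCache” argument name is made up — use whatever flag you configure in your Xcode scheme):

```swift
import UIKit

// Hypothetical guard: only wipe the cache when the launch argument is
// enabled in the scheme, i.e. while actively working on the launch screen.
if ProcessInfo.processInfo.arguments.contains("-ClearLaunchScreenCache") {
    UIApplication.shared.clearLaunchScreenCache()
}
```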

This trick has saved me a lot of time while messing with launch screens, and I hope it will save you some time as well.

Apple has locked me out of my developer account (UPDATED)
https://rambo.codes/posts/2019-11-05-apple-has-locked-me-out-of-my-developer-account
Wed, 20 Nov 2019 10:00:00 -0300

Update 2019, Nov 22: Apple reached out and resolved the situation.

I’ve been unable to access my Apple developer account since August. When I try to access any part of the developer portal, like the beta downloads page or the certificates control panel, I get redirected to a contact form that reads “Need assistance with accessing your developer account?”. My developer team doesn’t show up in Xcode anymore. I’m also unable to manage certificates or send builds of my employer’s developer team apps while logged in to my developer account in Xcode because it says it’s “disabled for security reasons”. The push notification service denies any requests I make to it. Back when the issue first began, I filled out that form and got a case number (20000057023991), with the promise that support would get back to me “in one to two business days”.

After about two weeks of waiting, I decided to call developer support, which sounds easy, but I couldn’t find any phone number to reach them anywhere on the public part of the developer site. The only way to get on the phone with developer support is to visit a page within the developer portal where you can enter your phone number for them to call you later. The problem is that I couldn’t even visit that page because it also redirected me to the aforementioned contact form.

Determined to get someone on the phone, I used my employer’s developer account to be able to reach the phone support page, where I entered my number. Developer support then called me, and I gave my previous case number to a nice person on the other end of the phone, who explained that my case had been escalated to a supervisor, who then escalated it to their supervisor, and that I would hear back from them “soon”. This was in mid-September. In early October, I called again and was told I would receive an e-mail explaining the situation; I haven’t.

More recently, I tried calling again and got to talk with a supervisor, who said I would be getting an e-mail with instructions to get my access restored. During the call, they told me my developer account is currently “inactive”. I followed up over e-mail a couple of days later and got a generic response that “the internal team is still investigating the issue” and thanking me for my patience.

Like I mentioned before, the problem began in August. So far I’ve tried every possible private communication channel before deciding to make this story public. It’s worth mentioning that I didn’t get any e-mail or call from Apple warning about any sort of action being taken against my developer account. Apple always says that “running to the press doesn’t help”. Unfortunately, they haven’t responded in any way, even when I tried reaching out through internal contacts that I have. So the only option I have left now is to “run to the press”.

I've read about developers having their accounts shut down for all sorts of alleged reasons. The most recent case happened to my friend Ying, who got his account terminated for “fraud”, which he didn’t commit. After he shared his story publicly, Apple reinstated his account.

In my case, I believe that’s not what happened, since my apps are still in the App Store, ChibiStudio was even featured recently (after I was already locked out). The problem is that I’m effectively not in the developer program anymore. I can’t generate provisioning profiles for my iOS apps, I can’t send push notifications, I can’t even contact developer support, other than by using that form I’m always redirected to.

Until last week, I was still able to upload new builds to the App Store and notarize my apps, but now there’s a new developer agreement and I can’t accept it because I can’t access that part of the developer portal, which means I’m unable to upload new builds to the App Store or notarize Mac apps, which is a requirement if you want to ship apps for Catalina.

To give a brief overview of the impact this has:

  • My app ChibiStudio has been in the App Store since 2016, and it currently has more than 100,000 active users. I can’t release updates of the app until this issue is fixed.
  • My app AirBuddy, which I released earlier this year, has tens of thousands of users. I can’t notarize new builds of the app, which means I can’t update it.
  • The unofficial WWDC app for macOS, which is open-source and also has tens of thousands of users, is an extremely valuable resource for many Apple developers. I’m unable to archive, sign and notarize the app for distribution without access to my developer account, which means I’m unable to update it.

Besides leaving my users out in the dark if I’m unable to provide them with updates, this also severely impacts my income, and the income of my friend who makes the drawings for ChibiStudio.

I’m not sure if this is a deliberate act by Apple or some sort of bug (I remember facing a similar issue a few years ago, but it only lasted a couple of days). I see no reason for my developer account to be blocked. I’m not writing malware, I’m not misusing enterprise certificates like some companies were, I’m not performing any scams (like many apps are), I’m just an indie developer trying to make good products.

I've been an Apple developer since 2008 and have been publishing apps to the App Store since 2013. Apple has made tens of thousands of dollars with what I pay for the developer program plus the 30% cut they take from my sales. I know that's not much for them, but to me it is.

I would like for someone from Apple to reach out and explain what’s happening, and preferably fix the issue so that I can keep shipping my apps to keep my users happy.

Apple, is there anything I can do to get my account unlocked so that I can get back to work?

Hacking with private APIs on iPad
https://rambo.codes/posts/2019-01-11-hacking-with-private-apis-on-ipad
Fri, 11 Jan 2019 12:00:00 -0200

Using private APIs can be fun. When working with private APIs, I usually prefer to write code in Objective-C, since the runtime makes it a lot easier to use classes and call methods Apple doesn’t want you to. That’s of course when I’m working in Xcode on my Mac.

But what if I want to use my other computer? You know, the one I can carry with me a lot more easily than my 15” MacBook Pro. I just recently got the new 12” iPad Pro and I wanted to do some hacking with private APIs on the go. It’s not as convenient as using ObjC and Xcode on a Mac, but it can be done. I’m going to share what I found in this brief article.

Swift Playgrounds

The best way to interact with native iOS APIs on iOS itself is Swift Playgrounds. It supports most built-in frameworks and you can pretty much write anything you want using the app.

To use private APIs, though, you have to interact with the runtime, and that can become very verbose and ugly in Swift. The following is an example of how to load a private framework, then instantiate and show one of its view controllers using Swift Playgrounds on iPad:

import PlaygroundSupport
import UIKit
import ObjectiveC

assert(Bundle(path: "/System/Library/PrivateFrameworks/AvatarUI.framework")!.load())

let AVTSplashScreenViewController = NSClassFromString("AVTSplashScreenViewController") as! UIViewController.Type

PlaygroundPage.current.liveView = AVTSplashScreenViewController.init()

This is what you get when you run the code above:

That’s great, but as soon as you want to do more complex things, you start having to use NSSelectorFromString, performSelector, etc., and that can get quite ugly. I generally use Swift Playgrounds only for very simple tests when dealing with private APIs.
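For illustration, this is the kind of selector-based dance that gets verbose quickly. The class name comes from the snippet above, but the “_startAnimating” selector is entirely made up for the sake of the example:

```swift
import UIKit

// Hypothetical: calling a private, no-argument method by selector.
// "_startAnimating" does not necessarily exist; it's just illustrative.
let controllerClass = NSClassFromString("AVTSplashScreenViewController") as! UIViewController.Type
let controller = controllerClass.init()

let selector = NSSelectorFromString("_startAnimating")
if controller.responds(to: selector) {
    controller.perform(selector)
}
```

Anything taking non-object arguments gets worse still, which is why Objective-C remains more pleasant for this kind of work.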

JSBox

JSBox is a great app that allows you to write JavaScript code and interact with the runtime using its bridging support. To reference something from Objective-C land, all you have to do is use $objc("ClassNameHere"). You can invoke any method with invoke("method:name:here:", arg0, arg1, arg2), and the app itself provides lots of utilities for UI development and for calling into other iOS APIs.

Another advantage of JSBox is that it supports Shortcuts and has an intents UI extension, so you can write a JSBox script and invoke it via Shortcuts without having to launch the app for the script to run. You can even show UI inside the Siri interface. Here’s an example, showing how you can create a JSBox script that renders the Face ID unlock animation:

This is the result:

Pythonista

Pythonista is another great option since it allows you to call native APIs with an Objective-C bridge. Between the three options I’m showing in this article, I think Pythonista offers the best syntax for calling ObjC methods.

Here’s how you can write a Pythonista script that shows the iOS power down UI when run:

And here’s the result:

Conclusion

The best development environment to work with private APIs is still Xcode on the Mac, but there’s a lot that can be done on iOS, especially the iPad. Of the three options shown in this article, it is hard to name a favorite because each one has advantages and disadvantages, but the one I’ve been using the most, especially because of its flexibility and integration with Shortcuts, is JSBox.

I hope this article has been helpful. As always, you can reach me on Twitter.

Animations are assets: using Core Animation archives on iOS
https://rambo.codes/posts/2018-11-09-animations-are-assets
Sun, 11 Nov 2018 12:00:00 -0200

I should start by explaining what I mean by “animations are assets”. I don’t mean that every single animation in an app must be represented by an asset and can’t be done programmatically, since that would be dumb. What I do believe in is that complex animations, especially ones that are not very dependent on dynamic data that’s only known at runtime, should be assets.

I’ve always been in favor of letting assets be assets. If you have a button and that button has an icon, that icon should be an asset, either a set of PNGs or a PDF if you’d like to keep the vector data. Some people like to draw everything in code, even using apps such as PaintCode to generate it for them. I’m not a fan of that approach, and the same thing goes for animations.

One way to represent animations as assets is to encode them as video files and play them back using something like AVFoundation. That’s a valid approach depending on what you’re doing. If you don’t have to support a very large variety of screen shapes and dimensions, a simple video should work. If you need animated vector graphics that can be scaled and possibly transformed in other ways at runtime, you’re better off by using another technique.

The one I’m going to propose today is not used a lot by third-party apps, but it is used a lot by Apple’s apps and system components. There’s this thing called a “Core Animation Archive”, you can find one if you look into Apple’s apps and frameworks, usually represented by a file with the extension .caar. These files are actually fairly straightforward: they consist of a Core Animation layer tree which is archived using NSKeyedArchiver, resulting in a “frozen” layer tree you can store on disk and then load again at runtime.

If you’re not familiar with NSKeyedArchiver, all you need to know is that it’s a very old API (being old doesn’t mean bad or deprecated, it’s just old) that takes objects from memory and encodes them in a way that can be stored on disk and then transformed back into objects later. Storyboards and XIBs work this way at runtime: they’re kinda like freeze-dried objects.

So all you need to know to create yourself a Core Animation Archive is how to use NSKeyedArchiver and which format the archive should be in. CAAR files usually consist of a dictionary as the root object, this dictionary has a key called rootLayer, with its value being, you guessed it, the root layer of the archive that should be read by the application and drawn on screen.

Here’s a simple way to create a Core Animation Archive programmatically:

// Create a simple square layer

let layer = CAShapeLayer()

let rect = CGRect(x: 0, y: 0, width: 200, height: 200)

layer.path = CGPath(roundedRect: rect, cornerWidth: 5, cornerHeight: 5, transform: nil)
layer.frame = rect

layer.fillColor = UIColor.red.cgColor
layer.strokeColor = UIColor.white.cgColor
layer.lineWidth = 2

// Create dictionary required to comply with the CAAR format

let caar = ["rootLayer": layer]

do {
    // Use NSKeyedArchiver to "freeze-dry" the layer tree

    let data = try NSKeyedArchiver.archivedData(withRootObject: caar, requiringSecureCoding: false)
    
    // Write test CAAR file to the Documents directory

    let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let url = URL(fileURLWithPath: "\(path)/redSquare.caar")

    try data.write(to: url)
} catch {
    print(error)
}

If you open this file using my CAARPlayer app, this is the result:

To load and display this file programmatically, you have to do something like this:

let data: Data = // load file from disk

// Force-unwrapping for demo purposes, "!" is evil, don't use it
let caar = try! NSKeyedUnarchiver.unarchiveTopLevelObjectWithData(data) as! [String: Any]

let rootLayer = caar["rootLayer"] as! CALayer

// Do something with rootLayer, such as add it to a view

We’re just doing the reverse of what we did before: loading the data, unarchiving it as a dictionary with String keys and Any values, and grabbing the rootLayer key, which will be of type CALayer.

To make things nicer, we could write an AnimationArchive class, like this one:

final class AnimationArchive {

    let rootLayer: CALayer

    enum LoadError: Error {
        case assetNotFound
        case invalidFormat
        case missingRootLayer
    }

    init(assetNamed name: String, bundle: Bundle = .main) throws {
        let data: Data

        if let catalogData = NSDataAsset(name: name, bundle: bundle)?.data {
            data = catalogData
        } else {
            guard let url = bundle.url(forResource: name, withExtension: "caar") else {
                throw LoadError.assetNotFound
            }

            data = try Data(contentsOf: url)
        }

        guard let caar = try NSKeyedUnarchiver.unarchiveTopLevelObjectWithData(data) as? [String: Any] else {
            throw LoadError.invalidFormat
        }

        guard let layer = caar["rootLayer"] as? CALayer else {
            throw LoadError.missingRootLayer
        }

        self.rootLayer = layer
    }

}

Then loading the archive is as simple as:

do {
    let archive = try AnimationArchive(assetNamed: "redSquare")
} catch {
    print("Error loading archive: \(error)")
}

Notice the code will first try to find the asset in an asset catalog. That’s the way I prefer to ship assets with my apps, including animation assets. You can read more about asset catalogs and how to use data assets in this article. If it can’t find the asset in a catalog, it will try to load it from the bundle’s resources folder, assuming the caar extension.

Using Kite

The example above was used to illustrate how simple it is to create and read Core Animation Archives. It’s not always practical to create the animation in code; if we were always creating the animation in code, there would be no need to archive it to disk and load it later (we could just use the code directly).

But there’s a tool that makes creating animations with CoreAnimation much easier: Kite. I think of Kite as “Sketch, but for Core Animation”. My workflow usually goes like this: create animation assets in Sketch, import with Kite, animate and export to CAAR.

Let’s say there’s a flow in my app the user must do and I want to reward them at the end with a nice haptic feedback and a custom, animated checkmark.

I start by creating a simple checkmark in Sketch, which is then imported into Kite. In Kite, I animate the strokeEnd property for both the checkmark and the circle around it, creating a nice little animation. Then I choose the Export > Core Animation Archive option, save the caar file and add it to my asset catalog.

I then create a simple UIView subclass that can be initialized with an instance of that AnimationArchive class I showed earlier:

class AnimationView: UIView {

    let animationLayer: CALayer

    init(archive: AnimationArchive) {
        animationLayer = archive.rootLayer

        super.init(frame: .zero)

        stop()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()

        CATransaction.begin()
        CATransaction.setDisableActions(true)
        CATransaction.setAnimationDuration(0)

        installAnimationLayerIfNeeded()
        layoutAnimationLayer()

        CATransaction.commit()
    }

    private func installAnimationLayerIfNeeded() {
        guard animationLayer.superlayer == nil else { return }

        animationLayer.isGeometryFlipped = false
        layer.addSublayer(animationLayer)
    }
    
    // ...
}

The code above is fairly straightforward. The AnimationView class is initialized with an AnimationArchive, from which it gets its animationLayer. I use layoutSubviews as the signal to install the animation layer in the view’s layer tree and to lay it out according to the view’s bounds. The CATransaction calls are required to prevent Core Animation from automatically animating the changes we make to the layer in those methods.

Setting isGeometryFlipped on the animation layer is necessary because we exported it from macOS, but are using it on iOS, which has a different coordinate system.

The method layoutAnimationLayer is not shown, but you can find it in the sample project. It does some math to transform the animation layer so it fits the view’s bounds, without distorting its contents.

To control the playback of the animation, I implemented stop() and play() methods:

func stop() {
    animationLayer.timeOffset = 0
    animationLayer.speed = 0
}

func play() {
    animationLayer.speed = 1
    animationLayer.beginTime = CACurrentMediaTime()
}

Those are very straightforward: stop() rewinds the layer so it goes back to the beginning of its timeline, then sets its speed to zero, preventing any animations from playing. The play() method sets the speed of the animation layer to 1 and its beginTime to CACurrentMediaTime() to make sure the animation starts playing from the beginning immediately after it’s called.

That’s it! There are other things you can implement by messing around with the timeOffset and speed properties of the animation layer such as reversing the animation or driving the animation using a gesture recognizer, which is what I did for the onboarding shown on my previous article.
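The scrubbing idea can be sketched like this — a hypothetical helper, assuming the archived root layer’s duration property reflects the animation’s length:

```swift
import QuartzCore

// Hypothetical sketch: pause the layer and scrub its timeline to a
// progress value in 0...1, e.g. driven by a gesture recognizer's
// translation. Assumes the layer's duration matches the animation.
func scrub(_ animationLayer: CALayer, to progress: Double) {
    animationLayer.speed = 0
    animationLayer.timeOffset = progress * animationLayer.duration
}
```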

Background

So now that I’ve shown how it’s done, maybe I should explain why I prefer to treat animations as assets instead of coding them by hand or using 3rd party animation frameworks.

The use case that made me adopt Core Animation Archives wasn’t directly related to animations. When I was making the first version of my app ChibiStudio, I wanted a way to store vector data that could be manipulated at runtime (such as changing the fill color of a layer) for the items a user can pick to create their character.

I thought about using SVGs, but there’s no native way to turn an SVG into CoreAnimation layers on iOS, which means I would need to ship a large dependency such as SVGKit with my app to do it at runtime. Shipping SVGs with the app would also make the app a lot larger and have a performance impact because those SVGs would have to be parsed and turned into CoreAnimation layers at runtime.

Then I learned about Core Animation Archives while doing some reverse engineering of iOS and decided to use them instead. The app has been available since 2016, and this technique has proven to work very well for its needs.

I’ve been asked before if this is using a private API, to which the answer is: definitely not. The CALayer (public) class adopts the NSCoding (public) protocol and we use NSKeyedArchiver and NSKeyedUnarchiver (both public) to save/read the archive. There’s no private API involved, we’re just using NSCoding for CALayer like we would for any other object such as NSString, NSNumber, etc. CALayer’s conformance to NSCoding (more specifically NSSecureCoding) is even documented.

So no, this is not using a private API, and it's not likely to break any time soon. I wish this technique were more widely known and documented by Apple, because I think many apps could benefit from it.

Main advantages

These are, in my opinion, the main advantages of using this technique instead of code generation or a 3rd party animation framework:

  • You get to use Kite (a visual editor) without having to use its generated code, which can be large and not particularly pretty
  • It avoids adding another dependency to the app, which for me is always a win
  • Since the output is a Core Animation layer tree, you can manipulate it at runtime to change colors, transform layers or alter the animation's behavior
  • Being an asset means that it can be added to an asset catalog, directly to a bundle or even downloaded from a server

I hope this article has inspired you to try out this technique. As always, you can reach me on Twitter.

I’d like to thank my friend Natan for his help with this article.

]]>
https://rambo.codes/posts/2018-10-22-airpowerAn incredibly nerdy deep-dive into the AirPower charging animationhttps://rambo.codes/posts/2018-10-22-airpowerMon, 22 Oct 2018 12:00:00 -0300Maybe this is not common for all developers, but as an iOS developer, one of my favorite things to implement are beautiful user interfaces and cute animations. When the UX people at my job presented me with a new onboarding screen for our app, it consisted of static screens on Zeplin, without any animations specified. I asked for the Sketch file and came up with this:

Needless to say, they were happy with the result, and the product manager didn't mind me spending a couple of extra hours on it. In case you're wondering: I made the animations in Kite and exported each step to a separate Core Animation Archive file (a feature implemented in Kite per my request), created a UIView subclass to switch between the different animations, and hooked up the swipe gesture so the animations would follow the user's actions.

But I’m not here today to write about that project, I want to write about Apple’s ~seemingly dead~ project: AirPower.

Even before its announcement more than a year ago, while digging through a leaked build of the iOS 11 GM, I noticed that a new component of the system would be responsible for presenting a charging interface and that it would have some sort of 3D animation.

When Apple announced AirPower a few days later, the video they showed immediately caught my attention.

That was clearly the “3D animation” I was talking about. I knew there was going to be an animation because the component in question was using SceneKit, but I didn’t know it was going to be exclusive to Apple’s own wireless charger.

How does it work?

iOS has several apps with names ending in “ViewService” that aren’t “normal” apps: they are launched by something else on the system and usually draw UI on top of another context, such as an app, the home screen or the lock screen.

An example of such a view service is SharingViewService, the app responsible for presenting things such as the AirPods and HomePod pairing UI and AirDrop. In the tweet I included earlier, I mention ChargingViewService. That’s an app that was included with the iOS 11 GM and the first public releases of iOS 11, but was removed in iOS 11.2, probably because it was not needed yet given that AirPower was (and still is) not available.

ChargingViewService is the process responsible for showing the cool animation. When the device is connected to power, a daemon called sharingd detects the presence of a power source, checks whether that power source is a wireless charger manufactured by Apple and, if so, triggers the presentation of the charging UI.

Initially, ChargingViewService creates a view controller called InitialViewController (a note: the lack of a prefix does not mean this component is written in Swift; in fact, it isn’t). InitialViewController receives information from sharingd about the devices being charged and their power sources. On viewDidAppear, it adds a child view controller to itself called CallistoViewController (Callisto is the codename for the AirPower project).

CallistoViewController is responsible for showing a list of devices being charged with their icon, name and battery indicator. Each item on that list is called a platter. But before the platter is shown to the user, CallistoViewController presents yet another view controller, called EngagementViewController.

EngagementViewController is the view controller responsible for displaying the 3D animation when a new device is engaged on the same wireless power source as the key device. As far as I can see, the “key device” is the main device in the group of devices being charged, that is, the iPhone. The first engagement animation to be displayed is the one for the key device itself.

When EngagementViewController appears on screen, it calls a method on itself called startEngagement, which in turn configures a view called EngagementView with several properties about the device being engaged.

To be able to display the engagement animation for a device, EngagementView needs to be configured with several assets that define how the animation is going to work and also provide the visual assets for the animation.

These assets are not currently present in any public build of iOS; they are downloaded on demand from mesu, Apple’s software update server. This makes sense, since most people won’t even have an AirPower, and even then most people won’t have every device that can be charged with one, so only the assets that are actually needed are downloaded and installed to /System/Library/PreinstalledAssetsV2/RequiredByOS.

Another thing that’s done during this process is a capture of the device’s wallpaper through a call to SBSUIWallpaperGetPreview . Yes, the virtual device on screen will show up with the actual wallpaper that’s set up on your device.

The most important asset for the animation is a video file, usually called Charging.mov (AirPods have other video files for Left-only, Right-only and Right+Left). This video file consists of two videos side-by-side: one of them is the color video of the 3D device animating into the screen and then rotating and the other one is the same content, but represented as an alpha mask.

Another asset is a SceneKit scene file that contains a plane matching the position of the device’s screen throughout the animation (it lines up with the video). When the engagement animation is presented, the video is sliced in half and is used as a texture in a SceneKit scene, with the color part being used as the diffuse texture and the alpha part being used as the transparency texture, resulting in a video with a transparent background. The wallpaper is composed on top of the video with the plane provided by the scene.
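I don't know exactly how Apple wires this up internally, but the slicing can be sketched roughly with public SceneKit API. The video URL is hypothetical, and whether a single AVPlayer can back both texture slots in practice is one of the assumptions here:

```swift
import SceneKit
import AVFoundation

// Hypothetical: a local copy of the side-by-side Charging.mov.
let player = AVPlayer(url: chargingMovieURL)

let material = SCNMaterial()

// Left half of the video: the color content, used as the diffuse texture.
material.diffuse.contents = player
material.diffuse.contentsTransform = SCNMatrix4MakeScale(0.5, 1, 1)

// Right half: the alpha mask, used as the transparency texture. Scaling the
// texture coordinates by 0.5 and shifting by 0.5 samples the right half.
material.transparent.contents = player
material.transparent.contentsTransform =
    SCNMatrix4Translate(SCNMatrix4MakeScale(0.5, 1, 1), 0.5, 0, 0)

// Applied to a plane in the scene, the result is the device video with a
// transparent background, over which the wallpaper plane can be composited.
```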

Only part of the animation is actually provided by the video and scene files. As can be seen in the video above, after the initial appearance of the device it rotates quickly, but it doesn’t shrink down and “fly back” toward its platter, which is what happens in the actual UI.

That “flying back” animation is configured through a series of plist files which are also a part of the assets downloaded from mesu. There are different permutations of the files for each iPhone screen size and also variants for right-to-left languages.

The engagement view uses those plist files to drive the animation of the device flying into its platter, which is accomplished by transforming the SceneKit scene view containing the video and screen of the device. At the end of the animation, a snapshot is taken which becomes the static image of the device that can be seen on the platter.

Making it run on an iPhone 5s

I keep a jailbroken iPhone 5s around just for experimentation. The iOS version I’m running on that device (11.1.2) happens to include ChargingViewService. I had been able to activate the UI before, but only showing the platters since I didn’t have the animation assets.

Recently, I got my hands on those assets for the Apple Watch Series 3 and the AirPods with the wireless charging case, so I decided to give it a go again and try to make the engagement animation run.

The process to make it run was very similar to the one I described in my NSSpain talk this year. Initially, I made a framework which I called ChargingHack and modified the load commands of the ChargingViewService binary to include an LC_LOAD_DYLIB command pointing to it.

Later on, I figured out a simpler way of doing things: I made a new app in Xcode called HackyViewService, removed its “compile sources” build phase, added everything from ChargingViewService.app into the project (removing the code signature), and changed the LC_LOAD_DYLIB command to load ChargingHack from @executable_path/Frameworks. This works because ChargingViewService doesn’t require any special entitlements to present the engagement animation and platters, and because I’m running on a jailbroken device. This was a much better way to tinker with ChargingViewService, since I could run it directly from Xcode.

What I do in ChargingHack is again very similar to my NSSpain demo: I swizzle some methods to make things work the way I want them to, then instantiate InitialViewController and “make it believe” it’s seeing an Apple Watch being charged through AirPower.

I’m not going to post the code on-line (yet) because I plan on using this as a demo in workshops and other talks, so keep an eye out for that.

I hope you enjoyed this nerdy exploration of an unreleased feature of iOS, if you have any comments, you can find me on Twitter.

]]>
https://rambo.codes/posts/2018-10-03-unleashing-the-power-of-asset-catalogs-and-bundles-on-iosUnleashing the power of asset catalogs and bundles on iOShttps://rambo.codes/posts/2018-10-03-unleashing-the-power-of-asset-catalogs-and-bundles-on-iosWed, 3 Oct 2018 22:22:00 -0300Bundles and asset catalogs are features of Apple's systems every developer and app is using, even though many developers are probably not aware of their existence or how powerful they can be, especially when used together.

Today, I want to talk about what those two things are, what each one of them is supposed to do and how you can use both of them together to create a theming system for an app.

Bundles

Apple's systems, including iOS, use bundles to represent a collection of resources organized in a directory structure. Your app is a bundle, every dynamic framework your app is linked to is also a bundle, storyboards are bundles, there are many examples of bundles used in iOS, macOS, tvOS and watchOS.

A bundle is basically just a folder with some extension, which is displayed by Finder as if it were a file. Application bundles use the well-known .app extension. You can inspect the contents of a bundle in Finder by right-clicking and selecting "Show Package Contents".

Accessing a bundle's resources

You've probably used the url(forResource:withExtension:) method at some point while working on an app. The bundle you call this method on is usually the main bundle, represented by Bundle.main. When running in your app, Bundle.main will return your app's bundle (the one with the .app extension).

Accessing your app's own resources is cool, but you can also access resources from other bundles. Let's say your app has an embedded framework called Utilities and it has the bundle identifier codes.rambo.Utilities. Inside that framework, there's an image file named image.png. To access that image from your app, you can instantiate a Bundle using the identifier codes.rambo.Utilities and then use url(forResource:withExtension:) to get the URL for the image file inside that bundle:

guard let bundle = Bundle(identifier: "codes.rambo.Utilities") else { return }
guard let imageURL = bundle.url(forResource: "image", withExtension: "png") else { return }

let image = UIImage(contentsOfFile: imageURL.path)

print(String(describing: image))

Custom bundles on iOS

So far I've mentioned only standard bundles, which can be created using one of Xcode's built-in templates, but it's also possible to create your own custom bundles; you can even do that without using Xcode at all (more on this later).

Since there's no Xcode template to create a custom bundle for iOS (only for the Mac), I decided to create my own. You can download the custom Xcode template here. To install the custom template, download it and run the following commands in Terminal:

cd ~/Downloads
unzip iOS_bundle_template.zip
mkdir -p ~/Library/Developer/Xcode/Templates/Custom
mv "iOS Resources Bundle.xctemplate" ~/Library/Developer/Xcode/Templates/Custom

Notice that the bundle template itself is a bundle, with the .xctemplate extension, how meta 😄. Restart Xcode to be able to access the new template.

Now that you have the custom template installed, with an iOS app project opened, you can go to File > New > Target and select iOS Resources Bundle.

On the next page, you can name the bundle, give it an identifier and optionally change the extension, or just leave the default extension which is .bundle.

That's it: now you have a custom bundle you can use to store resources. My template is called iOS Resources Bundle because I made it specifically to store resources like images and sounds, but bundles can also contain executable code.

Adding a resource to this bundle is just like adding a resource to any other target, drag the file into its group in Xcode and make sure the target membership is correct.

Another important step is to make sure your bundle is built before the app that contains it, and that it gets copied into the app's resources when the app is built.

That's it: now you have a custom bundle you can use to store assets. It can be accessed like so:

guard let bundleURL = Bundle.main.url(forResource: "Pictures", withExtension: "bundle") else { return }
guard let bundle = Bundle(url: bundleURL) else { return }
guard let imageURL = bundle.url(forResource: "image", withExtension: "png") else { return }

let image = UIImage(contentsOfFile: imageURL.path)

print(String(describing: image))

The way you access the custom bundle is a little different from the framework because the custom bundle is not loaded by the runtime, so it can't be looked up by identifier. To load it, we get a reference to its URL inside our app's bundle, then initialize it with Bundle(url:). After the initialization, accessing its resources works the same way as it did for the framework bundle.

Asset Catalogs

Asset catalogs are a way to store app resources by mapping named assets to files. Each asset can be represented by multiple files, with each file targeting a specific set of device attributes such as device class, memory, Metal version and color gamut.

Most iOS developers are probably familiar with the usage of asset catalogs to store images such as icons, but asset catalogs can be used for more than just that. Since iOS 11, you can also store named colors in asset catalogs. Another lesser known feature of asset catalogs is that you can store arbitrary data assets, which can be literally anything.

Suppose you have a set of configurations for your app, but you need to change those configurations depending on how much memory the device has. You could do all of that in code, but using asset catalogs is easier. Take this configuration struct as an example:

struct Configuration: Codable {
  let numberOfParticles: Int
  let isLowEndDevice: Bool
  let enableShadowEffects: Bool
}

The above struct can be encoded as a property list for low-end and high-end devices:
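For example, the two variants could be produced with PropertyListEncoder; the specific values and file names here are made up:

```swift
import Foundation

let encoder = PropertyListEncoder()
encoder.outputFormat = .xml

// Hypothetical values for each device tier.
let lowEnd = Configuration(numberOfParticles: 200, isLowEndDevice: true, enableShadowEffects: false)
let highEnd = Configuration(numberOfParticles: 2000, isLowEndDevice: false, enableShadowEffects: true)

// Write the two plists that will be dragged into the data asset's slots.
try encoder.encode(lowEnd).write(to: URL(fileURLWithPath: "config-lowend.plist"))
try encoder.encode(highEnd).write(to: URL(fileURLWithPath: "config-highend.plist"))
```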

Now that you have the configurations, you can add a new data asset to the asset catalog: select the + button in the asset catalog editor, add a new data asset, check the memory variations, then drag the high-end plist to the 3 GB and 4 GB slots and the low-end plist to the 1 GB and 2 GB slots:

To load a data asset, you use the class NSDataAsset:

let asset = NSDataAsset(name: "config")

To access the underlying data, you can use the data property of NSDataAsset. When you do that, the system gives you the correct data for the current device's attributes automatically.

Note that all methods used to load resources from asset catalogs also have an optional bundle parameter you can provide in case you want to load assets from a bundle other than your app's main bundle, like so:

let asset = NSDataAsset(name: "config", bundle: otherBundle)

Since Configuration is Codable, you can grab the property list from the asset catalog and use it to get the correct configuration for the current device. An extension on Configuration can make things easier for you:

extension Configuration {

  init?(assetNamed name: String, bundle: Bundle = .main) {
    guard let asset = NSDataAsset(name: name, bundle: bundle) else {
      return nil
    }
  
    guard let config = try? PropertyListDecoder().decode(Configuration.self, from: asset.data) else {
      return nil
    }

    self = config
  }

}

Then you can use the extension to load the configuration from your asset catalog:

let config = Configuration(assetNamed: "config")

Even better: instead of extending Configuration, you can extend the Decodable protocol itself so that any Decodable can be initialized from a data asset:

extension Decodable {

  init?(assetNamed name: String, bundle: Bundle = .main) {
    guard let asset = NSDataAsset(name: name, bundle: bundle) else {
      return nil
    }
  
    guard let instance = try? PropertyListDecoder().decode(Self.self, from: asset.data) else {
      return nil
    }

    self = instance
  }

}
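With that extension in place, any Codable type can be loaded from a data asset in one line. Theme here is a hypothetical type, just to show the generic version at work:

```swift
// Hypothetical model stored as another data asset in the catalog.
struct Theme: Decodable {
    let backgroundColorName: String
    let cornerRadius: Double
}

let config = Configuration(assetNamed: "config")
let theme = Theme(assetNamed: "theme") // assumes a "theme" data asset exists
```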

This is a very simple example, but you can expand from here to take advantage of asset catalogs in your apps.

Case study: ChibiStudio

With the upcoming release of ChibiStudio 2.0, we're moving the item packs to bundles and asset catalogs. A very important aspect of ChibiStudio since day one is that we want it to work without an internet connection, and we want users to be able to use purchased item packs right away, without having to wait for a download to finish. This means we need to ship every single item pack with the app itself.

Currently, the packs are stored in three separate files/groups:

1 - The chibipackx file, a file containing metadata for the pack, such as its name and availability conditions (some packs are only available during a limited time period, such as the Easter pack).

2 - Several data files for each item in the pack. These files contain the Core Animation vector data used to draw the item and some metadata, such as the color slots and which ones can be customized by the user.

3 - Several PNG files for each item in the pack, containing a small preview image of the item displayed in the grid. Drawing the vector representation for several items in a collection view is too expensive, so we need those previews.

With 2.0, this changes to:

1 - A bundle for each pack. The bundle contains an asset catalog which has an asset for each item's vector data and another asset for each item's preview image. We initially compressed the preview images using the new HEIF compression available on iOS 12, which makes the files smaller, but we ended up not using that compression because of performance constraints.

2 - A database with metadata for all packs available and other metadata describing how the items and packs should be organized in the UI. All content is loaded from the corresponding pack bundles when needed, based on the metadata (each item has a unique identifier).

The new setup has several advantages over the previous one:

1 - Asset lookup doesn't involve traversing directories and dealing with paths

2 - We can take advantage of asset catalog thinning based on OS version and device. (We also tried the new HEIF compression available on iOS 12, but ended up not using it because it was too slow at runtime for smooth scrolling with hundreds of images in a collection view.)

3 - The project itself doesn't have to contain all of those files for each item (tens of thousands), improving build times

4 - Moving the metadata to a database gives us more flexibility as to how we choose to organize the items in the UI

This is only possible through the use of bundles: you can't have multiple asset catalogs in a single bundle, because if you create multiple catalogs in a single target in Xcode, they all get compiled into a single Assets.car file at the end. Separating the packs into their own bundles also makes it possible for us to add new features to packs in the future, such as metadata in the Info.plist file or even executable code.

Generating asset catalogs and bundles without Xcode

Packs for ChibiStudio are created in a custom Mac app we made specifically for this task. The artist draws the items in Sketch and exports them to the Chibi Pack Editor using a custom Sketch plugin. The editor then turns the Sketch drawings into Core Animation layers and assigns a category, index and layer to each item based on the layer name in Sketch. From there, the artist can customize aspects of the item, such as the layer and color slots.

Since packs are created in the editor, it was necessary to add the asset catalog and bundle generation functionality to the app itself. The finished bundles are then imported into Xcode and added to the app's resources in the build phases.

Asset catalogs created by Xcode have a specific directory structure and use JSON files to configure their contents. I'm not going to dive into too much detail on this, but you can read Apple's documentation to understand the format.

The .xcassets folder created by Xcode is an editor representation of an asset catalog, which must be compiled in order to be used during runtime. Normally, Xcode compiles it for you, but you can also compile it manually using a command-line tool called actool.
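For reference, a typical manual invocation looks something like this. Flags can vary between Xcode versions, so treat this as a sketch and check `xcrun actool --help` for your toolchain:

```shell
xcrun actool "Pack.xcassets" \
    --compile "OutputDirectory" \
    --platform iphoneos \
    --minimum-deployment-target 12.0 \
    --output-format human-readable-text
```

This produces the compiled Assets.car inside OutputDirectory, ready to be placed in a bundle.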

To make the process easier in Chibi Pack Editor, I created a framework called AssetCatalogKit.

To generate bundles in the editor, I created a template bundle which is embedded in the app. After compiling the asset catalog for a pack, the editor copies this bundle template into the destination directory, moves the asset catalog into the bundle and creates its Info.plist file.

The result is a bundle with an Info.plist file and an Assets.car file. The bundle is located with the technique mentioned before; preview image assets are loaded using UIImage, and vector data assets are loaded using NSDataAsset.

Practical example: theming

I didn't want to end this article without providing some sample code for you to play with, so I made a very simple app that uses bundles for theming. The app has two bundles: Light.bundle and Dark.bundle. Each bundle has its own asset catalog with color definitions and a config asset containing configuration for that theme.

The app's ThemeManager class takes care of loading the correct bundle for the selected theme and applying the colors and properties from that theme to the UI. You can find the sample app here.

This is a very simple example, with the same technique you could change more about your app depending on the theme, such as metrics (spacing, sizes, etc), images, or anything else that can be stored in an asset catalog or bundle.

In the sample app, I created the theme bundles using Xcode, but you could also build a simple Mac app as a theme editor for your iOS app and generate the bundles from it. That app could then be given to your design team, giving them full flexibility when creating themes, without the need to install Xcode or deal with JSON and property lists, while taking full advantage of asset catalogs.

This is precisely what we're doing for ChibiStudio 2.0, which will also support theming: I created an app that allows me to define spatial metrics, font metrics and colors for different themes. The app exports a bundle for each theme, and it also generates Swift code to make it easier to access the theme's values. A theme can inherit values from other themes, similar to how CSS rules can be inherited.

I hope this article gave you some ideas of how to apply the power of bundles and asset catalogs on your projects, if you have any questions or comments, you can always reach out on Twitter.

]]>