Not A Robot

Simple Unity Playmode Configuration

Building a game in any engine will inevitably result in a degree of complexity that requires management. If you’re working as part of a team, this complexity can quickly multiply, and as time goes on and features are added, you’ll invariably find that interests begin to compete.

Developers working on different systems may begin to step on each other’s toes: level designers may want the world to populate with enemies, but they might not necessarily want the player to be able to take damage whilst testing the flow of a level. The team tasked with implementing combat mechanics might want to make enemies behave particularly aggressively, or passively, whilst iterating on their systems, but your player character’s default health makes testing a chore. Perhaps your crafting system team would benefit from collectable items being highlighted with a bright green shader that renders through walls, but your GDD demands that your game is a dark, moody adventure where resources are difficult to find, and requires players to thoroughly investigate every nook and cranny.

In this post, I’ll walk through a simple pattern I’ve become fond of lately, one that allows different teams to specify their own preferences during development and to test with ease.


When In Doubt, Make It Configurable

During the early days of development, it can be very tempting to hard-code certain values, or define them once in a prefab and then just modify your component’s settings before entering play-mode (or even afterwards!) to test particular scenarios.

For a solo developer or a very small team, this workflow is ‘OK’ for the most part, but any reasonably sized project or team may begin to notice that collectively, they spend an awful lot of time working around the game’s default configuration, manually tweaking values at runtime, or inventing team or scenario-specific tools and utilities which allow the developer to hack a path through the game’s various systems and set things up to more easily iterate on their own particular feature.

A screenshot of a Unity prefab, with a component named 'Entity' with various basic configuration options.
‘Yes, <x> is configurable’: Lies we’ve all told ourselves

This kind of ad-hoc (ad-hack?) or limited configuration process will only compound the complexity problem over time, and as the project grows, your developers will likely end up spending a not-insignificant amount of time fiddling and tweaking things every time they want to test a particular feature. This kind of wasted time quickly becomes routine, and therefore incredibly dangerous: you almost certainly didn’t budget for ‘messing about’ when you estimated how long a particular feature would take to implement, and even 20 seconds wasted every time you enter play-mode will very quickly add up.

Failure to grasp the nettle early on means it becomes more and more difficult to keep iteration times down as the project grows, so it’s wise to continuously think about the scenarios you and your development teams would like to be able to test, and to build your systems in a way that supports overriding the default configuration.


The Goal

What we’re aiming to achieve is a simple system whereby developers in a Unity project are able to launch playmode, and have the system detect their specific configuration preferences during startup.

Anyone who’s used Unity will be intimately familiar with the Playmode button. Press it, and within a few seconds, you’re in the game. It does one job, and it does it well! Unfortunately, game development usually requires a bit more flexibility than the Playmode button allows. In our case, we’re interested in entering playmode, but with a bunch of configuration overrides we can parse at runtime, in order to support whatever workflow we’re currently engaged in.

What we will achieve in this post is a very simple system which allows Unity developers to:

  1. Create a Scenario asset, which contains a few configuration options

  2. Click a button on the Scenario asset’s inspector which caches it for later usage, and then immediately enters playmode.

  3. Retrieve the selected Scenario asset at runtime, and use it to override our default configuration

Though the example I’ll demonstrate is very basic, you should be able to immediately identify how useful this simple setup could be for you and your team.

A flow-chart depicting the proposed flow, from selecting a scenario during edit-mode, to retrieving it and using it during play-mode
Scenario Flow

Scenario Assets

In order to define a Scenario, we first need to think about the kinds of things we’d like to be able to configure. In this example, imagine a simple ‘defeat the enemies to earn points’ style game. The obvious targets for configurability here are enemy and player health: perhaps we want to play through a level, but we only want enemies to have 1 HP so that it’s trivial to kill them. Or perhaps we’d like our player to have infinite health, so we can’t be damaged as we test how enemies react to the player’s presence or as we run and jump around the level.

Let’s define a simple Scriptable Object to contain some of these configuration values:

A screenshot of some C# Unity code, defining a Scriptable Object containing numerous configuration options
A simple Scriptable Object containing a few configuration options for our game
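In code, the asset might look something like the following sketch - the field names here are my assumptions based on the inspector screenshots, not the exact source:

```csharp
using UnityEngine;

// Sketch of the Scenario asset; field names are assumptions.
[CreateAssetMenu(menuName = "Velocity/Playmode Scenario")]
public class PlaymodeScenario : ScriptableObject
{
    // A value of -1 is treated as 'infinite' by the consuming code.
    public float PlayerHealth = 100f;
    public float EnemyHealth = 100f;

    public Color PlayerColour = Color.white;
    public Color EnemyColour = Color.red;
}
```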

You’ll notice I also added configuration options for player and enemy colours, just for fun.

In the Unity editor, we can now create as many instances of this PlaymodeScenario scriptable object as we like, by opening the Assets menu and selecting ‘Create/Velocity/Playmode Scenario’:

Creating a new Playmode Scenario asset

We can now select and modify our new Scenario asset. In this case, I’ve created two different scenarios:

  • Indestructible Enemies

  • Indestructible Player

A screenshot of the Unity Editor, showing two Scriptable Objects and their values in the inspector window
Our new Scenarios

In this case, a value of -1 for health means ‘infinite’, but the details here are irrelevant: our only goal is to make sure the values from the Scenario asset are used when appropriate.

Now that we have a few Scenario assets ready to go, let’s move on.



Caching a Scenario

In order for the Scenarios we defined above to be effective, we need a mechanism for the developer to instruct the Unity editor to launch into playmode, and for the relevant code to make use of the selected Scenario.

Thankfully, this is pretty straightforward: all we need to do is utilise EditorPrefs for some temporary storage.

Let’s create a static class called PlaymodeScenarioUtils, and add the following code:

A screenshot of some utility code which stores the path to a Scenario asset in EditorPrefs
Storing and loading a Scenario based on its Asset Path, in PlaymodeScenarioUtils.cs
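Reconstructed as a sketch, that utility looks roughly like this (the EditorPrefs key name is an assumption):

```csharp
using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
#endif

public static class PlaymodeScenarioUtils
{
#if UNITY_EDITOR
    // Assumed key name for the EditorPrefs entry.
    const string CachedScenarioKey = "PlaymodeScenario.CachedPath";

    static void CacheScenario(PlaymodeScenario scenario)
    {
        // Store the asset path so it survives the domain reload on entering playmode.
        EditorPrefs.SetString(CachedScenarioKey, AssetDatabase.GetAssetPath(scenario));
    }

    static PlaymodeScenario GetScenarioOrDefault()
    {
        string path = EditorPrefs.GetString(CachedScenarioKey, null);
        if (string.IsNullOrEmpty(path))
            return null;

        // Clear the cached path immediately so the Scenario only applies to a single run.
        EditorPrefs.DeleteKey(CachedScenarioKey);
        return AssetDatabase.LoadAssetAtPath<PlaymodeScenario>(path);
    }
#endif
}
```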

Note the use of the UNITY_EDITOR define. This lets us ensure that this code is only accessible in the editor. Though we might want to add support for different Scenarios in built versions of the game, we’re only interested in editor support for the time being.

In GetScenarioOrDefault, we attempt to retrieve the cached scenario asset path. If there’s no path available, then we return null. Otherwise, we load the asset at the cached path, immediately nullify the cached path value, and then return the loaded asset. Nullifying the cached value ensures that we don’t keep hold of the cached Scenario beyond a single run - any future calls to GetScenarioOrDefault will return null.

We also need a way for runtime code to get hold of the cached Scenario asset, so let’s add that functionality to our PlaymodeScenarioUtils class:

A screenshot of some C# Unity code, illustrating the mechanism by which a cached Scenario is loaded at runtime
Accessing the loaded Scenario
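Continuing the sketch above, the runtime-facing side of PlaymodeScenarioUtils might look like this:

```csharp
// Additional members of PlaymodeScenarioUtils, continuing the sketch above.

// The Scenario in effect for this run; readable from any runtime code.
public static readonly PlaymodeScenario Scenario;

static PlaymodeScenarioUtils()
{
    PlaymodeScenario loaded = null;
#if UNITY_EDITOR
    loaded = GetScenarioOrDefault();
#endif
    // No cached Scenario (or we're running a build): fall back to a default instance.
    Scenario = loaded != null ? loaded : ScriptableObject.CreateInstance<PlaymodeScenario>();
}
```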

Here, our static constructor checks whether we have a cached Scenario, and loads it. If there’s no cached scenario, or we’re running a build of the game, then we just create a default instance of the PlaymodeScenario asset, which contains our default configuration values.

Whichever Scenario is loaded, be it the cached asset or a default instance, the result is assigned to the readonly static field Scenario, making it easily accessible by any code that needs to use it.

Next, we need a way to actually cache the Scenario we want to use, and to enter playmode immediately after doing so.

Thankfully, that’s trivial! Still inside PlaymodeScenarioUtils:

A screenshot of some C# Unity code, illustrating a method which allows the user to cache a Scenario asset and then immediately enter playmode
Easy!
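Continuing the sketch, the method is little more than:

```csharp
#if UNITY_EDITOR
    // Cache the chosen Scenario, then launch straight into playmode.
    public static void EnterPlaymodeWithScenario(PlaymodeScenario scenario)
    {
        CacheScenario(scenario);
        EditorApplication.EnterPlaymode();
    }
#endif
```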

The final piece of the puzzle here is to add some basic editor functionality which allows us to call EnterPlaymodeWithScenario from the PlaymodeScenario inspector.

Create a new editor-only class (i.e. one that exists within an Editor folder, or within an editor-only assembly) that provides a custom inspector window for PlaymodeScenario assets:

A screenshot of some C# Unity editor-specific code, defining a custom inspector for PlaymodeScenario assets, which adds a “Play” button to the inspector that, when pressed, calls the method responsible for caching the asset and entering playmode
Custom Inspector for PlaymodeScenario assets
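A sketch of that inspector - the class name is my own invention:

```csharp
using UnityEditor;
using UnityEngine;

// Custom inspector for PlaymodeScenario assets; must live in editor-only code.
[CustomEditor(typeof(PlaymodeScenario))]
public class PlaymodeScenarioInspector : Editor
{
    public override void OnInspectorGUI()
    {
        DrawDefaultInspector();

        // Cache the inspected asset and launch into playmode.
        if (GUILayout.Button("Play"))
            PlaymodeScenarioUtils.EnterPlaymodeWithScenario((PlaymodeScenario)target);
    }
}
```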

All we’re doing here is adding a “Play” button to the inspector for PlaymodeScenario assets, which lets us call PlaymodeScenarioUtils.EnterPlaymodeWithScenario with a reference to the Scenario asset we’re currently inspecting.

This means we’re now able to select a scenario in the editor, and launch directly into playmode from the inspector.

Now that we’ve closed the loop on our ‘create / cache / load’ functionality, all that’s left for us to do is actually use the Scenario asset at runtime!


Putting it all together

Now that all of the ‘interesting’ work is done, all we need to do is grab the loaded Scenario asset at runtime, and do something with it.

In this example, I have a single class named ScenarioInitializer which performs my basic runtime initialisation: a player and an enemy Entity are spawned, and their initial values are configured based upon those found in the Scenario:

A screenshot of some C# Unity code illustrating the usage of a Scenario during runtime
Game’s done, ship it!
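The shape of that initializer, as a sketch - the Entity component and its Configure method are hypothetical stand-ins for whatever your game uses:

```csharp
using UnityEngine;

public class ScenarioInitializer : MonoBehaviour
{
    [SerializeField] Entity playerPrefab;   // Entity is a stand-in type
    [SerializeField] Entity enemyPrefab;

    void Start()
    {
        PlaymodeScenario scenario = PlaymodeScenarioUtils.Scenario;

        // Spawn both entities and apply the Scenario's values.
        Entity player = Instantiate(playerPrefab);
        player.Configure(scenario.PlayerHealth, scenario.PlayerColour);

        Entity enemy = Instantiate(enemyPrefab);
        enemy.Configure(scenario.EnemyHealth, scenario.EnemyColour);
    }
}
```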

Pretty simple stuff. This game won’t be getting any BAFTA nominations just yet, but we now have a nice, simple way to test 3 different scenarios:

  1. The default scenario, that players will see. This uses the values hard-coded into the PlaymodeScenario class itself.

  2. An editor-only scenario where the enemy has infinite health

  3. An editor-only scenario where the player has infinite health

Here’s a video demonstrating all 3 scenarios:

As you can see, with just a few simple scripts, we’ve built the foundation of a system that allows anyone working on the game to define their own custom overrides and to launch directly into the game without any runtime mucking about, or modifying any in-scene objects or prefabs.

With a little extra work, you can create scenarios that provide all manner of customisation options for your developers and designers, ensuring that their iteration time is kept as short as possible, and that they’re able to prioritise their own workflow requirements without negatively impacting anybody else on the team!



Summary

As your project grows, it’s critical to ensure development remains as smooth as possible. Though there’s no silver bullet for game development, and Unity is sorely lacking in some areas which could reduce development time and boost team velocity, we can be thankful that Unity is at least flexible enough to allow us to self-serve functionality that works with us rather than against us.

The tooling illustrated in this post is incredibly basic, and most serious projects will be vastly more complex than the silly example I’ve created here, but even in the most ambitious projects, this solution is flexible enough to be expanded so that it covers a wide variety of use-cases.

If you’re already a subscriber, you may have read my post about Bootstrapping your Unity Game with Addressables, in which I present a method to ensure that any time you enter playmode, all of your ‘system’ stuff is automatically instantiated and ready to be used. Though the bootstrapping system serves a different purpose than that illustrated in this post, it’s easy to see how the two might be combined, merged, and built upon in order to provide a robust bootstrapping and configuration system which supports a huge variety of scenarios and workflows, either in-editor or in builds. In serious projects, the number of parties involved in your game’s development can grow significantly, and investing some time into ensuring that a variety of interests can be served with minimal fuss will almost certainly pay off!

If you have any quick workflow tips and tricks up your sleeve, feel free to share them in the comments!


Code Is Only Part Of The Story

The best developers I’ve worked with all share a common trait: they are fantastic bi-directional communicators. In dialogue, written documentation, or even in casual conversation, their ability to explain their thinking or to listen to, quickly parse, and understand the thoughts, requirements, or interests of others made them incredibly useful assets not just to the development team specifically, but to the project and business more generally. These people were almost always very competent programmers, but their value was significantly enhanced by their ability to step outside of the code and view projects, clients, and business interests holistically.

Their code may not have always been the best code possible, nor was it the fastest or cleverest - but it was usually the code that made it into production with the least fuss. Typically, their output was easy for others to understand, robust enough to handle modification without falling to pieces, and provided just enough ‘cherry on top’ functionality to support integration with other areas of development without straying outside of its particular area of concern.

Though it takes all sorts to deliver a successful project - from those who knuckle down on complex systems for weeks at a time, to those who hop around different areas of the codebase, stitching things together and bringing order to chaos - I’ve found that there are a range of skills and traits that, with some encouragement and fostering, can improve the development experience, build greater understanding and cohesion between different disciplines, and significantly improve confidence, clarity, and a sense of direction.

These people may (unfortunately!) not be particularly visible, preferring instead to keep their minds focused on the task at hand rather than involve themselves in other areas without being prompted to do so - but what follows are some thoughts on the things to look out for, and the skills and habits to embrace and encourage in order to maximise potential.


The 10x Empath

Over the years, I’ve come to believe that a great indicator of a developer’s overall success is their ability to empathise with others. Developers who take an interest in the problem from the perspective of the customer / client / user, rather than rigidly following the brief, will often identify better solutions, make sure gaps in proposed functionality and features are caught early, and take an interest in the end user’s experience rather than handing over something which technically works, but is unpleasant or cumbersome to use.

People who are good empaths are usually fairly simple to identify, because communicating with them is easy. They listen well and will place themselves in the shoes of others in order to better understand a problem, rather than wait to be told what to build.

Their code is often well-structured, because they understand that others will be reading and using it, and they try to accommodate everyone’s interests (within reason) when carrying out their work. Their interactions with other departments (e.g. production, management, marketing, etc) are clear-headed and straightforward, even when problems arise, because they recognise that the bigger picture is what’s important, and want to make sure that everyone is on the same page and moving in the same direction.

Those with a good handle on empathy can make fantastic developers and team members overall, but it can be a double-edged sword: it’s impossible to accommodate everybody and everything, and it can be easy to lose focus if goals and tasks aren’t well-prioritised.

Though a developer with good empathic instincts can be a serious boon to any organisation, teams should be careful not to allow decision paralysis to cause delays. Empaths may struggle to commit to a decision which they feel doesn’t satisfy everybody, and might find themselves facing difficulty when attempting to reconcile conflicting or competing interests. It can be beneficial to ensure that empaths in decision-making positions have additional support to help prioritise the needs of the project and the business, and to act as a tie-breaker and forge a compromise when the natural tendency towards idealism becomes an issue.


Socrates 2.0

Delivering a project successfully involves a great deal of interrogation and filtering to identify exactly what needs to be built, and for what purpose.

Though it’s often easy to rely on top-down diktats when it comes to identifying what to build and how to build it, it can be very useful (or, more accurately, critical) for proposals to be challenged and questioned in order to expose gaps in reasoning, to tease out design flaws, and to ensure that everyone understands the direction of the project.

Those who aren’t afraid to ask questions or to critique decisions need to balance worthwhile interrogation against the team’s patience and capacity, but the process of asking questions and seeking justification for design decisions can be a very valuable exercise that should be embraced (within respectful boundaries).

Those with the tendency to ask a lot of questions can sometimes come across as difficult - but it’s important to recognise that it’s almost certainly worse if nobody is asking questions!

Formalising the interrogation process is wise - though, in a fast-moving environment, this can be difficult: questions might arise at inconvenient times, and even with the best intentions, it’s practically impossible to have a complete understanding of what to build and how to build it before work actually begins.

The best approach to maximise the benefits of an interrogation is to block out time: this allows the questioner to extract whatever information they’re after, or to expose whatever potential problem they’ve spotted, whilst allowing the ‘defendant’ to give the questioner their full attention, rather than being blind-sided with a scattergun approach whilst their mind is elsewhere.

As with empaths, too much can be a bad thing here: no functioning business has infinite time or capacity to accommodate an endless list of questions and challenges. Formalising the process and ensuring that there is time allocated to interrogating decisions and plans is the best method to support both parties - and a good referee can help to keep things on track, ensure that there are boundaries to the debate, and that reasonable accommodations are made by both sides when disagreement occurs.


The Tolkien

Good technical writers can be hard to find, and programmers are not, by and large, known for their literary capabilities. Anybody who’s worked on a decently sized project has probably realised that ‘writing documentation’ is to programmers as ‘deliberately swallowing kryptonite’ is to Superman. Even when repeatedly beaten with large sticks in an effort to improve documentation discipline, programmers usually prefer to write code, rather than write about code.

There isn’t an easy fix to this problem other than maintaining strict standards, linting the codebase to ensure comments exist where required, and otherwise making a concerted effort to ensure that code is well-documented, but it’s also worthwhile remembering that documentation outside of the code is often just as important as comments within the code itself.

Design documentation, user manuals, and technical specifications all form part of a project’s deliverables, and the quality of these documents is critical - especially when they’re to be delivered to third parties or released publicly.

Even the best writers on your team might not naturally be particularly disciplined when it comes to writing documentation - but identifying those who have a talent for writing clearly and accurately will never be a wasted effort.

Though programmers are not typically hired for their writing skills, the ability to effectively convey information is incredibly important, and can drastically improve confidence in the project and the team overall, especially in an environment where documentation forms a critical part of the project’s contractual obligations.

Encouraging developers to practise their writing skills might be one of the best investments you can make in your team, and the ancillary benefits are impossible to ignore: improved communication, greater understanding, and the development of an in-house ‘voice’ all serve to strengthen a project and the team behind it.

Larger companies may employ dedicated technical writers to produce, edit, review, and maintain documentation that sits outside of the code itself, but for smaller teams, this is often impractical, if not impossible. Depending on the size of the project, it may be useful to nominate a handful of people with good writing skills to produce the relevant documentation, but in practice, producing documentation may simply form part of the responsibilities of every developer. In this context, it might be a good idea to ensure that your best writers are given enough space to act as editors, wrangling documentation of variable quality into shape, and ensuring that any gaps or inconsistencies are cleared up before they’re forgotten about.

If you have a good technical writer on your team, then fostering their abilities and providing an avenue for them to flex can be a great way to take advantage of a talent that might be in short supply - but in the ideal scenario, everyone is given the opportunity to practise their writing skills, since documentation so often forms such a critical part of a professional team’s output.

There aren’t many downsides to quality writing, but it can be difficult for multiple people to stick to a single style and voice without a strong editorial process in place. It may be wise to nominate a single individual to act as the final check on any written output, rather than allow inconsistent styles to pollute the documentation.


The Myth of Many Hats

The skills mentioned here, when identified and honed, can all provide an enormous benefit to any team. Though they can, if mismanaged, result in problems, it’s well worth trying to identify which tendencies the people on your team exhibit, and seeking out ways to make the most of their natural traits.

Though it would obviously be fantastic if everybody was great at everything, this is, of course, completely unrealistic. Developers, especially in smaller teams, are often used to ‘wearing many hats’, and are accustomed to finding themselves occupying a variety of roles at different times during a project. However, when this pattern is left unchecked, it can breed resentment and confusion over responsibilities and expectations. It is much better, overall, to seek to identify your team members’ natural strengths and weaknesses, and to position them in roles that encourage their growth in useful areas and take advantage of their innate abilities.

Expecting an individual to exhibit all of your most desired traits is short-sighted and a quick route to disappointment. Instead of assuming that everybody is capable of stepping into any role, it’s often much more productive to try to identify your team members’ natural talents, and to provide opportunities for their strengths to shine through.


Unity: Behave Yourself

Update 17/11/2024

As of Unity Behavior v1.0.4, the issue described in this post is fixed. Kudos to the dev team for their rapid turnaround!


I’m a big fan of tools that allow people to build fun stuff whatever their skill level, and that’s a big part of the reason why I like using Unity so much. Though it indeed remains pretty difficult to make a game without getting your hands dirty with some code sooner or later, I always find myself drawn to ‘no-code’ features - whether it be Visual Scripting or Unity’s recently-released Behavior (or ‘Behaviour’, if you like to spell things correctly) package, which allows developers to implement AI behaviour on characters or any other in-game system.

I like the educational and illustrative possibilities these tools provide, and they can also be an interesting way to approach a tricky problem from a new angle.

With the release of the Behavior package, Unity finally provides something that Unreal has had for years. Though the Unity asset store contains a fair few different implementations of this functionality, it feels only right that Unity would provide its own, so this is a welcome addition to the toolkit in my view.


Failing Successfully

In a recent post, I built something resembling a gladiator battle arena, and decided that this would be the perfect testing ground for the new Behavio(u)r package:

The behaviour I wanted to implement for my enemy gladiator is very basic, but would be enough to start building some battle mechanics around:

  1. When the player is not close to the enemy, the enemy patrols the arena.

  2. If the player comes within a certain distance of the enemy, then the enemy should abort its patrol and chase the player.

  3. If the player moves too far away again, then the enemy should revert back to its patrolling behaviour.

Here’s what that looks like with the new Behavior editor:

A view of the Unity Behavior editor window, showing a behaviour tree defining patrol and chase logic for an enemy character
Possibly not the best way to implement this, but we’re all learning together!

Aside from the ‘Repeat’ and the ‘Abort If’ nodes, this tree should be fairly self-explanatory to anybody with a beginner-level understanding of Unity.

Unfortunately, this tree, as simple as it is, fails immediately with a cryptic warning spamming the console:

Image displaying a Unity log message reading: FindClosestWithTagAction: No agent or target provided.
‘No agent or target provided’ is, I must stress, very much a lie.

If we jump over to our behaviour graph and debug the enemy gladiator object, we see the following:

A view of the Unity Behavior editor window, showing a node in a failed state with a red 'X' icon.
At least it’s easy to see which node failed…

To work out what the above warning message is talking about, let’s take a look at that failing node in its default state:

A view of the 'Find Closest With Tag' node in the Unity Behavior editor
Default ‘Find Closest With Tag’ node

This node requires three Blackboard Variables. In ‘AI lingo’, a Blackboard is a container for the data the AI system needs to do its job, and we can create multiple variables within a blackboard.

Nodes within the behaviour tree can read from, or write to, the variables in the Blackboard, and in this case, we need the following:

  1. A ‘Target’ variable

    • This is the placeholder variable for the GameObject we want to look for.

  2. An ‘Agent’ variable

    • This is the GameObject we’re treating as the ‘origin’ of our search.

  3. A ‘Tag’ variable

    • This requires a string which corresponds to a Unity GameObject Tag.

Let’s look again at how this node is used in our behaviour tree:

A view of the 'Find Closest With Tag' node in the Unity Behaviour editor, with all fields populated
Not an empty field in sight!

As you can see, all fields are populated with a Blackboard variable (with the exception of the ‘Tag’ field, which I just entered directly since I’m lazy).

The warning message Unity was incessantly spitting out at us reads:

FindClosestWithTagAction: No agent or target provided.

Clear as mud! We obviously have provided both an agent and a target, so what’s the issue?

Let’s look at our Blackboard:

A view of the Unity Behaviour Blackboard editor
A basic Blackboard setup

Here, ‘Self’ is a special, pre-defined Blackboard variable which always refers to whichever GameObject is running the graph. This makes it very handy to use as the input to our node’s ‘Agent’ field.

‘Waypoints’ is the container for our list of waypoints that the enemy should patrol between, and ‘PlayerObject’ is the container for the object we’re looking to find.

As you can see, both ‘Waypoints’ and ‘PlayerObject’ exist as variables, but do not have a value set. That’s ok, because we just need these to act as placeholders and will populate their value later. Or at least, that was the plan…



Surrealism In Practice

If I asked you to ‘go and find me the nearest snack’, I would expect that you would (if you were nice) go away, look in the cupboard for something tasty and unhealthy, and then bring it back to me.

In the same way, when we ask Unity to “Find the closest object with the tag ‘Player’”, I would expect that it would perform a search of all objects with that tag, and return whichever object was closest.

What I would not expect, in our ‘find me a snack’ scenario, is for you to refuse to look for anything at all until I first handed you a trombone, because that would be utterly insane, but guess what?! That’s exactly what Unity does!

That’s correct: in order for the ‘Find Closest With Tag’ node to do its job, you first have to provide it with an object - any object - before it will perform the search. It does absolutely nothing with that object, of course, except immediately throw it away, but provide an object you must.

What’s going on here? First, let’s take a look at the much more reasonable, perfectly usable ‘Find Object With Tag’ node:

An image displaying the code for the 'FindObjectWithTagAction' class
Perfectly reasonable

And to contrast, here’s the code for the very silly and functionally useless ‘Find Closest With Tag’ node:

An image showing the code for the 'FindClosestWithTagAction' class
‘No agent or target provided’, is it?

Can you spot the problem?

In the former (Find Object With Tag), the initial safety check ensures only that you’ve provided a blackboard variable:

An image displaying the initial safety check for the 'FindObjectWithTagAction' class
Good, perfectly normal code

This makes perfect sense: the purpose of this node is to find a value and assign it to the blackboard variable.

In the latter (Find Closest With Tag), the initial safety check is subtly different:

An image showing the code for the initial safety check in the 'FindClosestWithTagAction' class
Bad, terrible, and frankly despicable code

Here, the code checks that the values of the provided blackboard variables are assigned, rather than checking for the mere presence of blackboard variables.

In the case of the ‘Agent’ variable, checking for the value does make sense: we need to compare distances, so it’s only logical that we need an object to compare distances to. However, in the case of the ‘Target’ variable, it makes no sense at all to check the variable’s value, since that’s what we’re looking to populate in the first place!
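To make the difference concrete, here’s an illustrative comparison of the two checks. This is not the actual package source - names and structure are simplified - but it captures the distinction:

```csharp
// Illustrative only; simplified from the behaviour described above.

// 'Find Object With Tag': fail only if no Blackboard variable was wired up.
if (Target == null)
    return Status.Failure;

// 'Find Closest With Tag': also fail when a variable's *value* is unset -
// which is exactly the state a placeholder 'Target' variable starts in.
if (Agent?.Value == null || Target?.Value == null)
    return Status.Failure;
```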

Let’s see the practical implication of this logic:

Note that our enemy character waits around for us to assign a value to its blackboard variable ‘PlayerObject’, at which point it springs into life, and discards the value we’ve just assigned for the one it’s now allowed to search for. When we refer back to the code behind the ‘Find Closest With Tag’ node, this now makes logical sense, but I can’t believe that this is the intended behaviour. From a design perspective, it is incredibly unintuitive, and the warning message provided doesn’t describe the problem with enough detail that beginners would be able to understand the issue or how to fix it.



Bug or Bad Design?

Though I assume that the behaviour of ‘Find Closest With Tag’ is a bug, the warning message confuses things a little. After all, the message is perfectly correct: you did not provide a Target. Unfortunately, it’s also ambiguous: is it referring to the Target variable, or the Target variable’s value? The only way to know for sure is to read the code - and if you’re a beginner, this might be daunting.

Though no-code tools are not strictly for beginners or casual users, they are attractive to those types of user for obvious reasons: not having to write or read a lot of code to make cool things is obviously a draw for any budding game developer without the experience or time to develop their coding skills.

As I’ve mentioned elsewhere, I love to see people create. There’s something incredibly satisfying about watching somebody build something, whatever their skill level, and for that reason, I feel that tools that help more people express themselves creatively are A Good Thing.

Unity’s Behavior package seems like a good fit here. Imbuing game characters with their own intelligence and behaviour can be a terrifyingly opaque task to someone with no background or experience in this area - and visual tools like the Behavior Graph can help to demystify the logic that underpins character AI in games.

Though it’s unfortunate that I hit a stumbling block with the Behavior package after playing with it for 5 minutes, I’m hopeful Unity continues to improve and expand the toolkit. I’ve submitted a bug report about this issue, and would encourage anybody else who tries the package to do the same if they come across anything similar. Unity has something of a reputation for starting work on new features and then abandoning them, but I am in favour of this kind of tool, and would love to see Unity continue to support it long into the future - so have a play around with it, and share your creations here!


Dear Students: Learn Version-Control

I’ve been working as a coder for (almost) twenty of your finest, British years now, and I am sometimes asked what the number one lesson I would impart to the younger generation seeking to break into ‘the industry’ is.

Here it is: Learn Version Control.

I wasn’t taught it at university, and I didn’t know it was ‘something I should know’ until Day 1 of My First Real Programming Job, when I was suddenly thrust in front of a computer and told to ‘check out the codebase’.

This was not, I quickly came to discover, an exhortation to be impressed at the genius of my predecessors, but was instead a very specifically phrased instruction to sign into the VCS (Version Control System), download the code, and get to work.

Thankfully, my supervisor was patient and understanding, and got me up to speed relatively quickly with their VCS - but not everybody has someone so patient as their first manager, and not every business is willing or able to spend time teaching new hires something so fundamental.

In this post, I’ll cover some (very) high-level basics about version control systems in general, so you can be armed and ready for your next interview!


The Wall

Though I don’t recall being taught anything even vaguely related to version control at university, I am willing to concede that as a student, I was not the best listener. It’s entirely possible that it was mentioned, maybe even repeatedly so, but it was certainly never treated as a fundamental concept, and I was never required or encouraged to use it during practical exercises. When code needed to be shared or moved around, it was done on a USB stick, or via email, and ‘different versions’ of the code were managed through the advanced wizardry of copy & paste to a new folder. This process was ‘how it was done’ for the entire duration of my university education. Even our professors would simply send us ZIP files over email!

This was, to put it bluntly, entirely unrepresentative of the industry: I have never, in all my years, worked at any development studio that didn’t use version control as standard. In contrast, I have repeatedly hired people fresh out of university who have never even heard of version control.

In the best cases, this is quickly rectified: once a person grasps the basic concepts, one version-control system is much like any other, and even basic usage is usually enough to convey the benefits, so it quickly becomes another part of the toolkit.

In the worst cases, however, the sudden introduction of a VCS is a barrier to entry that can cause people significant distress when their job depends on it. I’ve seen new hires literally start to cry when they realise that something they will be doing every day for the rest of their careers has been entirely omitted from their education. Without wanting to be overly critical of our educational institutions, it is alarming how frequently I’ve seen people hit this wall - and what makes it worse is that the core responsibilities and workflows of most popular version control systems can be taught in an afternoon!

Outside of formal education and employment, I’ve seen countless examples of aspiring programmers and game developers take to Reddit and other forums asking questions like:

  • How do I work on my project with my friend?

  • What’s the best way to back up my project?

  • My codebase is a mess and I can’t keep track of it all - how do you do it?

And, worst of all:

  • Help! My hard drive died and I’ve lost my work - what can I do?

The first, last, and best answer to all of these questions is: Learn Version Control.


The Basics

Regardless of which specific sector of the industry you’re aiming for, it is almost a cast-iron guarantee that you will be expected to use version control day in, day out. Whether you’re building software for financial institutions or making the next AAA game, version control will be a fundamental part of the process, and one that is not limited to coders. Artists, writers, and audio designers also need to engage with the VCS as part of their standard workflow, and at the highest level, the VCS is what enables each different discipline to share their output with the rest of the team.

Of course, different specialisms have different needs, and different companies have different workflows and processes. Though the core concepts of most version control systems are similar, and the knowledge required to use them is largely transferable, some systems are specialised towards particular sectors and disciplines, whilst others try to generalise to suit the needs of the industry more broadly.

There are several basic ideas that every version control system incorporates, although the specific terminology and workflow may differ.

At the most abstract level, every VCS allows you to do the following:

  • Download a copy of the files you need to work on

  • Edit those files

  • Add or remove files

  • Make your modified, added, or removed files accessible to others

  • See the history of any file within the system

  • Roll back files to previous versions

With the above core features, a single developer may cast off their anxieties and learn to code with confidence, safe in the knowledge that they can move backwards and forwards in time, and that they can store snapshots of their work as and when they see fit. A team of developers becomes an unstoppable force of nature, sharing their work early and receiving feedback and support as they go.

A visualisation of a project under version control, using gource

One point worth keeping in mind: version control can add a lot of security and confidence to a project even with a solo developer, but it is not, on its own, a backup until a copy of the code is located somewhere other than your own machine! Most VCS options will make this process straightforward, but if you don’t want your code to be publicly available, and depending on the VCS you choose, you may find you need to pay for remote hosting of your repository.



Fear of Commitment

One of the most revelatory concepts any developer discovers when they first engage with a VCS is that of fearless coding. Without a VCS, developers need to worry about whether the changes they’re making are appropriate for the problem at hand, and run the risk of spending so long fiddling with one area of code that they forget what it looked like before they started.

Unless they have an extraordinary memory, developers can resort to all kinds of messy workflows in order to maintain their ability to undo their changes or refer to the original code whilst making modifications.

With a VCS, this issue is largely eradicated. Developers can create a snapshot of their work whenever they choose, and, if so inclined, commit those changes so that everyone else can see them.

The specific workflow around this process is different depending on the VCS in question, but the effect is largely the same: a developer can decide to commit their work at lunchtime, get away from their desk, and then later review the state of the code at the previous commit in order to refresh their memory before continuing.

The ability to work in this snapshot-driven way means that developers don’t need to keep the entire system in their heads at all times, and can freely make changes safe in the knowledge that they can always roll back their commits if they identify a better solution. Typically, each time a developer commits new work, the VCS will prompt them to write a brief message describing the work they’ve done. This message forms part of the overall log for the code-base, and allows developers to navigate the history of a project with ease.

The first commit is the sweetest

Alternate Universes

Aside from the basic functionality of adding, modifying, and sharing files, one of the most powerful concepts within a version control system is that of the ‘branch’.

A branch is effectively a snapshot of the codebase taken from a particular point in time, which allows the developer to work from that starting point and build up a parallel timeline to the main development branch.

This allows features to be worked on in isolation, and for speculative changes to be made which might otherwise cause disruption or risk for other ongoing work.

In some version-control workflows, branches are incredibly common, and may be exploited for even the smallest change. In others, branches may be less commonly utilised, but the underlying principle remains the same: a branch is an alternative version of the codebase and associated history.

An illustration of a main branch, and a feature branch, in a git project

In commercial development, regardless of the specific sector, it’s critical that developers can adapt to new requirements and circumstances, or fix an issue in a shipped version of the software without being forced to release unfinished features. Branches facilitate this flexibility and adaptability, and are likely to be a regular feature in most developer’s day-to-day work.


Merging Realities

As any good developer knows, the only real source of truth in any software project is the code itself.

When we’re dealing with branches, however, we regularly need to bring one branch up to date with all of the changes in another. Bringing two branches into alignment is known as merging, and we typically talk about this as ‘Merging Branch B into Branch A’.

In the ideal scenario, this goes smoothly, but in some cases, two branches cannot merge cleanly because developers may have modified the same files in the same place in both branches, and the system doesn’t know how to reconcile the differences.

This is known as a conflict, and it usually requires manual intervention to fix. Typically, we use a merge tool to resolve these issues: this allows us to view the differences between the file in Branch A, and the same file in Branch B.

Resolving a merge conflict using Fork

In the above scenario, we’ve modified the last line of the file in both branches, so we need to choose whether to accept the changes on the left hand side (the ‘incoming’ changes), or the changes on the right hand side (the ‘local’ changes). Alternatively, we can select neither change, and just hand-modify the merged file to resolve the issue.

The bottom panel shows the state of the merged file. The merge tool you end up using may look different to this, but the basic functionality will be similar.

It’s worth keeping in mind that some files cannot be merged in this manner. Binary files (images, audio etc) generally require somebody to identify the ‘correct’ version.

Even in the case where a merge goes smoothly, it’s always best practice to ensure that the work from both branches remains as expected, and that no issues have been introduced as a result of the merge.


Pax Automatica

A version control system is often paired with a suite of tools that provide a degree of automation to certain processes and workflows. At the highest level, this kind of setup is often referred to as a ‘Continuous Integration’ (CI) system, but the specific job(s) of such a system will vary dramatically based on context.

Such a system is frequently configured to respond to events within the VCS and to trigger some automated processes to execute as a result.

For example, when a user adds a new feature to the codebase, the CI system may:

  • Run a set of automated tests

    • If any test fails, the developer may receive an email or other notification alerting them to the problem

  • Compile the code and store the built version of the application somewhere.

  • Deploy the new version of the software to a device or server for user testing.

Typically, these systems will be maintained by more senior colleagues, but being aware of their existence and what they can do for you is a great way to avoid panic or anxiety about making changes to the codebase.

A robust CI system can dramatically improve the stability of a codebase, and reduce turnaround times for new features and fixes. Automated reporting of test results and build times allow developers and production staff to ensure that work is progressing smoothly, and to receive early warnings about potential problems before they become too serious.

When used effectively, a CI system improves the confidence and flexibility of the development team, and allows the organisation as a whole to take a great degree of pressure off the workforce.



Some VCS Options

There are a variety of different VCS options available, and it’s likely that your first employer will have built an entire infrastructure around whichever one they use.

One of the most popular version control systems used today is Git. It has a very small footprint and is a distributed VCS, which means it doesn’t require a central server to act as the main entry point or authority. Git is quite happy for two developers to push and pull changes to one another directly on their development machines, but in most practical commercial use cases, people still choose to host a central repository, which allows tighter security and easier control over access and permissions.

Alternatives you’re likely to come across include the likes of Perforce, Subversion (SVN), Mercurial, and Plastic SCM.


Interview Questions

When it comes time to start job hunting, demonstrating knowledge and experience around version control systems and practices will help to set you apart from the competition.

In an interview for a role, showing an interest in a prospective employer’s version-control system and workflows shows that you’re looking at the bigger picture and that you wish to understand how the organisation functions beyond the immediate scope of the role you’ve applied for.

Some questions you might wish to ask:

  • Which version control system do they use?

    • Some employers might not wish to divulge specifics around this depending on their security practices and concerns, but many will be happy to oblige.

  • Why did they choose <x> as opposed to <y>?

    • This demonstrates a wider base of knowledge around VCS systems in general, and their answer may reveal useful information about how they work and what their practices are, which you can then build upon.

  • What does their review process look like?

    • In many organisations, this process is intrinsically tied to the VCS in one form or another, and code reviews are often oriented around pending merges.

  • Do they use a Continuous Integration / Automation system alongside their VCS and if so, what kind of benefits have they seen?

    • Any organisation seeking to continuously improve is likely to be constantly tinkering with their CI systems to add extra functionality or to provide more robust checks and reporting.


Summary

Whatever your goals are within the world of development, a VCS is a hugely beneficial tool for any project.

In a commercial development environment, the idea of developing any serious project without a VCS is unthinkable, and so the more confident you are and the more experience you have in this area, the more favourable you will appear to prospective employers.

If you’re currently working on a project in the hopes of bolstering your portfolio, and it’s not already under version control, then please heed this advice and start using a VCS at the earliest opportunity!

Of course, lack of experience with a VCS may not be the thing that prevents you from landing your first dream role - most good employers acknowledge that experience is gained over time, and are more likely interested in your coding skills and your overall attitude - but you will be using a VCS if you get that job, so familiarising yourself as soon as possible will only increase your chances and smooth out the onboarding process when you do get hired!



Splitting Keyboard Input in Unity

In an era where online games dominate the multiplayer market, the simple joy of huddling around the family PC with your siblings and friends, and playing a local multiplayer game on a single, shared keyboard seems like a distant memory, but at Not A Robot, we’re in the business of joy, and old-school is the new cool.

Today, we’re going to solve a problem with Unity’s Input System package that has left many stumped, and led others to implement a mish-mash of weird and wonderful workarounds.

You can find the GitHub gist for the classes created in this post here.


What’s The Problem?

Implementing local multiplayer is simple in Unity, with the PlayerInputManager component. This component is part of the ‘new’ Input System package, which replaces the built-in input system. It allows you to quickly set up a player prefab and define a ‘join game’ mechanism, and it takes care of split-screen behaviour.

It’s an incredibly useful component, and for most use cases, it’s perfectly suitable.

Unity’s Player Input Manager

Unfortunately, Unity’s developers made a very modern assumption - that there’s a 1:1 relationship between connected devices and players - which is decidedly un-retro of them.

The problem is as follows: when a player is added to the system via one of the automatic Join Behaviour mechanisms, the PlayerInputManager does one of the following:

  1. If the player is joining because they pressed a button on an unassigned device, then that device is registered to the new player, which means the device is now assigned, and will no longer trigger new player joins.

  2. If the player joins via an input action, then the system checks whether the device triggering this action has already been assigned to a player. If it hasn’t, then the device is assigned to the new player, but if it has, then the join request is ignored.

In both of these cases, the device that triggered the join request is prevented from ever triggering another join request, so it’s simply not possible to share one device between multiple players using the automatic joining method.

In most cases, this isn’t a problem. Many, if not most, games today support controllers, and you can support a wide range of controllers and input devices easily with the Input System package. It’s not completely unreasonable to expect your players to use separate controllers to play your game, but back in my day, before USB was even invented, we had to share a single keyboard, and we were grateful.



Let’s Get Lazy

Like all good programmers, you want to do the least work possible in order to achieve your goals. You could implement dual user keyboard and split-screen support yourself relatively easily, but that feels a little like going against the grain, and manually triggering the player join via code requires a lot of additional management of devices, player IDs, and more. We’re lazy - we’re not doing all of that rubbish!

As much as possible, we want to lean on the work the Unity developers have already done for us, and simply add support for shared devices into PlayerInputManager - which is the recommended way to handle players joining and leaving your game.

Thankfully, this is actually a very easy process, and though the solution is not ideal, it’s a fairly painless way to work around PlayerInputManager’s limitations.

Assuming you’re already using the Input System package, we first need to copy the package’s directory out of your project’s Library/PackageCache folder, and dump it into your Packages folder instead:

Moving the Input System package

This allows us to make a few tiny modifications to PlayerInputManager, letting us bypass the device registration check when an Input Action is performed.

Open your project in your IDE of choice, and find PlayerInputManager.cs.

Here, find the JoinPlayerFromActionIfNotAlreadyJoined method, and make it virtual:

Ensuring we can override the default behaviour

This will let us override this method in a custom class a little later.

While you’re in PlayerInputManager.cs, also find the CheckIfPlayerCanJoin method, and make it protected.

Allow our overridden method to call back into the parent class
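
Putting those two changes together, the modified methods look something like this - a sketch, since the exact signatures vary between package versions. Note that a virtual method can’t remain private, so it’s promoted to protected at the same time:

```csharp
// In PlayerInputManager.cs, inside the package copy under Packages/.

// Was: private void JoinPlayerFromActionIfNotAlreadyJoined(...)
protected virtual void JoinPlayerFromActionIfNotAlreadyJoined(InputAction.CallbackContext context)
{
    // ...original body unchanged...
}

// Was: private bool CheckIfPlayerCanJoin(...)
protected bool CheckIfPlayerCanJoin(int playerIndex = -1)
{
    // ...original body unchanged...
}
```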

Now that we’ve modified PlayerInputManager, we can create a new class, which inherits from it, and overrides the JoinPlayerFromActionIfNotAlreadyJoined method.

The key here is to intercept incoming players who would be assigned the keyboard, and skip the check that would ordinarily prevent it from being used by more than one player.

Intercepting keyboard players

We also add an additional step - RebindPlayer - which lets us map a particular control scheme onto the incoming player. For example, we might want our game to support two players, with one player using the WASD keys on the keyboard, and the other using the directional arrows:

Reassigning the player’s control scheme
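
Here’s a minimal sketch of how those two pieces might fit together. The scheme names match the ‘WASD’ / ‘Arrows’ setup used later in this post, and the base-class signatures are assumptions that depend on your package version:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class SharedDeviceInputManager : PlayerInputManager
{
    // Control schemes handed out to keyboard players in join order.
    [SerializeField]
    private string[] keyboardControlSchemes = { "WASD", "Arrows" };

    protected override void JoinPlayerFromActionIfNotAlreadyJoined(InputAction.CallbackContext context)
    {
        // Non-keyboard devices keep the stock 1:1 device-per-player behaviour.
        if (!(context.control.device is Keyboard keyboard))
        {
            base.JoinPlayerFromActionIfNotAlreadyJoined(context);
            return;
        }

        // Keyboard joins skip the 'device already assigned' check,
        // but still respect the player limit.
        if (!CheckIfPlayerCanJoin())
            return;

        var player = JoinPlayer(pairWithDevice: keyboard);
        RebindPlayer(player, keyboard);
    }

    // Map a control scheme onto the incoming player: player one gets
    // WASD, player two gets the arrow keys.
    private void RebindPlayer(PlayerInput player, Keyboard keyboard)
    {
        var scheme = keyboardControlSchemes[player.playerIndex % keyboardControlSchemes.Length];
        player.SwitchCurrentControlScheme(scheme, keyboard);
    }
}
```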

And that’s pretty much all there is to it!

Let’s assign our new component in the editor:

Ugly default inspector :(

Well, functionally we’ve got everything we need, but since we’re now using an inherited class, we’ve lost the nice inspector layout we see on the standard PlayerInputManager component, so let’s quickly fix that up.

Back in your IDE, find PlayerInputManagerEditor.cs and change it from internal to public:

No code can hide from me!

Now, create a class in an Editor folder in your project called SharedDeviceInputManagerEditor. All we need to do here is tell Unity to draw the inspector for a PlayerInputManager instead of a SharedDeviceInputManager:

Easy peasy
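
Something along these lines should do the trick, assuming PlayerInputManagerEditor lives in the UnityEngine.InputSystem.Editor namespace, as it does in recent package versions:

```csharp
using UnityEditor;
using UnityEngine.InputSystem.Editor;

// An empty subclass is enough: we inherit the (now public) stock
// inspector and simply point it at our component instead.
[CustomEditor(typeof(SharedDeviceInputManager))]
public class SharedDeviceInputManagerEditor : PlayerInputManagerEditor
{
}
```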

Now we flick back over to Unity, and take another look at our SharedDeviceInputManager component:

Beautiful custom inspector :)

And finally, we’re done! This component can now be configured in the same way as the standard PlayerInputManager component from earlier, but now, instead of mandating a 1:1 relationship between a device and a player, we allow the keyboard to be shared. All other functionality should be as expected - split-screen will automatically work, and your player instances will receive the correct input events based on their control scheme, even if the players are sharing the keyboard.




Bring It All Together

Now all that’s left to do is test. Assign a Join Action Reference and a Player Prefab to your SharedDeviceInputManager, and make sure you have two control schemes in your Input Actions Asset: ‘WASD’ and ‘Arrows’.

Input assignment

Make sure your Player Prefab has a Player Input component, and that the correct Actions asset is assigned:

Configuring the Player Prefab

Obviously, your player prefabs will need to actually do something with whatever input they receive, and that’s beyond the scope of this guide, but, all being well, you should shortly be in a position to enter play mode and try out your new dual-user keyboard capabilities!

With a little extra work, and with the help of some lovely free assets from Kenney - behold, a cosy, 2-player split-screen gladiator game, with both players sharing a single keyboard:




Conclusion

In this guide, we’ve seen how we can lightly modify the Input System package to give ourselves additional functionality when building a local multiplayer game. The solution shown here isn’t ideal: we would much prefer not to modify the Input System package at all, so that we could upgrade to any future version with ease, but the modifications required are minimal, and we’re still able to take advantage of the existing functionality provided by the package.

Though the market may not exactly be screaming out for games that support multiple users on a single keyboard, it does seem a shame that the current version of the PlayerInputManager doesn’t support it by default. Local multiplayer today may well be neglected in the mainstream, in the shadow of huge online games, but there’s nothing quite like the enjoyment you can get from playing together in the same physical space.

You can find the GitHub gist for the classes created in this post here.

I hope you’ve found this guide helpful, and if it’s given you any ideas for your next game, then do let me know!

If you’ve enjoyed this post, check out my previous Unity tutorial, and feel free to subscribe and share with your community!


]]>
<![CDATA[Ship First, Build Second]]>https://tomhalligan.substack.com/p/ship-first-build-secondhttps://tomhalligan.substack.com/p/ship-first-build-secondFri, 06 Sep 2024 23:17:51 GMTWhen you begin working on a new project, it’s incredibly tempting to ‘start at the beginning’. You have some vision for how the user should interact with the software, so you start building whatever the user would first see, and continue from there.

Alternatively, you might start working on core features, before building the supporting framework around it.

In many cases, these choices are perfectly reasonable. There aren’t many rock-solid rules about the order in which you should build your software’s features, but it’s often the case that deployment is only considered at the tail-end of a development cycle (or, in the worst case, at the end of the project!). This short post explains why you might want to reconsider that, and tackle deployment first, rather than last.


Embrace The Pain

For any moderately complex piece of software in today’s technological landscape, it’s almost a certainty that you’ll be relying on third-party SDKs, libraries or services, hosting something on a cloud server, publishing builds to app stores, or targeting some platform you may not fully control.

As an illustrative example, let’s say you’re making a game. If you ever want anybody to play your game, then you’ll probably be publishing it via a distribution service, such as Steam, Google’s Play Store, or maybe Apple’s App Store.

Steam serves Windows, Mac, and Linux, the Play Store serves Android, whilst the App Store serves Mac and iOS devices. Alongside the obvious runtime platform differences, as a developer, you also need to consider the specific packaging and publishing requirements of each of those distribution services. With mobile platforms in particular, you may expect fairly stringent automated checks on builds you upload:

  • Your application is scanned to ensure it meets packaging constraints.

  • Automated checks are run to ensure your app doesn’t contain malware.

  • Your package is analysed to ensure it doesn’t depend on obsolete APIs or libraries with known vulnerabilities.

The worst time to discover that you’re failing any of these checks is at the end of a project, when replacing outdated APIs and upgrading vulnerable libraries might force significant changes in your codebase.

A perfectly ordinary development cycle, fraught with unknowable risks

A slightly-less-bad time to discover that you’ve got problems is immediately before you show somebody important - usually the person holding the cash - the fruits of your labour. Even if you’re not yet releasing the finished product, an eve-of-delivery panic caused by deployment issues will never inspire confidence, and in some cases, resolving these issues can cause a significant delay.

Even without the kinds of checks that distribution platforms may run, your software may still fall foul of any number of issues as soon as it hits a server, or a user’s machine, and it’s all but impossible to avoid these risks entirely. Getting visibility into these potential problems is critical to a successful launch, and the earlier you’re able to identify these issues, the earlier they can be addressed.

Embracing the pain early on allows you to get ahead of any looming deployment issues, and leave yourself plenty of time to investigate and rectify any risks before they become serious problems.




Build The Guardrails

The best way to mitigate the risks related to deploying or distributing your software is to set up your delivery pipeline as early as possible.

As soon as your app can be built successfully - even if it doesn’t do anything yet - you’ll never regret taking the time to ensure that it can be deployed or distributed without any problems.

By prioritising deployability, you will start receiving feedback from automated systems as early as possible, giving you an early warning when APIs / SDKs are deprecated, or when a vulnerability is discovered in some dependency you’re using.

You’ll also improve testability - ensuring that it’s trivial to access your software from test environments, allowing your QA team to start verifying builds as early in the development process as possible.

From a commercial perspective, having builds available from Day 1 is a great way to inspire confidence in your capabilities and foresight, and it greatly reduces the pressure around milestones and delivery deadlines.

Get It Done

Whether you’re a solo developer, an indie game studio, or a development team in any sector, the end goal is always to deploy - so prioritising that requirement and ensuring it’s ready to go early in development helps ensure that you’re always working within the guardrails.

With modern tools, it’s never been easier to automate the building, testing, and deployment of your software - so integrating a reliable, repeatable deployment process into your automated systems is usually a matter of writing a few scripts, or setting up some configuration files and running a tool, such as fastlane for iOS and Android apps. Tools such as GitLab Pipelines, Jenkins, and others provide versatile and straightforward mechanisms to run automated processes against your codebase, making it trivial to get things up and running as quickly as possible.

Turning automated checks and deployment processes to your advantage early on is a smart way to avoid serious problems when you least want them. The sooner you’re able to prove to your team and your stakeholders that you can deploy your software safely, reliably, and regularly, the sooner you can get on with the real development work, with confidence, and get paid on time every time!




]]>
<![CDATA[Bootstrapping your Unity game with Addressables]]>https://tomhalligan.substack.com/p/bootstrapping-your-unity-game-withhttps://tomhalligan.substack.com/p/bootstrapping-your-unity-game-withFri, 23 Aug 2024 01:29:32 GMTIn this post, we’ll explore a straightforward way to assert control over your Unity game, keep your scenes clean, and lay the foundations of a flexible, extendable bootstrapper that leans on Unity’s Addressables system.

A flowchart illustrating a proposed flow for a bootstrapping system in Unity
Let’s do the heavy lifting early and avoid any fuss!

Read on for some background and reasoning about why such a system is beneficial, or, if you’re already convinced, simply skip ahead to the good stuff!


The Grim Fandango

At some point during your game’s development, you’ll likely begin to encounter friction between different systems. Managing relationships and dependencies between different areas of your game can become a challenge, and the further you are in development, the more fragile this process can become.

Failure to rein in complexity early on can result in a project collapsing under its own weight. The cost of rewriting, refactoring, or replacing systems to restore sanity can very quickly become unbearable, especially for solo devs and small teams.

Over time, as the game grows and things become more complex, it’s common for one or more of the following problems to appear:

  • Scenes become complex and cluttered with GameObjects that provide system-level behaviour

  • Entering Playmode requires some kind of arcane ritual to ensure that the right configuration is applied first

  • The evolution of some ‘blessed path’ that developers must go through in order to test their latest work

A common pattern developers use to overcome these issues is to create multiple scenes which are intended to be loaded simultaneously. One scene might contain the game’s current level - gameplay elements, terrain, enemy characters etc - whilst another may contain various managers and utilities which glue the game together, coordinate different systems, and provide debugging tools and information.

A diagram illustrating a potential multi-scene setup in a Unity game
Additive scenes allow the organisation of different areas of concern

This solution works, but it means that scene management becomes more complex, and often requires the development of additional tools to ensure that the right scenes get loaded at the right time. Developers are required to understand a potentially complex scene configuration, and in the absence of custom tooling to manage things automatically, friction may increase when testing, and new potential points of failure are added each time your scene configuration needs to change.

In a mid-to-large-sized team, it’s very easy for developers to step on each other’s toes, or for complex configuration requirements to be misunderstood or miscommunicated.




Command then Conquer

Over time, I’ve come to believe that the most basic requirement of a well-architected Unity game is that the Play button should always do exactly what its name suggests. If you can’t hit Play and dive into the game from wherever you are, without any additional thought, then there’s a good chance you’ve got a few time bombs waiting to go off, and your system could be improved.

Play Means Play

If I need to worry about loading ‘the right scenes’, tweaking some values in the scene, manually dropping in a prefab or doing anything other than pressing that lovely little Play button, then I’m not happy. I want to make changes and test, as quickly as possible, with no fuss. So do your designers and artists, and pretty much anybody else who is hands-on with the project.

To that end, I’ve come to favour the following basic rules:

  1. Your system should load what it needs when it matters

  2. Scenes should only contain things that are strictly relevant to that scene

  3. Play Means Play

Though I don’t claim that these rules are always applicable or appropriate for every project, I’ve found that they’re decent guidelines that you can easily refer back to in order to keep things running smoothly.

To my mind, the main implication of the above rules is that we should build some kind of boot-strapping system which allows us to define what to load, and do it automatically when required.




Defining The Problem

By default, Unity will call Awake on all active GameObjects when a Scene is loaded. This is usually the first point in time when you can do any useful work, but there’s no guarantee that your GameObjects will receive their Awake call in any particular order.

A flowchart illustrating the default Start / Awake behaviour for Unity GameObjects
Default Unity behaviour

In the above scenario, and owing to the non-deterministic ordering of Awake calls, it’s easy to end up in a situation where you’re managing multiple singletons, sanitising your initial state, and otherwise trying to ensure that everything you need is loaded and ready before you can get on with things. It’s a frequent source of difficulty for new Unity developers and can lead to real frustration and difficulty if left unaddressed.

Common advice is to treat Awake as a ‘self-initialisation’ point, and the subsequent Start call as the first point in time when Components can safely communicate with one another.

Even taking the above into account, this still feels messy to me. I prefer, as much as possible, to keep scenes clean: free of system-level or non-scene-specific GameObjects. Worrying about whether things are loaded, available, or initialised feels like a chore because it is a chore, and if you don’t solve the problem at the source, it becomes more and more difficult to manage as your project grows.

A better solution overall is to build a system that injects what we need as we enter play mode, significantly reducing the amount of work we need to do to make sure things are where they’re supposed to be:

A flowchart illustrating a proposal to inject dependencies prior to a Unity Scene's startup logic
The Goal

By injecting our dependencies as early as possible, we can avoid the fuss we might otherwise be forced to engage with.


Defining The Solution

The bootstrapping system we’ll build comprises four main parts:

  1. A custom Project Settings file and editor.

  2. A ScriptableObject representing a ‘Custom Boot’ context (Editor Runtime / Build Runtime).

  3. Some light integration with Unity’s Addressables system.

  4. An initialisation script, which does all of the heavy lifting at runtime.

Though there’s plenty of code to chew through here, I won’t be including every single line - if you’re familiar enough with the APIs I mention here you’ll probably be able to plug any gaps yourself, but the code for this demonstration is available on GitHub, so feel free to peruse at your pleasure!




Custom Project Settings

One of Unity’s great strengths is the ease with which extensions and editor tools can be built, allowing a great deal of flexibility over your workflow and toolkit.

Since Unity already provides a Project Settings window, it’d be nice if we could add our own custom project settings there. Thankfully, we can! This can be achieved by implementing a SettingsProvider that will be loaded whenever the user opens their Project Settings.

The code required is fairly boilerplate, but Riley Bolen provides a great tutorial explaining each part of the process. It’s worth checking out Riley’s guide before attempting this, as my own SettingsProvider is almost identical.

The entry point to our SettingsProvider looks like this:

Source code for a custom Unity SettingsProvider
Starting point for our custom settings window

We perform some quick checks to ensure our settings asset is available, before returning an instance of CustomBootSettingsProvider.
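
As a rough sketch of what that entry point might look like - the ‘Project/Custom Boot’ path and the provider’s constructor arguments are illustrative:

```csharp
using UnityEditor;

static class CustomBootSettingsRegister
{
    // Unity calls this when it builds the Project Settings window.
    [SettingsProvider]
    public static SettingsProvider CreateCustomBootSettingsProvider()
    {
        // Bail out if the backing settings asset can't be found or created.
        var settings = CustomBootSettingsUtil.GetOrCreateSettings();
        if (settings == null)
            return null;

        return new CustomBootSettingsProvider("Project/Custom Boot", SettingsScope.Project);
    }
}
```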

If we need to create our settings file, we call GetOrCreateSettings inside CustomBootSettingsUtil:

Source code for a helper method to retrieve or create a settings object in Unity
Creating the settings file

Rather than saving our settings under the Assets folder, I prefer to store it alongside other project settings, in the aptly-named ProjectSettings folder. Note the use of InternalEditorUtility here: this is what allows us to easily serialise our ScriptableObject outside of the Assets folder [1].
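
A stripped-down sketch of that load-or-create logic might look like this - the file name is illustrative, and the real version also wires up the AssetReferences described below:

```csharp
using UnityEditorInternal;
using UnityEngine;

internal static class CustomBootSettingsUtil
{
    // Lives alongside Unity's own files, outside the Assets folder.
    private const string SettingsPath = "ProjectSettings/CustomBootProjectSettings.asset";

    internal static CustomBootProjectSettings GetOrCreateSettings()
    {
        // InternalEditorUtility can (de)serialise objects outside Assets/.
        var loaded = InternalEditorUtility.LoadSerializedFileAndForget(SettingsPath);
        if (loaded.Length > 0 && loaded[0] is CustomBootProjectSettings existing)
            return existing;

        var settings = ScriptableObject.CreateInstance<CustomBootProjectSettings>();
        InternalEditorUtility.SaveToSerializedFileAndForget(
            new Object[] { settings }, SettingsPath, true /* allowTextSerialization */);
        return settings;
    }
}
```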

The CustomBootProjectSettings object stores two asset references - RuntimeSettings and EditorSettings:

Source code for a Unity ScriptableObject class containing references to Addressable assets

The AssetReference fields here are our first link to the Addressables system. Though we don’t strictly require the settings file to store AssetReferences, future extensions to this example could benefit from it doing so, so I’m leaving it as-is.
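
In other words, the settings object itself is tiny - something like:

```csharp
using UnityEngine;
using UnityEngine.AddressableAssets;

public class CustomBootProjectSettings : ScriptableObject
{
    // Loaded in editor play mode and in standalone builds.
    public AssetReference RuntimeSettings;

    // Loaded in editor play mode only (debug tooling and the like).
    public AssetReference EditorSettings;
}
```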

Creating those AssetReferences requires a few additional steps. First, we need to create the assets we’re interested in tracking in the usual way:

Source code to create a new Unity asset
Creating a CustomBootSettings asset
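
The creation step is ordinary AssetDatabase fare - roughly this, where the path is illustrative and CustomBootSettings itself is defined just below:

```csharp
using UnityEditor;
using UnityEngine;

static class CustomBootAssetCreation
{
    // Editor-side helper; the target folder must already exist.
    internal static CustomBootSettings CreateBootSettingsAsset(string path)
    {
        var asset = ScriptableObject.CreateInstance<CustomBootSettings>();
        AssetDatabase.CreateAsset(asset, path);
        AssetDatabase.SaveAssets();
        return asset;
    }
}
```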

The CustomBootSettings object in this example is a ScriptableObject which stores an array of prefabs. In a real-world scenario, you probably want something a little more complex than this, but it’ll do for the sake of illustration:

Source code for a Unity ScriptableObject which would be loaded during the bootstrap loading phase
Simple!
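
Which is to say, barely more than this (Initialise and Cleanup get fleshed out later in the post):

```csharp
using UnityEngine;

public class CustomBootSettings : ScriptableObject
{
    // The prefabs to instantiate during the boot-strap phase.
    public GameObject[] Prefabs;
}
```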

Next, we need to add the assets we’ve created to the Addressables system. Addressable assets are assigned to Groups, and we have the freedom to manipulate groups as we see fit.

For the sake of keeping things organised, we’ll create two groups:

  1. CustomBoot_Runtime - for assets which need to be loaded in the editor’s play mode, as well as in standalone builds. This group is configured so that it’s included whenever you build your game.

  2. CustomBoot_Editor - for assets we only care about loading during editor play mode. This could be handy for debugging tools, helper scripts, or other development-related tools we might need. This group is configured so that it’s not included in builds.

Source code illustrating the creation of Addressable Groups in Unity
Creating Addressable Groups
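
A sketch of that group-creation logic, using the Addressables editor API - the IncludeInBuild flag on the BundledAssetGroupSchema is what separates the two groups:

```csharp
using UnityEditor.AddressableAssets;
using UnityEditor.AddressableAssets.Settings;
using UnityEditor.AddressableAssets.Settings.GroupSchemas;

static class CustomBootGroups
{
    internal static AddressableAssetGroup GetOrCreateGroup(string groupName, bool includeInBuild)
    {
        var settings = AddressableAssetSettingsDefaultObject.Settings;

        var group = settings.FindGroup(groupName);
        if (group == null)
        {
            // Args: setAsDefaultGroup, readOnly, postEvent, schemasToCopyFrom.
            group = settings.CreateGroup(groupName, false, false, false, null,
                typeof(BundledAssetGroupSchema));
        }

        // Runtime group ships with the build; editor group doesn't.
        group.GetSchema<BundledAssetGroupSchema>().IncludeInBuild = includeInBuild;
        return group;
    }
}
```

Calling GetOrCreateGroup("CustomBoot_Runtime", true) and GetOrCreateGroup("CustomBoot_Editor", false) then gives us the two groups described above.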

Now that we’ve created our Addressable Groups, we can add our assets to them:

Source code illustrating the method of adding assets to an Addressable Group in Unity
Adding assets to Addressable Groups

The API to manage all of this is fairly simple, and you’ll notice that after we’ve added an asset to Addressables, we end up with a new AddressableAssetEntry. Referring back to GetOrCreateSettings, you’ll see that the asset entry’s GUID is what we use to finally create our AssetReference objects, which we serialise inside our project settings file:

Source code demonstrating the creation of AssetReference objects for serialisation
Creating AssetReferences
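
Sketching that flow - add the asset to a group, grab the resulting entry, and turn its GUID into an AssetReference:

```csharp
using UnityEditor;
using UnityEditor.AddressableAssets;
using UnityEditor.AddressableAssets.Settings;
using UnityEngine.AddressableAssets;

static class CustomBootAddressables
{
    // Adds (or moves) an asset into the given group and wraps the
    // resulting entry's GUID in a serialisable AssetReference.
    internal static AssetReference AddToGroup(UnityEngine.Object asset, AddressableAssetGroup group)
    {
        var settings = AddressableAssetSettingsDefaultObject.Settings;
        var guid = AssetDatabase.AssetPathToGUID(AssetDatabase.GetAssetPath(asset));

        AddressableAssetEntry entry = settings.CreateOrMoveEntry(guid, group);
        return new AssetReference(entry.guid);
    }
}
```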

All that’s left at this stage is to make sure our project settings window is rendered properly. You can use either the old-school IMGUI API, or the new VisualElements API. In this instance, I opted for VisualElements, but the choice is yours. It doesn’t make for particularly interesting code, but feel free to check it out on GitHub.

Putting it all together, we can now click Edit → Project Settings and view our new settings pane in all its splendour:

A screenshot of the custom project settings editor
Chef’s Kiss 😙🤌

Ok, it could be prettier - but it does the job. What we’re looking at here is basically just a window that lets you edit the prefab arrays of our CustomBootSettings objects. Unity will take care of ensuring the asset states are saved if we modify them via this editor, so let’s cross this stage off our list and move on.




Runtime Initialisation

The whole point of implementing a boot-strapping system is to ensure that we load what we need as early as possible and avoid any irritating dependency-related problems elsewhere.

Unity provides a simple mechanism for hooking into the startup process:

Source code illustrating the usage of Unity's RuntimeInitializeOnLoadMethod attribute
Easy peasy

The RuntimeInitializeOnLoadMethod attribute is the key here, and it doesn’t require much explanation. Include this on a static method anywhere in your runtime code and it’ll get called when the Unity runtime initialises.
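
In skeleton form - BeforeSceneLoad is my assumption here, since we want to start work before any scene object receives Awake:

```csharp
using UnityEngine;

public static class CustomBootInitializer
{
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)]
    private static void Initialise()
    {
        // Load our Addressable assets and instantiate the boot prefabs -
        // fleshed out over the rest of this post.
    }
}
```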

Next, we need to decide what to do when our Initialise method is called. Our plan is to load our Addressable assets and then instantiate the prefabs referenced within.

This is a simple task via the Addressables API:

Source code demonstrating the correct method to load an Addressable Asset asynchronously in Unity
Loading assets via Addressables

Notice the use of the asynchronous API here. More on this later, but if you’re not familiar with async / await, then it’s well worth reading up on it.

The handle variable is important - and we need to keep hold of it for cleanup purposes later, but first, we want to do something with the asset we’ve just loaded.

The Result property on our handle is a reference to the CustomBootSettings asset we created earlier, which means we can now call Initialise on it.
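
Pulling those pieces together, the loading step might look like this:

```csharp
// Additions to the CustomBootInitializer sketch from earlier
// (usings: System.Threading.Tasks, UnityEngine.AddressableAssets,
//  UnityEngine.ResourceManagement.AsyncOperations).

// Held onto so we can Release() it during cleanup later.
private static AsyncOperationHandle<CustomBootSettings> handle;

private static async Task LoadAndInitialiseAsync(AssetReference reference)
{
    handle = Addressables.LoadAssetAsync<CustomBootSettings>(reference);
    await handle.Task;

    // Result is the CustomBootSettings asset we created earlier.
    handle.Result.Initialise();
}
```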

Here’s what that looks like:

Source code illustrating the initialisation behaviour of a bootstrap configuration
Finally loading our bootstrap prefabs

This is fairly typical prefab-loading code, but the thing to note is that we first create a container object and mark it as DontDestroyOnLoad so that it survives scene changes. We then loop over our list of prefabs and instantiate them one by one, assigning our container object as the parent. We also keep track of each instance we create by adding them to an Instances array, so we can explicitly clean them up later.
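
That description maps onto something like this, added to the CustomBootSettings sketch from earlier:

```csharp
// Tracked so we can explicitly destroy our instances during cleanup.
[System.NonSerialized] public GameObject[] Instances;

public void Initialise()
{
    // The container survives scene loads, keeping our systems alive.
    var container = new GameObject("CustomBoot");
    DontDestroyOnLoad(container);

    Instances = new GameObject[Prefabs.Length];
    for (var i = 0; i < Prefabs.Length; i++)
        Instances[i] = Instantiate(Prefabs[i], container.transform);
}
```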

The Cleanup code is bog-standard object destruction:

Source code illustrating the cleanup process for a bootstrap GameObject
Buh-bye, losers
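
For completeness, that looks roughly like:

```csharp
// Also on CustomBootSettings: destroy whatever Initialise() spawned.
public void Cleanup()
{
    if (Instances == null)
        return;

    foreach (var instance in Instances)
    {
        if (instance != null)
            Destroy(instance);
    }

    Instances = null;
}
```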

All we need to do now is make sure our initialisation and cleanup code is executed appropriately, so let’s dig into that.

The very first thing we want to do when Unity calls our initialisation method is to set up a listener so we can clean things up again when we’re exiting the runtime:

Source code demonstrating a potential method for unloading bootstrapped objects and references
Tidy desk, tidy mind!

The Application.quitting event is a fairly reliable way to detect a normal shutdown process, but there are caveats on certain platforms [2]. It’ll work fine in the editor and on standalone builds, however.

To ensure we clean things up nicely within the Addressables system, we need to call Addressables.Release and pass it our handle from earlier:

Source code demonstrating the correct method for clearing up Addressables in Unity
Cleaning up after yourself has never been so simple

We also call the Cleanup method on our CustomBootSettings object, ensuring that any instantiated prefabs are destroyed.
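
Put together, the shutdown path is short - subscribe during initialisation, then unwind everything when the application quits:

```csharp
// Inside CustomBootInitializer. During Initialise(), we subscribe first:
//     Application.quitting += OnQuitting;

private static void OnQuitting()
{
    if (handle.IsValid())
    {
        // Destroy the instantiated prefabs, then release the handle.
        handle.Result.Cleanup();
        Addressables.Release(handle);
    }

    Application.quitting -= OnQuitting;
}
```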

Now that we know how to load Addressables and clean up after ourselves, all that’s left to do is flesh out our Initialise method and we have a basic but functional boot-strapping system ready to roll!




Just one more thing…

If you cast your mind back to the beginning of this tutorial, you’ll remember I made the following wild assertion:

The Play button should always do exactly what its name suggests

I also mentioned the use of async / await, which lets us perform work asynchronously - such as loading assets from disk or over the network - without blocking other processes.

In an actual build, the player isn’t usually unceremoniously dumped into a scene requiring a ton of dependencies to function. Normally, a splash or loading screen allows the game to load whatever it needs before the player can do anything. In that scenario, asynchronous dependency loading is exactly what you want: minimise the time it takes to load everything, and only continue when things are in the right state.

During development, you’ll want to enter play mode from anywhere. If we use asynchronous loading in this context, there’s no guarantee things will be ready by the time the current scene starts up properly, which defeats the purpose of our bootstrapping system!

In the GitHub repository for this demonstration, the SampleScene contains a sphere which is coloured depending on the initialisation state of our bootstrapper when the sphere receives its Awake call:

  • Red if the boot-strapper isn’t initialised

  • Yellow if the boot-strapper only becomes initialised after the sphere’s Awake call

  • Green if the boot-strapper is initialised before the sphere’s Awake call

Screenshot of a Unity project demonstrating the presence of a component used to illustrate the status of the bootstrapping system
Yup, that’s a sphere

Let’s take our boot-strapper for a test run and see what happens…

Screenshot of a Unity project demonstrating the presence of a component used to illustrate the status of the bootstrapping system, showing an unwanted state
That is not green, believe it or not

That’s no good! We want everything ready before Awake gets called on our scene objects. Our asynchronous boot-strapping code is executing as we expect, but it’s not finishing in time. To make matters worse, the more stuff we do during our boot-strap phase, the longer it’s going to take for the boot-strapper to consider itself initialised.

Under the hood, things are looking like this:

A flow-chart demonstrating the incorrect behaviour of the bootstrapping system due to asynchronous loading in the Unity editor
Asynchronous boot-strapping in-editor is still a minefield

To tackle this, we need to make our boot-strapper check whether we’re in the editor, and if so, load everything synchronously so that it’s finished before the scene starts. Whatever small overhead there may be in synchronising our boot-strap initialisation is a small price to pay for the gains made elsewhere.

Source code demonstrating the fix for in-editor asynchronous loading problems
At last, perfection

This is straightforward enough: check if we’re in the editor, and if we are, then we need to call synchronous versions of our initialisation methods, which is really just a case of not using async / await for the most part. When it comes to loading the addressable asset, we can call WaitForCompletion on the handle object in order to ensure synchronous execution:

Source code demonstrating the correct approach for synchronous loading of Addressables
Addressables are async by default - WaitForCompletion allows synchronous execution
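
The branch itself is about as simple as it sounds - a sketch, with Application.isEditor as one way to detect the editor at runtime:

```csharp
// Inside CustomBootInitializer.Initialise():
if (Application.isEditor)
{
    // Block until everything is loaded, so we beat every Awake call.
    handle = Addressables.LoadAssetAsync<CustomBootSettings>(reference);
    handle.WaitForCompletion().Initialise();
}
else
{
    // In a build, a loading screen covers the wait, so stay asynchronous.
    _ = LoadAndInitialiseAsync(reference);
}
```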

Now that we’ve added synchronous behaviour when we’re in the editor, let’s try out our magical sphere again and see what happens:

Screenshot of a Unity project demonstrating the presence of a component used to illustrate the status of the bootstrapping system, with the correct behaviour now implemented
Now THAT is definitely green

Success! Our boot-strapper has done all of its work before the scene objects received the Awake call, and we can now get on with our game’s development with full confidence and a nice way to keep our system dependencies separate from our scenes.

Let’s try it out in earnest: the demo project has a few prefabs we want to load during our boot-strap process: some UI elements, an EventSystem, a simple 3rd-person character controller with a camera, and a ‘debug’ button which doesn’t actually do anything, but acts as our ‘editor-only system dependency’ for illustration purposes:

Screenshot of the custom bootstrap configuration screen, showing a potential setup
A potential boot-strap configuration

Let’s set those up in our Project Settings window, and see what happens when we hit Play. Here’s a quick screen recording:

Very satisfying! Our scene contains nothing but the bare minimum, and everything we need to play our game is loaded dynamically as soon as we need it.

All that’s left now is to ensure our ‘real’ game’s loading scene waits for the boot-strapper to finish before continuing - so that in a build, we can assert full control over the player’s experience.

Source code demonstrating the correct method to ensure that the bootstrap system is initialised prior to allowing scene loads
That’s a wrap!
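
One way to do that - assuming we expose an IsInitialised flag from the boot-strapper, and with an illustrative scene name - is to hold the loading scene until the flag flips:

```csharp
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.SceneManagement;

public class LoadingScreen : MonoBehaviour
{
    private async void Start()
    {
        // Wait for the boot-strapper to finish its asynchronous work.
        while (!CustomBootInitializer.IsInitialised)
            await Task.Yield();

        // Safe to hand control to the player now.
        SceneManager.LoadScene("MainMenu");
    }
}
```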



Conclusion

We’ve seen why a boot-strapper is a beneficial addition to your game, and mentioned some of the problems it solves. With a little work up-front, we can simplify our development process and make sure that we don’t spend time fighting the wrong battles.

Unity provides great flexibility in your game’s architecture, but it’s always worth ensuring you control what gets loaded and when. Non-deterministic timing of critical lifecycle calls can, if you’re not careful, lead to all kinds of difficulties, so it’s worth taking some time to avoid the worst of it as soon as you can.

Please feel free to check out this demo project on GitHub. With a little extra work, you can turn this example boot-strapper into a powerful tool in your arsenal and add all manner of extensions. Personally, I’m keen to add support for bootstrap profiles, so I can modify the bootstrap process and the dependencies that are loaded based on whatever workflow I’m currently engaged in.

Let me know what kind of customisations and additions you make!

If you’ve enjoyed this post, and want to help support me in producing more Unity tips and tutorials, then please like this post, consider a subscription and share Not A Robot!



[1] If we wanted our settings file to be modified, we’d need a more robust solution. For this demonstration, we never actually change the settings file itself once it’s been created: it’s just a pointer to the assets we do modify.

[2] Mobile and Universal Windows Platform behave differently, so check out the Unity documentation for more information about what to do to ensure a fully production-ready cleanup process.

]]>
<![CDATA[Shattered Silicon Part 3: A Skill Issue]]>https://tomhalligan.substack.com/p/shattered-silicon-part-3-a-skillhttps://tomhalligan.substack.com/p/shattered-silicon-part-3-a-skillFri, 26 Jul 2024 00:10:27 GMTIntroduction

In Parts 1 & 2 of Shattered Silicon, I laid out some thoughts on the problems I feel we’re often overlooking in our relationship to - and reliance on - modern technology.

In Part 1, I discussed the Digital Divide and the risks involved in assuming that simplification is the right approach to making technology accessible to all.

In Part 2, I talked about ‘Broadcast Culture’; the threat of stagnation inherent in a world where the incentives encourage us to say more than we do, where our data is worth more than our works, and the consequences of a ‘free-to-play’ system.

In the third and final post in this series, I’ll talk about an issue which is, in large part, a symptom of those discussed in previous posts, but which we will need to tackle in its own right if we’re to ensure that people retain control over their own lives and have the agency and opportunity they deserve.

This post was originally much longer and covered a large swathe of issues, but, thankfully, I happened to procrastinate long enough for the universe to manifest a perfect illustration of the problem. Thank you, CrowdStrike!


A Skill Issue

Over the past few decades, we’ve seen countless iterations of hardware and software which have tended, broadly speaking, to hide the detail away from users and instead present a streamlined, user-friendly workflow for the task at hand. From writing documents to ordering online, we’ve seen continuous development geared towards making life easy for users and creators alike. This is a good thing, generally speaking: nobody wants things to be more difficult than they need to be, and hiding complexity or detail where appropriate is often the right thing to do, and is usually welcomed by users themselves.

That being said, I believe we should be mindful of the collective risks we take as a society when we foster the impression that simplicity is itself a goal to be pursued.

Given recent advancements in AI and hardware capabilities, and the ever-increasing reliance on systems which are - despite appearances - incredibly complex, I can’t help but feel that a threat may be looming on the horizon - and it’s not entirely clear what we can (or should) do about it.



The Rumsfeldian Rubicon

Though I held no great love for the late Donald Rumsfeld, he did provide us with a quote that has the useful property of being simultaneously insightful, memorable, and widely applicable:

Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

- Donald Rumsfeld, February 12, 2002

Though Rumsfeld was talking in the context of military threats, the basic premise underpinning his statement originates from the world of psychology, in the work of Joseph Luft and Harrington Ingham and their development of the Johari Window.

To apply the basic premise to the technological world, we can make a few simple observations:

  • Known Knowns

    • There exists a vast and ever-growing sea of ‘things that can be known’.

    • ‘I know that computer viruses exist, and I know how to defend against them’.

  • Known Unknowns

    • It’s impossible to know everything, but you can keep abreast of things you should know, and take reasonable actions based on advice.

    • ‘I’ve heard about computer viruses, but I don’t really understand them or know what I should do about them, so I’ll install an antivirus to be safe’.

  • Unknown Unknowns

    • When you don’t know that you don’t know something, you may not even have the language to articulate your problem.

    • ‘My computer’s acting strange’.

To this list, I feel we should add one more, in order to complete the set for the current reality of our technological world:

  • Unknown Knowns

    • When you know something relevant about a problem, but you misunderstand the implications or detail
      - or -
      When you know somebody else knows something, and you rely on them to take action on your behalf
      - or -
      False confidence

    • ‘I know about computer viruses, and my employer installed an antivirus on my computer, so I’ll have no problems’

Of the four possibilities outlined above, it’s the ‘unknown knowns’ - those factors, threats, and problems that we’re ambiently aware of, but which we’ve acquired some learned helplessness about - that may be the most serious threat.


CrowdStrike

I began writing this post an embarrassingly long time ago, and ended up semi-abandoning it as a series of irritating challenges befell me: a medical issue with my eyes which made it difficult to look at screens and to read or write, a prolonged period with a lack of free time, and finally, the unsanctioned and frankly rude decision of my laptop to undergo an enormous hardware failure and transform itself, in an instant, into an expensive paperweight.

Thankfully, I subscribe to the view that good things come to those who wait, and on July 19th, 2024, CrowdStrike - a security provider to businesses all over the planet - brought the world to its knees and gave me an opportunity to delete almost the entirety of this post, and instead point out the fundamental takeaway: we should know how stuff works.

Though the CrowdStrike issue was unlikely to have affected people’s personal machines, it caused chaos for many businesses and organisations who rely on the company’s software, with Microsoft estimating that 8.5 million devices were affected (I suspect this number is actually pretty conservative, but we’ll probably never know the true figures).

Though there were very dramatic and highly visible impacts on some businesses - banks, airlines, and healthcare services to name a few - my real interest in this story relates to the resolution to the problem, rather than the problem itself.

To summarise - CrowdStrike pushed out an update which contained a single bad / corrupt / incorrect file (the real specifics of how and why this file ended up on people’s machines may remain a mystery unless CrowdStrike is compelled to disclose fairly forensic data), that caused their software, running in a privileged context on the host machine, to crash during startup, taking the operating system with it. The Register recently provided a fairly concise overview of the problem if you’re interested in learning more. Ultimately, the cause doesn’t much matter for this post: CrowdStrike took 8.5 million computers offline and the only surefire way to fix it was with physical access to the machine.

If you are an average person, working from home on a company machine, this is where everything falls apart. You wake up, go through your morning routine, and get yourself set up to start your work-day, only to be confronted with the dreaded Blue Screen Of Death:

:(

No amount of restarting makes the problem go away, so you call into work, where, in the best-case scenario, your understaffed and overwhelmed IT department is now scrambling to identify the cause of the issue and guide staff one-by-one through Microsoft’s 12-step recovery process, which involves communicating things like ‘Safe Mode’, ‘Bitlocker Recovery Key’, and ‘Terminal’ to normal human beings who, by and large, don’t know or even care to know what any of those things mean.

In the worst-case scenario, you’re forbidden from even attempting to fix the problem yourself, and must either bring the machine to somebody who is authorised to fix it, wait for somebody to come out to your home to fix it, or box it up and post the machine off to the IT elves.

Though it would be extraordinarily unfortunate for the CrowdStrike problem to happen again any time soon, one major factor contributing to the severity and reach of the outage was (and remains, for now at least) the mandated use of specific security software to protect corporate assets.

Though there is value in securing all of your assets in the same way, there is also a cost: doing so means that they all implicitly share the exact same weaknesses and vulnerabilities.

I’m not, of course, suggesting that corporate policy should change to introduce a confusing new mixture of mandated security software, or that it should be left up to employees to secure their own machines however they see fit, but that we must understand the risks inherent in the system if we wish to avoid catastrophe.

It’s very difficult to imagine a way to avoid problems like this occurring in the future: software is written by humans, mistakes happen, validation processes fail, and the specific nature of cybersecurity threats necessitates a degree of privilege and trust in security software which leaves you open to a certain amount of risk. It’s a trade-off: choose your poison.



Managing Risk

The best way, in my view, to mitigate these risks is for end-users to learn about the machines and systems they use. Understanding a few basic concepts and approaches to recovering after a failure can help reduce the impact of many potentially serious problems.

In the same way that companies now provide - and in many cases require - training around inter-personal behaviour and health & safety, I feel that there’s a strong argument to be made for training around best technology practices and emergency IT problem resolution, especially in the modern era where remote working is so common and physical access to machines is so much more difficult for IT professionals.

On a more fundamental level, we should perhaps consider whether ‘company-controlled hardware’ is the best practice to follow when significant parts of the workforce are now remote.

From the perspective of employers - it might feel like it’s in their best interests to retain control and ownership of remote hardware, but as we’ve just seen, it can be an expensive and debilitating risk when things go wrong.

From the perspective of employees - it’s obviously cheaper to let the company provide the hardware and absorb the costs of maintenance & security, but a prolonged outage preventing you from working can raise other issues: if the company takes a significant hit, and is forced to reduce headcount, you may be exposed to a real threat to your livelihood.

There are no easy answers - it’s a difficult problem and one that constantly evolves as new threats are identified. It’s undeniable, however, that a workforce decently trained in IT & security fundamentals would probably withstand widespread disruption much more effectively.

The temptation may be to rely on the hope that providers like CrowdStrike simply ‘fix the problem’ and ‘put processes in place’ to avoid it happening again, but the simple reality is this: it will happen again sooner or later, and the more centralised and dependent we make ourselves on technological solutions to technological problems, the worse the fallout will be. The more we allow our learned helplessness to be exploited as an opportunity for new products and services promising yet more ‘simplicity’ and unattainable security, the more we hand over control to others in the hope that they never make mistakes. Reader, I assure you: mistakes will be made!

For many of us, technology is the tool of our trade - and for all of us, technology is enveloping our homes, schools, workplaces, governments, and essentially every institution we might interact with, and rely upon, in our daily lives.

We should treat the CrowdStrike failure as a warning: these things will happen, and the broader the reach of technology into our lives, the worse it will be. The best defence is to be prepared; to learn about the hardware and software we use so that when something does go wrong, we know how to identify the problem, how to communicate with one another about it, and how to resolve it. At the very least, we should arm ourselves with the language to be able to seek help or advise others on how to help themselves.

I’m not a security expert - and I don’t pretend to understand all of the many ways in which services like those CrowdStrike provide do actually help to prevent a litany of potential disasters - but I do know technology well enough to know that an overreliance on a single point of failure is not a good idea. Diversification of the systems, services, and practices we use can at the very least help to avoid universal failure: it’s worth remembering that this particular outage was not caused by a virus or (as far as we know) some malicious actor, but a bug in the very software millions are relying on to prevent such a disaster in the first place.

At the risk of raising the hackles of the most ardent capitalists amongst us: perhaps the time has arrived when we should seriously consider the idea that people should own and be responsible for the tools they rely on to produce their value. It may be an expensive proposition, but the time is rapidly approaching when many people’s livelihoods are intrinsically tied to the use of technology they don’t own, don’t control, don’t understand, and are forbidden from fixing. This, to me, seems like an untenable risk. In a world where working from home is rapidly being normalised, where people work several jobs at once, and where the ‘gig economy’ is growing and reshaping the workforce, it seems only fitting to me that the idea that workers should own the means of production should see something of a revitalisation.



We’re All Old

The breakneck pace of technological advancement means that knowledge and skills must constantly evolve to keep pace. The stereotype that ‘old people can’t use computers’ is no longer relevant: today, almost nobody can really use computers, even though they rely on them for everything - and the problem is only getting worse.

Jason Thor Hall from PirateSoftware provided a short but illuminating anecdote about his experience at a Minecraft convention, where he discovered that a huge number of young attendees - at a videogame convention, no less - didn’t understand what a keyboard, mouse, or game controller was.

Though it may be unfair to extrapolate from the behaviour and technological skillsets of children, I’ve seen enough evidence with my own eyes that the endless drive for simplicity, the ‘iPad-ification’ of technology, and the predominance of ‘trivial’ technological usage is resulting in a situation where huge numbers of people become helpless when confronted with anything beyond business-as-usual.

We’re all old now - technology has reached deep into all of our lives, whether we wanted it or not, and even an adept user’s skills and knowledge can rapidly become outdated or obsolete if they don’t take active steps to remain engaged and to maintain their capabilities.

In a world where even the software we rely on to keep us secure can itself cause a catastrophic global outage, we owe it to ourselves and our future to ensure that we don’t allow ignorance to leave us helpless in the shadow of the singularity.


Related Reading

Strongly related to the idea that we should learn how stuff works, is the idea that you should at least have the right to fix stuff when it breaks. You might like to subscribe to Fight to Repair, a newsletter which covers the ongoing battle to secure consumer and user rights to repair the stuff we use every day.

]]>
<![CDATA[Shattered Silicon Part 2: Broadcast Culture]]>https://tomhalligan.substack.com/p/shattered-silicon-part-2-broadcasthttps://tomhalligan.substack.com/p/shattered-silicon-part-2-broadcastSat, 25 Nov 2023 16:55:42 GMTIntroduction

In Part 1 of this series, I described a problem I see in modern technology: the Tyranny of Simplicity - wherein the shift to digital services and products has stripped away the capacity of our institutions and companies to take into account the subtlety and nuance that defines human interaction.

Here in Part 2, I describe another problem which, left unchecked, threatens our ability to take ownership of the technology we use every day.



Broadcast Culture

You don’t have to look very far to find people bemoaning the state of both offline and online media, the dangers of social media, or the risks of sharing your entire life online - it’s evolved into a self-perpetuating machine at this point: talking heads or anonymous posters lambasting one another for being wrong about everything, offending everyone, and ruining society, the planet, minds, lives, and everything in-between. Regardless of the subject, whether it’s politics, business, health, welfare, entertainment or even just mundane personal preferences, there is absolutely nothing more certain in the 21st century than the fact that if you can think it, you can waste an entire day arguing about it on the internet.

We’re probably all guilty of this tendency, to some degree - and there’s always plenty to talk about - so we can be forgiven for succumbing to that most basic of human impulses: the desire to be heard. God only knows how much time I’ve spent arguing with people I’ll never meet, or getting into lengthy debates about topics I can barely even recall today. However, I’m increasingly realising that our time on this earth is better spent doing rather than saying. For many people - particularly those who are not naturally interested in technology itself - modern devices and the internet have become more akin to a broadcast medium than the tools of human liberation and emancipation that I believe they can - and should - be. Despite the enormous power and potential which is literally at our fingertips, many millions of people experience digital life as though they are compelled to either drown in an ocean of noise or to stand alone atop a mountain of opinions, hurling them pointlessly into the valleys below.

I am increasingly convinced that the ‘talking’ parts of the internet have largely been a stagnating force on humanity, despite all of the good that can - and does - come from communication. I believe it would be infinitely more useful - both on an individual level and the more abstract societal level - if people were encouraged to do more rather than to speak more. As the saying goes - talk is cheap. We have tools at our disposal to create things our ancestors couldn’t have begun to dream about - yet we all too often spend our time receiving or broadcasting, rather than building.

In How To Actually Use A Computer I aim to provide some practical information about how we can choose to exploit the raw power we have at our disposal: to improve our own lives or the lives of others, rather than simply scroll through endless opinions, arguments, and debates. It’s difficult - no doubt about it - to motivate yourself to act on the goals and dreams you harbour within, and there’s no shame in using technology for entertainment or to relax and kill some time; but we should also try to keep in mind the sheer potential afforded to us by the devices that have become a fixture in our lives.



Whether you want to educate yourself, build a product, open a shop, create art or entertainment, offer a service or share knowledge and insight - there has never been a time in all of human history when it’s been easier to do so. Where once you would have needed to physically travel to meet potential partners or investors, you can now communicate directly without leaving your house. Where you might previously have needed to hire a team of people to bring your vision to the ‘proof-of-concept’ stage, there’s now a good chance you can do much of the work yourself - or at least for a fraction of the cost both in terms of time, and money. AI tools and services are already being utilised to kickstart people’s ambitions and creativity, turbo-charging the ‘force multiplier’ effect that technology has brought to our fingertips.


Ads Infinitum

Unfortunately, in the digital world (and, in fact, the real world), attention is a battlefield. Adverts are constantly shoved in our faces, algorithms assess our behaviour and drive content to us that we are likely to react or respond to, and huge companies gently corral us into analytically convenient demographics and offer us apps and products which they believe are a good ‘fit’. We lap it all up: after all, if it didn’t work, nobody would bother wasting time and money doing it!

This is how the internet sees you

There is an enormous market for your information: your likes and dislikes, your attitudes, the things you buy, the places you visit, the people you communicate with, and your browsing habits. What we often perceive to be passive, harmless activities online have become, for years now, a vehicle for others to become extraordinarily wealthy through the simple act of recording and then selling information about us.

Though it’s common to find criticism of ‘consumer culture’, I would argue that ‘broadcast culture’ is a major enabling factor: something we virtually all engage in either actively, by sharing our views and opinions publicly, or passively, via the simple act of existing in the modern digital world with all of its trackers, analytics, surveillance and subterfuge. Advertising is the lifeblood of the digital world, and virtually everything you do online is designed, one way or another, to either push ads to you or extract information from you that can be used to work out which ads to push to you.

Whilst taking a break during the writing of this post, I read “Machine Killer”, by Nathan Brown of Hit Points, a newsletter about the videogame industry.


Nathan discusses the problem of ‘the web’ being driven by advertisements and the effect this has had on publications - and also touches upon a new revenue-damaging ‘feature’: Google using AI to provide a summarised response to user’s search queries:

This model may work for Google, and Google users, for games that are already on shelves. If someone searches for advice on Cyberpunk 2077’s Dex vs Evelyn decision in 20 years, Google’s ML models will be able to provide it. But what about the games of the future? What are you going to train the machine-learning models of tomorrow on when you’ve put all the guides teams out of work, the websites they used to write for have gone out of business, and no new ones have stepped into the void because you’ve shown there’s not a penny to be made from producing content for Google’s robot army to steal?

In many ways, broadcast culture - our tendency to share, whether knowingly or not, vast quantities of information about ourselves - has enabled the AI revolution which is currently taking place, and which now threatens publications like those that Nathan mentions. ChatGPT and other AI tools know how to talk to us precisely because we have, for years, poured anything and everything into their training data. Some services are now providing ‘opt-out’ capabilities so that you can choose not to have your data used to train AI models - an admission that broadcast culture is intrinsic to the business model. There is an assumption baked into the digital economy that there’s a free-for-all on your data and the words and works you produce. Even if you are not actively trying to make a living online, someone is almost certainly making money off your digital footprint, and to ensure that they can’t is to take part in a time-consuming, complex, and ever-escalating arms race against behemoths with infinite money and great influence over the technology you use every day.

From the perspective of the companies harvesting and trading in it, the data and information we leave in our wake is a resource which we, as humans in the digital era, can’t help but produce in vast quantities. Since this resource is not bound by national borders, these companies have erected their own, and marketed them back to us - and regulators - as ‘privacy controls’, all the while tailoring their software and services to encourage us to produce even more data, to broadcast more information about ourselves, and to stay within their borders. Keeping you ‘on-site’ is a major ambition for services like Facebook, which goes a long way to explaining the scope creep of what was once a glorified contact list.

If, in the early days of the internet, we had decided that actually, we didn’t really mind paying for things like search engines and services like Facebook, then perhaps the reliance on ad revenue might not have had such a stranglehold over the digital economy, which in turn may have resulted in the internet being shaped in a different mould than the one we currently have. Companies may have been incentivised to encourage you to pay more rather than to say more - and in turn, perhaps social media platforms would have been designed to provide value to the customer, rather than being designed to bait users with outrage and division in order to keep them engaged and producing data for advertisers.


The Remedy

Whatever shape the internet takes in the future, it’s clear that something is changing. AI tools and services are already driving a wedge between creator and consumer, and even wannabe Bond villains like Elon Musk seem to acknowledge the writing on the wall, turning X (formerly ‘The Hellsite’ or ‘Twitter’, to non-users) gradually into a paid-for service. Whether that works out for him or not remains to be seen - but it’s at least a change that his competitors haven’t been particularly keen to embrace for the time being, reliant as they are on their free users churning out endless data.

Other services - such as Substack - are going with a business model that actively promotes a more traditional ‘get what you pay for’ style of social media: users can subscribe to the writers they like, and pay for those they want to support. It’s up to writers to market themselves and to determine their pricing and what users get when they do subscribe. The fact that this is often presented as some kind of ‘new’ business model is very telling - and reveals just how bizarre the digital economy (and, to an extent, the legacy publishing and journalism industry) really is. The only way Substack’s business model could be more traditional and ‘normal’ is if writers had to build a brick-and-mortar shop to sell their wares.

Social media platforms themselves are diversifying. The new buzzword in social media circles is ‘federation’: a set of protocols and systems which enable users to draw content from, and publish content to, a multitude of sources. Services are loosely connected via protocols which allow individuals, if they are so inclined, to more carefully isolate their information and ‘own their own data’. Networks like Matrix and Mastodon enable users to self-host their own ‘home server’ - and then connect it to any number of external servers depending on the content and communities they choose to interact with. This is in stark contrast to the bigger, centralised services which currently dominate the landscape, forcing users into a walled garden in which the content they consume is more carefully curated than they might suspect.
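To make ‘federation’ a little more concrete: before two servers can exchange anything, one has to be able to discover the other’s users. Mastodon and its cousins do this with the WebFinger protocol (RFC 7033). Here’s a minimal Python sketch of that discovery step - the handle is hypothetical, and error handling is kept to the bare minimum:

```python
import requests

def webfinger(handle: str) -> dict:
    """Resolve a fediverse handle like 'alice@example.social' via WebFinger.

    WebFinger (RFC 7033) is the discovery step Mastodon and other
    ActivityPub servers perform before exchanging any content.
    """
    user, _, domain = handle.partition("@")
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical handle - any Mastodon account resolves the same way.
doc = webfinger("alice@example.social")

# The 'self' link points at the user's ActivityPub actor document, which
# any other server can then fetch to follow the user or pull their posts.
actor_url = next(link["href"] for link in doc["links"] if link["rel"] == "self")
print(actor_url)
```

The point is that no central authority sits in the middle: any server that speaks the protocol can find, follow, and exchange content with any other.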

In Self-Hosting Shenanigans, I explained some basic approaches to self-hosting services and tools, and will return to the topic in the future to demonstrate how we might do the same for social media services.



A common theme in this ongoing shift in digital services is a return to deliberation, alongside an enormous evolution in curation. Whether you produce, consume, or do a little of both, we appear to be arriving at a fork in the road. We can accept the digital landscape as it is: a handful of massive companies providing ‘free’ services in exchange for our data. Or we can choose an admittedly ‘messier’ path, one which affords more control, in which we as individuals commit to the things we actually care about: putting our hands in our pockets to support the things we value, and unshackling ourselves from ‘the algorithm’ - which determines who and what we see based on the whims of people and businesses we have no relationship with - in favour of carefully considered connections directly to the people, businesses, and communities we are interested in.

If we don’t take the second path, then the future is likely to look very much like the past - ever more tightly controlled, curated by unseen algorithms, and funnelling profits into the hands of those who deserve them least, at the expense of your identity and privacy.

It’s unlikely that the tendency for online companies to trade in our freely provided information will go away overnight, but it seems to me that we can take practical steps right now to detach ourselves from such a pernicious business model and start to take things into our own hands.

Whether you run a business, write, build products or services, or just use the internet for entertainment, we should try to remember that we have more power than it might appear, and with a little work, we can establish ourselves today much more easily and effectively than our analogue ancestors.

When it comes to social media, we should consider more carefully the value we get from adding our voices to the maelstrom, and the value we get from listening to people who speak a lot and say little.

At your fingertips, right now, is a device which - with a little care and consideration - can empower you in ways you might never have considered. It’s your choice how it’s used, but I hope that we collectively choose to do more than we say, to create more than we consume, and to take back control where we have ceded it to those who have profited enormously from our great online experiment.

A return to the Wild West of the digital era is looming, I believe - and it will be hastened, for good or ill, by the advent of viable AI tools which shake up our relationship with the tech behemoths we’ve become accustomed to relying on. Our job, as curious, creative apes, is to identify and embrace what it means to be human in an age where the way we interact with the digital world is rapidly changing. We are the builders, and we get to choose.


Thanks for reading! Feel free to share this post, and if you’re not already subscribed, hit the button below. Any subscription at all is appreciated, but a paid subscription helps me to write more regularly!



In Part 3, I’ll discuss the third, and possibly the biggest, problem with our modern world - one which is only going to get worse with the proliferation of AI tools and services: A Skill Issue.

]]>
<![CDATA[Shattered Silicon: Part 1]]>https://tomhalligan.substack.com/p/shattered-silicon-part-1https://tomhalligan.substack.com/p/shattered-silicon-part-1Sat, 04 Nov 2023 20:54:30 GMTWelcome New Readers!

Before I continue with today’s post, I’d like to welcome the influx of new subscribers I’ve had over the past few weeks. It’s been around a month since I had my 100th subscriber, and since then I’ve been averaging a handful of new readers every day. Thank you to everyone who’s taken the time to subscribe, share, and engage with my work here - it means a lot! I hope you all stick around and that you find something here that entertains, educates, or gives you something to think about.

In this 3-part series, I’ll lay out some of my thoughts about where we are with modern technology, and some of the problems I hope we can address.




Introduction: The Digital Divide

I first heard the term ‘Digital Divide’ many years ago, during a conversation about a problem facing the UK workforce: how to help secure jobs and improve the skills of workers during a period of rapid change in the world’s relationship with technology.

Back then, 20-something years ago, I understood the term to refer to the gap between an older generation which was generally unfamiliar with digital technology, computers, and the internet, and a younger generation that was growing up with the skills of the future (now, somehow, the present!) being fostered within them from an early age. As a teenager, I could understand in an abstract sense that ‘making people use computers’ - where previously there had been none - would lead to some people struggling; but I had no direct experience to truly understand it. The older people in my life may not have been particularly skilled with technology, certainly, but I didn’t get the impression that they cared all that much anyway. I wasn’t exposed to people trying - and struggling - to use computers: people seemed to manage as much as they felt they needed to, and that seemed about as balanced a situation as we could hope for. The ‘Digital Divide’ remained, for me, something that other people needed to worry about, and I got on with my life, blissfully ignorant of the impact - good and bad - of modern technology outside of my own bubble.

When I first became aware of the Digital Divide, the experience for most people - at least those who were lucky enough to have access to modern technology in the first place - was fairly uniform: If you ever used a computer, it was probably a Windows PC at work or school, and some families had a single PC at home, or, if so inclined, a games console. Internet access was far from ubiquitous, and those who could get online were mostly on a dial-up connection - preventing the use of the telephone whilst someone was online, and vice-versa. Software came on floppy disks - later CD-ROM - and though it was technically possible to download programs, doing so wasn’t a particularly streamlined or straightforward experience, and so many people never bothered.

Contrasted with today’s consumer landscape, the late 80s / early 90s felt like another universe altogether. Today, we carry computers in our pockets that outstrip even the most powerful home PCs of my childhood, and the only time we’re ever offline is when we’re out in the sticks (and even that is becoming less of an issue), or deliberately choosing to look up from our screens in an attempt to bring some reprieve from the onslaught of information that characterises digital life in the 21st century. We send and receive more data on a monthly, weekly, or sometimes even daily basis than home computers of the 90s could comfortably store on a hard drive, and we perform tasks with a few swipes of our thumb or clicks of a mouse that people just a decade or two ago would have thought unimaginable.


The people needing help, back when I first heard the term, have by now likely left the workforce, and though we probably all have memories of helping an older relative or friend navigate some modern tech, it’s easy to think of the Digital Divide as a ‘problem of the past’. Modern tech is now all around us - at work, home, and school - and for those of us who grew up with it, we’ve watched those who may have struggled at work retire, and, outside of the worst cases, find some kind of harmony with the tech which at one point may have caused distress.

Today, the term ‘Digital Divide’ has become diluted into one of those somewhat ambiguous catch-all terms that hint at a problem but don’t provide the context necessary to understand the present - and future - dilemma we face. Broadly speaking, it’s possible to point at any particular area within the technological landscape and identify those who ‘can’ and those who ‘cannot’. Those who are able to benefit, and those who are not. It’s easy to reel off problems with access, experience, and accessibility - and once those issues are identified, it’s not such a great problem to imagine and implement solutions to them. Modern technology can be forgiven, I think, for not being perfectly suited to every person in every situation - it’s unrealistic to imagine that every potential problem can be anticipated in advance.

The problems that the term ‘Digital Divide’ was originally conceived to describe still exist today, though they may perhaps manifest themselves in different forms. Socio-economic, cultural, accessibility and educational factors which limit or prevent people from taking advantage of modern technology are all important problems which we should strive to address - but even for those for whom access is not the issue, there remain problems which are often more abstract and difficult to define.

I believe that the sheer pace of progress and the current ubiquity of modern technology have blinded us to a set of problems which I fear, taken together, risk a deep fragmentation of our relationship with technology and prevent us from taking full advantage of all that’s on offer.

The first of these problems - and the subject of this first part of Shattered Silicon - relates to how we design the modern systems and software that permeate every aspect of our lives and transform our relationship with services and institutions.


The Tyranny of Simplicity

There’s an often-paraphrased quote from Wind, Sand and Stars by Antoine de Saint-Exupéry - that I’m personally very fond of - which goes:

In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away…

The application of this philosophy in the technological world is very common - and plain to see. Companies spend vast sums of money and time figuring out what not to build, and, speaking from direct experience, there’s often a great deal more satisfaction to be found in deleting code than in writing it. Stripping a system down to its simplest, most elegant form is almost an art - but what we often fail to understand is that humans are not simple and - in my case at least - often not very elegant either. In my post The Warehouse Of Horror, I mentioned the challenge of identifying edge cases in software design (circumstances which fall outside of the expected norms) and the problems caused when developers fail to take them into account.



Our lives are full of edge cases. No two people share exactly the same circumstances, but modern digital infrastructure is often built not to accommodate our different situations, capabilities, desires or backgrounds, but instead to standardise and regulate the more flexible analogue systems they are ultimately designed to replace. Whereas once you could walk into a building and speak to another human being, who would (at least, in theory) respond to your specific circumstances and tailor their services to your needs, it is often the case that the human face of organisations, institutions or even the government is reduced to that of an interactive sign-post. They may point you in the right direction, but ultimately you will be routed through a series of unyielding digital checkpoints that will no more bend to your will than they might comprehend your common-sense protestations that ‘the system’ may well be ‘more efficient’, but is still incomprehensibly stupid.

Aside from the academic and socio-economic analyses of the Digital Divide, we should be mindful of the risks of building modern infrastructure and services that are so tightly regulated and standardised that their ‘perfect customers’ are the architects themselves: people who don’t need to use the systems they build but who nevertheless refer to a dashboard of statistics and focus groups that confirm the perfection of their designs.

Whether it’s a private company or a public service, the move to digitise services, simplify and standardise processes and ‘improve efficiency’ can - and often does - result in new systems and processes which eschew flexibility and ‘the personal touch’ in favour of a more rigid and impersonal approach. Staff trained on these systems are usually unable to step beyond the constraints imposed upon them, leading to brittle and often heated interactions with customers or service users who might be frustrated by the obstinate refusal of ‘the system’ to adapt to their needs.

This way lies madness. We cannot - and should not - expect humans to confine themselves to the constraints of the ‘perfect user’. The systems and services we build using modern technology should, instead, acknowledge that every user demands a unique approach - and that in the confrontation between man and machine, it is the machine which should adapt to our needs, rather than us contorting ourselves into the shape most easily digestible by the system.


Embracing Complexity

I believe there’s cause for hope here. Much of the frustration and inflexibility we may often find in the design of the modern digital world is, I suspect, a result of an attempt to convert complex human interactions into processes which can be navigated through interfaces that are, despite decades of research and improvement, fundamentally incapable of anything beyond a relatively basic set of inputs and outputs. It is simply not possible to convey human experience through a mouse click, nor to appeal to the compassion of a digital application form. The simplicity of the interface has no capacity for the complexity of our thoughts or needs. Though we should be careful not to assume that the solution to problems caused by technology is more technology, it’s worth considering that we currently stand on the precipice of an enormous shift in the way we interact with the digital realm.

In What The Vision Pro is Actually For, I argued that despite some hilariously dystopian overtones, the concept of ‘Spatial Computing’ may open up space for much more intuitive and flexible interactions with modern systems. Imagine a world where systems could respond to the subtle social cues we give each other every day, react to gesticulations and vocal tone, and where your hands are free to interact with a digital system in ways similar to those you might use for any other real-world process. It’s something of a sci-fi trope that the future of humanity is dispassionate, functional and practical - closely mirroring the cold logic of the machine. What if that weren’t the case? Instead - what if the future is one in which you’re free to interact with the digital world with all the emotion and complexity that is our gift as humans?



Developments in AI will undoubtedly help here too - despite the worst-case-scenario fears. It’s already impressive how freely one can talk to an AI system like ChatGPT, and how it interprets ‘ordinary language’ to produce results which make sense. Obviously, the problem of ‘understanding’ is still an enormous barrier - and it’s by no means certain that it’s one we’ll ever be able to overcome. However, we may still see improvements to the current status quo even if true comprehension from the machine eludes us.

To illustrate more clearly how I see a potential future - take a look at The Open Interpreter Project. Open Interpreter (OI) is software which allows you to ask something of the device, and, using the power of LLMs, watch as your machine springs to life, designing, building, and executing the right software to achieve the task at hand. Arthur C. Clarke once famously wrote in Profiles of the Future: An Inquiry into the Limits of the Possible that

Any sufficiently advanced technology is indistinguishable from magic.

The Open Interpreter Project - and systems like it - are, I believe, the closest thing to magic that we currently have. The idea that one day soon we will be able to speak to a machine the way we would any other human, ask it to perform some task the way you might a colleague or a friend, and have it design and implement bespoke software to do exactly what you asked of it, in the way you specified - is astonishing. Clearly - we are a long way from that reality right now, but the fact that we are seeing the assistant of the future taking its first steps today is incredible to me.
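For a flavour of what this looks like in practice, here’s a minimal sketch using Open Interpreter’s Python API roughly as documented around the time of writing (`pip install open-interpreter`). Treat the exact calls as illustrative rather than definitive - the project is young, and its interface is evolving quickly:

```python
# Minimal sketch of Open Interpreter's Python API, roughly as documented
# in late 2023 ('pip install open-interpreter'). Illustrative only.
import interpreter

# By default, Open Interpreter asks for confirmation before executing any
# code it writes - a sensible safeguard, since everything runs locally.
interpreter.auto_run = False

# Ask in plain English; the model plans, writes, and (with your approval)
# runs whatever code the task calls for.
interpreter.chat("Find every PDF in my Downloads folder and list the ten largest.")
```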


A Human Revolution

The answer to the Tyranny of Simplicity is not, in my view, to reject the digitisation of systems and services, but to acknowledge that there is no such thing as the perfect user - and to build those systems with this fact in mind. We should strive to embrace complexity, diversity and ambiguity - to build systems which adapt to the user rather than imposing a rigid, simplified set of requirements and constraints. User interfaces should respond to the way the user thinks and moves - rather than assuming that there is one perfect design which everybody can navigate freely. Accessibility considerations should be applied seamlessly, rather than being tucked away in some hidden sub-menu.

Though we’re quite a way off from such a world right now - I’m excited to see how our interactions with modern technology, services and systems will evolve over the next few years. Certainly, I see no obvious reason why navigating modern technology should become more rigid and prescriptive than it currently is - and though I’m not one of those breathless techno-evangelists who believes technology is always the answer, I do feel that the next few years will be an exciting opportunity for us to design and build a much more natural, expressive, and human-oriented digital world than we’ve so far been able to achieve.

Though it’s clearly unrealistic to imagine that we may end up with systems & interfaces that can be all things to all people, I do believe intelligent agents will play a huge role in the next phase of our technological evolution. Since we’re unable to design systems that suit everybody’s needs, we should instead build agents and assistants that can interpret our requests and instructions, and figure out how to get the result we want. Rather than building one app through which we expect users to input and manipulate information to achieve their goals - why not allow users to employ an artificial assistant capable of building the right software for them, at the right time? There is a paradigm shift on the horizon, I feel - and it’s my hope that the coming years will reveal a future where technology that embraces the complexity of human life is entirely possible.


Thanks for reading. In Part 2 of Shattered Silicon, I’ll discuss the problem of Broadcast Culture - where people have eschewed action for reaction.

If you’ve enjoyed this post, please consider subscribing. If you can afford it, a paid subscription is greatly appreciated - but if not, you’re more than welcome to stick around - I hope you find something you enjoy!


If you’re still here - why not check out some of my other, semi-related posts?

]]>
<![CDATA[How To Ruin Gaming For Everyone]]>https://tomhalligan.substack.com/p/how-to-ruin-gaming-for-everyonehttps://tomhalligan.substack.com/p/how-to-ruin-gaming-for-everyoneThu, 28 Sep 2023 21:53:04 GMTIt’s been a very noisy few weeks in the world of game development, after Unity - the long-time engine of choice for indie devs, freelancers, students and small studios - decided to bundle up all of the (admittedly never exceptionally stable) community good-will it had accumulated over the years, and kick it into a blender by announcing a new fee structure with no warning, and drastic implications.


Let’s Make a Mess!

The problem, according to Unity (the company), is that Unity (the game engine part of the company) is not profitable. The solution, apparently cooked up by people with candy floss for brains, was to start charging game developers a fee based on the number of installs of their game - the ‘Unity Runtime Fee’ - in addition to licensing fee changes and revenue share arrangements.

Whilst developers would always grumble over increased costs for licensing or imposition of revenue-sharing models - the per-install fee was received as an offensive, poorly planned over-reach - and that’s putting it politely.

Obvious questions were raised:

  • How are installation metrics calculated?

  • What happens when Unity’s figures don’t align with a developer’s figures?

  • Who is liable for the per-install fee - developers themselves, or publishers & platform-holders?

  • What protections are in place to prevent bad actors from simply installing a game millions of times and drowning a developer in fees?

  • What happens when a user uninstalls a game to save space, and reinstalls it later?

  • What about games which are delivered as part of a charity or promotional bundle - where the developers themselves may never receive any payment?

Satisfactory answers were not forthcoming, and so, in true 21st-century fashion, the game-dev internet exploded in fury. Developers were outraged. Much gnashing of teeth occurred. Studios and publishing platforms began issuing statements announcing their frustration and demanding answers. Indie devs declared that they were switching engines for current and future projects.

Yes, I stole liberated this image

Though it’s hard to argue that Unity should be prevented from making a profit, the imposition of the Runtime Fee - and the subsequent failure to communicate with developers impacted by it - has certainly secured a place in history as being one of the Worst PR Moves Ever. For almost two weeks, Unity’s embattled social media team tried, pointlessly, to respond to developers and the wider game-dev community, armed with nothing but, presumably, caffeine and prepared statements.

I think I’ve seen this exact response hundreds of times, I’m not joking

Predictably, every response invited further prodding and mockery. The lack of concrete answers to basic questions such as ‘How does Unity know how many people install my game?’, and ‘What happens if my game is successful and the runtime fee bankrupts me?’ led to Unity being variously accused of installing spyware on people’s devices, sacrificing privacy for greed, breaking numerous data protection laws, and opening up entirely new and novel attack vectors for malicious people to infect customers’ machines and bankrupt studios by running installer-bots. Enough hyperbole was generated to power the internet outrage factory well into the waning days of September. At its core, the runtime fee exposed two major issues:

  • Developers could not easily plan for install-related costs, and were left angry and worried about their ability to weather the cost if a game actually became successful (which may be an incredibly unlikely problem to have, but not one without precedent - and every indie dev dreams of, and hopes for, a viral hit).

  • Unity announcing such a change in this way, and with such poor follow-up, severely damaged trust and forced developers to rethink their relationship with the engine.

There was also significant concern over the ‘retroactive’ nature of these changes, with Unity’s wording suggesting that games released prior to the imposition of the runtime fee would also be affected. It didn’t help that Unity had, for a hilarious & nonsensical reason, removed from their GitHub repository the Terms of Service which protected developers from retroactive changes: a change which had actually occurred long before the fee imposition. Nevertheless, it became another stick with which to beat the now-desiccated remains of the Unity social media team.

Delete the law, nobody reads it anyway!

Console game developers were particularly irked about this issue: when developing a game for a console such as the PlayStation or Nintendo Switch, it’s not always possible for developers to simply choose a particular version of a game engine and stick with it. Instead, they must comply with whichever version of the game engine the console’s SDK supports. This meant that games which were well into development for console release had suddenly had the rug pulled from under them - there was no way to avoid the new fees even though development may have begun years earlier, before they were but a sparkle in Unity’s eye.

It didn’t take long for Unity’s CEO - John Riccitiello - to become the target of much of the ire directed at Unity. Riccitiello himself made headlines in 2022 for calling game developers ‘fucking idiots’ if they didn’t add monetisation to their games beyond the initial purchase price. He later apologised, but this interview, other decisions at Unity, and his history as CEO of EA (it was under his tenure that the controversial decision was made to add loot-boxes to FIFA 09) have done little to persuade the game-dev community that he’s the right person to be steering the ship. There’s also a comically evil leaked recording of Riccitiello on a shareholder call, mooting the possibility of charging players of the extremely popular Battlefield franchise a real-life dollar to reload their fictional guns:

When you are six hours into playing Battlefield and you run out of ammo in your clip and we ask you for a dollar to reload, you’re really not very price sensitive at that point in time.
- John Riccitiello, Joy-Sucking Capitalist

All of this culminated, terrifyingly, in death threats which resulted in Unity closing two of its offices. Police later revealed that the threats were made by a Unity employee, though no further information has been forthcoming thus far.


The Unpology

On the 22nd of September, Unity issued an ‘apology’, in an open letter from Marc Whitten (who, by all accounts, seems to have been thrust into the middle of this debacle through no fault of his own), and clarified some of the issues raised around the runtime fee. Though there was to be no back-tracking on the Runtime Fee itself, Unity have responded more clearly to certain concerns, and do appear to have softened their stance a little on data collection - allowing developers to self-report their installation figures rather than Unity simply asserting their own black-box metrics (though they do appear to reserve the right to calculate their own figures and charge you based on those numbers anyway). The issue of retroactivity of the new fees was cleared up - though not particularly satisfactorily: the new fees will only apply once a developer releases an update which happens to use “the next LTS version of Unity shipping in 2024 and beyond”.

What this means, in practical terms, is that Unity games released on an older version of the engine will simply stop being updated much earlier than developers and players would usually expect: which means more bugs and performance issues left unresolved. Unity already suffers from a (some-might-say deserved) reputation for a glut of low-quality games, and I would expect the practical impact of the Runtime Fee to do little to curb that perception.

The community response to the open letter has been guarded at best. Some, like Rami Ismail, welcomed the clarifications, though left little doubt that trust had been severely damaged:

Rami’s response to the open letter

Others were rather less enthused - with Among Us and Cuphead developer Tony Coculuzzi stating that it was “Still a terrible change, if slightly better”:

Tony pulls no punches

Disappointed, too, are asset and tool developers, who have taken Unity’s direction as a sign of things to come.

Freya Holmér, creator of the excellent Shapes and Shader Forge tools, made her exasperation with the situation clear. It’s not certain whether Freya plans to continue supporting the Unity development ecosystem in the future.

Kenney - creator of game assets and starter kits - has made clear their intent to support alternative engines, and appears to be foursquare behind the open-source Godot engine.

Kenney issues a statement of intent

All in all, it’s fair to say that though Unity may have poured a little water on the fire they started, it’s by no means certain that they will be able to restore their relationship with the community, as developers begin looking elsewhere for a more reliable development partner.


Cui Bono?

Though it’s patently clear that Unity has brought upon itself an enormous trust issue - the simple fact remains that many developers and creators are, for the time being, too invested in the engine and its ecosystem of services and tools to cut ties immediately. Unity’s pricing changes bring it closer into line with the Unreal engine, which means that for future projects, many developers will be comparing the two on a much more equal footing. Whether Unity manages to retain its share of the developer market in the long term remains to be seen: Epic - the creators of Unreal - have invested heavily in ease-of-use in recent years, but Unity is still regarded as the more accessible engine by many.

In all the noise, open-source engines have seen a groundswell of support as developers explore alternatives to Unity.

Re-Logic - the company behind indie best-seller Terraria - issued a statement expressing their disappointment in Unity’s actions, and donated $100,000 to both Godot and the FNA engine, along with a commitment to donate an additional $1,000 per month.

Re-Logic’s statement

AppLovin, who had attempted to acquire Unity in 2022, released a proof-of-concept tool leveraging OpenAI’s ChatGPT to allow developers to export their Unity game to Godot or Unreal, which is both a middle-finger to their erstwhile partners-to-be, and an interesting test-case for AI’s ability to increase dynamism within the game development process.

Finally, in an unexpected turn of events, Asbjørn Thirslund - creator of the immensely popular but retired-since-2020 Brackeys YouTube channel, dedicated to providing tutorials and educational content for game developers - today released a statement hinting at a potential return, with a focus on Godot, and echoed some of the sentiment from Re-Logic’s statement:

Asbjørn / Brackeys statement

Brackeys holds a special place in the hearts of many Unity developers, and their strong advocacy for Unity over the years, coupled with their accessible and informative tutorial videos and content, has played no small part in Unity’s growth, especially amongst the indie and student communities. Asbjørn’s statement stops short of confirming that Brackeys will be returning, but it’s an encouraging sign for the Godot development community that he is learning his way around the engine - and if Brackeys are able to do for Godot what they did for Unity, then it promises to be an exciting future for open-source game engine development.


Tidying Up

Though it’s been an exciting couple of weeks in terms of raw internet drama, the frustration and alarm caused by Unity’s botched fees announcement led to some ugly real-world consequences. With death threats and development chaos, Unity’s announcement threw a huge spanner in the works for many developers - whether individuals or studios - and has forced them to re-evaluate their business plans, re-negotiate terms with publishers, and reconsider their options. This is a costly exercise in the game development world. Re-tooling and retraining can be an expensive and time-consuming process, and Unity’s poor communication and planning does not imbue the development community with a sense of confidence and stability.

Many developers will continue to use Unity for the foreseeable future - willingly or otherwise - but it’s clear that Unity’s position is a game-changer for many; and the next few months and years will show whether developers are still prepared to play.



Thanks for reading. If you’ve enjoyed this post, why not check out these related posts, in which I explore a little open-source game engine called Tiny:

]]>
<![CDATA[The Warehouse Of Horror]]>https://tomhalligan.substack.com/p/the-warehouse-of-horrorhttps://tomhalligan.substack.com/p/the-warehouse-of-horrorThu, 07 Sep 2023 00:20:36 GMTSometimes in software development, a project can seem to go perfectly well, but still result in disaster. This is the story of my first major taste of such a project, how I dealt with the immediate fallout, and the lessons I learned in the process.

This is a fairly long post, so I’ve tried to keep it entertaining - but if you’d rather skip to the end for the ‘Pro Tips’, feel free (though you will be missing out on a good story and some excellent imagery, if I do say so myself).

Names and some details have been changed to protect people’s anonymity.

One Angry Man

It was an unremarkable morning at the end of a grey British summer when The Call came. Though it wasn’t a daily occurrence, there was nothing particularly unusual about receiving a phone call straight to my desk - I had several clients who had my direct number - but in the world of consultancy, there were really only two possible reasons a client would reach out to me rather than their primary contact, which was usually a salesperson. Either the client had an idea for something new they wanted to add to their system - the good kind of call, which meant I could control what happened next and earn a little commission - or the client had a problem and knew that the fastest way to get it fixed was to start a fire - the bad kind of call. This, unfortunately, was the latter.

“Tom, we’ve got a serious problem here - this doesn’t work at all!” announced the thick, Scottish voice before I’d managed to place the phone to my ear. Terry. I winced. His voice was loud enough that everyone around me could hear.

I liked Terry - a big, burly man approaching retirement, who dominated whatever room he was in. His job was to manage the distribution hubs for a national retailer - and my job, in this case, was to make his life easier and build a system to allow him to digitally track the packages going in and out of the warehouses he managed. We had a good relationship, but he took no prisoners when it came to getting what he wanted - which was more of a problem for management than it was for me. Most of our conversations were based around relatively minor technical details and UX concerns: the overall design of the system we were building had been agreed months before either of us had been introduced to one another.

Though we occasionally had difficult conversations, I appreciated Terry’s blunt assessment of things he didn’t like, and I like to think he respected the fact that I was always ready to hear him out and give him an honest response, even if I had to disappoint him. He had a reputation amongst my colleagues for being difficult, but I kind of enjoyed the fact you could have a bit of a back-and-forth with him. It was more interesting than most of the meetings I had to sit through, at least. We’d deployed the system he was calling about just a day or two earlier.

“Oh - I’m sorry to hear that Terry, what seems to be the problem?” I replied, switching into that sickly-sweet Customer Service mode. Probably, in hindsight, a mistake.

“WELL IT DOESN’T F***N’ DO THE JOB!”, he bellowed, suddenly irate - probably didn’t appreciate my tone. All around me, heads turned. Eyes widened. Faces (well, mine) reddened. Uh-oh.

An accurate depiction of my reaction

If you’re in a technical role, there’s a good chance you’ve heard “It doesn’t work” before - and there’s an equally good chance that you’ve dealt with such complaints with a request for more specific information, error codes, logs - something that actually helps you understand the nature of the problem. It’s not unusual - but it’s not pleasant to have such a complaint shouted at you by a 7-foot man who could snap you in half without strain and sounded ready to do it.

“OK - can you tell me what exactly is going wrong? Where in the system are you seeing the problem?” asked I, naively. The project involved several major components - databases, web frontends, mobile applications, and domain-specific hardware. There were many potential points of failure - and something going wrong somewhere in the mix was to be expected on occasion. But not, ideally, days after rollout.

“The problem is the f*****g system, Tom! It’s useless to us! First of all…”

An hour and a half, pages of notes, and several pints of sweat later, we ended the call with an agreement that we’d set up a meeting to discuss what had gone wrong, and who needed to be thrown to the wolves as a result. Obviously, the unspoken assessment was that it was I, in all my early-20s fresh-faced innocence, who was to become lupine luncheon.


The System Is The Problem

“Are you okay?” was the first thing I heard after putting the phone down. My boss looked as though she was ready to cry on my behalf.

“I’m alright, but something’s gone badly wrong here” I replied, exhausted.

I was shocked at the sheer number of problems Terry had raised. A few were fairly minor issues that could be resolved in a few hours - UX improvement suggestions, and a couple of performance issues that manifested once they put the system under serious strain - but the vast majority of his complaints, and the source of his frustration and anger, were about missing features: things which had never been on my radar, for which I had no documentation, and which Terry and I had never discussed, but all of which suddenly seemed like such basic, obvious requirements - at least the way Terry had complained about their omission.

To my mind, it was as though the fuzzy image in a telescope had gently shifted into sharp focus, revealing a planet-destroying asteroid heading straight toward Earth.

Everything he’d listed felt reasonable, and seemed like the kind of thing which we would, ordinarily, have pitched to the client ourselves. Poring over the design documents and tech specifications for the project, I was aghast at the disparity between what we’d built and what Terry now claimed he expected. How could we have possibly ended up here? And why did Terry - the person I’d had the most contact with out of everybody involved in the project - seem to be under the impression that not only was I aware of these things, but that I’d simply neglected to implement them?

You see, we were meticulous when it came to documentation. Everything we built went through several iterations of design and planning - and nothing was built without accompanying documentation and explicit agreement between ourselves and the client. This served us very well in the industries we serviced: accountability was everything and for most of our clients, the ability to audit the entire system from top to bottom was a must.

Though the initial gut-wrenching reaction to Terry’s phone call subsided as I started to piece together what had happened, I started to feel my own sense of anger and frustration rising. There was one clear difference between this project and virtually all others we’d engaged in before: the involvement of a third-party consultant with whom I’d had little personal contact, but who nevertheless had immediately struck me as the kind of man who’d try to sell you tartan paint. I don’t recall his name, but for the sake of the story let’s call him Bill.

Normally, we would work directly with our clients. After the initial sales pitches and agreements were done and dusted, it was my job to travel to the client’s place(s) of work and spend time analysing their processes, identify areas that could benefit from digitisation, take on board the many, many quirks and peculiarities that every business ends up accruing over time, and at the end of it all, produce a design for a system which would bring them out of the stone age and into the digital era. I enjoyed this process a lot - I like seeing how things work, and I’d meet all kinds of people at every level in an organisation. It was the most interesting part of the job for me. Though I also enjoyed the actual implementation phase, the systems we designed rarely required anything technically challenging or included anything particularly interesting to build.

In this instance, however, Bill had done most of the preliminary work. He’d been hired by the retailer - Terry’s employers - to analyse their processes and build or buy a system that did what they needed. At some point along the way, Bill had decided that the new system would need to be bespoke and approached us to build it. This was unusual for us - normally clients would approach us, or we’d pitch to them directly. Bill was eager to position himself as the person ‘in charge’ of the project - though he was technically a third-party consultant. He seemed to enjoy brushing off our suggestions and ideas in favour of his own - much to our frustration (we could earn a commission if clients bought things we suggested ourselves) - but hey, the customer’s always right!

BUSINESS BILL

Bill’s notes about what was required had been relatively light, but acceptable for our purposes. He had covered the broad brushstrokes of the system he wanted us to build, and, though we hadn’t done the initial in-depth process of discovery ourselves, we were assured by the client that Bill had done this kind of thing many times before and that as long as we followed his lead, we’d all be fine. They weren’t going to pay us to do something Bill had already done. Fair enough. We tightened up our understanding of Bill’s brief, filled whatever gaps in functionality we could find, and once we all agreed on what needed to be built, the almighty contracts were signed. Design documents were produced and technical specifications were agreed upon and signed off. Bill was happy. We were happy. The client was happy.


Many months passed, and the system was coming along fine. Bill had gradually taken a back seat. He was kept in the loop with important decisions, documentation changes, and technical information, but essentially, his involvement had become something of a box-ticking exercise as time wore on. Occasionally he would appear and ask for a demonstration of the system as a whole, which we dutifully provided, and then he’d disappear again, content and with little comment, to report back to the client. No news is good news, as they say!

Terry, on the other hand, was much more focused on very specific areas of the system. He had a lot of suggestions and feedback on a handful of particular components, and he and I worked closely to make sure his feedback was taken on board and that those areas did exactly what he needed them to do. I drove out to the nearest warehouse several times to meet Terry and the staff who’d be using the system. Everybody seemed positive - things were going as expected, and it all looked like we were on track for a successful deployment.

The deployment date arrived, and everything went smoothly. Servers were up, databases were online, all of our tests and checks passed, and the warehouse staff began using the system. It all seemed to be going great. Everybody who needed to use the system had been given credentials and relevant documentation, and I sat back and watched as live data started to stream through. All systems go, no alarms, a textbook successful deployment.

Until…

Side-Effects And Edge Cases

As software developers, we become accustomed to watching out for edge cases - usage or circumstances outside of the normal expected operation - and building software that handles them gracefully. Sometimes you treat an edge case as an error because it is an error. That thing should never happen, and you want to stop it from becoming an issue. Sometimes you need very specific behaviour to handle a particular situation which doesn’t occur often but might occur occasionally. Sometimes, you need some combination of the two: specific behaviour to handle a situation but to raise it to relevant parties as a problem that should be investigated and avoided in the future. Context is often important when it comes to handling edge cases: we can’t build software that takes into account every potential situation, but we should strive to build software that doesn’t break whenever an edge case occurs.

Similarly, side effects should be identified and taken into consideration wherever possible. Usually, you want to make sure the code you write has as few side-effects as possible - and that the primary impact of rolling out your software is that the underlying goals are met. Secondary benefits are usually welcome of course, but sometimes, benefits collide with edge cases in unforeseen ways.

In this case, it became apparent that the primary cause of Terry’s angst was not that the system was breaking in any particular way, but that it didn’t cater for particular edge cases which, as a side-effect of the system rollout, had suddenly been elevated from occasional issues to continuous problems.

Just one example - amongst the many Terry alluded to - was how the system handled incorrect packages being sent down to the warehouse. To put it simply: it didn’t.

Though it was possible for users of the new system to identify a package as missing or damaged, we had no integration with the rest of the company’s fulfilment system, and attempting to scan an incorrect package essentially resulted in a ‘Package Not Identified’ warning, and little more. Absent a much closer integration with the client’s other software, we had no way of knowing whether an incorrect package was a mistake in the manifest - and that the package should be sent for delivery - or whether it had simply been sent down the conveyors in error (which was actually the case for the vast majority of such instances). To make things more frustrating, the client had no real process in place to reconcile the problem when it occurred anyway: it wasn’t supposed to happen in the first place, but it did. Prior to the new system being rolled out, throughput was low enough that this scenario could be handled easily enough: yank the package off the conveyor belt, and then send it back upstairs when things quietened down - the assumption being that whoever sent it down in error wouldn't make the same mistake twice, or that the recipient would eventually enquire about their delivery and next time round, the manifests would be correct. It wasn’t an ideal solution, but it did the job.
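To put the gap in code terms: the handler we shipped acknowledged the edge case but gave it nowhere to go, whereas the warehouse needed an explicit route for it. A hypothetical Python sketch - none of these names come from the real system:

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    status: str
    message: str

# Roughly what we shipped: the edge case is acknowledged, but it's a dead end.
def handle_scan_v1(barcode: str, manifest: set) -> ScanResult:
    if barcode in manifest:
        return ScanResult("OK", "Dispatch for delivery")
    return ScanResult("WARN", "Package Not Identified")  # ...and that's it

# What the warehouse needed: an explicit route for the edge case, so that
# unidentified packages queue for reconciliation instead of piling up.
def handle_scan_v2(barcode: str, manifest: set, reconciliation: list) -> ScanResult:
    if barcode in manifest:
        return ScanResult("OK", "Dispatch for delivery")
    reconciliation.append(barcode)  # flag for upstream investigation
    return ScanResult("HELD", "Unidentified - routed to reconciliation bay")
```

Handling an edge case gracefully isn’t the same as giving it somewhere to go - and at low throughput, nobody notices the difference.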

It turns out that increased efficiency in getting packages sent out for delivery very quickly resulted in an increase in overall throughput, which itself resulted in an increase in the number of entirely incorrect packages making their way down the conveyors and sitting there, uselessly, whilst staff scrambled to identify them and work out what to do with them. Now that everything was moving much faster, it became glaringly obvious that a serious problem existed elsewhere and that incorrect packages accounted for a significant percentage of the total number being sent down to the warehouse. This was clearly either some kind of admin issue elsewhere in the business, or just haphazard work from tired employees - but it wasn’t really something we were responsible for. It was a valuable insight, to be sure, but an extremely unhelpful problem for us to have made considerably worse.

Ok, it wasn't quite this bad, but it was very annoying

Of course, the possibility of incorrect packages had been known about all along and had been one of the first things we asked about. After we’d enquired about it, Bill had gone away to investigate, and later assured us that this problem didn’t occur often enough for us to spend time building a process to handle it - and so we all agreed that the reconciliation problem was out of scope for us. He was probably right, at the time - but neither we nor he foresaw the increased throughput resulting in a higher number of incorrect packages being sent to the warehouse each day. Now that everything was moving faster, this problem suddenly became very much in scope for our system. Unfortunately, it was too late. To fix the situation, we’d need a few weeks and more money: much too long for the client to wait whilst chaos piled up in their warehouses - and they didn’t want to part with more cash for a problem they perceived to have been caused by us, even if we’d simply exposed an existing issue that we hadn’t explicitly catered for.

What had previously been considered an edge case and something we didn’t have to worry much about, had suddenly become a glaring issue that - to users of the new system - we obviously should have built a process to deal with. It’s very hard to argue otherwise when the client is roaring at you down the phone.

This was just one of the many ‘obvious’ issues which had been exposed thanks to the new system ‘improving’ things in a variety of different areas. The system we’d built - such as it was - worked fine: things weren’t breaking, servers weren’t going on fire, people weren’t being locked out and apps weren’t crashing. It handled the load well, all of our tests passed, and we had no reports of any serious issues with the software itself - but we simply didn’t provide ways to work around a litany of ‘edge case’ problems which became drastically worse now that everything moved much more quickly. Things that Terry and his staff had previously grudgingly dealt with on a case-by-case basis when they could fit them into their schedule had, overnight, become a tsunami of dysfunction that was preventing them from doing their jobs, and ultimately causing huge frustration and alarm for the client. They weren’t edge cases any more - they were just daily occurrences that they couldn’t afford.

This whole situation was incredibly frustrating to me. We’d had weeks of acceptance testing and improvements, we’d worked closely with the users of the system trying to identify as many potential gaps in functionality as we could. We’d asked for - and been supplied with - hundreds of megabytes of test data so we could put the system through its paces. We found optimisations wherever we could, and had the test users feed back to us whenever we made changes.

Ultimately, it felt like it had all been a huge waste of time. I took some comfort from the fact that the software ‘did what it was supposed to do’, but felt like the ground had suddenly shifted beneath my feet, leaving a client bewildered and angry as I sat there trying to avoid saying that these things weren’t our fault.

I knew that this wasn’t just a case of ‘bad software’ being rolled out - but in the heat of the moment, I couldn’t easily articulate the series of process failures that had led to this nightmare situation.

We booked ‘the meeting’. I was warned to expect hostile territory.


A Pyrrhic Victory

“You’re brave,” scoffed Terry when he met me at the car park outside the main distribution warehouse, rain pooling on the ageing concrete. “I would have thought your boss would have come along to back you up. This isn’t going to be a good meeting for you.”

I knew he was half-joking, but only half.

“Maybe”, I replied. “It’ll be alright”.

Terry didn’t comment, but I could tell he was surprised at my apparent confidence.

I’d spent the days between Terry’s initial call and the day of our meeting investigating, going over paper trails, and building up a case for the defence. Though I felt more confident that I could squirm out of this mess, it had all left a bad taste in my mouth - ultimately I simply wanted the client to be happy, and to be proud of the work I and the rest of my team had done - but I had been thrust into a horrible position: I suddenly needed to protect my own employer from repercussions.

We walked over to the meeting room - a portacabin unassumingly situated directly next to a helicopter landing pad - and went inside. There were three men present already - polite with their greetings but clearly ready for an argument. Two were representatives of the company itself - I don’t recall their positions exactly but it was clear that their job was to work out why on earth such an expensive disaster had happened. We’ll call them Mike, and John. The other was Bill, whom I’d not seen in person for several months but who looked suspiciously well-rested and had the sun-stroked sheen of a man who’d just rolled off a cruise ship.

Introductions were made, cups of tea and coffee offered around the room, and then we got down to business.

“As you’re aware, Tom, we’re not very happy with the rollout of this system”, started Mike. “We’ve spent a lot of money here and we need to know what you’re going to do to fix this.”

“I appreciate it’s not gone well”, I replied. “When Terry called, I was shocked to hear about the problems you’ve had. I don’t understand how we ended up in this position, but we’ll happily do what we can to get any problems sorted.”

“That’s good to hear,” said John, “but you should know right now that we’re not prepared to spend any more money to fix things you said were already in hand. As far as we’ve been told - all we’ve heard for months is that everything’s on track, and now we find out otherwise. It needs sorting ASAP.”

Bill nodded in agreement. Terry sipped his tea and nibbled at a biscuit. I got the impression he turned up largely for entertainment rather than to actively involve himself.

“Of course,” I nodded, as I pulled out copies of all the documentation for the project I’d been able to find, and that I’d sat up all night printing. “I think it’d be helpful for us all to be able to review the system design as we work out our next move.”

I passed extremely thick bundles of documentation around the room and watched carefully for Bill’s reaction. He either knew where I was going, or he knew something I didn’t. He hesitated for a second as I handed him his copy, and that was all I needed. “Sorry, mate”, I thought to myself, as a wave of relief settled over me. I don’t particularly enjoy being underhanded, but I’d always felt that something was off about Bill - and, if my suspicions were correct, I was setting a trap that he couldn’t get out of.

“As you know, we work very closely to the specifications agreed at the beginning of the project. Any changes or deviations need to be communicated in writing and the documentation amended to take those changes into account. We’ve done this a fair few times during the course of the project - for example, Terry and I have worked very closely together on certain parts of the system and you’ll see those changes and additions documented wherever relevant. That’s right, isn’t it, Terry?”, I invited him to respond.

“Yep”, he affirmed. “I tell Tom what I want changed, and he writes it up and sends it over for confirmation.”

“Okay, that’s fine,” sighed Mike, “but what about the missing functionality? What are you going to do about that? We were told it’d do a bunch of things that it turns out it can’t. That’s just not acceptable.”

Time for the poker face.

“Well that’s the thing - this is what I’m confused about. I appreciate that the system doesn’t do what you need it to, but as I explained - everything we do is agreed up-front and signed off. If we can review the documentation and identify the parts we haven’t implemented then of course we’ll hold our hands up and get it sorted as soon as possible - but since these problems were raised I’ve looked over everything several times and as far as I can tell, we’ve built everything to the agreed spec. The features you’re mentioning now just don’t exist anywhere in the documentation.”

Ah, ‘built-to-spec’. The get-out-of-jail-free card. I’m not proud I had to go there, but it was me or you, Bill - and I can’t afford a cruise, so it isn’t gonna be me.

Mike and John seemed irritated but flipped through the wad of documentation before them. Terry thumbed through a few pages and returned to his tea. Bill hunted through his copy as though it’d just made off with his picnic. He knew what the score was.

“Terry”, said Mike, after what felt like an eternity. “You’re the one who’s been using this. Does all of this look right to you? Everything in here - we’ve got, right?”

“It does everything it says it’ll do, aye - but it’s what it doesn’t say that’s the problem. Everything works, it just doesn’t do everything we need.”

“If it helps”, I offered (it definitely wasn’t helpful, at least not to Bill), “we’ve always had a demo version of the system available throughout the project so any potential problems could be caught and fixed.”

“And who was responsible for reviewing?”, John asked the room. “Did you have a go, Terry? Who else has seen it?”

“Yes - that’s how I got Tom to fix things for me - I’d try it out and if anything wasn’t right, I’d just tell him”, replied Terry.

“But all of this works doesn’t it?” asked John - realisation dawning on his face. “We’ve got everything in here? But all the missing stuff - dealing with bad manifests, all that admin stuff - that’s not really your wheelhouse, is it? Who reviewed the other bits?”

Sorry, Bill.

En route to the show-down

I took the shot: “Well, other than Terry, Bill’s the lead on this project and helped us nail down the requirements. He signed off the spec and reviewed the system several times.”

What followed was one of the longest few seconds of my life.

“I see”, said Mike, finally.

Bill finally spoke: “Well, that’s right, I reviewed it, and it all seemed good - but we’re talking serious missing features here! We can’t work with this as specced. That’s the problem. It’s no good if things we need were never included in the spec, is it?”

I was amazed. Bill! You signed off the spec! Several times!

Nodding, I responded. “I agree - like I said, when Terry called and explained what was missing I couldn’t believe it. I was sure we’d missed something during development, but as you know - we only build what’s agreed. I really am sorry there are things that aren’t there - but unless it’s documented and specced up, we don’t build it. We’ve given regular progress reports and demonstrations. I can definitely see why these features are important, but we can only build what we’re aware needs to be built - and though we make every effort to capture every requirement - we can’t take responsibility for things that aren’t communicated to us after the spec’s been signed off.”

All heads turned to Bill - who seemed lost for words.

“I think we probably need to review this from our side” sighed John. Mike nodded in agreement. Bill said nothing but looked like he could do with another cruise.

Mike ended the meeting - at least for me. “I think you can go, Tom. Thanks for your time, we’ll be in touch if there’s anything we need to follow up on. Do you mind if we keep these?” He waved his bundle of documents.

“That’s fine. Thanks for your time and let me know if you need anything else.”

Terry walked me back to my car. He didn’t say much, but he seemed to have enjoyed himself. I got the impression he didn’t like Bill very much, but he didn’t say anything to confirm, other than a laugh and a “You handled that well! See you later.”

On the drive back to the office, I felt equal parts relief and regret. I’d done what I had to do to take the spotlight off myself and my colleagues: it was true that we’d done everything ‘by the book’. Bill had been given plenty of time to spot any shortcomings or missing features, and though some of the problems would have been difficult for anybody to predict, there were a host of other issues which I can’t help but feel we would have avoided if we’d done the discovery process ourselves. I also didn’t like the fact that I had an unhappy client. Even if I could reasonably defer to the specifications to bat away their complaints, I don’t like to disappoint and the whole thing left me feeling very deflated for a while. Still, I don’t know what else I could have done. I wasn’t proud, even if I felt like I’d just gone toe-to-toe with four heavyweights and won.


Aftermath And Lessons Learned

In the weeks that followed, I had only sporadic conversations with Terry as we made a few fixes and improved things as best we could. Unless the client was willing to spend more money, which they weren’t, my hands were tied.

I came to understand that Bill had effectively washed his hands of the project a few months into development, and had struggled to explain to his employers the massive disparity between what he’d been telling them and what he was actually reviewing whenever he looked over the demonstration versions. I don’t really understand how he could have set himself up for such a nightmare, but I don’t think he was around for much longer afterwards. It turned out that Terry and Bill had rarely communicated throughout the course of the project - I’m not even sure Terry actually knew who Bill was or what role he played until our portacabin meeting.

Perfect Lines of Communication

The client continued to use the software and - as far as I understand - tried to deal with the problems as best they could. Once the initial frayed nerves had settled, the client accepted that we had no responsibility to carry out extra work for free, and simply made the best of it for as long as they continued to run the system. Eventually - a couple of years later - they were taken over by a larger company, who presumably put in place their own systems.

Although from our perspective the project had been a ‘success’ on paper - we made a profit, everything we’d built worked and had no issues, and we’d delivered on schedule - it would be foolish to pretend that the real-world impact of the project was anything but a mess.

Failure to ensure solid lines of communication between all parties, and neglecting to adequately assess the likely practical impact of rolling out the new system beyond ‘make <x> go faster’ resulted in a head-on collision with reality, which was painful for everybody involved.

Ultimately, aside from what we rather generously labelled the ‘teething issues’, we all moved on with no lasting animosity or resentment; but I had absolutely no desire to ever go through such an experience again.

Since then, I’ve tried to take the lessons I learned from this escapade forward with me:

1. Everything In Writing

Though it’s incredibly common for software developers to avoid documentation like the plague, to make decisions as and when they become necessary, and to invent features on the fly ‘because it’ll be helpful’ - the simple, unavoidable truth is that if something goes wrong at any point and you end up in conflict (or even simply the threat of conflict) with a client, then making sure you have clear, complete documentation for the design and implementation of your software will help protect you.

Some helpful tips:

  • Make liberal use of branches in your version-control system, and feature switches throughout your software. Only fold new features into the main trunk (or enable the feature switch) when you have documented the feature, and your client is aware of it and has agreed to it. You don’t have to go overboard, but the goal should be for you to guarantee your ability to control exactly what ends up being deployed. The added benefit is that your VCS logs (which, of course, are well-written and relevant!) will help when it comes time to document those features properly.

  • Sketch things out first. I’ve found it’s easier to write ‘real, human-readable’ documentation if I have a visual plan of the software in diagram form to work from. Writing the code first can still work, but wireframes, diagrams, and notes are almost always the best starting point if you’re not just writing software for yourself.

  • Slow down. Holding off on new features and changes until the client signs them off is a good thing for everybody. Deploying new things too early or without agreement - even if you’re sure they’re rock solid or a good improvement - can be argued to be little more than a needless introduction of risk. Resist the temptation to show off your latest and greatest until everybody involved knows what to expect.

The exact format of your documentation will rarely be a one-size-fits-all deal. Some clients prefer ‘low-tech’ documentation; others demand extremely detailed, technical information. I’ve found the following questions work well to form the ‘skeleton’ of your documentation, regardless of the specific format:

  • What does the client want?

    • Can be very low-tech.

    • Collates everything you know about what the client is after.

  • What do we plan to build?

    • More detailed than the previous set of documentation.

    • Includes wireframes, high-level diagrams, flow charts etc.

    • Should aim to include everything you currently know you will be building.

  • What have we built?

    • Similar to the above, but as detailed as possible.

    • Annotated screenshots where relevant, detailed diagrams and charts.

    • Technical information.

    • Includes user guides and specific usage instructions for different scenarios.

Whether you treat these as a series of versioned sets of documentation (my preference), updated and signed off whenever necessary, or as living documents which are modified over time, the important thing is that you actually do it, rather than spend all of your time in the programmer comfort-zone.

2. ‘Better’ Is Relative

Context is everything, and though it’s very difficult to predict every possible consequence of a change - you will rarely regret playing devil’s advocate and pushing at the extremities, even if it can begin to feel a little absurd.

Broadly speaking, the purpose of writing software is to improve something in some way. Whatever the project, it’s highly likely that the motivation behind it is to do something faster or better than it is currently being done. Clients often have targets, which we dutifully incorporate into our designs and tests, and build our software to meet those targets. We consider things a success when those targets are met or exceeded. This is, obviously, a good thing - but before we start patting ourselves on the back for making numbers dance in the right way, we should try - as much as we can - to adequately assess the real-world impact of meeting those targets. In some scenarios, this is incredibly difficult to achieve - because it’s not always practical or feasible to carry out a ‘true-to-life’ test.

Often, we get by with test data, test users, staged rollouts, and other tried-and-true practices to make sure we deploy something good and useful, but when it comes down to it - deploying any new system at scale comes with a fair amount of risk. Meeting all of your targets is great, but what did you, or your client, forget? You’ve improved efficiency in one area by 50% - brilliant! But what happens elsewhere as a result? The real world has physical, practical limitations that the digital world does not - and no matter how positive you might feel about how your new system works, there’s a good chance that there are side effects and edge cases you aren’t aware of. A screw coming loose at 5mph is decidedly less of a problem than one coming loose at 50mph.

It’s impossible to make sure you cover every possible scenario, but carrying out risk and impact assessments regularly is a great way to avoid being blindsided later. Get into the habit of asking your client questions about the future, post-deployment:

  • “What new risks exist when we meet these targets?”

  • “What are the practical limitations regarding this input / output when efficiency is improved here?”

  • “What is the impact on this part of the organisation when we optimise this process?”

Aside from giving your clients confidence that you’re planning well ahead, it will also force them to think about things that you may not even be aware of and to suggest changes or improvements that will make everything run much more smoothly later on.

3. Identify And Close Gaps In Communication

When a project has multiple stakeholders, it’s very easy to gain a false sense of confidence in how well everybody understands what’s going on. Attendance at meetings isn’t a guarantee that people are paying attention. CCing people into emails doesn’t mean they’re being read and absorbed. A signature doesn’t guarantee understanding.

Though you can’t guarantee that everybody involved in a project is always on the same page, and it’s often unreasonable to expect everybody to understand (or care about) every aspect of whatever system is being built, it rarely hurts to keep a list of the most relevant people involved in the project and to check in with them directly and regularly.

Unfortunately, organisational politics often plays a huge role here - and if the people you’re supposed to be communicating with aren’t holding up their end of the bargain properly, there’s not a huge amount you can do about it without causing upset somewhere. What you can do, however, is occasionally step outside of the regular meeting / demo / feedback cadence and invite a response. If you send out a fortnightly progress report, for example - try occasionally sending two in the same time period. Occasionally, send an email or call the client to ask about something - anything, really - just to ‘shake the tree’ and see what falls out.

People quickly become accustomed to routine, and attention to detail slips, especially during long-running projects. Occasionally changing things up is a harmless and fairly simple way to snap people back into attention. Overdoing it can quickly become irritating - so avoid becoming unpredictable or a nuisance - but a little low-friction prodding and poking will often serve you well and help to expose any gaps in understanding or communication.


I hope some of the lessons and practices I learned the hard way can be put to use in your own work - and fingers crossed you never have to receive the kind of enraged phone call I did after what seemed like a successful deployment!

Share Your Stories

Do you have any horror stories or tips you’d like to share on how to avoid them? Feel free to share your wisdom in the comments or by email.

If you think this story will be helpful to somebody you know, or even just entertaining, then please do share!

As always your support is much appreciated!

]]>
<![CDATA[Digital Doodle #2 - Random Quote Of The Day]]>https://tomhalligan.substack.com/p/digital-doodle-2-random-quote-ofhttps://tomhalligan.substack.com/p/digital-doodle-2-random-quote-ofTue, 29 Aug 2023 20:15:27 GMTWelcome to the second instalment of Digital Doodles, where I create something useless for your amusement and entertainment. If you missed out on the last one, you can find it below:

Got Something To Say?

For as long as the internet has existed, random quote generators have been with us; to provide clarity in moments of confusion, and to give our elderly relatives something to send us in the group chat. At the time of writing, Google returns roughly 40 million results for the term ‘random quote generator’ - and my extensive research (scrolling until I got bored) shows that each link is either an actual random quote generator, or a post from somebody seeking to make one for themselves.

If I could capitalise on this market, I absolutely would - but for you, dear reader, I’m going to give you something for free - and show how you too can create your own shareable quote images, with just a little work.

Finally, rounded rectangles with drop-shadows are within the reach of mere mortals

Conjuring Images

I’ve been playing around with ImageMagick recently - for no particular reason other than that it seems like the kind of thing that’s useful to know about. ImageMagick is a powerful image-processing utility that has terrible documentation and enormous capability - which is either a horrible combination or an interesting challenge, depending on your outlook.

In standard usage, ImageMagick allows you to take an image file, and do stuff to it programmatically - whether that’s simple transformations like resizing and cropping, or more complex adjustments like merging images together, applying filters, or overlaying text.

In this case, I wanted to channel my inner Merlin, and conjure a nice image out of thin air - well, out of code - and so I ended up with the following ImageMagick script:

Drawing has never been so much fun!
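Since the script itself was shared as a screenshot, here’s a hypothetical reconstruction of its general shape - the geometry, file name and exact operations are illustrative rather than the original:

# quote.mgk - run via the shell script shown later in this post.
# The ${...} tokens are placeholders, filled in before ImageMagick sees them.

# Start with a gradient-filled canvas for the card
-size 560x360 gradient:${GRAD_A}-${GRAD_B}

# Clip it to a rounded rectangle using an in-memory mask - no temporary files!
( -size 560x360 xc:none -fill white -draw "roundrectangle 0,0,559,359,25,25" )
-alpha off -compose CopyOpacity -composite

# Render the quote text and place it over the card
( -size 480x280 -background none -fill white -gravity center caption:"${QUOTE}" )
-gravity center -compose Over -composite

# Add a drop shadow behind the card and flatten everything down
( +clone -background black -shadow 60x8+0+10 ) +swap
-background none -layers merge +repage

-write quote.png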

This script is not actually ‘valid’ ImageMagick script - the eagle-eyed and shell-literate amongst you may have noticed that I’ve included some shell expansion markup in this file:

  • GRAD_A: Representing a gradient start colour

  • GRAD_B: Representing a gradient end colour

  • QUOTE: Representing the quote text we want to use

In what appears to be a glaring oversight - or a failure of my own reading comprehension - ImageMagick itself does not understand environment variables in script files. If I’m wrong on that, I’d appreciate somebody correcting me!

Nevertheless, this is not a huge problem, so read on to find out how to make this work.

As an aside, you wouldn’t believe how long I spent trying to get ImageMagick to produce a rounded rectangle, filled with a gradient, against a drop-shadow. The final script is actually pretty straightforward, but it took a lot of trial and error to get there!

Most of the articles / tutorials I found online suggested horrible things like saving alpha masks to temporary image files, or simply starting with a pre-made image to build upon, but that seemed to go against the ethos of Digital Doodles, and I do not like defeat. Hopefully somebody tackling the same problem stumbles upon this post and realises that there is a better way!

A Quote Generator In Less Than 20 Lines

Since I’m not in the habit of maintaining my own database of interesting and inspirational quotations, I needed to source my data from somewhere.

Thankfully, there is no shortage of freely available APIs to query and retrieve quotes of any nature. In this case, I opted for the Quotable.IO API - since it’s trivial to use and returns data in an easy-to-parse format (JSON). Each call to the API made via the script below will return a single, randomised quote.

I also wanted to add a little variety and colour to the generated images, and so I used the excellent Colormind API - which returns a nice, coherent colour palette on each request.

With a little work to transform the returned data into something clean I could use via the jq utility, I was ready to go:

It’s actually less than 10 lines, but I added comments, to be nice
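The script was also shared as a screenshot, so here’s a sketch of its shape - the file name quote.mgk is illustrative, and the jq expressions assume the response formats described above:

#!/usr/bin/env zsh

# Fetch a random quote from Quotable and flatten it to 'text - author'
export QUOTE=$(curl -s https://api.quotable.io/random | jq -r '"\(.content) - \(.author)"')

# Ask Colormind for a palette, taking the first and last colours as
# rgb(r,g,b) strings, which ImageMagick understands natively
PALETTE=$(curl -s -d '{"model":"default"}' http://colormind.io/api/)
export GRAD_A=$(echo "$PALETTE" | jq -r '.result[0] | "rgb(\(.[0]),\(.[1]),\(.[2]))"')
export GRAD_B=$(echo "$PALETTE" | jq -r '.result[-1] | "rgb(\(.[0]),\(.[1]),\(.[2]))"')

# ImageMagick can't expand environment variables itself, so substitute
# them into the script text first...
MGK_CMD=$(cat quote.mgk | envsubst)

# ...then hand the result to magick as a 'fake' file via process substitution
magick -script <(echo "$MGK_CMD")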

The trick to work around ImageMagick’s inability to understand environment variables can be seen in the last two lines above: I read the file via the ‘cat’ command, and pass it to ‘envsubst’, which looks for the variable expansion tokens in the file and replaces them with the correct values.

The final line then creates a ‘fake’ file from the MGK_CMD variable, which allows magick-script to read it as it would any other valid script file.

Running this script does everything required to produce what I’m sure you’ll agree are the most motivational, inspirational quote images ever to grace the internet:

Ok - admittedly, they’re not the prettiest things in the world, but considering I didn’t have to open an image editor once, it’s still pretty cool. I’m sure with a little extra work, I could improve the image quality and text clarity, and my algorithm for choosing the gradient colours (selecting the first and last colour that Colormind returns) could do with some extra thought. Occasionally, the quote itself will be too long, resulting in tiny, illegible text, but for the most part, it’s pretty passable, I’d say! Frankly, I’m impressed I got transparent backgrounds to work, so I’m calling it a win.

Make It Your Own!

If you’d like to play around with this generator, you can find both the shell script and the ImageMagick script in a GitHub gist. I’ve only written the shell script for Zsh (which you’ve got already, if you’re on a Mac), but it should be easily portable to other shells without too much fuss. For a more cross-platform solution you could try re-writing this in Python - or even make it available with a simple web-frontend and join your 40 million competitors on Google.

You could also use a different quote API entirely, if Quotable isn’t your thing. Why not try modifying the script to use the Evil Insult Generator instead?

As always, I’m interested in seeing any pointless little experiments you’ve produced and are happy to share - so if you’ve got something to show, or want to share a modification to the script I’ve shared, then feel free to drop it in the comments!

]]>
<![CDATA[How To Actually Use A Computer #1]]>https://tomhalligan.substack.com/p/how-to-actually-use-a-computer-1https://tomhalligan.substack.com/p/how-to-actually-use-a-computer-1Mon, 31 Jul 2023 00:17:53 GMT

The internet today is creaking under the weight of a seemingly endless barrage of auto-generated clickbait articles, irritating website design, and next-to-useless search-engine results. Even when Google does manage to surprise you with a relevant result, there’s a high chance that the website you end up visiting will attempt to sell you thirteen different things before you manage to spot the information you’re after, usually nestled conveniently behind a slow-loading cookie configuration dialogue.

There’s not much any single person can do about this mess, and it often feels easier to simply give up and accept that what we used to call the internet is actually now just a handful of websites owned by billionaires, plus the rapidly deteriorating husks of a handful of search engines, for whenever you fancy a little ad-supported disappointment.

Fortunately, if you’re reading this, then there’s a good chance that you’re a human, and that means you’re more than capable of eschewing the nonsense and empowering yourself! Your computer is a serious machine - orders of magnitude more powerful than the systems which landed us on the moon - and we shouldn’t allow ourselves to be blinded by a terrible online experience.

In an effort to convince people that we don’t have to wait around for real power and agency, I’ll be posting regular tips under How To Actually Use A Computer - with today’s episode featuring some broad brush strokes before I dig deeper into specifics in a later post.

#1 - Don’t Fear The Terminal

The terminal is the powerhouse of your desktop machine, giving you access to a wide range of tools, with greater flexibility than the standard GUI of your operating system.

It’s important to note the distinction between the terminal and the shell. In simple terms, the terminal is just the interface / app that lets you type; the shell is the program running inside it that actually interprets and executes your commands.

Windows

  • Recommended Terminal App: Windows Terminal

  • Default Shell: PowerShell

Mac

  • Recommended Terminal App: iTerm2

  • Default Shell: Zsh

The default look and feel of any terminal is pretty bare-bones, but you can, with a little work, customise things more to your liking.

I recommend using Oh My Posh on both Windows and Mac to improve the look and feel of your terminal. After all, if you’re going to be spending a lot of time there, you might as well make it look pretty!

iTerm2 with Oh-My-Posh
Windows Terminal with Oh-My-Posh
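Setup is straightforward; on a Mac, for example, it amounts to a couple of lines (a sketch assuming you install via Homebrew, which is covered in the next tip):

# Install Oh My Posh, then have every new Zsh session initialise it
brew install oh-my-posh
echo 'eval "$(oh-my-posh init zsh)"' >> ~/.zshrc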

It’s worth getting to grips with the basics of terminal usage at a bare minimum. Tom Rankin at Make Tech Easier wrote a very friendly beginner’s guide to using Zsh. Many of the basic commands you use in Zsh are aliased (i.e. type the same thing, get the same result) in PowerShell, but if you want to explore further, then Tim Keary at Comparitech recently produced an excellent PowerShell cheatsheet.

Proficiency with the terminal is something that pays off continuously. Quickly navigating, searching, and performing batch operations across different sets of files and folders is trivial once you know a few basic commands, but is unnecessarily cumbersome if you’re limited to clicking around. Familiarity here will unlock a world of utility, and understanding some basic shell scripting - which I’ll go into in a later post - will be your first step to truly unlocking the potential of your machine.
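As a small taste of that payoff, here’s a batch rename - something painfully manual in a GUI - done in two lines with Zsh’s built-in zmv module:

# Rename every .txt file in the current folder to .md
autoload -U zmv
zmv '(*).txt' '$1.md'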


#2 - Use A Package Manager

A package manager helps you find, install, and maintain the software installed on your machine. Though package managers are not new, and standard for Linux users, I find that Windows and Mac users (even those who are technically inclined!) are often unaware that they exist, or what they can offer. You probably use a package manager already: Apple’s App Store, or Google’s Play Store. These are, fundamentally, just package managers - but there is an entire universe of software waiting to be discovered outside of these tightly controlled stores.

Package managers are more vital than ever given the mess that search engines have become. When all you want is to try some new software, being forced to navigate a parade of websites - apparently designed with the sole aim of frustrating you - can mean you’re more likely to constrain yourself to whatever’s on the app store, or, more likely, simply give up. Package managers have vetting and verification procedures to help ensure that software is safe to install and that users are able to trace the origin of their software easily.

The package managers below are free, and so is most of the software available through them. Particularly on Windows, I would strongly recommend that you perform an audit of your commonly-used software and then reinstall things via the package manager if the same software is available there: this will help ensure that you always know the origin of the software you’re using, and help you to keep things up to date.

Windows

  • Recommended Package Manager: Chocolatey

Chocolatey Package Manager

Mac

  • Recommended Package Manager: Homebrew

  • Official Package List:

    • Homebrew Formulae (mostly command-line software, great for developers, academics, and technical users)

    • Homebrew Casks (macOS GUI apps - great for everyone!)

  • GUI Package Manager: Cakebrew

Cakebrew

Both Chocolatey and Homebrew are intended first and foremost to be used from the terminal, but both have GUI tools available to manage your packages if you prefer. In the case of Homebrew, there is no officially supported GUI tool, but Cakebrew is one of the better free third-party tools available. Unfortunately, it’s no longer under active development, and so doesn’t support the installation of Casks - i.e. GUI apps. However, the Cork app does support Casks, and has a free demo. You can always build the app yourself from source, or support its development for $5/month, which gets you access to downloadable builds.

An added benefit to package managers is that should you need to move to a new system or start your existing one from scratch, then you can simply export your currently installed package list and reimport it when you’re ready. In a work environment, this is a quick, low-cost way to make sure everybody has access to the same tools.
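With Homebrew, for example, the round trip looks like this (Chocolatey offers an equivalent via its ‘choco export’ command):

# On the old machine: dump everything installed into a Brewfile
brew bundle dump --file=./Brewfile

# On the new machine: reinstall the lot in one go
brew bundle install --file=./Brewfile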

NOTE: Microsoft has (finally) gotten around to providing its own official package manager: WinGet - which is worth investigating due to its officially supported status, but it doesn’t yet compete with Chocolatey in terms of what’s actually on offer.


#3 - Use Containers

I’ve mentioned Docker in the past, in a post about self-hosting:

It’s worth another mention, however, in the context of making the most of your desktop or laptop machine.

Though Docker can take a little work to set up on Windows machines, and may require you to tweak your BIOS settings to enable Virtualization, it’s worth the effort.

There are many, many Docker images publicly available, ready for you to use, on Docker Hub and other registries. You can think of Docker registries as being a little like package registries, filled with software and services that can help you save time, improve your workflow, or just have fun.

Using Docker Compose, you can stitch together services into a single coherent stack, allowing you to run web apps, media servers, or anything else.

For example, perhaps you’d like to run Paperless-ngx on your home PC. This will allow you to drag and drop all of your important documents into a single import folder, and have them OCR-scanned, categorised, tagged and organised - saving a huge amount of time and effort.

Or perhaps you deal with a lot of data and want to run something locally that allows you to analyse, visualise and produce compelling reports. You could pay (the excellent) Metabase $85 a month (!) for their extremely user-friendly software, or you can just run it yourself, for free, whenever you need it and take advantage of their automatic insights and chart plotting.
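Getting Metabase running locally really is a single command once Docker is installed, using the image Metabase publishes on Docker Hub:

# Run Metabase in the background; the UI appears at http://localhost:3000
docker run -d -p 3000:3000 --name metabase metabase/metabase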

To summarise - Docker makes it incredibly easy for you to run software which could otherwise be a pain to install and maintain or could cost you a significant amount of money to pay for a subscription. If you run an independent business or work from home and want to give yourself an edge, things like Paperless and Metabase can help you keep on top of things, without needing to spend a penny.

Metabase, running in Docker, at absolutely no cost

Although Docker is primarily used in server environments, it can be an incredibly powerful addition to your toolkit at home, allowing you to run entire services which you’d otherwise be paying somebody else for.

There are thousands of apps and services available for use with Docker, so whatever your needs, there’ll be something for you. Take a look at the Awesome-Selfhosted list on GitHub for ideas, and let me know in the comments what your favourites are!


Next time in How To Actually Use A Computer, I’ll talk shell-scripting and automation, with a few basic examples to get you started.

Over time, the aim of this series is to show that anybody, with a little work, can take advantage of the wealth of opportunity that exists out there. Whether for work or leisure, we have immense power and agency at our fingertips - and I hope to convince you that it shouldn’t cost an arm and a leg for you to get the full benefit of all that the technological world has to offer.

If you’ve enjoyed these tips, and want to receive more - or you know somebody who would, then feel free to share and subscribe! This post is free, but a paid subscription will help me to keep them coming.

]]>
<![CDATA[Tiny Gardening]]>https://tomhalligan.substack.com/p/tiny-gardeninghttps://tomhalligan.substack.com/p/tiny-gardeningSat, 01 Jul 2023 01:04:41 GMTA few weeks ago, I posted an introduction to the Tiny game engine, which you can read at the link below:

Since then, I’ve been sporadically working on a little game called Tiny Gardening, which is intended to be something of a re-vamp of one of my first coding projects:

When I was first learning how to code (many years ago), one of the very first programs I wrote was a text-based simulation of an orange tree, where you had to water the tree and collect the oranges regularly. I don’t know why, but there’s something about simulating plants which keeps coming back to me, so why not continue the theme with Tiny?

Though Tiny Gardening is by no means complete - or even ‘a game’ - at this point, I’ve made enough progress to warrant a quick update, so without further ado, allow me the pleasure of taking you on a tour of my digital allotment!

Where We’re At

Currently, the game has a handful of basic features which need further fleshing out:

  • Players can plant seeds

  • Players can water their plants

  • Plants grow as their ‘water level’ increases

  • Plants lose water over time - currently shrinking back down to their ‘seed’ sprites, but in the long run, they should have some kind of ‘death’ sprite.

  • When all your plants are dead, it’s game over.

  • The current season is represented by the bar along the top, with a yellow circle (actually a photorealistic representation of our glorious sun!) moving along it to represent the passage of time.

Right now, time moves very quickly. Seasons pass in the blink of an eye, and your plants will ‘die’ if left un-watered for longer than about 10 seconds - so good luck trying to manage more than a few plants for any length of time!

To get some idea of how things currently work, see the video below:

It’s a bit of a free-for-all with the placement of the plants - which I’m not too keen on. I may play around with some kind of grid-based layout, which will also help provide a nice system for more interesting rules and challenges later on.

Seasons will ultimately cause different effects, though right now they do nothing but change the colour of the ground. In summer, for example, it might be fun for plants to lose water more rapidly, or for certain types to grow more quickly. Autumn may have an increased likelihood of rainfall, but ‘summer plants’ may not grow so easily. I’d also like to introduce some pests and pollinating insects later on, but we’re a little way away from that kind of thing right now!

In the medium term, I need some kind of win condition for the player. Right now it’s just a case of surviving for as long as you can, but that’s not particularly interesting. I’d like to introduce some kind of farming-like mechanic: some plants could be vegetables you have to harvest and sell, and perhaps you need to meet a certain quota for some particular type of vegetable within the year. I’m not too hung up on that for now - there are plenty of things I want to implement just to explore what Tiny can do - so I’m holding off bothering too much with it for the time being.

Challenges

Tiny is a brand-new game engine, currently gearing up for its v1.0.0 release, so bugs and shortcomings are to be expected.

Generally speaking, the engine doesn’t offer all that much aside from the absolute basics: it’s up to you to build each system your game might require. There is no concept of 2D physics for collisions, no UI system, no entity-component system and very limited support for input and audio. In many ways, this makes it a brilliant engine for the programmer who has some idea of how to implement those kinds of things - but absolute beginners may struggle. I would say Tiny suits those who have some familiarity with other game engines, but who enjoy a challenge. Doing anything beyond the most basic of games will require some understanding of non-trivial concepts and systems, and you’ll need to work out collisions and any other physics for yourself. Thankfully, the maths involved is generally relatively simple and can be worked out in your head (or, more likely, looked up online!).

There are also bugs in the engine which require some attention. One I discovered very early on was in the engine’s implementation of math.clamp, which, for the uninitiated, is a function you can call which restricts a number so that it always resides between two other values:

How math.clamp should behave

This is a standard function available in virtually all math libraries - and a particularly useful function in game development. The bug, in this case, meant that values would only actually be clamped if they were greater than the upper bound:

How math.clamp was behaving
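To illustrate the difference - this is a sketch of the behaviour, not the engine’s actual source:

-- What math.clamp should do: the result always lands inside [min, max]
local function clamp(value, min, max)
    return math.max(min, math.min(value, max))
end

-- The behaviour I was seeing: only the upper bound was enforced,
-- so anything below 'min' passed straight through
local function buggyClamp(value, min, max)
    if value > max then
        return max
    end
    return value
end

print(clamp(-5, 0, 10))      --> 0
print(buggyClamp(-5, 0, 10)) --> -5 ('negative water', here we come)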

This kind of thing can be a painful bug to come across - the nature of it means you end up with unclamped values causing things to spin out of control - and since Tiny doesn’t offer any kind of debugger, the cause of the problem can be difficult to spot - particularly if you’re controlling and constraining values in multiple different ways.

Thankfully, in this instance, I was able to spot the problem easily since its impact was obvious: I’m not sure what happens in the real world when plants have ‘negative water’, but in Tiny Gardening, they become very dead very quickly. I fixed the issue in the engine’s source code and submitted a Pull Request on GitHub - which was promptly accepted and merged into the engine’s codebase.

I’m sure I’ll find more issues as I continue - I have a sneaking suspicion there’s an issue with audio not looping correctly in builds which are exported for the web, but I need to test further to confirm.

Bootstrapping

Since Tiny is such a bare-bones game engine, it’s up to the programmer to implement systems and utilities to make the game development process more straightforward. Below, I’ll cover some of the things I’ve implemented to make further development more flexible and less time-consuming. Wherever I include a code snippet, keep in mind that I’ve probably omitted some parts of the code which aren’t strictly relevant. You can always take a look at the full source code over on GitHub if you want to see all of the nuts and bolts!

State Machinery

At the highest level, the game is broken up into three distinct states, which allows me to encapsulate the core loop and easily add or remove extra screens or switch out different implementations without too much fuss.

For those unfamiliar with the terminology, or state machines in general - for now, you can imagine that a ‘state’ corresponds to a ‘screen’ in the game.

The basic states I’m working with right now are:

  • Main Menu

    • This state represents the first screen players will see and does nothing but provide a big button to press to start the game.

  • Game

    • This is where players spend most of their time - all of the actual gameplay occurs when the program is in this state

  • Game Over

    • This is where players end up when they’ve won or lost the game. It will provide some kind of summary of how the player did and also provides a way to return to the main menu so players can start again.

Below you can see the skeleton of a State:

The basic outline of a State object
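In sketch form - simplified from the real source, which you can find in the repository - the skeleton amounts to this, using Lua’s standard metatable idiom:

State = {}

-- Constructor: adapts the table passed in (or creates a fresh one) and
-- wires up the metatable so instances inherit State's functions
function State:new(o)
    o = o or {}
    setmetatable(o, self)
    self.__index = self
    return o
end

-- Overridden by concrete states; called every frame by the game loop
function State:update() end
function State:draw() end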

The State:new() function acts as a Constructor. Lua - the language Tiny games are written in - does not provide objects in quite the same way that other Object-Oriented languages do, and so some basic boilerplate code is required to give ourselves something akin to the objects and classes you may be familiar with. If you’re interested in learning more, you can read the Lua documentation on the subject - but to summarise: every time I want a new State object, I can just use:

myNewState = State:new({})

From there, I can call functions on my new object instance as follows:

myNewState:update()
myNewState:draw()

Those who recall Tinkering With Tiny will remember that the engine calls an _update() and a _draw() function on every frame, so, in keeping with this pattern, there’s a corresponding update() and draw() function on virtually every custom object throughout the Tiny Gardening code.

Entities

There are some basic features which are common to pretty much everything on-screen in Tiny Gardening:

  • They all have a size and position

  • I need to know if they’ve been clicked

  • I need to check if they overlap with some other entity

  • I want some basic debugging functionality that I can easily turn on and off as I see fit.

For these reasons (and more!), it makes sense to build some kind of entity system which allows me to share these features across different kinds of objects. In Tiny Gardening, you can click on buttons and you can click on plants. What happens when you click on either is radically different, but they are both ‘clickable things’, and like any good programmer, I am incredibly lazy, so I don’t want to write ‘click detection’ code more than once if I can avoid it!

Generic click detection for an Entity
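In sketch form - again simplified from the full source on GitHub - the Entity and its click detection look something like this:

-- Every on-screen object shares a position and size...
Entity = State:new({ x = 0, y = 0, width = 0, height = 0 })

-- ...and can answer whether a point falls inside its bounds
function Entity:contains(px, py)
    return px >= self.x and px <= self.x + self.width
       and py >= self.y and py <= self.y + self.height
end

-- Generic click detection: callers feed in this frame's pointer position
-- (how you read the pointer depends on Tiny's input API)
function Entity:isClickedAt(px, py)
    return self:contains(px, py)
end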

Using what passes for inheritance in Lua, I can implement new object types which are based on this Entity object, meaning that every object created in this way can detect clicks.

User Interface

Tiny offers absolutely nothing in the way of a UI system. There are no buttons, sliders, text fields, or anything else that other game engines might offer. This is a little frustrating since user-interface code can be a pain even when the engine does provide something for you to use - but I’m not one to complain, so one of the first things I did was implement a Button, which features something akin to ‘click events’:

I am unreasonably proud of this

Here, the Button object is instantiated by a call to Entity:new(), which means that every Button object I create automatically benefits from click detection. The onClick property is itself a table, which can be thought of as an array, meaning I can attach multiple handlers to a single button, which are triggered whenever a click is detected. The Button:fireClickEvent function loops over those handlers and calls them one by one.
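In sketch form, that arrangement looks like this:

-- A Button is created via Entity:new, so click detection comes for free
Button = Entity:new({})

function Button:new(o)
    o = Entity.new(self, o)
    o.onClick = o.onClick or {}  -- a table, so several handlers can attach
    return o
end

-- Loop over the attached handlers and call each one in turn
function Button:fireClickEvent()
    for _, handler in ipairs(self.onClick) do
        handler(self)
    end
end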

As an example of how the Button object is used in Tiny Gardening, here’s my generic ‘debug button’ which always lives at the top of the screen:

Callback heaven

Here, you can see that the onClick handler is itself an anonymous function - meaning it has no name and can’t be called except by accessing the value of onClick. This is a nice way of handling simple functionality which should occur when a button is clicked - but is not particularly well suited to more complex behaviour. For my purposes, this system works well and means I have a nice, easy-to-use, basic button.
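Wiring one up then looks something like this - DEBUG_ENABLED being a hypothetical flag, purely for illustration:

-- A small button in the corner that flips a global debug flag when clicked
local debugButton = Button:new({ x = 2, y = 2, width = 16, height = 16 })

table.insert(debugButton.onClick, function()
    DEBUG_ENABLED = not DEBUG_ENABLED
end)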

Of course, you rarely need just one button, so I also implemented a ButtonBar, which allows me to lay out a series of buttons either vertically or horizontally:

Easy peasy

Tiny Gardening also utilises a variant of the base Button called ToggleButton, which allows me to implement such thrilling and revolutionary concepts as a toolbar with a selected state:

Pushing the boundaries of UX design…

Wrapping Up

In this post, we’ve covered some of the basic components of Tiny Gardening. Admittedly, most of what I’ve mentioned here is a far cry from ‘game stuff’ or even touches on the main theme of the game, but in an engine like Tiny, it can pay dividends to build yourself a little toolkit of reusable components and functions which form the backbone of the rest of your game. Things like buttons and entities might seem trivial at first glance, but their absence can be felt keenly when you’re trying to build anything that requires shared functionality and predictable behaviour!

In the next progress update, I’ll dive into the core mechanics of the game so far - how I track plant growth and animate the sprites, the particle system for water and seeds, and how I’m implementing seasonal effects. I’ll also talk more about the pros and cons I’ve discovered about the Tiny game engine. In the meantime, you can explore the code as-is on GitHub, and try Tiny Gardening out for yourself here!

]]>
<![CDATA[What the Vision Pro is actually for]]>https://tomhalligan.substack.com/p/what-the-vision-pro-is-actually-forhttps://tomhalligan.substack.com/p/what-the-vision-pro-is-actually-forSat, 17 Jun 2023 22:44:14 GMT

Earlier this month, Apple announced the Vision Pro headset - their first dedicated device for AR (Augmented Reality) / VR (Virtual Reality) - hereafter referred to as XR - after several years of speculation and rumour.

The response has been mixed:

Mark Serrels at CNET branded the Vision Pro a ‘dystopian device for a dystopian world’, whilst Bryan Lunduke declared it a ‘device designed to make you less happy’.

On the other side of the fence, Mark Spoonauer of Tom’s Guide posted a review of his time hands-on with the device and concluded it is ‘truly amazing’, and Alice Clarke at Gizmodo wrote that it’s ‘the most immersive headset I’ve ever used’.

So, nothing new there then. Like all new tech, the Vision Pro has its detractors and its evangelists. Apple is entering a market that has been hyped beyond all reason, but one that has repeatedly failed to deliver on the breathless promises of XR acolytes going back decades. The most recent high-profile embarrassment for the XR industry was Zuckerberg’s poorly-executed ‘Metaverse’, which delivered little more than an annihilation of sensible search engine results about ‘the metaverse’ itself. Paradoxically, Meta is also responsible for VR’s greatest success story: the Quest; though one could argue that they simply had deep enough pockets to bring the Oculus hardware to an eager gaming market at a not-unreasonable price.

On the AR side of things, the most common method ‘normal people’ use for engaging with AR apps and content is still the smartphone, though big tech companies have repeatedly tried and failed to tap into a consumer market for wearable AR headsets. Google Glass - touted as the futuristic wearable we’d never leave the house without - is now kaput, and Microsoft’s HoloLens has found a rather niche home in the commercial sector, but no ordinary person is ever likely to buy one for home use.

You see, both AR and VR have a problem - and after spending the better part of my career working in the mixed-reality sector, I believe I understand what that problem is: the two are intrinsically linked, and cannot comfortably exist if one half of the partnership isn’t present, or is presented as an embarrassing second cousin.

The Problem With VR

VR headsets have found a reasonably successful home in the gaming market, and to some extent, the commercial market has found uses for VR in visualisation, engineering and architecture. Though devices like the Quest do not yet enjoy anything like the market share of, say, the Nintendo Switch, it is undoubtedly the gaming sector which has driven VR to prominence in the XR landscape, and though the Quest is hardly rivalling a modern gaming PC for graphical fidelity and performance, it would be foolish to discount its place as a viable games console, which is, luckily, what most people view it as.

Unfortunately, gaming is not a ubiquitous pastime in the same way that, say, endlessly scrolling on a phone is - and so, just like Sony and Microsoft turned their games consoles into ‘media hubs’, there is something of a persistent push by VR companies to make their devices do ‘something’ that isn’t limited to gaming. The Quest Pro is pitched not as a gaming device, but as a ‘new way to work’. Its marketing essentially ignores its VR capabilities in favour of its passthrough mode (AR, to normal people). Meta wants you to see it as a device you’ll wear for a large percentage of your time, and, naturally, concludes that this means you want to use it for work. And that all of your colleagues will want to use it for work, too.

It touts its support for Adobe Acrobat as an experience, rather than the ability to read PDFs.

PDFs you can reach out and touch!

Why would they do this? Given that there is almost nothing in the universe less exciting to ordinary people than the words ‘Adobe Acrobat’, why would Meta create a device, costing a cool £999.99, pitched so squarely at business users rather than recreational users?

The Quest Store’s ‘top selling apps’ page hints at the answer: the top selling non-game app is Virtual Desktop which, as the name suggests, allows you to use your PC or laptop in VR. For £15, and a few minutes of setup, you can connect your Quest to your PC, which means that not only can you stream PC-VR games to your headset, but you can also use any other software you already have installed. Obviously, there are constraints and limitations here - but the picture is clear: people want to do more with the Quest than Meta is currently offering: and they’re prepared to pay for it. They want to use it like a general-purpose device: something like a phone, something like a PC, but something altogether more than what it actually is: which is a VR games console that has a customer base whose ambitions far exceed the ability of Meta to deliver.

In this context, the Quest Pro begins to make sense. Correctly identifying that gaming is not enough of a sell to reach ubiquity, Meta aims the Pro at sensible people and businesses. Those who don’t really want to spend £300 to play games, but do want to spend £1000 to read PDFs and have meetings with cartoon characters of questionable ambulatory capacity. We all know somebody like that, right?

Why work from home when you can work at work, from home?

The advertised passthrough capability of the Pro, and Meta’s more general push in this area, also belies the prospect of VR-first headsets as the ubiquitous, mass appeal devices VR evangelists would like them to be. Without dwelling too heavily on the social and ethical considerations of VR use, it is obvious to me - as somebody who both works with VR and uses it recreationally - that as humans we are simply not built for extended periods with a machine strapped to our faces, detaching us from our natural surroundings and those who happen to be around us. Except in rare, deliberate moments of shared play - with friends and family laughing at you waving your arms around in thin air - it is an almost comically antisocial activity. Idly thumbing through your phone whilst you have company is one thing: it’s quite another to metaphorically remove yourself from the room altogether in order to spend time in a virtual world.

Passthrough attempts to resolve this problem by giving the user the option of seeing the real world around them and thereby interacting with people whilst using the headset. Conveniently, passthrough also enables AR usage - where users can project apps into their surroundings, or augment their environment with information & utilities. When combined with the scanning technology designed for the device to accurately comprehend the physical environment in which it’s being used, the Quest Pro’s passthrough capabilities turn what is otherwise a VR headset into something which is intended to provide a more ‘natural’ feel.

In a kind of technological Catch-22: the solution to VR’s primary problems is AR.

The Problem With AR

Augmented Reality has a different, but possibly more serious problem. Most people’s direct experience with AR is limited to things like Snapchat or Instagram filters, the occasional game (Pokemon Go being the stand-out example here) or utility apps on a smartphone. Commercially, companies have offered AR software to customers for product visualisation (see what this sofa will look like in your living room) or simply for more overt marketing purposes (scan this QR code and watch our cool 3D advert). Artists have used AR to bring 2D artwork to life or build interesting 3D pieces that you can walk in and around. There is certainly an appetite for AR, and interesting and exciting use cases, but the current primary delivery mechanism - a small rectangle you have to hold out in front of you - doesn’t afford the same degree of flexibility and opportunity to developers to produce interesting, immersive software as a VR headset does. AR is also, therefore, usually an optional mode within specific apps that users must explicitly engage with, rather than an immersive, ever-present experience in the same way VR is.

Until now, the only commercially available AR headsets - the supposed solution to this problem - have been limited in functionality, expensive, and primarily aimed at fulfilling specific roles within a particular niche.

Just what every home needs

As a computing metaphor, AR suffers from extreme fragmentation. For most recreational users, there is no cohesive or coherent experience between apps. VR headsets come with an operating system, an app store, social features, customisable home screens, and platform-specific design guidelines. AR apps on phones implement a mish-mash of different features, wildly different interaction mechanisms, and extremely variable quality.

Although there have been several attempts to produce an AR headset which will help to solve these issues and bring some standardisation to the AR experience, none have yet made a significant impact on the mass market.

Enter Apple

Apple’s Vision Pro represents, despite the not-entirely-unwarranted mockery from some quarters, the most serious attempt yet at an AR-first headset which comes complete with well-defined user-interface guidelines, a sensible consideration for the headset’s place in the broader hardware / software ecosystem, and first-party support for integrating existing macOS and iOS software into a spatial computing platform.

Where the Quest and its PC predecessors established VR norms - the ‘way to do things in VR’ - the Vision Pro promises to do the same for Augmented Reality. If Apple’s marketing of the product so far is to be believed, then users can expect to hit the ground running with familiar apps from their phones and laptops as soon as they start up the device. Apple - despite the frustration it can cause developers, and the odd gap in quality or utility here and there - is famously opinionated about user experience and design, and its promotional videos of the visionOS interface suggest a similar look and feel to macOS and iOS, subtle gesture recognition, and direct integration with other Apple devices.

The decision to position the Vision Pro as an AR-first device is an interesting (but, I would argue, necessary) one for Apple to have made, given that the market here has so far failed to coalesce into something with even a potential for mass appeal. Unlike the iPhone or the iPod before it, there is no significant existing userbase of AR headsets for Apple to leap-frog. Most first-time Vision Pro users - whether they are simply trying the device out, or receive one from an incredibly generous Father Christmas - are likely never to have experienced any kind of AR headset before, and so will be unable to compare it to other offerings. In some ways this is obviously an advantage for Apple: they will benefit from user familiarity with macOS and iOS, and if users are able to engage with the apps they’re already familiar with, then this will help to establish the norms and expectations that the AR experience requires. On the other hand, Apple is opening itself up to a potentially vulnerable position: it’s entirely possible that its assumptions about how users will wish to interact with AR software are wrong, and that a more nimble competitor will identify a more fitting user experience and force Apple to adapt. That could prove a costly scenario when a core part of the Vision Pro’s promise is standardisation and compatibility with the wider Apple ecosystem.

The Vision Pro will be directly competing with the Quest Pro first and foremost - and though I would probably bet on Apple, I don’t think it’s a foregone conclusion that they will come out on top here.

Just hanging out on the stairs, spatially computin’ with my terrifying future-goggles

The Quest Pro is, for one, significantly cheaper than the Vision Pro. It’s also most likely to appeal to VR enthusiasts who have already invested in one or more VR headsets and who are willing to spend more - but not too much more - on an AR equivalent. The attraction of the Vision Pro is in the raw power of the device, and its integration with the wider Apple suite of devices and software. Whereas I would imagine Apple’s primary customers for at least the first few years will be hobbyists and developers, the Quest Pro may, with time and some work, benefit from an established base of Quest users and provide a more affordable alternative, without needing to worry as much about the consequences of experimentation with interaction and UX.

Meta has not, at least in my opinion, invested enough in the operating system of its devices so far to provide the kind of experience that visionOS claims to offer. I would expect to see much more development in this area, though, and a real push to provide compatibility with other devices and services - otherwise the Quest Pro risks becoming a ‘lite’ version of an AR headset, or a VR headset with ‘some’ AR functionality, rather than a direct competitor.

Apple also stands to benefit from an army of developers, many of whom have already established a healthy income from macOS and iOS software, and who are familiar with Apple’s development ecosystem. Though the same could in theory be said of Quest developers, the Quest’s existing reputation as a gaming headset rather than a ‘spatial computer’ makes it less likely that non-game software will be developed for Quest headsets in the first place.

Unification

Whatever the future of the Vision Pro and its Quest counterpart, one thing is clear: big money is being spent on closing the gap between AR and VR. Though I don’t get the sense that Meta has understood the need for this in quite the same way as Apple appears to have done, I see Apple’s announcement of the Vision Pro as an attempt to insert a crowbar into the space and allow some light to shine on the fundamental problem: VR gaming and a sprinkling of commercial AR / VR applications are not enough, and a more general-purpose offering is required if AR and VR are ever to reach critical mass. AR-first headsets are, in my view, the way to achieve this: they allow people to remain oriented in the real world whilst using immersive software, and give users the option to tune the real world out for things like games or other entertainment in VR mode (or something akin to it).

Even in the best-case scenario for Apple, the price tag of the Vision Pro means it’s unlikely to reach a critical mass for a number of years yet. For that to happen, the price will need to come down: particularly if Meta or another competitor manages to provide a comparable experience for a lower price. I suspect heads at Microsoft are currently being turned back towards the HoloLens too: it is not beyond the realm of possibility that a much more consumer-oriented HoloLens appears in the not-too-distant future.

The key thing that Apple - and to a lesser extent Meta - is offering is the unification of AR and VR. AR-first headsets are likely to remain a hobbyist or luxury purchase for a while yet, but Apple has plenty of money to throw at the hardware and software for a good number of years in order to create the market it appears to believe will exist. This is not without precedent for Apple: the Apple TV (the hardware) successfully bullied its way into enough homes to justify the creation of Apple TV+ (the streaming service) - which now competes directly with other major streaming services and has helped to broaden Apple’s ecosystem.

Where the Quest Pro is pushed along by its VR cousins, the Vision Pro is attempting to meet it from the other direction: by providing a fully fleshed-out AR experience with a smattering of tasteful VR features. The natural convergence point still lies some way off - one device that can do both AR and VR well, and seamlessly - but as a longtime user and developer for the XR ecosystem, I’m happy to see that something I’ve felt was needed for many years is now being addressed by companies with deep enough pockets to make it happen.

Whether or not a market for AR-first headsets is viable remains to be seen. Despite the jokes about Apple’s dystopian future vision, and that stupid, stupid ‘look at my eyes!’ feature, it’s clear that Apple is willing to shovel more money than you or I are ever likely to see into this potential pit, and I doubt they’re doing it for a laugh. The prospect of spatial computing has long been the stuff of sci-fi, but in the Vision Pro, we, at last, have the first glimpse of what it could look like with a serious, consumer-oriented approach.

If my gut is correct, and Apple does manage to pull this off over the long term, then I believe spatial computing will establish itself as a much more accessible and natural-feeling way to engage with modern technology than we can currently imagine. Ignore the bulky and ridiculous headsets for now - nobody wants to wear something like that for any serious length of time - but focus instead on the combination of hardware and software. Establishing spatial computing as a viable proposition for ordinary people is, I believe, going to be a long-term project, and though there are plenty of reasons to worry about the worst-case scenario of a world where we’re looking through each other rather than at each other, there is also reason to be optimistic. It’s not difficult to imagine that gaze & speech could be a more accessible alternative to the disabled and infirm than tapping on a screen or moving a mouse - and the potential for contextually-aware assistance and information without the need to hold up a phone seems to me, at least, an attractive proposition.

Vindication

On the day of Apple’s announcement, my workplace held a WWDC watch party. As part of the proceedings, and because rumours of Apple’s announcement of some kind of headset had become something of a relentless background noise, we decided to place bets on the price and ‘killer app’ of whatever was announced. I’m happy to say, dear reader, that I absolutely nailed it, aside from the apparently impossible dream of ‘avatar legs’, which remains, for now, an elusive goal even for tech companies with infinite money. You’ll have to forgive the handwriting: it turns out writing on a whiteboard is not a skill I possess.

If only we’d bet real money :(

Ultimately, only time will tell whether the Vision Pro will be a success. I’m certainly feeling rather smug at my accurate prediction for where Apple was heading with the Vision Pro, but, if it all comes to nought, then I will take comfort in the fact that I was only as wrong as Apple were - and that it didn’t cost me several billion pounds to find out!

Not A Robot is a reader-supported publication. Since I can apparently predict with 100% accuracy the future moves of tech giants, a paid subscription may ultimately prove a wise investment!

]]>
<![CDATA[Tinkering with Tiny]]>https://tomhalligan.substack.com/p/tinkering-with-tinyhttps://tomhalligan.substack.com/p/tinkering-with-tinyTue, 06 Jun 2023 01:06:28 GMTWhile browsing Hacker News recently, I came across a post about a new game engine called Tiny. The pitch had me intrigued from the get-go:

The virtual console that offers an easy and efficient way to build games and other applications with its Lua programming support, hot reloading, and 256-color maximum.

I’m always on the lookout for new tools to prototype things: making any kind of game at all is usually a big investment of time and effort, and there’s nothing worse than spending too much time on something you don’t end up enjoying - especially if you’re just playing around!

I’m also not an artist, so attempting my own ideas in my free time without the support of a talented team often just leaves me irritated at my own inability to make things look passable, never mind good, so a hard constraint on graphical capabilities is, frankly, a relief! 256 colours? Be still, my beating heart!

What’s more, hot-reloading and Lua support suggest that Tiny is, above all else, aimed squarely at programmers looking to do cool things quickly, rather than amazing things ‘properly’ - and that, for me, sealed the deal. It also helped that the example game on Tiny’s documentation website is a nifty little version of one of my favourite retro games: Breakout.

Glorious retro vibes - how can I resist?

In this post, I’ll take us through the first basic steps of starting a project with Tiny. Though this post will be rather code-heavy, I’ll try to keep everything simple so anybody can follow along.

First Look

Tiny is simple to get up and running with. Once you’ve downloaded the command-line tools and added them to your PATH, you get started in the terminal with the comically straightforward:

tiny-cli create my-game 

This command starts up a small configuration process which asks you to fill in a few basic details and then plonks the necessary files into the newly-created ‘my-game’ folder.

Perfection

In the ‘game.lua’ file, you’ll find the ‘stub’ of a Tiny game, which is composed of three functions:

  • _init()

    • This is where you perform any initialisation your game requires.

  • _update()

    • This is where you update your game’s state. This function is called on every frame.

  • _draw()

    • Like the _update() function, this is called every frame. This is where you should write code which draws things to the screen.

You can see this for yourself (and play around) in the Tiny Sandbox - though it would probably be more helpful if the Sandbox came pre-populated with something functional for you to tweak, rather than landing you with a blank canvas!

The Lua Language

Lua is a scripting language which has become something of a de-facto standard for indie developers and modders, and has found a home even within large development studios owing to its flexibility and rapid iteration times. Even if a game isn’t written purely in Lua, it’s common for developers to integrate a Lua interpreter into their game, so that ideas and features can be prototyped and tested quickly. Indeed, this is essentially what the developers of Tiny have done: the game engine itself is written in Kotlin, and Lua scripts are interpreted at runtime.

This setup avoids irritating compilation times, and allows you to make changes to your code and see the results immediately, without needing to draw a distinction between ‘edit-time’ and ‘play-time’.

To run the game as-is, execute the following command in your terminal:

tiny-cli run my-game

This will open up a new window where you’ll see your game running. Unfortunately, since you haven’t actually made a game yet, you’ll just get a black screen:

Thrilling…

Drawing Something

Undeterred, we push on! Now that we have our stub ready to go, let’s get something drawn on the screen.

To do this, we need to open our game.lua file and populate our _draw() function with something. Let’s try something along these lines (the exact coordinates, radius and colour are just rough values to get us started):
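
function _draw()
    gfx.cls(0)                          -- clear the screen to black on every frame
    print("Welcome To Tiny", 96, 64)    -- roughly centred text, a quarter of the way down
    shape.circlef(128, 128, 10, 5)      -- a filled circle: x, y, radius, colour
end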

Let’s break it down:

  • gfx.cls(0)

    • This means ‘Clear the screen’. We want to do this every time the draw function is called (i.e. on every frame), since any animating objects would otherwise leave a trail of whatever the previous frame drew. The parameter 0 (zero) passed into the gfx.cls function just translates to ‘black’ - so this means “Start with a black screen on every single frame”.

  • print

    • This prints some text on the screen. In this case, the text we’re displaying is “Welcome To Tiny”, and the following parameters are just positional coordinates. The screen size is 256x256 pixels, so all we’re doing here is roughly positioning the text halfway along the x-axis (allowing some space for the text to appear centred), and a quarter of the way down the y-axis. If I’d used zero for x and zero for y, then the text would appear tucked up into the top-left corner of the screen, since the coordinate system starts in the top-left corner.

  • shape.circlef

    • This draws a circle, filled with a colour. Here, the parameters are as follows:

      • X position

      • Y position

      • Radius

      • Colour

        • Colours are a bit odd in Tiny - they are defined in the _tiny.json file which lives alongside your game.lua file. This doesn’t appear to be documented anywhere, so it took a little mucking about for me to discover it! In this case, ‘5’ corresponds to hex code #8FB347 - which is a grassy-green colour.

What does the above give us, I hear you ask? Behold!

It’s not much, but it’s a start!

Animation

Drawing a static image is all well and good, but it’s not very interesting, so in the spirit of my Digital Doodle from a few weeks ago, let’s get our circle moving!

We’ll replace our code with something like the following (the specific numbers for speed and magnitude are just reasonable guesses - tweak them to taste):
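
function _init()
    startY = 128          -- the circle's resting Y position
    circleY = startY      -- the circle's current Y position
    bounceSpeed = 2       -- how fast the circle bounces
    bounceMagnitude = 20  -- how far it travels from startY
end

function _update()
    -- a sine wave driven by the time since the game started (tiny.t)
    -- gives a smooth, repeating offset from the starting position
    circleY = startY + math.sin(tiny.t * bounceSpeed) * bounceMagnitude
end

function _draw()
    gfx.cls(0)
    print("Welcome To Tiny", 96, 64)
    shape.circlef(128, circleY, 10, 5)
end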

Flicking back over to the Tiny game window, we can see our game has updated instantly with our new, glorious animation code:

Ship it!

In our new code, we’ve filled in our _init() and _update() functions, and modified our _draw() function:

  • _init()

    • We set up the starting Y position - startY - for the circle we drew earlier

    • We set a new variable - circleY - specifying the current Y position for the circle

    • We give our circle a bounceSpeed - governing how fast it should move

    • We give our circle a bounceMagnitude - which controls how far it should move

  • _update()

    • We use a relatively simple calculation based on a sine function to determine an offset from the starting position (startY). You can find plenty of explanations of the underlying maths elsewhere, but the end result is a nice, repeating periodic value which gives a smooth, pendulum-like effect when rendered visually.

    • Note the use of tiny.t as a parameter in the call to math.sin: this provides the current time since the game started. This is a very useful value to keep track of, alongside its little brother tiny.dt, which represents the time between frames.

  • _draw()

    • Here we’ve just replaced the y coordinate of our circle with a reference to the circleY variable we set up earlier.

How does this all come together?

  • _init() is executed once, as soon as your game begins

  • _update() is executed on every frame

  • _draw() is executed on every frame, after _update()

Input

A game wouldn’t be very interesting if the player couldn’t do anything, so the last basic piece we need to figure out in order to start making something interesting is user input. Thankfully, Tiny makes this easy too, so let’s update our code once again (my colour choices here are just arbitrary picks from the palette):
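
function _init()
    startY = 128
    circleY = startY
    bounceSpeed = 2
    bounceMagnitude = 20
    moving = true                  -- is the circle animating?
    colour_moving = 5              -- palette colour whilst moving
    colour_stopped = 8             -- palette colour whilst stopped (any entry from _tiny.json will do)
    current_colour = colour_moving
end

function _update()
    -- toggle the animation whenever the space key is pressed
    if ctrl.pressed(keys.space) then
        moving = not moving
    end

    if moving then
        circleY = startY + math.sin(tiny.t * bounceSpeed) * bounceMagnitude
        current_colour = colour_moving
    else
        current_colour = colour_stopped
    end
end

function _draw()
    gfx.cls(0)
    print("Welcome To Tiny", 96, 64)
    shape.circlef(128, circleY, 10, current_colour)
end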

In our _init() function, we’ve added:

  • A new variable called moving, which lets us control whether the circle is animating or not

  • A new variable called colour_moving which we’ll use to specify the colour of the circle if it’s moving

  • A new variable called colour_stopped which we’ll use to specify the colour of the circle if it’s stopped

  • A new variable called current_colour which we’ll use to keep track of the correct colour to use on each frame.

In our _update() function, we’ve added:

  • A check against ctrl.pressed

    • We pass in keys.space as the parameter, which means we’re asking ‘Was the space key pressed?’

    • If the space key was pressed, then we negate (invert) the value of our moving variable. This means that if moving is currently true, then we’ll set it to false, and vice-versa.

  • A check against the value of moving

    • We will now only update the circle’s position if moving is set to true.

    • If moving is true, then we set the value of current_colour to that of colour_moving

    • If moving is false, then we set the value of current_colour to that of colour_stopped

And finally, in our _draw() function, we’ve replaced the colour parameter for our call to shape.circlef with a reference to our current_colour variable.

You’ll notice that the animation is now a little janky and doesn’t smoothly resume from the circle’s current position as you toggle between moving and stopped. This is because our animation calculation is based on the time since the game was started. If we want to fix this, we need to create a new variable which keeps track of time, but only increases when we want it to. As a challenge, I’ll leave it to you to work out the solution! Astute readers may have noticed a subtle hint earlier on.


Final Thoughts on Tiny

I’ve been enjoying mucking about with Tiny. It makes a nice change from the big game engines, and there’s something relaxing about stripping away all of the bells and whistles that other game engines provide and just working within some strict limitations. There are a few frustrations: the documentation is incomplete and doesn’t explain some things particularly well, and I’m fairly sure I’ve found a bug in one of the math functions which led to me writing my own replacement. No doubt there’ll be other issues I discover along the way, but it’s still early days for this game engine and I’m enjoying what I’ve played with so far.

If you’ve ever fancied making a game, or would just like to play around with something almost pathologically retro, then you won’t go wrong with Tiny. If the authors keep up development, I can see this engine being a great fit for the indie retro scene. They’ll need to resist the temptation to keep adding new features if they really want to emulate the old-school console feel, but they’ve done a great job so far and I’d recommend it to any hobbyist or interested indie dev!

Wrapping up, and next steps

By putting together everything we’ve explored above, you can see how a game can start to take shape pretty quickly! You won’t be getting any super fancy graphics or incredible physics simulations any time soon with Tiny, but with a little experimentation, you’ll be able to pull something together in no time!

I’ve decided to start a project using Tiny to make a small game, just for fun. It’s called Tiny Gardening: a game where you need to keep your garden alive by watering your plants whilst battling the weather, pests, and the inevitable passage of time. I’ll be posting regular updates as I make progress. When I was first learning how to code (many years ago), one of the very first programs I wrote was a text-based simulation of an orange tree, where you had to water the tree and collect the oranges regularly. I don’t know why, but there’s something about simulating plants which keeps coming back to me - so why not continue the theme with Tiny?

There’s not much to see so far - just the basic skeleton of a game - but I’ve done enough work on the internals that I’ll be able to start making some decent progress soon: some generic Entity classes, a debug tool to check object boundaries and (soon) collisions, and a very basic state system to allow me to easily manage the different states the game is allowed to be in.

If you’d like to follow along, you can check out the source code on GitHub, and I’ll upload the latest version here for you to play whenever there’s any meaningful progress.

As always, if you’ve enjoyed this post and would like to receive future updates (and, in time, a game that even your gran might like to play) then hit the Subscribe button below. If you’d like to support this newsletter then a paid subscription will help me to keep writing this kind of content and more.

Are you interested in playing around with Tiny? Let me know your thoughts in the comments, and feel free to share this post for anybody who may be interested!

If you enjoyed this post, check out the follow-up.

]]>
<![CDATA[Self-Hosting Shenanigans]]>https://tomhalligan.substack.com/p/self-hosting-shenaniganshttps://tomhalligan.substack.com/p/self-hosting-shenanigansSun, 28 May 2023 02:53:37 GMTI’ve been thinking recently about self-hosting - running your own servers and services to suit your needs and facilitate your personal and professional life. In an era where technology creeps into our homes from every angle (even the humble doorbell is now an internet-aware communications device!), it seems to me that seeking to coalesce and control this influx of binary butlers, silicon servants and artificial assistants on our own terms would be A Smart Thing To Do.

Whether we want to control and monitor devices in the home, back up our photos and videos, or synchronise notes, recipes, and documents between devices, our corporate overlords are keen to offer us ‘cloud’ solutions by the bucketload: naturally incompatible with their competitors and usually expecting you to forget any privacy concerns you might have about uploading your entire personal life to what is, and will forever be, just somebody else’s computer.

With the low barrier to entry and enticingly smooth user experience, most people are happy to tap the ‘Send to Tim Cook’ button and forget all about it.

Of course, privacy concerns aren’t the only factor which might push you towards self-hosting. Perhaps you want to take advantage of the choice and freedom to try any number of software solutions and services before settling on whatever suits you best, or maybe you want the ability to ensure you can still access what you need if the power goes out or the network goes down.

For those of us who enjoy tinkering with tech, self-hosting also provides an opportunity to try out new ideas, stitch things together in unexpected ways, and have fun without constantly having to worry about an endlessly growing list of service subscriptions breaking the bank.

In this post, I’ll walk you through a little of my setup - and read to the end for a little Substack-related surprise!

Taking Control

The first thing any self-respecting self-hoster needs to do is research which hardware is available to suit their requirements. The second thing they need to do is forget everything else and buy a Raspberry Pi instead - since you’ll be hard-pressed to find a more suitable and affordable piece of kit to start your self-hosting adventure. Of course, if you have an old laptop lying around then this could also serve the same purpose - but I favour the Pi for its tiny footprint (mine sits on top of a speaker) and low running costs. If you do opt for the Pi, it’s worth investing in a high-capacity microSD card, especially if you’re planning on using it for file backups.

Tiny machine, big ambitions

As an aside - if you’re not using a Raspberry Pi for your self-hosting experiment, I’d still highly recommend using Linux as your operating system. You can probably get by with a Windows or Mac machine, but most of the software you’ll want to play around with will be expecting a Linux host.

Once you have your server machine ready, you’ll certainly want to give it a fixed local IP address. Your home WiFi router should allow you to assign a fixed IP to a specific device on your network, even if your external IP changes (most consumer ISPs don’t provide fixed IPs as standard - and acquiring one usually comes at an extra cost, if they’ll provide one at all).

A fixed local IP means you will always be able to connect to your self-hosted services at the same address. This is very important, so make this one of the first things you do!

Docker

Docker is a container system which allows you to quickly and easily deploy software in a standardised way. It enables software developers to provide ready-to-run versions of their software and all of its dependencies to their users, whilst greatly reducing the likelihood that an individual’s particular setup will interfere with things. This is ideal for software which provides networked services - since if you have a lot of users or your hardware changes often, you want to be able to adapt and scale with minimal fuss.

Most things you’re likely to want to play with initially will provide Docker images, though the specifics vary. You may find:

  • Docker images on Docker Hub

    • This is the ideal scenario - allowing you to download and run the software with no fuss

  • A Dockerfile in a developer’s GitHub repository

    • A little more involved - the Dockerfile contains instructions for Docker to build the required image, which can then be run as though it were pulled from Docker Hub

  • A docker-compose.yml file

    • For more complicated software which uses third-party dependencies (e.g. a database system), a docker-compose file tells Docker which different pieces of software to run and how to connect them together. For a lot of self-hosting software, you’ll want to either find or write a docker-compose file to keep things nice and tidy.

Since the whole point of self-hosting is to run services within your home network, there can be a little network-related fiddling to get things up and running: mostly keeping track of which ports different services are listening on, but nothing too complicated. Docker will allow you to configure things easily - but you’ll need to refer to the setup guide of each piece of software to know what you need to do.

You can install Docker through your system’s package manager - though things will likely be easier if you use the terminal and simply run the following command (assuming you’re on a Raspberry Pi - note that the Debian package for the Docker engine is called docker.io, not docker):

sudo apt-get install docker.io docker-compose

Nextcloud

Nextcloud is a file sync tool, collaboration tool, chat system, and office suite all rolled into one. It’s certainly not without its quirks - but it’s highly extensible and provides a ton of functionality which will get you started with the basics of self-hosting. It’s not the quickest software on the planet when running on a Raspberry Pi, but it does the job and is ridiculously simple to set up given all that it provides.

If you want to try Nextcloud, copy the text below into a docker-compose.yml file and change the passwords / usernames wherever you see something like ‘enter_password_here’. Even though you’re self-hosting and not (currently) exposing any of this to the wider internet, it’s still worth sticking to good security practices and picking a good, strong password.

version: '2'

volumes:
  nextcloud:
  nextcloud_db:

services:
  # The MariaDB database where Nextcloud stores its data
  db:
    image: yobasystems/alpine-mariadb:latest
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - nextcloud_db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=enter_root_password_here
      - MYSQL_PASSWORD=enter_user_password_here
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=enter_username_here

  # The Nextcloud application itself
  app:
    image: nextcloud
    restart: always
    ports:
      # Host port 8080 maps to port 80 inside the container
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      # These must match the values you gave the db service above
      - MYSQL_PASSWORD=enter_user_password_here
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=enter_username_here
      - MYSQL_HOST=db

Next, you’ll want to open the terminal, change to the directory where you’ve saved the docker-compose.yml file, and type

docker-compose up -d

You’ll then be able to open a browser, visit http://localhost:8080 (or http://<your-pi-ip>:8080 from another machine on your network), and start playing around with your Nextcloud installation.

This is a very basic setup, but it will give you a taste of what Nextcloud offers. In addition to the base software, Nextcloud also has an integrated app store which allows you to add extra features, many of which are entirely free (though some do require paying for a third-party service). Of the free offerings, one of my favourites is Cookbook - which allows you to paste a link to a recipe you’ve found and produces easy-to-follow instructions and timers:

This recipe is actually amazing

For a more up-to-date and in-depth guide to getting Nextcloud set up so you can access it from anywhere, check out this guide from HowToGeek - though wherever you see a reference to Apache (a web server), I would recommend using Caddy instead, which is much easier to configure.
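
To illustrate just how much easier: a complete Caddy configuration for proxying Nextcloud can be as short as this (the domain here is a placeholder for your own):

nextcloud.example.com {
    reverse_proxy localhost:8080
}

Caddy will even obtain and renew an HTTPS certificate for you automatically.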

Nextcloud also provides a mobile app so you can sync things directly from your phone and access your shared documents on the go.

Home Assistant

Home Assistant is where self-hosting really starts to shine. At its core, HA is intended to provide a central hub from which you can manage any of your smart devices such as sockets, lamps, switches, and essentially anything else with a network connection.

It can be a little opaque when you first use it; working out the difference between devices, entities, and helpers requires a bit of experimentation and reading - but once you’ve understood the basics, you’ll find that beneath its somewhat clunky surface, it’s actually a pretty elegant system which allows you to manage your home for any situation.

On top of its direct support for devices, there are also dozens of integrations with external services, meaning you’re able to do things like:

  • Adjust the brightness of your lights / control your curtains or shades based on the elevation of the sun (yes, really)

  • Turn on your smart TV when you walk into your living room

  • Monitor and control your energy usage based on your bank balance
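
Automations like these boil down to surprisingly readable YAML under the hood. As a rough sketch (the entity name here is a placeholder for one of your own devices):

automation:
  - alias: "Lamp on at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.living_room_lamp
        data:
          brightness_pct: 60

You can build these through the UI rather than by hand, but it’s useful to know what’s being generated for you.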

There’s also a vibrant community sharing tips and tricks on how to get the most out of your setup. Check out the Community Blueprints for inspiration!

Just like Nextcloud, Home Assistant is simple to set up with Docker Compose:

version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    volumes:
      # Replace PATH_TO_YOUR_CONFIG with a folder where HA should keep its configuration
      - /PATH_TO_YOUR_CONFIG:/config
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    privileged: true
    # Host networking lets Home Assistant discover devices on your local network
    network_mode: host

You can find more information about installation and configuration in the official Raspberry Pi guide.

A simple lamp brightness control in the Home Assistant dashboard

You’re also able to set up your own ‘helper’ entities which you can use to keep track of different values provided by other software or services - though this will require a bit more in-depth fiddling to become useful.
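
As a taste, a simple numeric helper can be declared in configuration.yaml like this (the name is just an illustration - it’s the sort of thing my subscriber counter further down is built on):

input_number:
  substack_subscribers:
    name: Substack Subscribers
    min: 0
    max: 10000
    step: 1

Helpers can also be created through the UI, but defining them in YAML keeps everything in one place.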

Home Assistant is just one of several available smart-home hubs, but it’s one I’ve stuck with for a while now. The flexibility it provides and its community of users are fantastic, and combined they allow you to chain together devices, software and ‘the rest of the world’ in ways no paid-for service is currently able to offer. If you want to be the master of your own domain, then Home Assistant is for you!

As with Nextcloud, Home Assistant also provides iOS and Android apps for when you’re ready to expose your self-hosted installation to the wider internet.

n8n

A recent discovery for me, but something I’m definitely excited to play around with more: n8n (which is, by the way, a terrible name) is software which allows you to build automation workflows spanning a dizzying number of services. What this means, roughly, is that you can define ‘if-this-then-do-that’ logic by connecting nodes together in a simple drag-and-drop interface.

Every day, pull down an RSS feed and transform it into an email-friendly newsletter, then send it

You can create workflows using a wide variety of trigger events - actions which occur on external services, or regular, scheduled tasks. There really is no limit to how complex you can get here - and if the integrations included with the base system don’t provide what you’re looking for, then there are a large number of community additions which you can install to provide extra functionality. It’s also relatively simple to write your own if you’re so inclined!

I haven’t yet explored everything n8n has to offer, but what I’ve seen so far has me excited to play around and create a digital Rube Goldberg machine of epic proportions.

n8n is probably only useful if you’re willing to expose your self-hosted services to the wider world, so refer to the instructions for a valid docker-compose setup. In my case, I removed the dependency on Traefik since I just use Caddy to manage proxying.
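
For local experimentation, though, a bare-bones compose file along these lines will get you into the editor (the named volume keeps your workflows across restarts):

version: '3'
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - 5678:5678
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:

Once it’s up, the editor lives at http://localhost:5678.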

A Treat for Substack

In all my excitement at discovering n8n, I decided to see what I could do when it came to Substack.

As I’ve mentioned elsewhere, the fact that Substack doesn’t offer an API or really any integration options at all is a source of great frustration to me, but ever the optimist, I decided I’d get something working, by hook or by crook.

To that end, I’ve created a little system which does the following:

  • Allows me to define subscriber goals in Home Assistant

  • Monitors for ‘New Subscriber’ emails from Substack in n8n

  • When a new person subscribes, increments the counter in Home Assistant - which sets the brightness of a lamp to the percentage of my subscriber goal reached, and starts a playlist on Spotify to celebrate!

Here’s what the n8n workflow looks like:

Look on my Works, ye Mighty, and despair!

And here, in all its glory, is my ‘Substack Subscriber Goals’ dashboard in Home Assistant:

Yes, I am a gigantic nerd

In the above image, you’ll see an input field which lets me set my current subscriber goal. If I set it to say, 1000 subscribers, then my ‘Subscriber Goal Progress’ immediately drops down to 3%, which also sets my lamp brightness down to 3%, and my meagre home-office setup becomes decidedly more gloomy.

I can only get any work done if you subscribe!

So, if you enjoyed this post, and want to see more like this, then help me keep the lights on (literally), share & subscribe!

If you don’t want me to have to work in the dark, in silence, then please consider subscribing

If you’ve got your own self-hosting setup, or have any questions about mine, then feel free to comment below and let’s see what we can do to take back control of our tech!

]]>
<![CDATA[Introducing: Digital Doodles]]>https://tomhalligan.substack.com/p/introducing-digital-doodleshttps://tomhalligan.substack.com/p/introducing-digital-doodlesSun, 14 May 2023 15:50:53 GMTIn this post, I’d like to introduce a recurring feature: Digital Doodles. Every so often, I’ll be posting a tiny code project aimed at bringing a little light to your day and, hopefully, inspiring you to try something new! Wherever possible, I’ll be sharing what went into it, and encouraging you to share your own; or even to amend and improve on what I’ve shared.

The brief for a Digital Doodle is (loosely) as follows:

  1. Easily shareable: if you can produce GIF, video, or audio output from whatever it is you’ve made, then that’s great for sharing!

  2. Quick to produce: you should only spend as much time on a Digital Doodle as you’re willing to. In my case, it’ll probably be no more than an hour or two for each.

  3. Fun for you: the goal is not to impress others, or produce something amazing. It’s to play around, flex your brain a little, and enjoy yourself.

These are not ‘rules' I’m going to hold myself (or anybody else) to particularly strictly, but ‘Shareable, quick, fun’ seems to me like a good set of constraints to at least try to abide by!

If you’ve never coded before, or are just starting out, then I’m hoping this feature will be a good place for you to practice, feel inspired, or just have some fun!

So, without further ado(odle)…

Welcome to BoopLand

A world populated entirely by colourful circles, wandering until they hit walls. Why would you create such a thing, I hear you ask? Because I can, my friends. Because I can.

Inspired to give P5.js a whirl, I decided to brush up on some of my incredibly rusty JavaScript skills and implement the most pointless thing I could think of. This will win no prizes for imagination or technical skill, but it was fun to try something new. Audio is implemented via the excellent ZzFX synth library - which is well worth trying out for all of your web-based zip-zap needs.

You can try out BoopLand for yourself over at CodePen. It will start automatically - and you can stop or restart it by clicking anywhere in the bottom window. You may need to click in the canvas area to hear the audio.

I’ll probably play around with P5.js some more - there are some very cool and fun examples in their 2022 showcase, so if this kind of mucking about is your cup of tea on a rainy Sunday afternoon, then I can think of worse things to spend your time doing!

I think it’s probably been around a decade since I last used JavaScript in any meaningful sense, so this was a nice way to revive some deeply buried knowledge and get back in the swing of things.

If you’d like to fiddle with BoopLand, just visit the CodePen - I’ve left some variables near the top of the JavaScript file which will let you tweak some basic things, and then you can dive further into things if you’d like to make some more serious changes.

Show Me What You Got

If, like me, you like to try things out just for the fun of it, then please do get in touch in the comments and share whatever random, utterly pointless gems you’ve been working on. I’d like to make this a recurring feature with contributions from subscribers, so if you’d like to modify BoopLand, or share your work, feel free!

If you'd like to take part in Digital Doodles, have something to share, or just fancy a semi-regular dose of pointless creations dropping into your inbox, then please subscribe below!

]]>
<![CDATA[Inverting The Tower]]>https://tomhalligan.substack.com/p/inverting-the-towerhttps://tomhalligan.substack.com/p/inverting-the-towerMon, 08 May 2023 03:29:03 GMT

I'm not asking you, I'm telling you. These creatures are the only sentient race in the sector and they're made out of meat.

- Terry Bisson [They’re Made out of Meat]

Humanity has long struggled to reconcile its differences. Across borders, faiths, cultures and time itself, we’ve invented uncountable and unfathomable punishments to mete out against one another; to wage war against the barbarians, the heathen and the alien. Though we often like to think that we in the modern era are more civilised, peaceful, and more enlightened than those in the ignorant past, you don’t have to look very far, wherever you may be in the world, to find bigotry and violence being played out over and over again.

“Wouldn’t it be better”, goes the common lament, “if we could all see that underneath, we’re the same?”.

This idea is attractive, if reductive. We are all the same in some sense: we’re made of flesh and bone, and we all need shelter, food, companionship and comfort. We all lose something of ourselves when our basic needs aren’t met - we may become harder, more prone to aggression or anger, or we may become depressed, more sullen and reclusive. In that respect, we share a common human experience. And yet, we stand alone. Our thoughts and emotions stubbornly defy our ability to express them. Our internal worlds are as rich and varied as the earth beneath our feet, but our ability to convey ourselves to one another is fraught with danger and ripe for misinterpretation.

Behind every great text is an author frustrated with their inability to find the right words. Every canvas invites the artist to describe - but not quite - their vision. Every soaring orchestral masterpiece is a scream from the soul of the composer: please understand, please hear me.

An Unfathomable Overture

We are all the same; we are all islands, trying in vain to build bridges.

The Tower

“Behold, they are one people, and they have all one language, and this is only the beginning of what they will do.”
- Genesis 11:5–7

Most of you will have at least some passing familiarity with the story of the Tower of Babel. For those who don’t, the story goes something like this:

Humanity, united after the flood (of Noah fame), and sharing a common language, decided to build a great tower up to the heavens. God, seeing this, and being affronted by their pride, decided to destroy the tower and decimate their ability to speak to one another, resulting in the varied languages and cultures we see today.

Putting the theological considerations to one side, it’s not difficult to imagine the potential positive implications of a world where all of humanity spoke a single language. If you’ll allow the indulgence - surely we can agree that fewer opportunities for confusion and more opportunities for collaboration would result, at least in theory, in a more pleasant and cohesive society? Obviously, this is a wild oversimplification - we would still, of course, find reason to disagree - but a common language would allow us, despite its imperfections, to at least share our thoughts more easily with one another; to work together and to navigate the world in confidence.

Instead, we are condemned to our imperfect reality: we must work to convey meaning to one another, to overcome differences in language and culture in order to make progress and find peace. Our internal world, too, must be translated into some approximation of its true meaning. We condense our intangible thoughts and feelings into wet ink on a dry page; forever trying and, often (if not usually), failing to capture it perfectly. How arrogant were the ill-fated builders of Shinar, to believe they could reach the heavens, when we, as humans, can’t even reach each other?

Talking to The Machine

In a post-Babel world, humanity has stumbled through to the modern era relatively unscathed. We haven’t, as of yet, annihilated ourselves (though we’ve come pretty close), and we’ve done a pretty good job of connecting to one another and overcoming our language barriers. The age of the internet has arrived in all its glory, allowing people separated by entire oceans to speak freely with one another, aided by translation tools, the ability to share art, music, poetry and pictures. Never before have we had the means to express ourselves in such a varied and complex manner, and to reach such a wide audience. If God is real, they must be absolutely furious. A tower to heaven is one thing, but animated GIFs? Surely, we’re due for a smiting.

This technicolour dream would be impossible, of course, were it not for the humble computer. Where once we may have been restricted to the spoken word or the written page, we are now unencumbered by audible distance or the physical medium; our thoughts now massaged into electrical impulses via the tap of a keyboard, squeezed into cables traversing the sea floor, blasted into space, beamed back down again, and transmogrified for consumption by hungry eyes and ears, to millions of people, at the speed of light.

All of this is made possible by an emulation of God’s wrath against the architects of Babel’s tower: a myriad of programming languages, utterly foreign to the vast majority of humanity - yet several orders of magnitude less complex than the languages we speak and think in daily - bridge the gap between man and machine, constantly evolving and improving, bringing us ever closer together and yet still, somehow, keeping us as distant from one another as we’ve ever been.

As we’ve introduced machines to the world, and sent our thoughts and dreams off, hitched to a thousand digital wagons, we may have inadvertently set ourselves on a path to increased isolation and separation from the natural world and, by extension, each other.

The Mule of Insufferable Content

The internet is awash with subcultures and micro-communities, each with their own lore, history and dialects. We are freer to communicate than ever before, and as a result, the breadth of our human languages has grown enormously whilst our mastery - our ability, or even our will - to express what we actually mean to one another, seems to diminish in the face of the sheer volume of information we are confronted with.

Rather than finding the words to express derision at a corporation’s latest fumbled attempt to appear like a relatable human, we post pictures of a crab shooting lasers from its eyes. Instead of congratulating somebody for some achievement in their life, we send them tiny icons of cartoon people with party hats. For all the rich possibilities we have at our disposal, it’s often the case that we find humanity wading in the shallows, rather than diving into the depths.

Silence, brand
It’s still pretty funny, admit it

In a post elsewhere on Substack, one writer puts it beautifully when describing the problems we may face in conveying our thoughts, feelings and senses to one another, particularly in the fast-paced, digital town square. The full post is worth a read and a subscription, but this passage sums up ‘the problem’ neatly:

An abundance of words, of course, does not necessarily imply an interesting and compelling diversity of words. The quantity of words does not guarantee the quality of what is said. We encounter a mass of words, but it is a stark and monolithic mass, composed of abstractions, generic terms, and words that have lost their power to convey a distinct sense to the imagination. We’ve asked too few words to do too much, and now they are tired.

Despite our new-found capacity for communication, we often resort to babbling at each other, saying little of any worth or consequence. We mistake quantity for quality, whilst our inner being, in its lifelong struggle to be seen, heard, and understood, compels us to keep scrolling, to keep posting, and to keep consuming. Surely, somewhere in all this noise, we will find that spark of connection: the acknowledgement, at last, that we are here, part of this world.

The Machine Talks Back

Out of the maelstrom emerges a new presence. The tech world, and increasingly the wider public, is getting to grips with what we have tentatively described - though opinions vary - as Artificial Intelligence.

Though the technical limitations are still a fair way off from the kind of all-knowing, all-seeing AI you might see in a sci-fi film, the current iteration is still compelling and impressive to all but the most cynical among us. Those who don’t know or care about how it all works are often amazed or alarmed in equal measure at the experience of ‘chatting with an AI’. Even those who do understand the technical specifics are impressed at how human-like the experience can be. In stark contrast to the incoherent jabbering that has come to dominate online interaction, we are now seeing people sharing with astonishment the unconscious hallucinations of machines. From poetry to academic articles, long-form essays, images, and music, the online world is awash with humans obsessing over the creative output of a system which cannot even comprehend their existence, let alone its own.

We are witnessing a fascinating phenomenon - expression-by-proxy. People who haven’t written so much as a haiku are finding joy and at least some degree of fulfilment in compelling a computer to produce for them the poetry they would write, if they could. The picture they would paint, if only they could master a paintbrush. This is not to diminish the pleasure they find in exploring the possibilities of AI: it is certainly fascinating, and it’s a tool like any other. Who am I to criticise anybody for how they express or entertain themselves?

“Electric muses

Robots crafting verse sublime

Artificial soul”

- ChatGPT

There is, I believe, something fundamentally interesting about instructing a machine to do something for us. Those of us who are programmers by trade or hobby are familiar with the particular itch being scratched: making a computer bend to your will opens up a world of possibilities. As a programmer, I find the new wave of AI interest to be enthusing and absorbing. The possibilities of AI writing computer code, rather than raising concern for my place in an industry that is already incredibly fast-paced, have instead opened my mind to any number of potential projects and opportunities.

Indeed, it is AI writing code which has led me down the philosophical avenue which inspired this post. For decades, the ability to instruct machines has been relatively tightly constrained to a small number of people who actually use those machines. Programming is complicated - and it takes a long time and a lot of practice to become proficient. Over time, new tools are produced which make programming easier and faster - which in turn results in ever more complex systems, and a greater number of things to learn and master. The complexity grows, even if the accessibility improves. New programming languages are invented - often to suit very specific technical tasks, but also to facilitate more general use cases.

In a paradoxical way, learning to code has never been easier than it is right now, whilst the sheer scale of possibilities means it’s increasingly difficult to master any particular area. We have, in the tech industry, become incredibly expressive when it comes to making computers do what we want them to do. And, in turn, we are now beginning to see that expressivity turned back on us, and humanity at large. We exist in a time where humans can write or talk in their normal, spoken language, and software, written in a technical programming language, can parse that input, reason about it, and then provide a convincing response as though a human were providing it, or accurately perform the tasks demanded of it. To be clear - it is not the case that this response is fool-proof. There are often errors or inaccuracies, occasionally incoherent responses or outright fabrications; these are technical quirks of the systems we have built so far - but to deny that something very impressive is going on would be silly.

This new ability to converse with the machines seems to me to be at odds with how we have become accustomed, in the digital realm at least, to conversing with each other. We are fascinated by a machine’s ability to produce a coherent, articulate response, whilst person-to-person interaction is often typified by short, abstract messages which convey little, if anything, of our true thoughts and feelings. We imprison ourselves, trained to keep responses short and snappy, or to fit within some character limit, whilst we actively encourage machines to produce lengthy, thought-provoking essays and beautiful works of art.

Hello, Computer

If you’ll join me in a thought experiment, perhaps I can imbue you with the same curiosity that has been aroused in me by the bizarre state of affairs we find ourselves in:

Imagine that humanity continues to improve on existing AI systems. It’s not a huge leap of faith to predict systems which will be able to comprehend human instruction given in natural language and produce consistent, quality responses - whether that is to respond in kind (i.e. to reply as a human would, or to produce some output - say - a picture) - or to act on our behalf and perform some task (e.g. plan out your meals for the week and order all of the ingredients to be delivered to your door). Let’s assume, for the sake of this experiment, that humans will come to use AI more and more; that we become accustomed to speaking and writing to machines, rather than tapping on buttons in user interfaces.

I wouldn’t assume that it’s a foregone conclusion that AI systems will merely continue to adapt to human expression. Instead, I would consider it highly likely that humans will learn how best to talk to the machine; that we would, in tandem with improvements in natural-language interpretation in AI systems, begin to adopt dialects that produce the best results for our needs. We already see the beginnings of this in the growing micro-industry of prompt engineering - where people structure natural language in such a way that it produces the desired response from an AI system. Humans are remarkably flexible when it comes to our ability to transform our language and expression to fit a particular context; what happens when we are subsumed in an ecosystem driven by AI? It’s likely that all widely-spoken languages will quickly become valid inputs to these AI systems - and the AI will be able to freely translate between languages as required. It is not certain, by any means, that AI will be able to successfully interpret any natural language instruction any more than humans can successfully understand each other today, rife as we are with misinterpretation and imperfect clarity; so humanity will be forced to adapt to the machine, rather than the other way around.

Imagine, then, a world in which AI systems can successfully parse a reasonable - if contextually modified - input, in any language, and produce a valid, coherent response, or act in some way on our behalf. A world where humans, driven by the progressive encroachment of technology into all aspects of our lives and work, are compelled to adapt to the demands and constraints of the machine; just like we adapted to the keyboard and mouse rather than the handwritten note, and modify our language so that we may continue to function effectively in our societies. A world where personalised AI learns and reinforces the habits and predilections of the individual; where we need to spend less and less time engaging with the wider world and each other, because it has become trivial to have ‘the system’ do whatever we need it to do. A world where the machine is vastly more capable of articulating our thoughts and desires than we can possibly hope to be - constrained, as we have become, by the need to express ourselves in a way that the machine understands, rather than a way that unburdens the soul and conveys the meaning in our hearts.

What, then, becomes of language when it is no longer the sole domain of humanity? When, as a species, we share a common tongue with machines, rather than writing code for their consumption, will we lose something vital? When we learn to express ourselves in a way that machines understand, because that is what our modern world will demand of us, will we retain our ability to talk to each other, or will our language retreat, evolving into something more akin to a programming language? What happens when the relationship between man and machine is inverted: when the machine acts as interpreter between humans who are no longer capable of expressing themselves to one another? We may become vastly more capable of achieving any number of things thanks to the progression of technology, but the cost of every person becoming master of their own domain may be beyond our comprehension.

The Factory of Shared Experience

When we have 8 billion people who cannot talk to each other, and a machine that can talk to all of them, what does humanity look like? Are we building an inverted Tower of Babel - an impossible number of individualised, personalised dialects, each specialised for communicating with a single machine? What, then, becomes of our soul and our spirit - that individual, internal world - already imperfectly expressed by the full gamut of language, art, and emotion we’re capable of?

Are we building bridges, or walls?

Lighten Up

I am, contrary to the tone of this post, not an alarmist about AI. I fully admit to indulging in hyperbole here, and I don’t believe that things will ever become quite as bleak as I have imagined above. I find it an interesting exercise to let my imagination wander with these things, and to mull over the philosophical implications of ‘worst-case’ and ‘best-case’ scenarios. Realistically, we will probably build AI systems that comically fail to interpret our demands for a long time yet, even as steady progress is made. The way we interact with technology will, I suspect, change quite rapidly, and there will undoubtedly be changes in our societies as a result. I am hopeful, however, that more ‘natural’ interfaces with computers will result in a revitalisation of human communication, rather than the opposite: since we will ‘need’ to spend less time at computers, or tapping at screens, we will have more time to look up, at each other, and talk.

If we can re-discover what it means to communicate in natural language, perhaps our motivation to seek validation in likes and comments might subside a little. Perhaps the consequence of machines responding to us in well-written, articulate prose will be that we, in turn, rediscover the waning art of self-expression.

Whatever the long-term consequences of these changes may be, what’s certain is that we will gain an impressive toolkit with which to influence our world, to create and to learn; and with this ambitious arsenal, it’s my hope that we can conquer any number of problems.

Obligatory Parting Horror

It would be remiss of me not to mention one of the primary inspirations for this post: Paul Kingsnorth. We could not be further apart in our attitudes to technology, but Paul’s two-part essay beginning with The Universal stuck with me for days after reading it, and I would certainly recommend reading and subscribing. A teaser of the creeping horror you may find within:

Imagine, for a moment, that Steiner was onto something: something that, in their own way, all these others can see as well. Imagine that some being of pure materiality, some being opposed to the good, some ice-cold intelligence from an ice-cold realm were trying to manifest itself here. How would it appear? Not, surely, as clumsy, messy flesh. Better to inhabit - to become - a network of wires and cobalt, of billions of tiny silicon brains, each of them connected to a human brain whose energy and power and information and impulses and thoughts and feelings could all be harvested to form the substrate of an entirely new being.

You should definitely pay him a visit - you won’t be disappointed.

If you’ve enjoyed this post, or it has sparked some curiosity or creativity within you, then please consider subscribing. Any support is much appreciated!


