<![CDATA[The Codist]]>https://thecodist.com/https://thecodist.com/favicon.pngThe Codisthttps://thecodist.com/Ghost 6.20Mon, 16 Mar 2026 19:50:40 GMT60<![CDATA[Why Buying An Apple II+ In 1979 Changed My Life]]>After graduating from college in 1979, I bought a then-new Apple II+ with what little money I had. Little did I know that everything that happened after that was not what I expected.

I went to graduate school in Chemistry, but my faculty advisor resigned after being denied tenure, leaving

]]>
https://thecodist.com/why-buying-an-apple-ii-in-1979-changed-my-life/69b80c83a90bc7055be8dccaMon, 16 Mar 2026 14:04:08 GMT

After graduating from college in 1979, I bought a then-new Apple II+ with what little money I had. Little did I know that nothing that followed would go the way I expected.

I went to graduate school in Chemistry, but my faculty advisor resigned after being denied tenure, leaving me unable to complete my degree. Although I was accepted back at my undergraduate school with an offer to pursue a PhD in Chemistry, I figured I needed to save a little money first, so I got a job as a programmer at the local defense contractor. I expected to work only a couple of years before starting school again. Programming as a career was not the plan.

I was hired despite having no college classes in programming and no professional experience. Teaching myself to program on the Apple II+, building small apps (including one that automated a food co-op), and playing around with 6502 and Z80 assembly (the latter via a plug-in card) turned out to be just enough that my manager felt he could take a chance on me.

After a couple of years of success despite my lack of experience or formal training, I still had no intention of continuing. That is, until a high-pressure week spent writing a VT-100 terminal emulator on an Apple II+ showed me that I should stick with programming. I would not have been able to do it without those home experiments. Without that week of insanity, I would have gone on to get a PhD, and everything would have been different.

I went on in 1985 to start an early Mac development company, and was one of the few to be at the Apple Developers conference in 1986 (at the Nob Hill Hilton in San Francisco), which was an amazing experience: every Mac developer in the world was there. We shipped our first product, Trapeze, in 1987. Sadly, we were forced to compete with Excel despite not actually being a spreadsheet. I moved on to start another company that helped build Persuasion for its author (the only real competitor PowerPoint ever had), and then built DeltaGraph over 5 years (it survived various owners until apparently dying during the pandemic).

The Mac market died, and my second company split up. I wound up working at Apple itself for half a year; when it seemed the company was going out of business, I gave up and left. I remember being at the infamous meeting where all the departments met with the Copland OS team. If I had rotten tomatoes to sell, I would have made a fortune. After that disaster, I returned to Texas. Big mistake: a year later, Jobs returned to Apple, and the dot-com gold rush was just starting. If I had stayed at Apple, I am sure I would have been able to get along with Steve despite his reputation. Or I could have worked at Google, Netscape, or virtually anywhere else. Oh well.

Over the last decade of my career, I built iOS apps for three companies, the last of which was the biggest company I ever worked for, and what my team built was used every single day by around 120,000 people. In 2021, I retired after 40 years.

I still write code every day for my generative art on my Mac Studio, and it's all in Swift. The algorithms and processes I use are all my own, and aside from Affinity Photo and Rebelle, I write all my own code (I don't use p5.js or Processing, as most other generative artists do).

None of this would have happened without buying that early Apple II+. I had no way of knowing that my splurging on that purchase would have such far-reaching effects.

Today I have 3 iPads, an Apple Watch, a Mac Studio with an Apple Studio Display, and an iPhone, and I write all my code in Apple's Swift. Call me an Apple fanboi if you must, but it was that first Apple purchase that pointed me in a direction I never expected, and I have no regrets.

]]>
<![CDATA[That Time I Trashed The Company Mainframe, And The Lesson I Learned]]>I started my career in the fall of 1981, and in the first six months, I wrote a Jovial language source code formatter in Fortran. I had no prior experience in programming jobs or college programming classes; all I knew was what I had taught myself at home.

That formatter

]]>
https://thecodist.com/that-time-i-trashed-the-company-mainframe-and-the-lesson-i-learned/68f8efaa99cde104a005d950Wed, 22 Oct 2025 15:00:44 GMT

I started my career in the fall of 1981, and in the first six months, I wrote a Jovial language source code formatter in Fortran. I had no prior programming jobs and no college programming classes; all I knew was what I had taught myself at home.

That formatter was the only complete Fortran codebase I had ever seen, and I had written the entire application myself. It ran on a Harris super-minicomputer. It would eventually be used to deliver source code to the US Air Force, but not yet. We had no Jovial compiler, just a syntax checker.

I worked for the world's largest defense contractor, supporting the F-16. They were contracting with a vendor to produce a Jovial compiler, assembler, and linker. My second (as far as I recall) project would be testing an early version of the assembler, which was written in Fortran and ran on the IBM mainframe. This would be my first use of the mainframe, and as it turned out, it was also the last.

Since I had no idea how to test an assembler, and no one gave me any hints, I decided the best thing to do was to write a little program, run it through the assembler, and see what happened. The assembler supported the MIL-STD-1750A spec processor (to be used in a future F-16) and the Zilog Z8000 as an interim processor. I was familiar with basic assembly language, primarily Z80 and 6502, so I put together a simple 20-line program, used the necessary JCL, and submitted the job, assuming I would see the result the next day.

Later that day, my phone rang. In 1982, we had no email (executives did, but no one else), so the phone was everyone's primary communication device. When I picked it up, all I heard was a lot of swearing and yelling. The IBM mainframe operator was screaming at me for submitting a job that caused his operator console to overflow with errors. He acted as if I had trashed the entire mainframe and made his life a living hell.

To this day, I have no idea how a user application could do such a thing, but I knew little about the system they were running or what could have triggered such a mess. He ended the one-sided conversation by telling me never to run the app again before slamming down the phone.

Well, that was not fun.

Now, I was expected to write up a bug report. The problem was that I had no idea what a bug report should contain or even what it should look like. No one gave me any hints, so I naively assumed I had to figure out what the bug was!

I couldn't rerun the assembler, and there was no debugger (I likely had no idea what one even was), but I did have the source code. So I printed it out and decided to go through the entire application, trying to assemble my small program in my head, line by line.

At some point, I found it used a hash map to store labels, but I had no idea what that was (due to my limited education), so I went to the library and read up on that data structure.

After a couple of days, I had puzzled my way down to the hash function. It appeared to be either an endless loop or infinitely recursive (I don't recall today which); in any case, once you got in, you never left. I presumed that would cause an issue (again, I had no idea how it would affect a mainframe) and wrote up my findings on paper (no email, remember; apparently, someone would collate bug reports and mail them to the vendor). I imagine the vendor was surprised to see a bug report with the actual bug documented in it!
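To illustrate how such a "once you got in, you never left" bug can hide in a hash table, here is a hedged sketch in TypeScript. This is not the vendor's actual Fortran; the names and structure are hypothetical. It shows a linear-probing slot lookup that, without a probe limit, would loop forever whenever the table is full and the key is absent.

```typescript
// Hypothetical sketch (not the vendor's code): a linear-probing hash
// lookup. Without a probe limit, the while-style scan never terminates
// on a full table with a missing key; the probe counter is the guard
// the buggy original effectively lacked.
function hash(key: string): number {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

// Returns the slot holding `key` (or the empty slot where it would go),
// or -1 if `maxProbes` slots were examined without success.
function findSlot(table: (string | null)[], key: string, maxProbes: number): number {
  let i = hash(key) % table.length;
  for (let probes = 0; probes < maxProbes; probes++) {
    if (table[i] === null || table[i] === key) return i;
    i = (i + 1) % table.length; // wraps around; with no limit and a full
                                // table, this scan would never exit
  }
  return -1;
}
```

The fix in this sketch is simply bounding the scan at the table size; any probe sequence that revisits slots without a termination condition has the same failure mode the author describes.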

After a while, we received the next release, and I called the operator, told him the bug was probably fixed, and asked how I should run another test. He told me to submit the job, and he would clear the mainframe and run it after hours. He was much calmer this time around. He ran it, and everything was fine.

What was the critical thing I learned from this? It had nothing to do with the mainframe. Even with my limited experience, I could read and understand a relatively complex codebase I had never seen before and identify a problem in it, simply by reading. I discovered the importance of reading source code. To be a good programmer, merely being good at writing code is not enough.

Today, of course, reading source code is an important skill to have. Code reviews are common, yet in my experience, many programmers gloss over them and try to complete them as quickly as possible, often resulting in a rubber-stamp LGTM. Generally, you don't want to review an entire application, but sometimes you need to evaluate a whole codebase, such as an open-source project or a vendor-supplied library. This skill is essential to being a good programmer.

I would like to know if Computer Science programs spend as much time teaching how to read code as they do on how to write it. After more than four decades as a programmer, I know how valuable this skill is. Not only is it useful on its own, but it can help you write better as well.

That is the lesson I learned here: reading source code is essential, and I could actually understand a codebase I had never seen before. Confidence-building experiences like this helped me become a more professional programmer, and that confidence served me well later when I faced things I thought were impossible.

]]>
<![CDATA[Verizon's I'm A Teapot Error And Other Technology Fails]]>This is not my week for using technology. None of these problems was my fault, other than my existing in the universe and the attraction that brought.

Today, I wanted to review my Verizon mobile account, and I noticed that the website had changed since my last visit. I clicked

]]>
https://thecodist.com/verizons-im-a-teapot-error-and-other-technology-fails/68e4136a99cde104a005d905Mon, 06 Oct 2025 19:20:32 GMT

This is not my week for using technology. None of these problems was my fault, beyond my existing in the universe and whatever trouble that attracts.

Today, I wanted to review my Verizon mobile account, and I noticed that the website had changed since my last visit. I clicked on the sign-in link, which strangely brought up a menu of destinations, none of which said "sign in" (a terrible UI). Clicking on any of them led me to the dumbest error ever.

{"error":{"status-code":"418","message":"We're sorry. We are unable to process your request at this time. For more information about Verizon products and services, visit [verizon.com](http://verizon.com/). If you continue to experience problems please contact us [[email protected]](mailto:[email protected]) with Incident ID  418-DGfJTsdv, date, time, IP and requested URL.","request-id":"3d99cd2de3bd5dc23fa1199432546bec"}}

It also set the HTTP status for this JSON content to 418: I'm A Teapot.

That status is a joke; it was never meant to be used for anything. Not only was the status 418, but the page displayed the raw JSON instead of a proper error page. A few minutes ago, I was able to reach the sign-in page itself, but it still fails, this time with access-control errors (Safari reports a CORS issue), which then results in the 418 error again.

This was only the latest problem. I am building a new PHP version of my art website, converting it from a statically generated site so that I can add more dynamic filtering options. Yes, using PHP is terrible, but it's a simple website, and PHP is easier than doing something more complicated.

While testing my new website on a test server, I noticed that images were loading slowly, and sometimes they would partially load, only to have the connection dropped by the server. I used KeyCDN to front a DigitalOcean Space, a setup that had been working fine for four years. All the images load from a subdomain (a CNAME). So what gives? It also happened on my production website and on multiple networks, both Wi-Fi and cell. I opened a ticket with KeyCDN, since the images load from them, but all I got in response was "nothing changed on our end, try purging the cache", which of course did nothing. I decided to create a new subdomain and point it directly at DigitalOcean's CDN, which resolved the issue. I then closed my KeyCDN account.

I use PhpStorm, a development tool from JetBrains. Somehow, the project became confused: it had two root folders, and deleting the empty one just made it reappear. I also couldn't rename the project. After some back and forth with support, including creating a video, I decided to abandon the project and recreate it from a copy, and that worked.

Earlier in the week, when I was ready to test my new app, I went to the DigitalOcean website to log in, apparently for the first time since I had installed Tahoe and the latest version of Safari. I logged in and got nothing, a white page.

So I got out the inspector and tracked it down to the page loading Amplitude, an analytics package. Safari blocks trackers by default, including Amplitude. The problem was that whoever coded the page assumed the tracker would always load; if it doesn't, an unhandled exception kills the page. So I opened a ticket with them to report the problem.

Then the fun began. The first couple of messages indicated that they would ask engineering to look into it, so I assumed they would handle it. Instead, I received several messages instructing me on what I needed to do in my own code to load the framework. I kept saying it's not my problem, it's yours. Each subsequent message repeated the same instructions as the previous one, just in different words. Were they AI? Eventually, I resorted to lots of bold text, and someone finally realized I was reporting their bug, not asking for help. So hopefully they are working on it.

I discovered that Safari in Tahoe does not appear to let you disable content blockers for a specific website, which is what triggered the Amplitude issue for me. So I switched to Safari Technology Preview, which fixes that. Still, a site shouldn't require a tracker to load, since trackers are commonly blocked; at the very least it should fail gracefully.
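Failing gracefully when a tracker is blocked is a one-guard fix. Here is a hedged TypeScript sketch; the `logEvent` call and the tracker's shape are my assumptions for illustration, not DigitalOcean's or Amplitude's actual API. The idea is simply to verify the global object loaded and substitute a no-op when it didn't, so the rest of the page keeps working.

```typescript
// Sketch of defensive analytics loading. A content blocker may prevent
// the tracker script from ever loading, so every call goes through a
// guard that falls back to a harmless no-op. The `logEvent` shape here
// is hypothetical.
type Tracker = { logEvent: (name: string) => void };

const noopTracker: Tracker = { logEvent: () => {} };

function getTracker(candidate: unknown): Tracker {
  const t = candidate as Partial<Tracker> | undefined | null;
  // Only trust the global if it loaded and exposes the call we need.
  if (t && typeof t.logEvent === "function") return t as Tracker;
  return noopTracker; // blocked or failed to load: page keeps working
}

// In a browser, usage might look like:
//   getTracker((window as any).amplitude).logEvent("sign-in");
```

With this pattern, a blocked tracker degrades to lost analytics rather than a blank page, which is the reasonable behavior the post is asking for.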

During my 40-year career, I consistently worked very hard to ensure that what I or my team delivered was always correct. Having things fail in strange ways is less enjoyable when you know how they should work, but can't fix anything.

With so much AI vibe coding, strange errors like these may be the future. People have always messed things up often enough, but now it can be done with more automation!

Lastly, today I had to return to the eye care place to check on my new glasses, as reading on a computer had become more difficult. It turns out the lab ground the progressive lenses incorrectly, so they will have to redo them.

Sigh.

]]>
<![CDATA[Why Is Xcode So Antagonistic To Reduced Vision?]]>My eyesight is generally OK, but I have trouble reading small text, especially medium gray on light gray backgrounds. I usually code in 16 or even 18 point fonts, currently Atkinson Hyperlegible Mono. Every new release of Xcode makes seeing various UI elements harder, as if the designers all have

]]>
https://thecodist.com/why-is-xcode-so-antagonistic-to-reduced-vision/68cb266899cde104a005d8d9Wed, 17 Sep 2025 21:31:34 GMT

My eyesight is generally OK, but I have trouble reading small text, especially medium gray on light gray backgrounds. I usually code in 16 or even 18 point fonts, currently Atkinson Hyperlegible Mono. Every new release of Xcode makes seeing various UI elements harder, as if the designers all have 20/10 vision.

My eyes have a lot of floaters, and I wear progressive lenses, so looking up at tiny text is physically irritating. Otherwise, my vision is fine; I don't need any aids other than making the text a little bigger or having a little better contrast. I don't need assistance in any other application, or want to zoom text or use any of the other accessibility features, or change the resolution of the whole Mac just because Xcode chose poorly on text size.

The latest version of Xcode, shipping with Tahoe, has several UI elements that are too small for me to read easily, even though my primary display is a nice Apple Studio Display. The tabs are the worst culprits: inactive tabs are around 50% gray on a 10% gray background. The font size appears to be 10 points, and the tab itself is 23 points high, which is a ridiculous waste of space.

Apple does not support changing the size or font of UI elements in macOS. Accessibility settings include a feature that allows text size to be altered in applications that support it, but, of course, Xcode is not one of them. I don't understand why I can change the size of the navigator (the list of files) and the editor itself, but not the UI elements. There is no technical reason why the tabs and other UI elements, such as the source file navigation, can't be larger, or bolder, or blacker, or something. As I mentioned, the text takes up only about 40% of the vertical space in a tab. My display is large, and this is not like trying to fit text on a Mac Plus screen in 1986.

I also use (and have used others in the past) PhpStorm and WebStorm from JetBrains. In those developer tools, I can adjust the UI elements to make things large enough to see what I am doing easily. Apple controls the Operating System, Xcode, and UIKit. There is no technical reason they can't have at least a little give on the UI element size. Some of the new icons have such a small hit box that you need perfect hand control to hit them (the little control on the tabs is only 8 pixels wide).

In my final job, designers would constantly give me (iOS) designs with poor contrast, usually medium gray on light gray. I continually ensured we implemented them with more contrast, and they never noticed, or at least never complained. Some designers love small, low-contrast text, maybe for aesthetic reasons, but there is no reason to make things harder for people to read, especially those with compromised vision. For some reason, the team responsible for our apps as a whole (my team had the single largest vertical piece, but nothing outside it) decided never to support user-controlled text size on iOS, likely because it would have made the designers' lives more difficult. Yet we did have to support other accessibility features for legal reasons.

Not everyone has perfect vision; designers should always consider a wide range of people's vision. JetBrains clearly considered UI flexibility to be an essential feature early on. Apple cares a lot about various accessibility features, but for some reason, not in Xcode. Perhaps the source code is such a mess (the codebase goes back to NeXT as far as I know) that they can't do much, but I find that difficult to accept with their resources.

I would be happy if I could pick a different font for those tiny elements as an option, even at the expense of losing horizontal space (since many of you can see better than I can!). That shouldn't be hard to implement, given that they control everything from the bare metal to macOS.

Sadly, I no longer know anyone at Apple I could complain to (I worked there 30 years ago for half a year).

]]>
<![CDATA[What Is A Good Programmer?]]>Am I a good programmer? The short answer is: I don’t know what that means.

I have been programming for 52 years now, having started in a public high school class in 1973, which is pretty rare because few high schools offered such an opportunity back then. I

]]>
https://thecodist.com/what-is-a-good-programmer/68863e2b99cde104a005d8c4Sun, 27 Jul 2025 15:11:00 GMT

Am I a good programmer? The short answer is: I don’t know what that means.

I have been programming for 52 years now, having started in a public high school class in 1973, which is pretty rare because few high schools offered such an opportunity back then. I have worked for 15 different employers across various industries. Additionally, I started two small software companies, one in 1985 and a second in 1987, which I ran until 1994. I retired in 2021.

Am I a good programmer because I have been programming for 52 years? No, doing something for long periods is not sufficient for you to claim that you are good. I played basketball for twenty years, and I was never particularly skilled. I’ve played guitar for about 50 years, and while I am decent, I’m not particularly special. Longevity is a positive aspect in that you have been able to remain valuable and employable, but it does not necessarily mean you are good.

Good is hard to define. If you compare programmers to each other, how do you decide which one is better? Does it have to be building the same type of projects in the same industry? The same level of experience? Knowledge of the same technologies? Similar titles? You could narrow down the comparison so much that it means little. If I build mobile apps, and you, for example, are named Linus and create an operating system, how do I compare to you? If you are building a superintelligent AI system and get paid $200 million a year, are you a good programmer, and I am not? Is pay even a way to tell a good programmer from a bad one? Maybe you worked for Google, and I worked for a non-technology company that doesn’t pay as well. Perhaps I was part of a two-person team building something that made $100 million a year, while you lead a team of a thousand programmers with a budget of $100 million a year. Who is the good programmer? Maybe we are both good, or not.

Is quality a way to decide? Maybe I shipped something working at a poor company with terrible QA, and you worked for a huge company with massive investment in process and delivery. Maybe my quality was good enough for my poor employer, and your stock price depends on your investment in quality. Who is the better programmer? Is that a fair comparison?

Say I have a small team, and you have a large team, but we both deliver something that works, comparable in complexity. Am I necessarily a better programmer, and you are not as good because you had more help? Perhaps I had a brilliant team member who did most of the work, and you did all the work since your large team was mainly useless—still not a great comparison.

Perhaps only you can decide if you are a good programmer, because you are confident in yourself. Yet, most of us have known programmers who talk a lot but deliver little. Being self-critical can be beneficial, as it keeps you from being complacent and never satisfied with being good enough. However, it can also lead to excessive self-doubt, resulting in a lack of motivation and accomplishment. Somewhere there is a balance between recognizing what you are good at and where you lack, but that’s a challenging balance to find.

Can others decide if you are a good programmer? I find this one a little easier to accept, although it can also be misleading. People may want to be associated with you because it seems beneficial for them, or for political reasons, and treat you like a god. Perhaps they are intimidated by your elevated position and have no choice but to act as if you are significantly better than they are, even if that’s not the case. Perhaps they follow you because they can see that you are capable, willing to help them, and that everything you deliver always works—people like working for leaders who seem successful. None of this means you are good; maybe you are just lucky or connected.

Is there some industry award that says you are good? Perhaps you excel at programming competitions, hold numerous certifications, or have a PhD in Computer Science. Maybe every actual project you deliver is terrible. Are you now good at those things, but bad at shipping, so are you a bad programmer? It could be because your employer is awful at supporting you, so no matter how good you might be, you can’t deliver if you are not permitted to do what is required to ship quality.

Who gets to decide if you are a good programmer? The industry, Hacker News commentators, your employer, your peers, that crazy guy who posts on X, or your mom? It’s tough to agree on what a good programmer is, much less who gets to decide.

When do you become a good programmer? Can you be a terrible programmer for a long time, then slowly or even suddenly become a good one? It’s still the same problem: defining what is good, who decides, and what kind of criteria or comparison can be used.

Ultimately, the definition of a good programmer is highly subjective and challenging to pin down. Do I think I was a good programmer? I do, yet do I think I could have been better? Every good thing I delivered could have been done better, faster, or with fewer people. Everything I delivered might have been improved by someone “better” than me in some way. Yet I was there, and we didn’t have a “better” person, so it had to be me. Sometimes the company couldn’t afford a “better” person, or didn’t want to hire one, or didn’t want to search for or pay for someone else. I was there, and generally, I delivered software that worked. I may have been just good enough for the situation, even if someone could have done better.

Comparing yourself or others to theoretical alternatives is tricky. I have seen a lot of software in my life; some of it was amazing, and I was sure I could not have built it. Yet I have also seen horrific projects that I know I would have done a better job on, and sometimes found myself fixing other people’s terrible projects. Few people have ever taken over projects I wrote, so I don’t know if they felt the same way; if they did, I never heard about it.

So, am I a good programmer, or not? I think I was, but what does that mean, since defining a good programmer is so hard? I know people have hired me several times because, in the past, I had done something they thought was good, and had team members who respected what I wrote (even when leading teams, I always wrote as much code as everyone else) and wanted to work with me. Maybe they thought I was a good programmer, or perhaps they had some other reason.

In the end, being a good programmer is something you want to be, even if you don’t know what that means. Never be satisfied that you are good enough; always learn something new, and never be afraid to listen to others who might know something you don’t.

When you retire or stop being a programmer, you can look back and decide if it was a good life. That ultimately is good enough.


See my generative art at https://andrewwulf.com.

]]>
<![CDATA[Stress And Programming]]>Having spent four decades as a programmer in various industries and situations, I know that modern software development processes are far more stressful than when I started.

It's not simply that developing software today is more complex than it was back in 1981. In that early decade, none

]]>
https://thecodist.com/stress-and-programming/681e3572bf103ae4f9494a0dFri, 09 May 2025 18:04:03 GMT

Having spent four decades as a programmer in various industries and situations, I know that modern software development processes are far more stressful than when I started.

It's not simply that developing software today is more complex than it was back in 1981. In that early decade, none of us knew if what we were doing made sense, almost everything you did was new, and there was a distinct lack of information on how to do anything.

The processes were the most significant change between then and today. Contrary to some stories you might read, Waterfall was not all that common in my experience. I don't believe I was even aware of the name, much less the stepwise process lampooned by Royce in 1970, until around 1997.

What we did back then was far more relaxed; generally, it was release-focused. We did not ship many updates, since our apps went out on floppy disks and we had to charge for updates ($5 to cover the disk, duplication, packaging, and shipping). Generally, we stuck to a 6-9 month cycle, what I jokingly call a "Madagascar" (a penguin joke from the movies).

Development started with a list of things we wanted to add (for DeltaGraph, it was the publisher's wish list). We worked on them over time until we got close to a cut-off date, after which we tapered development and focused more on testing. There was plenty of time to try different approaches, even to start over, and thus it was a far more relaxed process. We did not have email until 1991, and of course no sprints, scrums, backlog grooming, Jira, or any of the processes that dominate today. Yet we shipped things that worked despite having none of these.

I retired four years ago, but I still keep in touch with former co-workers, whose days are often dominated by stress: closing tickets, meeting sprint promises, shipping on way-too-frequent schedules, and dealing with management yelling to go faster. Stress comes when you are overwhelmed not simply with work, but with pressure to constantly meet arbitrary demands while coping with changing requirements, designs, schedules, and executive whims. Everything becomes an issue, and no one enjoys working. People try to cope with the pressure, but not everyone can manage it well.

Shipping way too often is, to me, a sign of poor management. I remember arguing with the shipping team, who insisted we needed to ship our mobile apps every two weeks (during the pandemic year, no less), even though we had successfully shipped five times a year before then without issue. They pointed to other companies' mobile apps that shipped every week or two. Yet, our business rarely changed much more often than every few months, and the nature of our business is that our customers came and went for a week or two, then possibly not for a year. This is similar to an airline, where you rarely use the app, unless you want to fly, or are about to board.

Updating your app or website faster than your business changes is unnecessary. Sometimes people insist that it's to roll out bug fixes faster (the team I argued with said this). My reply was: why are we shipping buggy apps? Would it not make more sense to ship less buggy apps, so we don't need to fix them so fast? They claimed there would be no need for hotfix releases, since fixes could be rolled into the next two-week release; in fact, they did a one-day hotfix soon after!

Shipping often takes time away from development, no matter how much you automate. In our case, a new feature had previously taken about 2.5 months to get into the app, yet in the new scheme, any new feature had to be completed five sprints before the release it would ship in, which worked out to the same amount of time! With the faster cadence and more complex process, development had more overhead and less time to complete the work. This, of course, put more stress on the teams to "keep up".

When I worked at the Travel company (more than a decade ago), our mobile team had 4-6 apps and mobile websites, and we had something like 80 releases a year, often simultaneously. Our process was straightforward; I frequently touched four apps daily, and we shipped when everything was solid, including building several new apps from scratch. At one point, we were forced to do Kanban on a new app, which was pointless as the app had constant requirement changes. On another app, the rest of the company tried to force us to do Scrum to slow us down to their pace (3 releases a year of the main website), but it failed miserably, and we returned to our freeform process.

When you can take the time to do things right and have more control over how you do things, you can often get things done faster and more productively. It doesn't work that well in large teams, but with 3-6 people, the process is mostly pointless. That was my experience in the 1980s and 1990s. Having control over how we work and having time to figure out more optimal ways to work makes for a much more stress-free environment, even if there is much to do.

How much time do you get in a Sprint to figure out how to do something better? Often, you have to estimate things at the beginning and get yelled at if you don't finish. In my final job, we were frequently forced to estimate an entire project's worth of Sprints ahead of time and then redo it repeatedly as requirements constantly changed (and not on a Sprint boundary either!).

If you gate releases on readiness rather than a calendar, absent a timing requirement (like a server change or business reason), you can often get more done with fewer people since the process overhead is much less. Having the freedom to control how you do the work can frequently minimize stress. It doesn't work in all cases (large teams of random contractors or situations with multiple coordinated teams, for example). Still, anything where the process is more relaxed will often be a better experience for everyone.

Sadly, my opinion is unpopular with executives, who want programming projects to resemble assembling chairs and be predictable months in advance. This does not work very well, and with so many heavy-handed processes, productivity goes to hell. Programmers and others get stressed, which leads to more delays and less progress.

In my last job, despite all the sprints, schedules, meetings, and complaints, I got more out of my way-too-small team than teams with many more people because I managed to keep my team somewhat more relaxed and productive. It wasn't always possible, but the stress level for everyone was much lower, and we got a lot done.

If you are in a stressful situation right now, in a job that feels more like a rigid prison, you should find a different environment, preferably in an industry that doesn't require rapid shipping and a company that is not in a hurry. Being stressed out daily is bad for your mental and physical well-being and will likely result in burnout. It's not worth it, no matter how much they pay you. I survived for four decades, but even I nearly gave up in the late 2000s after two horrifically stressful jobs; I finally said screw it and worked at a poor game company for way too little money, but with lots of satisfaction and appreciation from the customers! Sometimes the best thing is to cast about and find something you enjoy, even if it temporarily means less money.

Stress and programming do seem to often come together, but it's not worth it. Even from a business standpoint, it's not worth having a staff or team that is stressed out and becoming poorly productive.

]]>
<![CDATA[What Is Software Quality?]]>Everyone wants the software they work on to produce quality products, but what does that mean? In addition, how do you know when you have it?

This is the longest single blog post I have ever written.

I spent four decades writing software used by people (most of the server

]]>
https://thecodist.com/what-is-software-quality/65cce6ac4068b1047e75062eMon, 31 Mar 2025 18:07:36 GMT

Everyone wants the software they work on to produce quality products, but what does that mean? In addition, how do you know when you have it?

This is the longest single blog post I have ever written.

I spent four decades writing software used by people (most of the server code I worked on also included clients) and leading teams responsible for building desktop, web, and mobile apps. I released my first application to the public in January 1987, so I have had a lot of time to think about how to deliver something that works, try all the ways to achieve it, and see the result.

After all that time, my favorite definition of software quality is "The code always does what it is supposed to do, never does what it is not supposed to do, you are never surprised by what it does when it gets to the customer, and you can track the quality long after the customer is using it."

That definition has four essential parts, the third of which is probably not apparent to many people. I will review each part.

Note that some of this targets those responsible for the entire project or product. If you are working on tickets assigned to you by someone else, you might not have any involvement in quality beyond the actual code or tests you are writing. For most of my career, I led or at least contributed to the quality of an entire codebase.

Having started in the 1980s, when no one knew what good practices were for building software, I had plenty of time to learn and try different things. Starting and running two small software companies from 1985 to 1994 (both as leader and programmer on my teams) gave me a head start on those who began as programmers or leaders in the following decades. We had none of the knowledge, tools, or processes that exist today, yet teams, including mine, could still ship software that worked.

What Should It Do?

Understanding what your code-based product or project is supposed to do seems like a no-brainer. Yet it can be surprisingly hard to know for sure: people change their minds about what should be in or out, schedules change, teams change members, and other teams you need to work with have different priorities than you. The ideal of knowing up front what you are supposed to build, with requirements that never change (a waterfall concept), is a rare phenomenon, if it ever happens at all. A sprint being planned and then completed exactly as planned is rare enough; imagine how unlikely it is that a project running 20 or more sprints will nail it perfectly. My team did two major projects at my last employer, each with 16 months' development time, yet the end product bore little resemblance to the initially proposed product. So knowing what your code should do can be a very dynamic thing.

This means that as a leader, you have to understand what the product team is asking for today, imagine what might be asked tomorrow, deal with details that may only be decided deep into the schedule, adjust expectations regarding schedules, offer alternatives or propose a different path, go to way the hell too many meetings, and somehow keep all of this in your head (and in whatever tools you have). To get product teams to care to communicate with you, you have to understand the business needs, ask important questions at the right time, and always stay in contact. You can't simply be handed some user stories and implement them as written without continuous clarifications, explanations, or even pointing out missing items. If a product team does not care to communicate at all (I've seen this in prior years; thankfully, at my last, very large, employer, I always had reasonable access to them, and they tended to appreciate the communication), then achieving quality will be difficult or impossible.

Another consideration is the future: imagining what might change after you ship. You can never know for sure and should never speculate in code on eventual changes, but you also need to ensure you don't lock yourself into a corner you can't get out of later. That is a tricky balancing act, but it gets better with experience. Sometimes you know the product well, understand the business, and realize who will likely request radical changes. Sometimes you have no idea what direction might be taken. In my final job, I had a decent idea of what might happen and was able to make sure we were flexible enough to deal with it. Still, in one case, a significant addition that came out of nowhere to an almost completed project forced me to work 7 days a week for 3 months because we were caught with insufficient people to do it (namely, just me!), and no additional budget was allocated.

Another concern for client-side applications is that the user interface and user experience must be well-exercised. While you might think understanding the UI is not essential to the programming team, I always found it helpful to know why the design team chose what they designed; it let me ask relevant questions about how we should write the code. Sometimes you can find easier ways to meet the requirements with minor changes; again, understanding the "why" helps produce a better implementation. For example, in one project, the design team wanted certain shadows they had done in Photoshop that were impossible in an iOS app without extensive development. Explaining how they could accomplish something more straightforward that we could implement quickly helped them understand how to keep development time lower.

I always appreciated it when our QA team complained about the difficulty of certain operations, as they were performed all day. Feedback like that can help make implementation easier, assuming your design team will listen.

What Should It Not Do?

Assuming you know what the code is supposed to do, the next challenge is making sure it only does that and never does anything wrong. I consider "wrong" to cover unintended behavior (you did not realize it could do that), incorrect behavior (it doesn't follow the requirements), and poor code quality (random failures).

Some incorrect functionality might not make the code unusable but only be annoying or look bad. Depending on how important it is to your customers, you can live with that type of problem. Odd behavior in a game might not be as crucial as in a banking application.

Ensuring your code does what it is supposed to do is only half of the pursuit of quality. The other half is understanding and planning for anything that could go wrong in your customer's hands. Identifying all failure modes requires imagination, experience, and persistence, and only then can you truly test your code. Writing or planning tests, for example, that only assert positive behavior leaves your quality on shaky grounds. Murphy's law (anything that can go wrong will go wrong) can happen to anyone. I learned this lesson the hard way back in January 1987 when my company first showed Trapeze at Macworld in San Francisco.

During development and testing, we always ran Macsbug (Apple's debugger) to catch any crashes and debug them on the spot. At Macworld, we rented some Macs but did not install Macsbug. During demos, there was a decent statistical chance it would crash. It was highly embarrassing. Later, I realized that the presence of the debugger altered low memory (recall that MacOS back then had no memory protection or virtual memory) and was hiding a nil pointer error. I failed to understand what could go wrong, which made me realize how vital knowing failure modes could be.

A later example from my final job is even better. Another team wrote code to convert some service data to local objects in the mobile app. They wrote unit tests (positive) and had a code review (all LGTM comments). It worked fine for several releases, but only until the service changed its API. The service was put into Stage, and no crashes were seen. However, once the app update appeared in the App Store, it became apparent that something terrible was happening. The app crash rate spiked to 50% multiple times daily, causing panic in all the mobile teams. No one understood why.

Eventually, several of us tracked down the problem: the service had changed its API, and no one noticed. It worked in Stage because the service data was only sent based on real-world activities, which did not happen there. The unit tests only used the API output captured when the code was written, which was a positive test. The programmer had "handled" any service data issue by calling fatalError(), the only way to crash an iOS app deliberately. He defended the code by saying, "It would just have crashed somewhere else." Code review had seen nothing (the Sgt. Schultz form of review). Unit tests had not exposed his lack of error handling. For no apparent reason, his team continued to employ him.

Your application will be low-quality if you don't understand and plan for all failure modes. You can't write or perform tests of any type (unit, automated, manual, etc.) if all you are doing is validating that the code is performing the required tasks. Once the application (whatever it is) is in the hands of your customers or users, none of your testing matters, only the defenses you put into the codebase that will stop or at least minimize any problems.

You must minimize problems such as incorrect or missing data from servers (never trust a server!), random user behaviors (people are devious and do not always do what you expect), and incorrect responses from internal or external APIs. You may suffer from errors introduced by OS releases, open-source library changes, or other environment changes. You might have to use your imagination for some, but you can plan for others. This is a lot of work, and I've seen many people not realize how much. Poor management often only cares about shipping features.

One practice I've always followed is to use the programming language features and architecture to ensure that writing correct code that minimizes failure modes is easy and avoids requiring too many ad hoc random solutions. It's easier to prevent problems upfront than to fix them later. In my last job, using Swift correctly made this easy. Some languages, such as JavaScript, may have fewer features you can exploit.

For the final and most extensive project I did in that last job (part of the division's primary iOS app), I wrote all the service calls and data management. Before I started, I built a protocol-oriented service call stack that abstracted all standard functionality, leaving only a tiny stub for each service. Each service was handled in two phases: the first freely converted whatever came from the service into a generic intermediate structure, and the second took that structure and converted it into data objects with strict adherence to the agreed-upon service contract. I eventually had to support nearly 70 calls (the crazy project changed repeatedly over 16 months). The code never crashed either in QA or after it was released, despite serving almost 100,000 people daily. That should not be an extraordinary event; that's what you should expect.
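A minimal sketch of that two-phase approach, using Swift's Codable; all names here (RawProduct, Product, the contract fields) are illustrative, not the actual codebase:

```swift
import Foundation

// Phase 1: decode freely into a lenient intermediate structure.
// Every field is optional, so malformed or partial service data
// never fails the decode, let alone crashes.
struct RawProduct: Decodable {
    let id: String?
    let name: String?
    let price: Double?
}

// Phase 2: convert strictly into the domain object, turning any
// contract violation into a recoverable error instead of a crash.
struct Product {
    let id: String
    let name: String
    let price: Double
}

enum ServiceError: Error {
    case contractViolation(field: String)
}

func makeProduct(from raw: RawProduct) throws -> Product {
    guard let id = raw.id, !id.isEmpty else {
        throw ServiceError.contractViolation(field: "id")
    }
    guard let name = raw.name else {
        throw ServiceError.contractViolation(field: "name")
    }
    guard let price = raw.price, price >= 0 else {
        throw ServiceError.contractViolation(field: "price")
    }
    return Product(id: id, name: name, price: price)
}

// A payload missing "price" decodes fine in phase 1,
// then fails phase 2 with an error the caller can handle.
let payload = Data(#"{"id": "42", "name": "Widget"}"#.utf8)
let raw = try JSONDecoder().decode(RawProduct.self, from: payload)
let product = try? makeProduct(from: raw)  // nil, not a crash
```

The point of the split is that phase 1 can never be surprised by bad data, while phase 2 turns every contract violation into an error the caller can log and recover from, which is the opposite of the fatalError() approach described earlier.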

Another idea is to question all assumptions. Are they always reliable? Is it possible for something unexpected to occur? If so, can you ensure that there is something in your code that can at least minimize an unforeseen problem? This can be tricky since it involves questioning things others might think are not worth considering.

I remember being bitten by this in some code we inherited when I started at the Travel company in the early 2010s. A third party had written our iPad app before I started. The code was quite odd, and we were ordered to make it shippable quickly. After it was released, we found a bizarre bug reported by a customer, wherein if they searched for van rentals in Branson, Missouri, the app would crash.

After figuring out how to reproduce the crash, I found an EBCDIC character in the description of one van, even though the service returned UTF-8 encoded JSON data. Somehow, this incorrectly encoded character sequence was sent to an app expecting JSON data. I traced the code the third party had written and found it called a function in iOS (Objective-C back then) to convert the JSON payload to a string. Because the data was not properly encoded, it returned NULL. Eventually, that NULL was passed to a C function, which crashed. The programmer had assumed that JSON would always be properly encoded (which it should be) but made no accommodations in case it wasn't. In this case, the data for each van was entered into an IBM system using EBCDIC, supposedly converted to XML, and then to JSON. Most people would likely have assumed that would not fail.

Fixing it was reasonably straightforward. If the function returned NULL, I called another function that "fixed" the data (by removing incorrectly encoded characters), reran the conversion, and, if it still failed, made it an error.
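That repair-then-retry idea can be sketched in modern Swift; the function name and error type are hypothetical, and Foundation's lossy UTF-8 decoding stands in for whatever "fixing" routine was actually used back then:

```swift
import Foundation

enum PayloadError: Error { case undecodable }

// Returns a usable string from service data, repairing bad encoding
// if strict UTF-8 conversion fails. Illustrative, not the original code.
func decodePayload(_ data: Data) throws -> String {
    // Happy path: the payload really is valid UTF-8.
    if let text = String(data: data, encoding: .utf8) {
        return text
    }
    // Repair path: lossy decoding replaces invalid byte sequences
    // (e.g. stray EBCDIC bytes) with U+FFFD, which we then strip.
    let repaired = String(decoding: data, as: UTF8.self)
        .replacingOccurrences(of: "\u{FFFD}", with: "")
    guard !repaired.isEmpty else { throw PayloadError.undecodable }
    return repaired
}
```

The key design point is that the error path is explicit: the caller receives either a cleaned string or a thrown error, never a silent NULL that can wander into a C function later.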

Something I started doing in the late 80s, and made sure QA did as well, was to run through the entire application every single day. I tried to use the app as a customer would. I did not just test new functionality, follow user stories, or use some script. Doing this every day, even if it only takes 15 minutes, helps find a lot of issues: you become very familiar with the application as a whole and see things break or change from one day to the next. It seems like a simple idea, but it always allowed me (and others) to discover and deal with issues immediately because we could easily compare each day's state. If you only look at new things, you will likely miss problems elsewhere.

This sounds like a lot of effort, but it is necessary to make your product robust. Quality does not come quickly or cheaply.

Why You Should Not Fear Shipping

The third part of my definition concerns the final step. Imagine an escalator: the steps rise and take you to the top. At the top, the steps flatten and deposit you on the next floor. The same should happen with your development process: the easiest, most straightforward step should be going from the end of testing to delivery (deployment to production, app store uploads, etc.). At the Travel company, I usually did the final build (we didn't have CI/CD back then), and after a final test, uploaded it to the App Store. Often, I let someone else push the button. I never had any concerns, as I knew everything would be fine, and it generally was.

So when should you ship?

One sign for me is that your testing (QA, test suites, etc.) is not finding anything wrong. This presumes you have a robust process that can be repeated as necessary. Any serious issue is a warning sign that you are not ready. I always wanted my QA team to have to work hard to find anything wrong, even in the middle of active development. Having QA find crashes, or test suites showing serious failures, means you are not treating quality as a continuous process. I wanted failures found and dealt with, as much as possible, long before QA ever saw them. Keeping testing clean during development means you are less likely to have late problems and more likely to have a smooth shipping process. Waiting until the end of development to deal with serious issues is a sign that quality will likely be low.

If you test the whole app daily and deal with issues immediately, final testing merely validates what you already know works and exercises all the defenses you built into the code. That last bit of testing will be easy, and when to ship will become more apparent.

As I pointed out, sometimes, things you can't do anything about can still happen. Assuming you did everything I pointed out to minimize or eliminate all sources of problems you can control, you can still have issues with things you can't control.

We added a library to scan credit cards to a brand new app we released at the Travel company; we were likely the first to ship this feature. The third-party company provided us with the production binary, which we extensively tested, and everything was fine. A few days after the release, customers complained that the app was telling them the demo had expired. The third party had given us a supposedly production binary with its time-locked demo functionality still intact! So we got the correct library, tested it, and uploaded a new version to the App Store the same day. There was no way to detect that it was a demo library. That third party didn't last very long as a business.

In another case, Apple replaced the old Google Maps support with its own MapKit. We had a hotel detail page in the iPad app with a tiny map showing the location. After an iOS update, I noticed that the iPad app suddenly started crashing on occasion for some customers. I tracked it down to the map, which was previously a Google map and was now a MapKit map. Apple's new servers sometimes delayed sending a map tile so that it only arrived after the page was closed; the code had failed to retain the map object, so it was deallocated and then updated, leading to a crash. I had to keep all the maps in a list for a few minutes before allowing them to be deleted.

This type of unexpected error usually doesn't happen immediately, but you must be aware enough to notice it. I will cover that in this final section.

You Are Not Done When You Ship

Just because you shipped your code does not mean you are finished and have nothing more to do.

Monitor every source of information available about your customers' experience. At my last job, I read the App Store reviews and the crash reporting tool daily. I frequently reviewed our defect list. I talked with my fellow employees, who were also customers. I kept asking for access to analytics, but no one would approve it. I listened to customer support complaints. All of this monitoring aimed to validate my assumptions about testing. 

When I started my final job, I asked for access to the crash reporting tool (my team and I did iOS development on the largest single piece of our two biggest iOS apps). After I received it, I looked at the audit page that showed who was looking at the reports and found that almost no one had ever looked. Over the following four years, I reported on what I saw, particularly right after any app release (two apps, two platforms) on the shipping Slack channel. I monitored every crash, not just mine (the only meaningful ones we had were in the Objective-C codebase I inherited). I also trained non-programmers on how to understand what the reports meant.

This was not my job; I just cared enough and managed to keep it politically neutral. When I started, the larger app's crash rate was around 1 per 100 sessions, which is mediocre. Over time, more and more people started looking and responding to crashes on various teams. By the time I retired, everyone was looking at them, and the app crash rate had been reduced to a very good 1 per 500 sessions. The final major project (in that app) that my team and I built had a crash rate of 1 per 100,000 sessions, with about 80% of the sessions in our code. The pandemic killed our business for a long time, and a new project (that I started but it shipped long after I retired) took its place.

Here are a few examples of how paying attention after shipping can pay off.

The Sync Process

Around the time of iOS 11's release, I noticed the App Store suddenly had many 1-star reviews daily. Our app rarely got any on a typical day; now I was seeing 25 or so per day. The customers were saying they could not launch our app at all. I reported this, but people ignored me; a senior executive said they were “people venting,” and the crash reports showed nothing new.

This went on for two months. Then someone mailed our CEO complaining they could not launch the app, and execs set up a big war room to figure out what was happening. Some folks discovered that the crash reporting vendor had been incorrectly filtering out 99% of our crashes, which was fixed; even so, the lack of new crashes associated with this problem initially confused everyone.

Testing on various iPhones finally showed that anyone on an iPhone 6 could not launch the app after not using it for a few days. It took an average of 6-8 launches before the app would run. Before that, it would sit there for 10 seconds and then appear to crash. Newer iPhones had fewer issues but still took a long time to launch.

The launch method (the method called by iOS when your app launches) was syncing the local app database to a backend CMS server. For no apparent reason, an enormous download would appear every few days. I remember looking at a sample; it went on forever and consisted mainly of redundant changes. Many entries had hundreds of changes, primarily deletions, additions, and name changes, repeated almost endlessly.

iOS has a watchdog process that will shut down any app that takes longer than 10 seconds in the launch method.

The source for the launch method included a comment mentioning that nothing that takes a long time should be called in it!

The responsible team changed the sync to the first tick after launch, but the app took a minute or two to become functional on the older iPhones. After some analysis of the CMS server, it appeared that someone had added multiple writers (the data came from many internal servers), but the CMS did not support that configuration. Each writer competed with the others, causing massive updates to be generated as each fought the others. The solution was to reduce it back to a single writer. I never found out why the multiple-writer change happened.
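The watchdog lesson generalizes: the launch method should only schedule heavy work, never perform it. A rough Swift sketch of that pattern, with hypothetical names and GCD standing in for whatever mechanism the team actually used:

```swift
import Foundation
import Dispatch

// Sketch of keeping a slow sync out of the launch path: the launch
// code only schedules the work, so the 10-second watchdog never sees it.
final class SyncManager {
    private(set) var didSync = false
    private let done = DispatchSemaphore(value: 0)

    // Called from the app-launch method (hypothetical integration
    // point); returns immediately instead of blocking for the sync.
    func scheduleSync() {
        DispatchQueue.global(qos: .utility).async {
            self.performSlowSync()
            self.done.signal()
        }
    }

    private func performSlowSync() {
        // Stand-in for the multi-second CMS download.
        Thread.sleep(forTimeInterval: 0.1)
        didSync = true
    }

    // Wait for the background sync to finish (useful for testing).
    func waitForSync(timeout: TimeInterval = 5) -> Bool {
        done.wait(timeout: .now() + timeout) == .success
    }
}
```

Note that this only avoids the watchdog kill; as the story shows, the app was still slow to become useful until the redundant CMS data itself was fixed. Moving work off the launch path treats the symptom, not the cause.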

After this, I noticed more people paid attention to my reports.

The Impossible Bug

A second example comes from my team. In the smaller of the two apps was a large module written by another programmer and me (and the second 8 months of the project by me alone). While there were a few crashes in our code (primarily outside of the top 20 crashes), one rare crash happened a few times every day, and it made no sense. It wasn't common enough to ordinarily be worth studying, and it was challenging to reproduce (3-4 per day out of some 700,000 sessions!), but it bugged me.

It happened at the end of our flow, and the crash reflected no root ViewController, which made no sense. In iOS, there is only one window and one common way to get the ViewController stack. So, how did the flow work to get to a point where that root ViewController vanished?

Months passed, and it still bugged me, until one of my QA team members came to me with a strange feature she had never seen before. Usually, QA uses a tool that configures users for testing. That day, the tool was down, and she had to create users in the app as a customer would. Midway through the flow, a modal appeared asking the customer to choose security questions; it was not part of our code, and she had never seen it before. As I watched her reproduce it, I saw it put up a little UIAlertView, which hung around for a few seconds. Boom! That was the issue.

In iOS, a UIAlertView creates another UIWindow. By having it stick around precisely when I manipulated the UIViewController stack, the topmost UIWindow did not have the root controller. This only happens when triggered at the end of the flow.

OK, so maybe it’s not a big deal, but it proves that you need to know everything that can affect your code at runtime. In this case, the team responsible had not told anyone about this feature. It was easy to fix (find the bottommost UIWindow), and maybe I should not have assumed a random alert would not be stuck over my code!

The following two examples are from my Travel company job, around 2010 or thereabouts.

The Exceptional Names

I was on the mobile team, and we consumed APIs generated by the web team. The web team had 10-15 times more programmers than we did, and it took months to deliver updates. At one point, they decided to install IBM Tealeaf, allowing them to see what a web page looked like from a customer’s viewpoint. To their horror, it showed that anyone booking a hotel (I think hotel, but it might have been flights) reservation with particular punctuation or non-ASCII characters in their name would get a nasty exception on the booking page. They looked at the server logs and found that this exception appeared about 1% of the time, going back a long time. They had been losing 1% of revenue due to this error! If they had been mining the logs for problems, it could have been dealt with long before it became a loss of revenue. Always watch your logs for errors; never wait until a customer complains or you randomly find it years later.
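That log-mining idea can be sketched as a periodic scan that computes an error rate and flags it against a threshold; the log format, error markers, and threshold below are all hypothetical, not what the web team actually ran:

```swift
import Foundation

// Sketch of mining logs for errors instead of waiting for customer
// complaints. Real systems would stream logs; this scans an array.
struct LogReport {
    let total: Int
    let errors: Int
    var errorRate: Double {
        total == 0 ? 0 : Double(errors) / Double(total)
    }
}

func scanLog(_ lines: [String]) -> LogReport {
    // Hypothetical markers for "something threw an exception".
    let errorMarkers = ["Exception", "ERROR", "FATAL"]
    let errors = lines.filter { line in
        errorMarkers.contains { line.contains($0) }
    }
    return LogReport(total: lines.count, errors: errors.count)
}

// Flag anything over 0.5%; the 1% booking failure above sat in the
// logs for a long time because nobody was looking at them.
func needsAttention(_ report: LogReport,
                    threshold: Double = 0.005) -> Bool {
    report.errorRate > threshold
}
```

Even something this crude, run daily against production logs, would have surfaced the 1% booking exception long before Tealeaf did.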

New York City Vanishes

At one point, I noticed people complaining in the App Store reviews for our iPad app that searching for a location in our hotel booking flow would return “No Results Found,” even for obvious locations like New York City.

I tried it myself, and it worked, so I set up a loop in a command-line app, calling the API service we used from the backend. Running the New York City search all day eventually hit some empty responses.

We talked with the backend team that managed that API. They had multiple servers with a load balancer in front and a health check to ensure a server was running. The data would be updated a few times daily, and the servers would be restarted, loading the data into memory. Once I convinced them there was a problem, they realized that the data loading had some bug where it would occasionally fail to load anything. The health check only tested if the server was running, not that it was returning results. They decided not to fix the bug but changed the health check and restarted any failed servers. This problem also existed in the web app, but there was no way to report such a failure. It was found only by checking the App Store reviews.
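Their fix amounted to a behavioral health check: verify that a query which must always succeed actually returns results, rather than merely checking that the process is up. A sketch of the idea in Swift, with a stubbed-out search call (all names hypothetical):

```swift
import Foundation

// A health check should verify behavior, not just liveness.
// `search` stands in for the real API call against one server.
struct HealthCheck {
    // A query that must return results if the data loaded correctly.
    let canaryQuery = "New York City"
    let search: (String) -> [String]

    // Healthy only if the canary query returns actual results;
    // a server that is "up" but loaded no data fails this check.
    func isHealthy() -> Bool {
        !search(canaryQuery).isEmpty
    }
}
```

A load balancer calling isHealthy() per server would have pulled the empty-data instances out of rotation automatically, which is essentially what the backend team's revised health check did.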

Final Word

The point of these examples is to show that quality requires attention to everything, even after shipping! No matter how thorough your development is, you should continuously monitor what happens at the customer level. This reinforces and validates whatever you did before shipping and communicates what might need to be changed in your approach to quality.

Naturally, your employer might not care about quality or be unwilling to pay for it. I have seen that in many places and have often experienced it on the web or using apps. You might be able to convince them otherwise, or they might ignore you. I was able to get people to pay attention to the crash rate of our apps despite that not being a priority when I started, but you can’t always make that sort of subversive act work. I never needed another job, but if I had interviewed for a mobile job elsewhere, I would have asked about the crash rate. If they don’t know, they probably don’t care. Perhaps another company might be a better choice.

I hope this long post gives you some useful ideas. Quality is not easy, quick, or cheap. It requires discipline, imagination, thoroughness, and continuity. It’s easy to lose and difficult to regain. In the long run, your customers greatly appreciate something that works and never gets in their way. As long as you view quality as necessary, it doesn’t have to be an impossible goal; you must make it part of your DNA. That embarrassing crash while giving demos back in 1987 paid off in the long run!

]]>
<![CDATA[Using Xcode's AI Is Like Pair Programming With A Monkey]]>I've never used any other AI "assistant," although I've talked with those who have, most of whom are not very positive. My experience using Xcode's AI is that it occasionally offers a line of code that works, but you mostly get junk

]]>
https://thecodist.com/using-xcodes-ai-is-like-pair-programming-with-a-monkey/67afb4ffc99d313bd3cce5dbFri, 14 Feb 2025 22:03:57 GMT

I've never used any other AI "assistant," although I've talked with those who have, most of whom are not very positive. My experience using Xcode's AI is that it occasionally offers a line of code that works, but you mostly get junk suggestions.

When it does something right, it's like the proverbial blind chicken finding a kernel of corn.

Oh boy, mixed animal metaphors.

Seriously, it's mostly useless and even irritating. I keep hoping it will learn my codebase and become a brilliant co-programmer. Instead, it offers code that doesn't compile, wants to add digits to every number I type, uses frameworks that make no sense in command line applications (I generate imagery for my art), and even when it randomly figures out a common pattern, it also randomly forgets it. The most maddening behavior is its code completion: it often adds inappropriate code that I have to manually backspace over, or maneuver through a menu to get what I want (basically, what was there before).

Because my art generation has many options, and the UI is just code (I'm a programmer; code is fine!), I have many boolean variables. If I have one set to "false," select that word, and then type a "t," what do you think the most likely replacement would be? According to the AI, it must be "Truchet" (one of my structs)! No, it's "true". Somehow, the most common completion of all is lost on the AI, and I always have to type out the whole word.

Perhaps the AI in Xcode is not intended for anything other than SwiftUI or common UIKit apps. Using it in an application without any familiar UI-related classes may not be a suggested use. I can understand that; I wish Apple would clarify what the AI is good for. It's like hiring a random person with no interview and expecting them to be useful (this happened to me during the dotcom era when management hired some guy who didn't know how to program and expected me to bill him 40 hours a week writing code).

You could argue that AI is still in its infancy, and it will magically get better with time and eventually replace me altogether. I don't believe today's AI will ever be that good, and in a future post (if I stop procrastinating), I will discuss that in more detail.

You could also say Apple's AI is terrible; I won't argue. Today, everyone needs an AI to be relevant. No one said it had to be good.

Meanwhile, I will invest in some bananas; perhaps the AI will improve if I offer bribes.

]]>
<![CDATA[Giving Junior Engineers Control Of A Six Trillion Dollar System Is Nuts]]>For some purpose, the DOGE people are burrowing their way into all US Federal Systems. Their complete control over the Treasury Department is entirely insane.

Unless you intend to destroy everything, making arbitrary changes to complex computer systems will result in destruction, even if that was not your intention. No

]]>
https://thecodist.com/giving-junior-engineers-control-of-a-six-trillion-dollar-system-is-nuts/67a2914dc99d313bd3cce533Tue, 04 Feb 2025 23:05:17 GMT

For some purpose, the DOGE people are burrowing their way into all US Federal Systems. Their complete control over the Treasury Department is entirely insane.

Unless you intend to destroy everything, making arbitrary changes to complex computer systems will result in destruction, even if that was not your intention. No matter how smart you might be, experience and knowledge matter. The treasury systems are a combination of Cobol, Python, and a lot of random technologies. I know modernization plans were being worked on, but I have no idea how far they've gone.

Junior engineers, given their inexperience, are usually mentored or at least managed by more experienced programmers. They are never put in charge of a system that sends out six trillion dollars a year. The US Treasury processes less dollar volume than Visa, but Visa is not the source of the money; it is only the processor. Changes to Treasury code have always been tightly managed since errors could have major consequences. Any complex computer system that involves money or lives is usually heavily managed to ensure nothing terrible can happen. This code is not some simple CRUD app where bad changes are only mildly annoying.

Allowing unqualified people to make changes directly to production code without any process or testing (which seems to be the case, although getting details is difficult due to secrecy) is ridiculous. Even with my decades of experience, I wouldn't touch that code without months spent understanding it, and without following whatever processes are required. The cost of failure is too enormous. Imagine sixty-eight million people not getting their Social Security checks (beginning next Wednesday). Imagine not paying the interest on the national debt; that would crater the US Dollar. You don't mess around with numbers like this. Even minor changes made without understanding could destroy the country.

The biggest system I was ever tasked with building and managing brought in $100M a year (and that was only the iOS part; Android was another $50M, and many services sat behind us). The US Treasury system moves 60,000 times that! My team never had trouble keeping our code working until the pandemic stopped everything. However, the Treasury is also far bigger than a single client; I am sure there are myriad interacting systems to maintain.

Being smart is only part of being a programmer. Experience (knowing how to make changes, why to make changes, when to make changes, and, more importantly, when not to make changes) and knowledge (understanding the code, the architecture, the business rules, security requirements, interoperability with other systems, and more) are even more critical. Expecting people barely out of college to change delicate systems without any of this guarantees disaster.

Of course, disaster might be the requirement, the desired outcome. Breaking things is much easier than changing or fixing them. I am sure that even with limited understanding, you can break almost anything. I fear this is the end goal: break every system until everything is gone. Why is unfathomable.

I don't buy for a second the stated goal to make the government more "efficient." To do that requires experience and knowledge, not just a large hammer. Complex systems are difficult to improve, especially while you are still using them. I've seen in my career people try to improve systems way smaller than the Treasury with large teams and accomplish little. Working at the travel company in the early 10s, they spent months studying why our hotel search was so pathetically slow (average 10 seconds), and all they did was cut down the number of hotels returned (then 9 seconds). Efficiency is hard, especially with legacy code and where the end state is undefined. This goal is just meaningless.

I have no idea how this plays out at this point. These few young people who were tasked with this don't realize that when you mess with people's money, they get really angry. They now have a target on their backs.

Sometimes, programmers like to complain about bureaucracy, but as systems become more complex and failure becomes more damaging, processes must be established to ensure control. I fear that even if these programmers cannot do whatever they are tasked with, the result could be disastrous by accident simply because they don't know what they are doing.

We will probably see what happens; I doubt it will take long.

]]>
<![CDATA[What I Miss And Don't From Working As A Programmer]]>I retired almost four years ago after nearly 40 years as a programmer. While I still write code daily, I do so to support my generative art rather than get paid for it.

Most of my career was spent building new applications, and no matter what my title was, I

]]>
https://thecodist.com/what-i-miss-and-dont-from-working-as-a-programmer/6653465b4068b1047e750a4fSat, 25 Jan 2025 02:36:14 GMT

I retired almost four years ago after nearly 40 years as a programmer. While I still write code daily, I do so to support my generative art rather than get paid for it.

Most of my career was spent building new applications, and no matter what my title was, I was always writing code. Apart from my half year at Apple in the mid-90s and one architect position, writing code was a significant part of every job. Only one job, at a game company, consisted solely of supporting an existing body of code. Many of my jobs involved not only building new applications but often doing so for a company or organization with no history of building them, so I usually had to worry about more than code: process, hiring, and product development too. For nine years, starting in the mid-80s, I also started and ran two small startups.

I decided to retire in 2021 because, after such a long career, I had done enough, and it was time for something different. My art was much more interesting and challenging to me (and still is).

My final job was at the largest company I ever worked for. For 5.5 years, I worked on primarily new projects strategically important to my division, often visible enough for the CEO to pay attention. Working for a huge company is a different beast, with so many teams to interact with, so much politics to watch out for, and so many executives wanting to put their stamp on everything good (and run away from anything that might fail). Schedules, features, and decisions changed almost daily, making everything painful. Despite being the lead, I was also a full-time programmer on my team, so I often had to work more than forty hours a week due to all the meetings. One time, I worked seven days a week for months, which was no fun.

I don't miss the long hours. Besides my two startups in the mid-80s to mid-90s, I never worked much overtime in any other job. The difficulty of building large complex applications (or, in this case, large parts of our two applications), dealing with many different teams, seemingly endless meetings, and the constant churn of features and schedules took all of my experience to keep up with. Everything my team shipped worked well, and I rarely had any issues despite the complexity of the business. However, it took all that time to make sure of it.

The final year of my career was during the Pandemic. We all worked at home, and for part of that time, our business, like many others, was mainly shut down. Eventually, work picked up again, and significant new projects emerged while business was slow. I took most of the year to decide on retirement and gave three months' notice.

I still miss some things about working. While the hours sucked, and the politics and budgets and not having enough people was no fun, I miss being involved in everything, being around people all day (for the last year, virtually), solving complex problems, watching things ship that worked properly, and seeing customers appreciate what we built. Being retired and just making art all day and doing some writing (not enough!) is pleasant but not very interactive. Even though my 2000s decade was terrible (layoffs, employers going out of business, and other no-fun things) and caused me a bit of burnout, the last decade was fun again. It is almost too quiet after so many decades of working in many different places, with many interesting people (and some terrible ones) and building all types of software.

It's not that I desire to be a programmer again. However, I would still enjoy doing high-level product design. I spent at least one-fourth of my career involved in product design. In my last job, I was usually the person who asked all the questions of the product team and often discovered everything they missed or didn't realize. Other programmers were happy to have someone else ask the pointed questions, and the product team was glad someone found issues before their executives did. Figuring out what to build, especially in my last employer's industry, would be fun, and I would have a massive advantage since I understand what it takes to build things. It's not a big priority, though.

Eating lunch with my co-workers daily was fun when we worked in the office. Working remotely was not an issue since my team stayed whole that year. I mostly met with people from several time zones away anyway, so the location didn't matter much. The last time I had lunch at the office was with the entire iOS and Android teams; later that evening, we were told to start working at home. I do miss that interactivity.

I still read about new technology every day, as I have since my first job in the early 80s, but it's less important now since it's not my job anymore. My focus as a programmer is mainly on the art code (Swift), sometimes updating my static site generator, and the occasional server reinstallation or updates. Beyond that, it's mostly habit. I could still do my job today if I had to, but it's not all that interesting, and from what I know about my last employer, there is little new to do—a problem I never had.

Forty years is a long time. What will there be forty years from now if you are starting today? Who knows. I spend a lot of time reading AI research and plan to write on that topic soon. Unlike many, I am not all that excited about today's AI. It does have uses, but I don't expect today's AI will be able to wholesale replace programmers in what is left of my lifetime. I know what I did over my career; it's far too complex for today's LLMs to deal with. Programmers will still be necessary for a long time, although their exact tasks might differ. Forty years is about half the time programming has existed, so I've seen and experienced a lot of change. I do not doubt that someday, programming like we do today will disappear, but not for a long time. However, can the programmers starting today deal with the ever-increasing pace of change and still be productive? Will today's AI be a big help or create a generation of marginal programmers unable to adapt to what comes next?

It's different when you can think about the present and contemplate the future, not have to keep working and being productive, and not become obsolete as everything changes around you. I don't envy any of you. Change was always a constant feature, even when I started, and it has become more so over time. You can never stop learning, stop adapting, or stop paying attention.

You can retire in the end!

]]>
<![CDATA[How Many Hours Can You Code?]]>How many hours a day can you write code, and at what point does the quality of your work go down? Even more important is how many weeks and months of that max effort you can still be effective.

In my life, there have only been three periods where I

]]>
https://thecodist.com/how-many-hours-can-you-code/671143ebc99d313bd3cce137Wed, 18 Dec 2024 20:03:18 GMT

How many hours a day can you write code, and at what point does the quality of your work go down? Even more important is how many weeks and months of that max effort you can still be effective.

In my life, there have only been three periods where I worked crazy hours, and only two of those were multiple months. Generally, over my forty years, I worked a reasonably regular schedule. The standard forty hours a week rarely involve only writing code; you are always doing other things, such as meetings, talking with people, reading or writing documentation, emails, or, more recently, Slack messages and the like, or even reviewing code (which is like coding).

This post documents my first experience working long hours of coding. It lasted only a week and culminated in working about 30 hours in a row (followed by sleeping for two days). I was 26.

In 1985, I conceived the idea that became Trapeze, the first real alternative to the row-and-column addressing model that spreadsheets had used since VisiCalc. Trapeze used formulas referencing blocks of data by name, an odd hybrid of APL, spreadsheets, and ideas you see today in frameworks like Pandas. I got investments from various people, assembled a team, and started building the product. We were still working on it a year later, hoping to ship it at the January 1987 MacWorld in San Francisco. In August of 1986, we all went to the Boston MacWorld.

Of course, there was no internet, so experiencing other people's applications was only possible if you bought a copy. There were also few magazines, so it wasn't easy to see how different applications' UIs looked. In those days, I designed and built the UI for Trapeze while my two partners built the internals. Looking at all the demos was the first time I saw what UI should look like.

What I saw horrified me; my UI design was terrible and unusable.

So I set out to design and build an entirely new UI while keeping the old one working, which let my two partners continue finishing the internals until I could integrate the replacement.

What followed was four and a half months of working around 90 hours a week. Every weekday, I worked from 8 to 6, went home for two hours, and returned from 8 until 2 A.M., or until the second Jolt Cola failed to keep me awake. Then I slept for a few hours before starting over. On Saturdays I worked from 8 until 6, and on Sunday afternoons for a few hours more. I had to work in the office, as no one had a Mac at home, even though the office lacked AC on weekends.

We barely finished the product before the SF MacWorld show, where we introduced Trapeze. I did many of the demos and discovered a horrible statistical bug: every time I clicked the central pop-up hierarchical menu in my UI, it had a 10% chance of crashing. That's not a great look for a new product, and that menu made Trapeze the first Mac product to feature pop-up and hierarchical menus, even before Apple released support for them in the Mac OS. When we returned to the office, we had to (manually) put together all the packaging.

In addition to all the coding, I had to deal with the manual, give press interviews, and all the other founder things.

After this, I was so run down that people forced me to go to the doctor, and he put me in the hospital for several days.

Besides introducing that crash (which was fixed before shipping, thankfully), I also failed to consider how to sell Trapeze appropriately. Trapeze was a powerful application and should have been sold directly to the people most likely to understand that. Instead, I went with standard mail order and retail sales via distributors, which put us in the same category as Microsoft Excel and confused people into thinking we were just another spreadsheet. Predictably, we failed miserably. We should have sold via a sales team, as Mathematica did two years later. The customers who saw Trapeze as a powerful modeling tool (it could do things Excel still can't do today) found it indispensable. My addled brain simply had no cycles left to consider how poor a fit retail was.

While I didn't die, the experience was no fun at all. Thankfully, it would be thirty years before I had to work crazy hours again. Working ninety hours a week is nuts, much less for months on end. Programming is one of the most brain-intensive activities you can do, and pushing it for so long affects your physical, mental, emotional, and social well-being.

Fast-forward three decades, and I am working for what would be my final employer in a giant non-tech company, working in a large division on a strategically important part of our iOS apps. I was hired because one business unit was undergoing a significant business change ($40M budget), and the iOS and Android apps would be the customer connection. They had spent 18 months planning what to build, and my partner and I spent 9 months on the code. At that point, it looked to be moving into a 3-month testing phase, and he was transferred to another project in another country. I was the only iOS programmer left on the project.

There was a customer service type of internal app I had built earlier, and now a bunch of execs wanted a considerable expansion of it (one of those "we want it to do anything and everything" requests), so I was assigned to do that as well, considering testing would not require my full attention.

Then came the "mother of all change requests," as I like to call it.

Someone requested a massive expansion of the mostly completed product that would also affect the entire business unit's operations. Suddenly, many new teams had to be added, and everyone would have to make massive changes, but there was no more budget, so I would have to do all the iOS work by myself. In addition, a new team was required to build a crucial piece with which I would interact. They had only two contractors and a lead who did nothing. They had no idea what to do or where to start, so I told them I would help and became their virtual lead, reviewing the code and guiding the development. I even had to do QA, as that team had none.

Of course, I also had to work on my changes, support that team every day, and work on that customer service app simultaneously. This turned into three months of working seven days a week, around 70 hours each week.

The customer service project should have been postponed, but company politics did not allow that, so I was stuck doing three jobs simultaneously. By the end of the three months, I remember sitting in my manager's office, wondering aloud if I could do it for much longer. I was 59 years old!

The lessons I learned from the earlier story did make a difference. I ensured I slept well, ate decent food, and slowed down a lot, being more deliberate in everything. When you are tired, the easiest thing to do is work faster to get it done quicker, but that's how you make mistakes and miss things. It seems counter-intuitive, but that is the only way to ensure you do the right thing and avoid fixing things later. Juggling three tasks was hard; context-switching when tired is no fun.

After three months, I finished my code, the contractors delivered their part, and the customer service app was done and beginning deployment (eventually, various business units used it worldwide). We shipped the big project, and it was a hit with customers and executives. Eventually, it would bring in low nine figures of revenue per year (I calculated this one thing brought in more money than everything I ever worked on put together times ten).

These are two examples from my life; hopefully, they resonate somewhat. I've known many people who worked long hours, and it's never good.

Programming is not the only job that demands long hours.

In my first job in the early 80s, a manager worked crazy hours and was a high-energy person. One day, he fell face down on the table during a meeting, instantly dead. I once ate in a restaurant where someone was hauled out on a stretcher; I asked a waiter, and he said the manager had been working double shifts seven days a week (i.e., about 110 hours a week). Young Wall Street bankers often work 90 hours a week and also frequently use ADHD drugs and/or cocaine to fuel the work. All I used was Jolt Cola.

Working these sorts of hours is a terrible way to treat your body and mind. Sometimes, you have to, but doing it for long periods can be a killer. It's not worth it, no matter how much you make. I did it for my own company in the first story, but I could have killed myself. In the second, I had little choice since there was no one else and no budget to hire anyone, even if that would have helped. I know people, such as in the triple-A games industry, who work insane hours month after month. In their case, this is often followed up by being laid off. It's not worth it. You will suffer health issues, mental issues, social stigma, and likely burnout. Even if you are paid massive amounts of money, if you die, you can't benefit from it, and your employer will replace your corpse.

We can all manage it in short bursts, but month after month is too much. If you find yourself stuck in this, consider leaving. I would respect someone who realizes they are destroying themselves and gives up more than someone who toughs it out until they collapse.

Programming can be hard enough in regular circumstances, especially today; trying to do it in a brain fog with little sleep and bad food is unlikely to go very well. I was fortunate the second episode came out well. The first did not, as we had to give up and sell our product, and I started another company to build Mac apps for other people (we worked on Persuasion for a year and Deltagraph for five years). If I had had a clearer head, Trapeze might still be around, but my head was full of mush.

Don't be that person if you can help it.

]]>
<![CDATA[My Art And Color-After Tiling]]>I make generative art with Swift and use tiling in many pieces. Truchet tiles are generally arranged randomly and contain everything appearing in the final image. What I do differently is to separate the layout of tiles from colorizing the image. I call this technique "Color-After Tiling."

For

]]>
https://thecodist.com/my-art-and-color-after-tiling/67213565c99d313bd3cce2afTue, 29 Oct 2024 19:39:02 GMT

I make generative art with Swift and use tiling in many pieces. Truchet tiles are generally arranged randomly and contain everything appearing in the final image. What I do differently is to separate the layout of tiles from colorizing the image. I call this technique "Color-After Tiling."

For example, this tile (a 1x1) was used for "Tiling #1."

My Art And Color-After Tiling

In raw form, it looked something like this:

My Art And Color-After Tiling

Finally, with a bit of color and shadow:

My Art And Color-After Tiling

Additionally, the following tile (a 2x2) was used to make the work "Shield Wall":

My Art And Color-After Tiling
My Art And Color-After Tiling

I randomly lay out the tiles, which contain only lines, and then use my tools to fill the contiguous areas with other graphics (solid colors, paint, geometric shapes, etc.), a step that removes the lines. Sometimes I fill the areas randomly, and sometimes (as in this case) I choose which areas become which color. This gives me more complex images than a traditional Truchet tiling.

My tiles come in 1x1, 1x2, 2x2, and 3x3 sizes with a consistent connection scheme so that they all sync together: lines generally meet at the midpoint of each side of a 1x1. So far, I have defined nearly 60 different tiles. I use them in various combinations, including all tile sizes in the same image. The key is eliminating each tile's lines, though the lines can also be kept as a separate layer, as in the example above.

There are two ways to implement this: geometrically or using pixels. I create the tiles in code and render them into a pixel image. The other method is to define everything geometrically and assemble the areas as vector shapes, filling those (and thus never drawing any strokes). I prefer the pixel method (at 300 dpi), as I have many highly optimized CPU-based tools for my platform (Mac Studio).

Most of my works range from 50 to 200+ megapixels.

Here are a few more examples of what I can do.

"Jamblaya": a 36"x36" that uses three 2x2 tiles with a fill of nested ovals.

My Art And Color-After Tiling

"Float On The Clouds": a 30"x30" using three 1x1 tiles painted.

My Art And Color-After Tiling

"Sunny Delight": a 48"x36" using a single tile and multiple techniques.

My Art And Color-After Tiling

"Happy Trails": a 40"x40" using multiple techniques and only a single 1x1 tile.

My Art And Color-After Tiling

"A Smile A Day", a 48"x48" work using a single 3x3 tile with five connections per side:

My Art And Color-After Tiling

This technique could be used in a limited fashion in the real world. You could draw the tiling on a canvas and then hand-paint the contiguous areas; on a floor, you could draw the tiling and then fill the areas with mosaic tiles, where grout will cover the lines. Doing a more traditional tiled floor or surface using ceramic tiles is possible: you would have to precalculate all the required combinations of unique tiles, which is likely prohibitively expensive for more than two colors. Unlike regular ceramic tiles, the final "image" combines shape elements, not simply the tiles themselves.

Generally, I use techniques different from those of my fellow generative artists. Being the only artist I know using Swift (most use Javascript), I benefit from higher-performance code and can try crazier things. Color-after tiling is not the only unique tool/process I use.

So far, I have more than 120 works using this method, about 13% of what I have listed on my website. I am not selling any of them yet: without a wider audience, there is little point. No one in the greater art community has any idea I exist!

I design my art to be single-edition prints, ranging in size from 20" to 48" (and one 60"), to be printed at 300 dpi.

]]>
<![CDATA[How I Defeated An MMO Game Hack Author]]>In the late 2000's, I worked at a niche MMO game company. We had a small team, not a lot of money, but a loyal audience. It was a game of skill without any of the usual powerups and unreality, and the players enjoyed the challenge.

Then, one

]]>
https://thecodist.com/how-i-defeated-an-mmo-game-hack-author/6711928fc99d313bd3cce13bSat, 19 Oct 2024 17:25:59 GMT

In the late 2000's, I worked at a niche MMO game company. We had a small team, not a lot of money, but a loyal audience. It was a game of skill without any of the usual powerups and unreality, and the players enjoyed the challenge.

Then, one day, we heard a rumor that a hack was available for the game, and suddenly the players were angry. Since we only had four programmers, someone had to investigate, and I volunteered. People were sure that everyone who killed them in the game only did so because they used the hack. We tried to assure people we were looking into it, but we had no idea how widespread it was.

Note that everything I recount here is likely long obsolete today. We supported both Windows and Mac, but Windows was the apparent target; in any case, only 10% of our player base used a Mac.

I found the hack for sale by some company in China, but it quickly became evident that this was not the originator; they appeared to have stolen the hack and were reselling it. After more digging, I found the original company, which sold this hack and many others for far more popular games. Given our small player base, I wondered why they even cared.

So I went home and bought a copy. I did not want our IP address associated with the purchase. I had a Windows PC, so I could install it and see what it offered. I had not spent much time with Windows internals then, as I had mostly worked on the Mac version and the cross-platform game core (it was an OpenGL game).

The hack had the usual features: wall hacks, unlimited ammo, map additions, and an aimbot. As a customer, I also had access to their forum, so I could see what their customers were saying about us (we were too stupid ever to catch them, etc.). I could tell there weren't many people posting (maybe a dozen), so I hoped it meant the usage was low. The author also posted in the forum, though he was mostly factual and did not speculate much. It appeared to be a two-person company; one wrote all the hacks, and the other did the business side. The hack cost around $30 monthly, more than we charged for the game.

I decided to focus on several things. First, I would learn how to hack a game on Windows, then plan how to make hacking the game more difficult or at least slow the author down, and finally, see if I could determine who was actually using the hack.

Honestly, this is probably the most fun "project" I ever did, two months of battling a clever adversary!

I then spent several days reading about hacking Windows games and all the techniques people used, ignoring anything targeted at more complex games from prominent companies. Reading the forum, I could tell the author was himself a player of our game; I had no idea who yet. So the hack did not exist simply for money: he wanted to use it himself! Any revenue was just gravy. Call Of Duty or whatever big games they supported were far more profitable.

After studying hacking for a while, I started looking at how to make our game more frustrating for the hack author to keep up with. At the core of the game loop was an array of all currently viewable people/vehicles (nominally up to 128). This fixed structure contained most of the dynamic game state, so it was clearly an easy target for anyone writing a hack. However, when you were live in the game, if the server did not receive the heartbeat network packets in a timely fashion, it would drop the client after a short while, so debugging while live was not really possible. We had an offline mode where you could practice everything (the game had many vehicles and infantry types). It was the same game loop in a unique game "location," with no network required. The author was building his hacking code offline.

Our game had been written in a mishmash of C, C++, Lua, and Javascript nearly a decade before I started there. The code was ugly, frustrating, and a pain to work on. However, having C macros available would make things easier. So, I tackled the vehicle/infantry array first, using macros and accessor functions. I first made the offline and online arrays different but transparent to the programmers. Then, I started building macros for common datatypes like locations, which scrambled the contents in various ways and biased the values. Each build would have different offsets. I also built shadow values, which duplicated (with different biases) important data. Often, a hack is built by watching memory changes while playing the game.

These would not entirely stop a hack, but the time needed to update the hack would increase substantially after every release, and until it was fixed, the current hack would be less useful. Since there was only one author, he would waste time updating something that made him little money. I also changed the OpenGL pipeline enough to make the wall hacks painful to update.

One thing we had in the game was a particular "vehicle" used by game managers and the company to watch people play. It was usually invisible but could be made visible to scare people who were breaking the rules. I read in the forum that the hack users could always see the "vehicle," so they knew to act normal when it was around. The server programmer and I worked together to remove it from the vehicle list entirely unless it was made visible. Then the forum complained they no longer knew when anyone was watching. Somehow, they still thought we (in this case, me!) were stupid. Ha!

Now that I had begun to frustrate the programmer, I focused on identifying who was using the hack. I knew the hacking team had been angered by someone stealing their hack binary, so they had changed it to download the hack at runtime. Our app had a launcher, containing various settings, that started the game, so the hack had to be running when the game launched.

So, I figured there had to be an open connection used to download the hack. I found an API in Windows that let me see what ports were open, and it was simple enough to discover what IP addresses the hack used. Anyone with open connections to those IP addresses had to be running the hack application. If I saw that, I would set an innocuous bit in our launch data, which our game would read (in multiple places); the game binary would then set another innocuous bit in several packets sent to the server as the game ran, and the server code would see it and mark this user as a hack user.

After shipping a release with this code, we could tell which customers used the hack, including the programmer (who was likely the first, since he had to test it). It turned out to be only a couple of dozen people, thankfully. We then had a fun meeting to decide what to do with this information. The ideas ranged from slowly reducing the accuracy of their weapons during a session, to changing them into Bozo The Clown, to having a giant arrow floating in the sky (outside their own view) pointing at the hacked user's location. Ultimately, we decided the best idea was to do nothing for a few weeks and then ban them with a generic TOS violation.

After that, the hacking company gave up and removed the product; after all, it was not worth the effort.

None of this is relevant today. But it was a fun couple of months, and I was satisfied with "winning" a small battle. I don't understand why some people feel the need to cheat in a game; I also think that cheating gets boring fast, and most people likely just move on. The whole point of a game, especially one based more on skill than luck, is the challenge; remove that, and the fun rapidly vanishes. Cheating like this also ruins the game for those who enjoy playing, since skill alone is no longer enough.

Games like World of Tanks (which I play today) are server-adjudicated, so all decisions based on user input are made in the safety of the server. This makes hacking the game to gain an advantage difficult or even impossible. There are still ways to cheat: a popular scheme in WOT is to pay companies that, during low-population times, stuff games with non-playing tanks, which you can then kill for enormous damage or experience, for bragging rights. It seems stupid to me, and WOT should be able to identify the players who do this and the accounts used to spawn the non-playing tanks, but they don't seem to do much. Given that they run analytics on every player, every tank, and every map, this shouldn't be difficult. As with many multiplayer game companies, it's a matter of investment and caring.

Anti-cheat technology today is a big business. It sometimes ruins the game experience for those who don't cheat, but it's also necessary: cheats make a lot of money for those who develop them, and cheating makes the game no fun for everyone else. As in war, offense and defense are constantly fighting each other.

At least in my case, it was a fun battle! Soon after, I left the company and spent the rest of my career building iOS apps until I retired.

]]>
<![CDATA[Why I Use Swift To Make Generative Art]]>Now that I am retired from programming for a living, I make generative art (not AI; see my post What Is Generative Art?) every day. I belong to a discord community of generative artists, yet I stick out because I am the only person using Swift as my chosen language.

]]>
https://thecodist.com/why-i-use-swift-to-make-generative-art/67000b10c99d313bd3cce11dFri, 04 Oct 2024 15:47:41 GMT

Now that I am retired from programming for a living, I make generative art (not AI; see my post What Is Generative Art?) every day. I belong to a Discord community of generative artists, yet I stick out because I am the only person using Swift as my chosen language. Others may be out there, but I have never found another one.

Most generative artists use Javascript (often with a framework called p5.js), a few use Java (often with p5's ancestor Processing), and Python and C++ are occasional options.

So why Swift? Firstly, I wrote exclusively in Swift from 2015 to 2021 in my final job, so I was very familiar with it. I prefer Swift to all the other programming languages I used during my four decades (Basic, APL, Assembly, Pascal, C, C++, Objective-C, Java, PHP, Ruby, and Javascript). Modern syntax, constant improvements, decent performance, and language features that make quality more straightforward to achieve made me appreciate it during all those years working on strategically important iOS code.

When I started that final job, everything was written in Objective-C, and executives were reluctant to allow Swift in. My manager at the time and I devised a way to encourage acceptance by telling them that Apple wanted new code written in Swift and would eventually no longer approve apps only written in Objective-C, and they believed us. Sometimes, fooling executives is easy!

Building iOS apps in Swift makes sense, but why create art with it? After retirement, I briefly looked at Processing and p5.js, but the idea of using Java or Javascript was not appealing. Using a popular prebuilt framework would also limit my opportunities to differentiate my art; I know of people who use vanilla Javascript for the same reason.

I use only a couple of open-source data-structure libraries and otherwise stick to macOS frameworks like Core Image, Core Graphics, and GameplayKit. Every other algorithm is devised by me. With Swift's decent performance and a 20-core Mac Studio, I get all the speed I need. Most of my generated images are from 50 to 200+ megapixels in size. So far, I have not felt a need to use Metal, as my CPU-based code is fast enough; if I ever venture into making animations, I might add it.

People who target NFTs as the result of their generative art mostly have to use Javascript and p5.js, as NFT selling sites assume this if you want to dynamically generate new works instead of simply uploading a static result. In my process, generation is supplemented by many manual steps, including image manipulation, digital painting, and occasionally even AI (though never used directly, since it can only make tiny 1-megapixel images). While my art is highly varied, to me it is all accomplished by a similar process, just with many different tools and options.

To people who view my art and understand generative art, I am so far out in left field that I am not even in the stadium (to use a baseball metaphor). That's the whole point of my art: going as far out as I can go and still making something you can hang on a wall. Someday, the art world might even see it; so far, it's been primarily programmers!

Swift isn't perfect, but neither is any programming language; you try to pick something you are comfortable with that gets the job done. I chose Swift for its strengths, its familiarity to me, and thus the ability to try things out in code quickly and keep what works. I no longer care about shipping code to the world; I only care about the output.

Note that I am not associated with Apple in any way (despite having worked there in the bad old days from 1995 to 1996). Like anyone else, I am still trying to get a response from Developer Tech Support to my question. Some things even Swift can't do!

Please see the art on my website.

]]>
<![CDATA[How Talking Over A Wall Changed My Direction As A Programmer]]>I started my programming career in October 1981 at a large defense contractor (GD). At the time, my goal was to work for a couple of years and then continue my education with a Ph.D. in Chemistry (I had already been accepted).

The office I worked in was a

]]>
https://thecodist.com/how-talking-over-a-wall-changed-my-direction-as-a-programmer/66fd6216c99d313bd3cce086Wed, 02 Oct 2024 18:21:51 GMT

I started my programming career in October 1981 at a large defense contractor (GD). At the time, my goal was to work for a couple of years and then continue my education with a Ph.D. in Chemistry (I had already been accepted).

The office I worked in was a whole floor, broken up by occasional cube walls; most of us worked on metal desks in open areas. Since it was the early 80s, no one had a computer on their desk, and all terminals were in a bullpen with sign-up sheets.

Across the wall from my desk was a little team of two called the "Microcomputer Group." Their job was to plan and facilitate the introduction of PCs (a generic term then, not just IBM or clones) into the company. The manager and I started talking over the wall since we both had Apple II's at home, and we shared our adventures with them. He wasn't a programmer but more of a tinkerer.

One Friday in fall 1983, he told me over the wall that some folks had come from Headquarters, along with people from every division, to talk about something involving Apple II's, and he invited me to come along. I thought it would be a fun diversion from my usual work (I think I was still testing the JOVIAL compiler we were having built). Little did I know, this would change the direction of my life.

Leading the meeting, which was filled with many people I didn't know, were two VPs and what today would be called the CIO (a title not in use at the time). Apparently, the President of the company had requested that someone build an app so he could read his email at home on the Apple II he had bought for his son. At the time, email was only available to executives (or, more often, their secretaries), and reading it at home was unheard of. They had ignored him, since there was a commercial VT-100 terminal emulator that could be bought, and they figured they could just buy that if he insisted. However, when they tried that, he got mad and insisted he wanted it built in house, so he gave them an ultimatum of some kind, and they panicked. Thus this meeting was held to find someone who could not only build a VT-100 emulator on an Apple II (with only a 40-column screen), but do it in a week!

So when they asked who could write 6502 assembly on an Apple II, I raised my hand, figuring everyone there was a programmer, and found mine was the only hand raised! So they dismissed everyone else and explained to me what I had to do.

The job was to write a VT-100 terminal emulator that dialed a modem bank in St. Louis, in 6502 assembly, and displayed the 80-column data on a 40-column screen (somehow). All I would have was a couple of manuals, some sample code from Apple on talking to a Hayes modem, an Apple III with a dev environment plus an Apple II and modem, and exactly 7 days. Plus, the visitors would stay local; I guess they were worried about their jobs or something.

I had written things in assembly languages at work, but all I knew of the 6502 was a little play coding at home. None of the projects I had worked on had a UI, and I had never done any communications programming. Of course, there was no Stack Overflow, open source, or internet, and no one I could ask for help.

Somehow I figured out how to flip the screen to show the 80-column messages and built the app by the following Saturday, when I was able to demo it using one of their logins. I worked through the entire last night (they fed me bad pizza at 2am, causing some delay when it "returned"). It worked, and the President was thrilled.

Having conquered a somewhat impossible task, I decided I wanted in on the Microcomputer Group. My manager's manager said no, but I used my new friends in high places to get in.

Soon I became the only PC (Apple and IBM) programmer that I knew of in the world's largest defense contractor. I had a great time with that manager; we both got email addresses at a local bulletin board, but the only people we knew to email were each other. After a year, I left, eventually to start my first company.

Basically, talking over the wall and going to that random meeting gave me the idea that I should stick with programming, and I never looked back.

About 15 years later, I ran into that manager again; he was close to dying from a kidney ailment. I spent a day with him, driving him around so he could take some photographs and having lunch. We didn't talk much about work; mostly he wanted to get out of bed and see the world a bit.

He passed away soon afterwards. But I will always remember how a shared interest between us changed the direction of my life.

(Today I make generative art; see it on my website.)

]]>