Blitter Studio – https://blitterstudio.com – We still make them like they used to!

CI/CD for Amiberry using Github Actions (pt 3)
https://blitterstudio.com/ci-cd-for-amiberry-using-github-actions-pt-3/
Thu, 16 Jun 2022 16:07:34 +0000

In the first and second parts of this series, we saw how to set up a CI/CD solution for Amiberry, using self-hosted runners. The workflow will automatically build on every new commit, but also create and publish a new release (including binaries and an auto-generated changelog), based on the git tag. So generally it works as intended, but there’s always room for improvement. What could we improve? Performance, of course!

The self-hosted approach works well, but since I’m using (mostly) Raspberry Pi devices to compile, it takes longer than I’d like to produce all the binaries. Additionally, each device has to produce more than one binary (e.g. Dispmanx and SDL2 versions), and it can’t do that in parallel. And the whole workflow is not finished until all of the jobs it contains are finished, so if I also wanted a new release, it would take more than 40 minutes in total to complete the whole thing. Surely we can do better than that!

We can’t make the Raspberry Pi compile faster than its CPU allows, so we’ll have to use something else to do the job: cross compile on another (faster) platform. Let’s see what we’ll need to make that happen:

  • a Linux environment where the magic will happen. Something like Debian or Ubuntu would do just fine.
  • the environment will need a few things installed:
    • a cross compiler (the one for the architecture we’ll be compiling for, e.g. armhf for 32-bit and aarch64 for 64-bit)
    • the Amiberry dependencies, for the architecture we’ll be compiling for (e.g. libsdl2)
    • git, build essentials and autoconf, since we’ll need those as well
    • environment variables configured accordingly, to use the cross compiler instead of the distro’s native one, when compiling Amiberry
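To make that last point concrete, here’s roughly what those variables could look like for the armhf case. The arm-linux-gnueabihf-* names are the standard Debian cross toolchain triplet and are an assumption here, not taken from the actual images:

```shell
# Hypothetical setup for a 32-bit ARM (armhf) cross build, using the
# standard Debian toolchain triplet names. With these exported, make
# picks up the cross compiler instead of the distro's native one.
export CC=arm-linux-gnueabihf-gcc
export CXX=arm-linux-gnueabihf-g++
export STRIP=arm-linux-gnueabihf-strip

# Quick sanity check that the variables are set as expected
echo "CC=$CC CXX=$CXX"
```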

Docker

We could just use another self-hosted solution for this, but I wanted to take things a step further and use something more portable. So, we’ll create a Docker image for each environment we’ll need (currently that will be ARM 32-bit, ARM 64-bit and Linux x86. I’ll keep the Mac Mini for the macOS compilation as a self-hosted runner).

I created 3 separate Dockerfiles, one for each environment I wanted to use:
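To give an idea of what such an environment contains, a minimal armhf Dockerfile could look roughly like the following. The base image and package names are my assumptions for illustration, not the exact contents of the published images:

```dockerfile
# Hypothetical sketch of a cross-compile image for 32-bit ARM (armhf).
# Package names are Debian/Ubuntu ones; the real Dockerfiles may differ.
FROM debian:bullseye

# Enable the armhf architecture so we can install the target's libraries
RUN dpkg --add-architecture armhf && apt-get update && \
    apt-get install -y --no-install-recommends \
        crossbuild-essential-armhf \
        git autoconf automake libtool \
        libsdl2-dev:armhf libsdl2-ttf-dev:armhf libsdl2-image-dev:armhf

# Point the build at the cross toolchain instead of the native one
ENV CC=arm-linux-gnueabihf-gcc \
    CXX=arm-linux-gnueabihf-g++

# The sources get mounted here at runtime (docker run -v <src>:/build)
WORKDIR /build
CMD ["/bin/bash"]
```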

First I tested that these worked to compile Amiberry locally, without errors. Then I deployed these to DockerHub, so that Github Actions (and everyone else) can grab them from there:

Then I tested that I can still compile Amiberry using the deployed images from DockerHub, like so:

docker run --rm -it -v D:\Github\amiberry:/build midwan/amiberry-docker-aarch64:latest

The above example uses the aarch64 image, as you can see. I’m passing it one parameter: the local path where I have my Amiberry sources checked out (in this example, that’s D:\Github\amiberry), which it will map to the image’s /build directory. Running the above will end up in a bash shell, waiting for further commands. All I have to do is type the command to compile Amiberry for the platform of choice, and check the output:

make -j8 PLATFORM=rpi4-64-sdl2

After a few minutes, the compilation is finished and I have a binary ready. I copy the binary over to my Raspberry Pi running Manjaro Linux (64-bit), and test it – it works as expected, great! Now let’s add this to the workflow.

Adapting the workflow

We can now change a few things on the workflow, for each job:

  • Change the runs-on value, since we don’t want/need to run this on the self-hosted runners anymore. We can use ubuntu-latest instead.
  • We need a different step for the compilation, since we won’t be doing that on the self-hosted runner anymore. I needed an action that would allow me to specify a Docker image to use, and give it some options to run in it. I found that https://github.com/addnab/docker-run-action worked perfectly for my needs.
  • The rest of the steps can remain as they were, since we don’t need to change anything else in the process. Just the compile step.

Considering the above, the compile step now becomes something like this:

    - name: Run the build process with Docker
      uses: addnab/docker-run-action@v3
      with:
        image: midwan/amiberry-docker-armhf:latest
        options: -v ${{ github.workspace }}:/build
        run: |
          make capsimg
          make -j8 PLATFORM=rpi4-sdl2

Obviously, the above changes slightly depending on the platform we are compiling for (we’ll use the aarch64 image for 64-bit ARM targets, and different PLATFORM=<value> options). One thing you may have noticed is the use of a special variable: ${{ github.workspace }} – this one represents the directory where the sources were checked out in the previous step, and it’s exactly what we need to map to the /build directory of the Docker image, similar to what I did with my local test above.
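For example, the same step for the 64-bit Raspberry Pi 4 target would use the aarch64 image and the platform value from the earlier local test:

```yaml
    - name: Run the build process with Docker
      uses: addnab/docker-run-action@v3
      with:
        image: midwan/amiberry-docker-aarch64:latest
        options: -v ${{ github.workspace }}:/build
        run: |
          make capsimg
          make -j8 PLATFORM=rpi4-64-sdl2
```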

Limitations

I can’t use the docker images to produce the Dispmanx binaries, since the Dispmanx specific files are not in the image. That’s OK for me, I can still use the self-hosted runners to compile those. They will only need to compile those now, so it shouldn’t take too long to finish.

Results

With the new Docker approach, we have a few extra benefits. Not only is the compilation time significantly reduced (from 10-13 down to 4-5 minutes per platform), it can also run most of these jobs in parallel, since it doesn’t have to wait for one to finish before starting the next one.

Here’s a sample of the latest compilation done after a commit I made:

Compile time, using Docker

Now compare that to the time it took to produce binaries for the 5.2 release, done with the self-hosted runners only:

Compile time, using self-hosted runners only

The total time for a new release has now gone from 40m 56s down to 18m 30s, for all the included binaries. Not bad!

CI/CD for Amiberry using Github Actions (pt 2)
https://blitterstudio.com/ci-cd-for-amiberry-using-github-actions-pt-2/
Sun, 12 Jun 2022 11:53:18 +0000

In the previous part of this series, we saw how to set up some self-hosted runners on my local devices, connect them to a Github repository and prepare to give them some jobs. I want to use this setup to automate builds of Amiberry whenever I push a new commit, as well as publish new releases whenever I choose to. I am going to use git tags to mark new releases, with the version number (e.g. v5.2).

So let’s dive right into the workflow content, shall we?

The syntax is YAML, which means indentation matters. A lot. As in, you’ll get errors if something does not have the right indentation. Annoying for sure, but manageable if you are aware of it, at least.

Starting from the top, we’ll need to give our workflow a name, and define when it should be triggered. I will go for the boring name of “C/C++ CI” which just indicates what this workflow does. And I want this workflow triggered on two scenarios:

  • Whenever I push a new commit to the “master” branch
  • Whenever I push a new tag with a new version. The format should be vX.Y, where X/Y are numbers. Something like v5.2, for example.

Considering the above, the first part of our workflow looks like this:

name: C/C++ CI

on:
  push:
    branches: [ master ]
    tags: 
      - v[1-9]+.[0-9]

Then we have to define the jobs we want it to perform. You can specify multiple jobs, and each job will run in parallel with the others. And of course, each job consists of individual steps.

For each job, you can specify where that will be executed (e.g. a specific environment or self-hosted runner), which means I can separate out the builds I want for each device. So then I can add things like this:

jobs:

  build-rpi3-dmx-32bit-rpios:
    runs-on: [self-hosted, Linux, ARM, rpios32, dmx]
    steps:
...

  build-rpi4-dmx-32bit-rpios:
    runs-on: [self-hosted, Linux, ARM, rpios32, dmx]
    steps:

Of course, I still need to specify the steps each job will take. Let’s take a look at those next.

The first step should be to check out the repository. Remember, these jobs run on each device separately, so each runner needs to get the sources before it can compile them. We can use the “checkout” action for this step, and it’s quite simple:

- uses: actions/checkout@v3

That would be enough for most cases, but in Amiberry’s repository I also have some git submodules which I’d also like to retrieve, in order to build the IPF supporting library (capsimg). The Checkout action has an option that we can use to do just that, so it then becomes:

    - uses: actions/checkout@v3
      with:
        submodules: 'true'

And the whole job so far, looks like this:

  build-rpi3-dmx-32bit-rpios:
    runs-on: [self-hosted, Linux, ARM, rpios32, dmx]
    steps:
    - uses: actions/checkout@v3
      with:
        submodules: 'true'

And I’ll go ahead and create one job for each build I want to be triggered, to start giving the workflow some structure. Now the jobs list looks like this:

jobs:

  build-rpi3-dmx-32bit-rpios:
    runs-on: [self-hosted, Linux, ARM, rpios32, dmx]
    steps:
    - uses: actions/checkout@v3
      with:
        submodules: 'true'

  build-rpi3-sdl2-32bit-rpios:
    runs-on: [self-hosted, Linux, ARM, rpios32]
    steps:
    - uses: actions/checkout@v3
      with:
        submodules: 'true'

  build-rpi4-dmx-32bit-rpios:
    runs-on: [self-hosted, Linux, ARM, rpios32, dmx]
    steps:
    - uses: actions/checkout@v3
      with:
        submodules: 'true'

  build-rpi4-sdl2-32bit-rpios:
    runs-on: [self-hosted, Linux, ARM, rpios32]
    steps:
    - uses: actions/checkout@v3
      with:
        submodules: 'true'

  build-rpi3-dmx-64bit-rpios:
    runs-on: [self-hosted, Linux, ARM64, rpios64, dmx]
    steps:
    - uses: actions/checkout@v3
      with:
        submodules: 'true'

...

I’m not posting the full list above, as I assume you get my point.

Compiling the sources

Great, so far we have a list of jobs, that will run on my self-hosted devices, and checkout my git repository including submodules. We’re ready to do some compiling!

Since my devices already have all Amiberry requirements pre-installed, I can simply call the compile commands I want from the command line. Those are basically two for each platform:

  • Build the capsimg library, since I want to include that in the ZIP archive
  • Build Amiberry itself, for each platform I’ll be using

It’s quite easy to do these steps:

    - name: make capsimg
      run: make capsimg
    - name: make for RPIOS RPI3-DMX 32-bit
      run: make -j4 PLATFORM=rpi3

The name line is just for the logging part, which is good to have in order to see the individual steps and what each one is doing. The run line executes what is specified there on the device’s default shell, like bash for Linux. That’s perfect for my needs!

I’ll need to add these steps for each job of course, modifying what the PLATFORM=<value> contains, since Amiberry supports many of them. The make capsimg line can remain the same for all of them, however.
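For instance, the same two steps in the rpi4 SDL2 job would become the following — the PLATFORM value here is taken from the SDL2 build used elsewhere in this series:

```yaml
    - name: make capsimg
      run: make capsimg
    - name: make for RPIOS RPI4-SDL2 32-bit
      run: make -j4 PLATFORM=rpi4-sdl2
```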

Uploading build artifacts

When this step is finished, and if no errors occurred, we should have an Amiberry binary available in the current directory. The next step is to take that and all related directories and data files, and upload them somewhere that testers can find them. There is an Action that does just that, the upload-artifact one.

    - uses: actions/upload-artifact@v3
      with:
        name: amiberry-rpi3-dmx-32bit-rpios
        path: |
          amiberry
          capsimg.so
          abr/**
          conf/**
          controllers/**
          data/**
          kickstarts/**
          savestates/**
          screenshots/**
          whdboot/**

Notice that I’m not creating any ZIP Archive before uploading these. That’s because when you try to download the uploaded contents, a ZIP file will be dynamically created. If I had uploaded a ZIP file here, then we’d have a ZIP file inside a ZIP file, which I didn’t want. This is documented in the upload-artifact action “Limitations” section:

During a workflow run, files are uploaded and downloaded individually using the upload-artifact and download-artifact actions. However, when a workflow run finishes and an artifact is downloaded from either the UI or through the download api, a zip is dynamically created with all the file contents that were uploaded. There is currently no way to download artifacts after a workflow run finishes in a format other than a zip or to download artifact contents individually. One of the consequences of this limitation is that if a zip is uploaded during a workflow run and then downloaded from the UI, there will be a double zip created.

https://github.com/actions/upload-artifact

The name line is important here, as that will be the name of our artifact once it gets uploaded. I tried to keep a clear naming convention, to indicate the exact type of Amiberry version each one represents.

At this point, our workflow already covers some of our requirements:

  • It automatically triggers whenever a new commit is pushed to the repository
  • It will checkout the sources, and build the capsimg and Amiberry binary, for each platform I included
  • It will upload those binaries and all related data files/directories to a location that people can get it from

Creating archives for new Releases

That looks good already, but what about new Releases? I want it to take some extra steps, but only if I have used a git tag with the specific format I specified, which would mean I am generating a new Release. Let’s see how we can do that.

    - name: Get tag
      if: github.ref_type == 'tag'
      id: tag
      uses: dawidd6/action-get-tag@v1
      with:
        # Optionally strip `v` prefix
        strip_v: false

There are a few things to mention here. We can use an “if” line in an action/step, to specify that this step should only be executed if a certain condition applies. The condition I wanted in my case was to check whether the workflow was triggered by a git tag, not just a new commit. Remember, we set our git tag trigger at the top of this workflow, with a specific format to look for – so all we need to do here is make sure the following steps will only be executed if the workflow started because I pushed a new git tag.

Then I want to get the actual tag value that triggered this, and store it in a variable. We’ll be using that variable in the next step, because I want to name my ZIP archives with the version as well. For example, if I used the tag v5.2 to trigger this workflow, I want the text “v5.2” stored somewhere so I can use it to name my archive “amiberry-v5.2-rpi4-sdl2-rpios.zip” or similar.

I found an action that helps me do the job easily, and that’s what the uses: dawidd6/action-get-tag@v1 line is about. It will output the tag value to a variable named “tag”. It can optionally also strip out the “v” prefix, if you enable that, as you can see in the snippet above.

The next step is to ZIP the files I want to include in the new Release. I want to use the “tag” variable I stored above, as part of the filename. Otherwise, this is rather straightforward – I just have to make sure that zip is installed on the self-hosted runners, of course!

    - name: ZIP binaries
      if: github.ref_type == 'tag'
      run: zip -r amiberry-${{ steps.tag.outputs.tag }}-rpi3-dmx-32bit-rpios.zip amiberry capsimg.so abr conf controllers data kickstarts savestates screenshots whdboot

You can use variables with the special syntax shown above: ${{ variable name }} will be replaced with the variable value, during runtime. In my case that specific variable holds the value of my git tag, so it will be replaced with something like v5.2 (keeping the same example as before here). So that would make the whole line look like this, when executed (assuming the tag was “v5.2”):

run: zip -r amiberry-v5.2-rpi3-dmx-32bit-rpios.zip amiberry capsimg.so abr conf controllers data kickstarts savestates screenshots whdboot

Creating a Changelog with each release

Next, I wanted to have a changelog generated with each new release, that would include what changed since the previous release was published. I wanted to have this generated automatically, based on the commits that came between the two releases, and it took me a while to find something that would work exactly like I wanted. Eventually, I found something that came close enough:

    - name: Create Changelog
      if: github.ref_type == 'tag'
      id: changelog
      uses: loopwerk/tag-changelog@v1
      with:
          token: ${{ secrets.GITHUB_TOKEN }}
          config_file: .github/tag-changelog-config.js

Notice that I’m including the if: github.ref_type == 'tag' line here as well. I don’t want this step to trigger on new git commits, only when I’m publishing a new release. And the way I’ve designed things so far, is that new releases would only be triggered when I push a new git tag, with the specific version format I am using.

What was interesting about this specific action is that it allows you to configure the look of your changelog through a Javascript config file. I went for the default example, as it seemed good enough for my needs, but it does require that your commits use a specific label format in order for this to work. It also means I will need to add that config file to my repository, of course. It needs to find it somewhere…

Creating a new Release

Now that we have everything in place, it’s time to add one more step in our workflow: Creating a new Release and publishing all items (zip archive, changelog). Again, there are several actions to do this, but the one I found that worked well for me is this:

    - name: Create Release
      if: github.ref_type == 'tag'
      uses: ncipollo/release-action@v1
      with:
        allowUpdates: true
        omitBodyDuringUpdate: true
        body: ${{ steps.changelog.outputs.changes }}
        artifacts: |
          amiberry-${{ steps.tag.outputs.tag }}-rpi3-dmx-32bit-rpios.zip

I liked this one, because it allows me to create a new release and optionally upload an artifact to it. And it can also use my generated changelog, from the previous step. Perfect!

There are a few options I had to tweak, to make sure it worked the way I wanted:

  • allowUpdates: true – Since I have multiple jobs running in parallel, each one will complete at a different time (when the binary compilation is done) and will try to add its ZIP archive to the same release. I need to allow updates to a release, in order for this to succeed.
  • omitBodyDuringUpdate: true – I don’t want any changes to the Release body, when it’s doing an update. It will only be uploading multiple ZIP archives as the jobs complete, so the body should remain the same once it was generated.
  • body: ${{ steps.changelog.outputs.changes }} – This is where it gets interesting. I can specify the body text of the Release, and I can use the output of my Changelog step here.
  • artifacts: finally, I can specify which artifact will be uploaded to the Release. Each job will complete separately, and will upload its own ZIP archive here, that’s why it’s important to allow Updates to the Release.

Of course, these should all be added for each job I added, which will make the file a bit long, but that’s because Amiberry supports so many targets (and I’m not even compiling all of them here). You can see the complete file here, if you want.

The final output can be seen on the v5.2 Release of Amiberry, on Github.

Triggering the workflow manually

The automated job works fine, but sometimes (and especially during testing), you may want to trigger the workflow manually as well. This is trivial to do, as all we need to do is add one line to the top of the workflow:

name: C/C++ CI

on:
  workflow_dispatch:
  push:
    branches: [ master ]
    tags: 
      - v[1-9]+.[0-9]

Adding workflow_dispatch: will make it possible for us to trigger this workflow from Github, with the use of a button:

Run workflow manually

This workflow worked well for my needs, but there’s always room for improvement. For example, compiling on each device takes quite some time, especially on slower ones. The whole process is automated of course, so I can just “fire and forget”, but it can still take more than 40 minutes before a full release is complete on Github. In the next part of this series, we’ll take a look at how we can improve that, and have everything complete in less than half that time!

CI/CD for Amiberry using Github Actions (pt 1)
https://blitterstudio.com/ci-cd-for-amiberry-using-github-actions-pt-1/
Sat, 11 Jun 2022 12:09:48 +0000

The Amiberry emulator supports multiple platforms, so if you want to provide binaries on new releases (like I do), you’ll have a lot of compiling to do. Now, you could just compile a binary manually for each platform you have available, but that would take a (very) long time (especially if you have some slow devices). And what happens if you want to have binaries compiled for testing before a major release? Like for example, on each new commit or pull request? That’s where we need automation, and specifically what’s called Continuous Integration and Continuous Deployment – or CI/CD for short.

I have been using Azure DevOps for many years, and the Azure Pipelines specifically, for setting up similar scenarios for CI/CD. However, since everything else regarding the Amiberry project is hosted on Github, and since there is a similar feature there, named Actions, I thought I’d give it a try and move over my pipelines to Github as well.

But let’s set the goals first:

  • I want an automated job, that would compile the latest version of Amiberry every time I commit something new in the repository (let’s say the “master” branch only, for now).
  • After it compiles each binary, it should compress it with ZIP and upload it somewhere testers can access it. The ZIP archive should contain everything that’s needed to run Amiberry, not just the executable (so data files and directories should be included). If you have the requirements to run Amiberry (e.g. SDL2), the archive should be portable – just extract it and run Amiberry from it.
  • The ZIP filename should indicate the platform and OS it was compiled for. For example, amiberry-rpi4-sdl2-64bit-debian.zip would be such a filename.
  • When the time comes to publish a new release, I want the automated job to do the publishing for me. I will use git tags to mark a new release, with the following example format: v5.2. So basically, if I set a git tag on a certain commit, with that version format, a new release should be published automatically with that version number.
  • A new release should also include the ZIP archives, which besides the platform and OS we had before, should also include the version in the filename. For example, amiberry-v5.2-rpi4-sdl2-64bit-debian.zip
  • A new release should also contain a Changelog, containing what changed since the previous release. The information for that should come from the commits themselves and should be fully automated also. I don’t want to maintain a file manually if I can avoid it.

Looks like quite the list, doesn’t it? Let’s see what we can achieve!

First things first: Let’s read the docs: https://docs.github.com/en/actions (I like doing things by the book, from time to time).

Github Actions uses so-called “workflows”, which are YAML-based files that dictate what should happen. It’s basically the same thing as Azure DevOps’ “Pipelines”, with slightly different syntax in some parts, so if you’re familiar with one, the other shouldn’t be too hard to get into either.

Those workflows are saved in your repository (under .github/workflows), so you can include them in the git history as you make changes to them. Nice!

Here’s an example of such a workflow file, from the Quickstart:

name: GitHub Actions Demo
on: [push]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v3
      - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
      - run: echo "🍏 This job's status is ${{ job.status }}."

The workflows need to run somewhere, and you can either use Github-hosted VMs, or use your own (self-hosted runners). To get started, I chose the self-hosted option for now.

If you choose to use self-hosted runners, you will need some device(s) to run the jobs on, obviously. I had several Raspberry Pi 4s around, a RockPro64 and a Mac-Mini which I could use. That should be good enough!

You can easily install the required software to make each of those machines a “self-hosted runner”, by going in your Repository settings, under Actions -> Runners:

Actions -> Runners

Then click on “New self-hosted runner“:

New self-hosted runner

Select the type of platform you will be installing the runner in, and it will give you a list of commands you can use, to download and configure it. Sweet!

Create self-hosted runner

I used that process to install 5 separate self-hosted runners:

  • A Raspberry Pi 4, running Manjaro Linux 64-bit
  • A Raspberry Pi 4, running RPI-OS 32-bit
  • A Raspberry Pi 4, running RPI-OS 64-bit
  • A RockPro64, running Manjaro Linux 64-bit
  • An Intel Mac-Mini, running macOS Catalina

All devices already had the Amiberry requirements installed, to make sure they can compile it without problems.

I installed the runners, configured them to talk to my Github repository, set them up to run automatically as services and checked that they showed “Green” from Github’s side once they were up and running.
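For reference, the generated commands follow this general pattern — config.sh and svc.sh are the scripts the runner package actually ships, but the URL and token below are placeholders for whatever your Runners page gives you:

```shell
# Placeholders: <owner>/<repo> and <TOKEN> come from your repository's
# "New self-hosted runner" page -- do not reuse these literal values.
./config.sh --url https://github.com/<owner>/<repo> --token <TOKEN>

# Install and start the runner as a service, so it survives reboots
sudo ./svc.sh install
sudo ./svc.sh start
```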

Coming in the next chapter: Time to write some content for those workflows!

Note: if you just want to skip ahead and see the result, the repository already contains everything: https://github.com/midwan/amiberry/blob/master/.github/workflows/c-cpp.yml
