<![CDATA[Box of Cables]]>https://boxofcables.dev/https://boxofcables.dev/favicon.pngBox of Cableshttps://boxofcables.dev/Ghost 5.88Tue, 17 Mar 2026 05:40:39 GMT60<![CDATA[How to build a custom kernel for WSL in 2025]]>https://boxofcables.dev/how-to-build-a-custom-kernel-for-wsl-in-2025/67408d67e9557114aba32540Sat, 23 Nov 2024 15:33:14 GMT

Previously, I have shown you how to build a custom kernel for WSL on openSUSE and for KVM acceleration.

Thanks to the introduction of the WSL System Distro, a lightweight Linux distro that runs on top of WSL and manages various tasks, it is no longer necessary to write distro-specific guides for building a custom WSL kernel.

Because the WSL System Distro is standardized across all WSL installs, we can build a custom kernel for WSL the same way on any WSL install, regardless of your day-to-day WSL distro, whether it's AlmaLinux, Pengwin, Ubuntu, openSUSE, or Arch.

The WSL System Distro is powered by Azure Linux, also known as CBL-Mariner. The System Distro image ships with WSL, though it is mostly invisible. Unlike your day-to-day WSL distro though, the System Distro image is immutable. That means, even if you make changes in the System Distro, like installing packages or creating files, the minute you exit the System Distro, they are lost, and it reverts to the original version that the WSL team ships.

Because the System Distro is the same everywhere, any guide targeting it works on any WSL install. Azure Linux is an rpm-based distribution that uses tdnf, a simplified version of dnf, the rpm-based package manager. tdnf can connect to the Azure Linux Microsoft package repository and download any dependencies you need, including all the Linux kernel build dependencies. We just need to build the kernel and export it to our Windows file system before the instance closes and everything is lost.

Here is how to do that and keep that System Distro environment alive in the process.

Requirements

  • WSL 2 installed with wsl.exe --install
  • Update WSL with wsl.exe --update
  • Any WSL distro installed; after all, you need a distro to boot into with your custom kernel

Entering the WSL System Distro

Getting root access to the WSL System Distro is as simple as:

wsl.exe --system --user root

Note that without --user root you will enter as the lower-privileged wslg user, which is used for WSLg GUI rendering, one of the aspects of the WSL experience the System Distro manages.

Note of Caution

Because the System Distro is immutable, if for any reason you exit the System Distro, WSL is accidentally shut down, or your computer sleeps while your kernel build is running, all your progress will be lost, and you will have to start over.

To keep the System Distro instance you are working on for your build alive, without losing any progress, I recommend two things:

  1. Download Microsoft PowerToys, enable Awake, and set it to keep your system awake indefinitely.
  2. Then, open a second tab in Windows Terminal and enter the System Distro as above with:
wsl.exe --system --user root

And then start a long sleep loop while we are working in the other tab:

while true; do sleep 10000; done

Installing Dependencies

Then we need to install the build dependencies for the Linux kernel in the WSL System Distro using tdnf:

tdnf install -y gcc glibc-devel kernel-headers make gawk tar bc perl python3 bison flex dwarves binutils diffutils elfutils-libelf-devel zlib-devel openssl-devel ncurses-devel curl

Just a reminder: when you exit the WSL System Distro instance, all these dependencies will be gone. It will reset to the factory image shipped by the WSL team, and you will need to reinstall them.


Kernel Sources

Move to your local home folder inside the WSL file system to improve build performance:

cd ~

Grab the tarball of the latest release of the WSL kernel sources from GitHub:

curl -k -s https://api.github.com/repos/microsoft/WSL2-Linux-Kernel/releases/latest | \
grep '"tarball_url":' | \
cut -d '"' -f 4 | \
xargs -n 1 -I {} sh -c 'curl -k -L -o $(basename {} .tar.gz).tar.gz {}'
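If the pipeline above looks opaque, here is what the extraction stages do, demonstrated on a single simulated line of API output (the release version shown is illustrative, not necessarily current):

```shell
# One line of the GitHub releases API response, as the grep stage would emit it
line='  "tarball_url": "https://api.github.com/repos/microsoft/WSL2-Linux-Kernel/tarball/linux-msft-wsl-6.6.36.6",'

# cut -d '"' -f 4 takes the fourth double-quote-delimited field: the bare URL
url="$(printf '%s\n' "$line" | cut -d '"' -f 4)"
echo "$url"

# The xargs step then builds a local filename from the URL's last path component
echo "$(basename "$url" .tar.gz).tar.gz"
```

The second echo prints linux-msft-wsl-6.6.36.6.tar.gz, which is why the tar command below can match ~/linux-msft-wsl-*.tar.gz.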

Unpack the tar archive:

tar -xzf ~/linux-msft-wsl-*.tar.gz

Preparing Your Kernel Config

Drop into the kernel sources directory:

cd microsoft-WSL2-Linux-Kernel*

You will need to use a kernel config file. The default WSL2 kernel config files are in ./Microsoft.

You can edit these files in place, or copy them elsewhere, make your edits there, and then point to them with KCONFIG_CONFIG= when running make on your kernel build.

I personally prefer to take the respective default WSL2 kernel config file for my architecture and copy it to .config in my main kernel sources folder, which doesn't require setting KCONFIG_CONFIG=.

For this Arm device, that would be:

cp Microsoft/config-wsl-arm64 .config

On x86 devices, that would be:

cp Microsoft/config-wsl .config
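If you want one command that works on both architectures, the choice can be keyed off uname -m. This is a small sketch of my own, not something from the WSL docs:

```shell
# Pick the default WSL2 config file matching the build machine's architecture
case "$(uname -m)" in
  aarch64) config=Microsoft/config-wsl-arm64 ;;
  x86_64)  config=Microsoft/config-wsl ;;
  *) echo "unexpected architecture: $(uname -m)" >&2; exit 1 ;;
esac
echo "Using $config"
# cp "$config" .config   # run this from the kernel source tree
```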

Customizing Your Kernel

If you copied the default WSL2 kernel config file to .config, you can now edit using your preferred text editor, such as nano, vim, or emacs.
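For reference, .config is plain text: enabled options are KEY=y (or m, or a quoted string), and disabled options appear as "# ... is not set" comments. Here is a minimal sketch of flipping a symbol without an editor, done on a scratch file so nothing real is touched; CONFIG_EXAMPLE_OPTION is a made-up placeholder, not a real kernel symbol:

```shell
# Create a scratch file in .config's format
cat > /tmp/demo.config <<'EOF'
# CONFIG_EXAMPLE_OPTION is not set
CONFIG_LOCALVERSION="-custom"
EOF

# Enable the disabled symbol by rewriting its "is not set" line
sed -i 's/^# CONFIG_EXAMPLE_OPTION is not set$/CONFIG_EXAMPLE_OPTION=y/' /tmp/demo.config
cat /tmp/demo.config
```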

My personal preference, usually when turning features on and off, is simply to use the TUI interface for customizing the kernel, which can be accessed with:

make menuconfig

This opens an interface where you can easily enable and disable features, drivers, optimizations, etc., without having to decipher .config.

Save your config file as .config (unless you want to save it elsewhere and set KCONFIG_CONFIG= on your kernel make command).

You can also apply patches, if you are an advanced kernel hacker.

Building Your Kernel

It's now time to build your custom kernel.

Build with make, specifying the maximum number of cores in your system to speed things along:

make -j$(nproc)

While it's building, check that the sleep loop in your other tab is still running, just in case.

Maybe you want to open another tab, drop into the System Distro, install htop, and watch the magic happen:

tdnf install htop -y
htop

Installing Your Custom Kernel

Once the build is complete, we copy it into place.

On Arm devices:

cp arch/arm64/boot/Image $(wslpath "$(cmd.exe /c "echo %USERPROFILE%" | tr -d '\r')")

On x86 devices:

cp arch/x86/boot/bzImage $(wslpath "$(cmd.exe /c "echo %USERPROFILE%" | tr -d '\r')")
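The tr -d '\r' in these commands is load-bearing: cmd.exe emits CRLF line endings, and without stripping the carriage return, wslpath would be handed a path with a stray \r on the end. A quick self-contained illustration (the path is simulated, so no WSL is needed to try it):

```shell
# Simulate cmd.exe output, which ends lines with \r\n
winpath="$(printf 'C:\\Users\\me\r\n')"   # $() strips the trailing \n but keeps the \r
clean="$(printf '%s' "$winpath" | tr -d '\r')"
printf '%s\n' "$clean"
```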

Then configure our .wslconfig to point to the custom kernel, unless we are replacing an existing custom kernel, in which case make sure the configuration matches.

On Arm devices:

powershell.exe /C 'Write-Output [wsl2]`nkernel=$env:USERPROFILE\Image | % {$_.replace("\","\\")} | Out-File $env:USERPROFILE\.wslconfig -encoding ASCII'

On x86 devices:

powershell.exe /C 'Write-Output [wsl2]`nkernel=$env:USERPROFILE\bzImage | % {$_.replace("\","\\")} | Out-File $env:USERPROFILE\.wslconfig -encoding ASCII'
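Either way, the resulting %USERPROFILE%\.wslconfig should look something like this (with your own Windows username; note the escaped backslashes, which are what the .replace("\","\\") step produces):

```ini
[wsl2]
kernel=C:\\Users\\yourname\\bzImage
```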

Testing

After confirming Image or bzImage is in place and .wslconfig is properly set, we can shut down our WSL instance. Confirm everything we copied from the System Distro is in place before doing so, because all changes (installed dependencies, kernel sources, and built kernel) will be lost.

wsl.exe --shutdown

Then restart WSL in the System Distro or your preferred working distro:

wsl.exe --system

And run:

uname -a

We should see our kernel, with a timestamp matching our build date and time.

Happy kernel hacking!

]]>
<![CDATA[Getting Started Creating a Linux Desktop App in Visual Studio using Avalonia and WSL]]>https://boxofcables.dev/creating-a-linux-desktop-app-in-visual-studio-using-avalonia/66ac089493c6612a36399ac4Mon, 02 Oct 2023 21:39:06 GMT

Pre-requisites:

Install the Avalonia Extension in Visual Studio.

Go to "Manage Extensions" in Visual Studio. To get there quickly, open Visual Studio, continue without code, and look for the "Extensions" menu option.

Click "Manage Extensions" and type "Avalonia" in the top-right search box.

Click "Download" on "Avalonia for Visual Studio":


Click "Download" on "Avalonia Toolkit":


Then click "Close". Close all Visual Studio windows.

The VSIX Installer will then do some processing.


When VSIX Installer asks, allow the enabled modifications:


Visual Studio will do some magic:


Once everything is installed, click "Close":


Enable WSL Debug Support in Visual Studio

Open the Visual Studio Installer and click "Modify" next to Visual Studio 2022:


Click on "Individual Components":


In the search box, type "WSL" and you will see ".NET Debugging with WSL".


Check ".NET Debugging with WSL" and click "Modify":


Visual Studio Installer will install .NET Debugging with WSL:


Build a Sample App

Open Visual Studio and click "Create New Project":


You will see all possible project templates:


To narrow the choices, select C# under "Languages", Linux in "Platforms", and Avalonia in the "Project Types" drop-down menu:


Select "Avalonia .NET Core App" and click "Next":


Choose a suitable destination folder and project name in the following screen:


Accept the defaults in the next page. As of writing, .NET 7 is still the current stable version of .NET, though .NET 8 is right around the corner. By the time you read this, .NET 8 may be the new default.


Visual Studio will load a blank project template:


Select "WSL" in the target platform dropdown:


Attempt to build and run the blank template. However, when you run the app for the first time, you may see an error with this message:


.NET is missing from our WSL distro. Thankfully, Visual Studio will handle the installation of .NET on WSL on most WSL distros.

In newer builds of Visual Studio you will get this message in Visual Studio before attempting to run on WSL:


Click "OK" and Visual Studio will launch .NET install on our default WSL distro:


It might ask you for your admin password before installation.

It will start downloading and installing the required .NET version for the default WSL distribution:


Once the installation finishes, the WSL window will close automatically, and you will see the following message in Visual Studio:


After running again, Windows will load WSL and run the application inside it. The first start may take some time to load.


The application will launch using WSLg in WSL2:


Open Source .NET Avalonia Apps

To experiment with building additional Linux apps using .NET and Avalonia, check out the following projects:

GitHub - waliarubal/Jaya: Cross-platform file manager application for Windows, Mac, and Linux (planned mobile support)

GitHub - dan0v/AmplitudeSoundboard: A sleek, cross-platform soundboard, available for Windows, macOS, and Linux

GitHub - anovik/SmartCommander: An open-source cross-platform file manager for Windows and Linux based on Avalonia

Building Smart Commander

Use your preferred git client to clone Smart Commander; in my case, GitHub Desktop.


In your default WSL distro, install libice6 and libsm6:

sudo apt install libice6 libsm6

Open Visual Studio and locate the .sln file in the /src/ folder of the project:


Open the project. Make sure your target is set to WSL:


And click the run button next to "WSL".

Visual Studio will go into debug mode:


And your Linux app will launch:

]]>
<![CDATA[Windows Dev Drive Benchmarks]]>https://boxofcables.dev/windows-dev-drive-benchmarks/66ac089493c6612a36399ac3Tue, 30 May 2023 21:18:58 GMT

Dev Drive was recently announced at Microsoft Build 2023:

Dev Drive is a new form of storage volume available to improve performance for key developer workloads.
Dev Drive builds on ReFS technology to employ targeted file system optimizations and provide more control over storage volume settings and security, including trust designation, antivirus configuration, and administrative control over what filters are attached.

Dev Drive can be set up from the new Dev Home:

Dev Home is a new control center for Windows providing the ability to monitor projects in your dashboard using customizable widgets, set up your dev environment by downloading apps, packages, or repositories, connect to your developer accounts and tools (such as GitHub), and create a Dev Drive for storage all in one place.

Or Dev Drive can optionally be configured under Disks & volumes in Settings, as either a virtual VHD drive or a raw disk:


I wanted to put Dev Drive to the test on my developer machine, a Windows Developer Kit 2023, an Arm device, running the latest Windows Insider Dev Channel build.

Setup

Windows Developer Kit 2023
Qualcomm Snapdragon 8cx Gen 3
32GB LPDDR4x RAM
512GB NVMe
Windows NTFS C: Drive
Windows ReFS D: Drive created as a Dev Drive
BitLocker Enabled on C: and D:

Windows Environment

Windows 11 Dev Channel Build 23466.1001
Windows Terminal Preview 1.18
Visual Studio Community 2022 Preview
Dev Home Preview 0.137.141.0
PowerShell 7.3.4
Python 3.11

Building Flask on Windows

Flask is a lightweight Python web framework. This PowerShell script builds Flask on Windows from source and then runs all the tests from the Flask repository.

I thought building a Python-heavy app natively on Windows would be a good starting point for comparison.


The script shifts as much caching as possible to the folder (or a parent folder) on the drive it is run from, to minimize dev tooling's tendency to cache to C:

winget install python git.git --disable-interactivity #install Python and git

$sw = [Diagnostics.Stopwatch]::StartNew() #start timer

python -m pip cache purge #clean up global pip cache
Remove-Item -Recurse -Force $env:LOCALAPPDATA\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages #clean up local pip cache

python -m pip install --upgrade pip #upgrade pip
python -m pip install pipenv #install pipenv

Remove-Item -Recurse -Force flask #remove old repo
Remove-Item -Recurse -Force Pipfile* #remove old Pipfile
Remove-Item -Recurse -Force pipenvcache #remove old pipcache

md pipenvcache #create new pipcache folder
$env:PIPENV_CACHE_DIR = "$pwd\pipenvcache" #set cache folder for pipenv
$env:PIPENV_VENV_IN_PROJECT=1 #set pipenv to create virtual environment in this folder

git clone https://github.com/pallets/flask #clone repo
Set-Location flask
python -m pipenv install setuptools wheel
python -m pipenv install -r requirements/dev.txt #install flask dependencies
python -m pipenv install $pwd #install flask
python -m pipenv install -r requirements/tests.txt #install test dependencies
Get-ChildItem -Path . -Filter .\tests\*.py | ForEach-Object {python -m pipenv run python $_.FullName} #run tests
Set-Location ..

$sw.Stop() #stop timer

Write-Host $([string]::Format("`n🏁️🏃💨 Total time: {0:d2}:{1:d2}:{2:d2} ⏱️📎🏆️🎉",
                                  $sw.Elapsed.Hours,
                                  $sw.Elapsed.Minutes,
                                  $sw.Elapsed.Seconds)) -ForegroundColor Green

Write-Output $([string]::Format("Total time: {0:d2}:{1:d2}:{2:d2}",
                                  $sw.Elapsed.Hours,
                                  $sw.Elapsed.Minutes,
                                  $sw.Elapsed.Seconds)) | Out-File -FilePath .\flaskbuildtime.txt -Append

You can download my Flask build script here:

GitHub - sirredbeard/devdrive-benchmarking

The results showed roughly a 10% reduction in Flask build and test time when run on the Dev Drive, D:


Interesting result. I have some more benchmark scripts on the way.

]]>
<![CDATA[The GitHub Copilot Lawsuit Threatens Open Source and Human Progress]]>https://boxofcables.dev/the-github-copilot-lawsuit-threatens-open-source-and-human-progress-1/66ac089493c6612a36399ac2Mon, 07 Nov 2022 14:24:51 GMT

Background

Matthew Butterick, "writer, designer, pro­gram­mer, and law­yer" has teamed up with the class action plaintiff's law firm Joseph Saveri Law Firm to sue GitHub and Microsoft over GitHub Copilot.

Over at githubcopilotinvestigation.com, Butterick describes his reasoning. He says:

When I first wrote about Copi­lot, I said “I’m not wor­ried about its effects on open source.” In the short term, I’m still not wor­ried.

That's good to know. But he then goes on to wax nostalgic about his specific experience in open source and how GitHub Copilot is something new and different.

But as I reflected on my own jour­ney through open source—nearly 25 years—I real­ized that I was miss­ing the big­ger pic­ture. After all, open source isn’t a fixed group of peo­ple. It’s an ever-grow­ing, ever-chang­ing col­lec­tive intel­li­gence, con­tin­u­ally being renewed by fresh minds....Amidst this grand alchemy, Copi­lot inter­lopes.

It reads like someone threatened by innovative technologies, gatekeeping because this isn't how he did open source over the last 25 years. Butterick throws in some vague, hand-wavy anti-Microsoft rhetoric, also outdated and trite (but sure to appeal to the FSF crowd), for good measure:

We needn’t delve into Microsoft’s very check­ered his­tory with open source...

Almost none of the people mentioned in the article he linked to, which is about Microsoft in the late 1990s and early 2000s, are still at Microsoft. Since that time, Microsoft has seen a profound pivot towards open source software. The CEO, Satya Nadella, has not been implicated in any of the anti-FOSS activities at Microsoft during that time, has embraced, even promoted, the pivot towards open source software, and, for what it's worth, came to Microsoft from Sun Microsystems.

Most of the product managers, engineers, and decision-makers at Microsoft these days were barely out of high school in the early 2000s. In tech, it's ancient history.

Butterick even knocks Bill Gates for his open letter to computer hobbyists, which contained the radical idea that developers should be able to define the terms on which their software is distributed, which is the fundamental basis of modern free and open source software. Ironic, because if there are no rules on software code, then he would have no basis for his lawsuit.

Sadly, the Copilot case also seems prepared to bring open source software patent claims, something free and open source software advocates largely solved with the GPL v3 and Red Hat's work with the Open Invention Network:


I thought we were all against software patents, particularly in open source. I guess not.

Why F/OSS Advocates Should Support GitHub Copilot

The legal underpinnings of GitHub Copilot are based on two basic principles:

  1. Fair use.
  2. The de minimis exception (US) or incidental inclusion (UK and elsewhere).

Fair use protection is broad under US copyright law and is codified in EU copyright directives, although adoption at the member state level varies. Many other non-US and non-EU countries have similar exceptions, though the US is the broadest to my knowledge.

Fair use is a doctrine that allows the limited use of copyrighted material without obtaining prior permission of the copyright owner.

Fair use - Wikipedia

Since 2016, US law has held that automated scanning, indexing, and minor reproduction of copyrighted works, specifically Google's indexing of books for Google Books, is protected fair use.

Authors Guild, Inc. v. Google, Inc. - Wikipedia

Fair use has come into play in open source software most recently with the Google v. Oracle case, in which the courts held that Google's clean-room implementation of the Java API did not violate Oracle's copyright on those API calls.

Google LLC v. Oracle America, Inc. - Wikipedia

Fair use protects the reimplementation of other APIs, such as the Win32 API in ReactOS or WINE, in open source. It also protects reverse engineering of proprietary applications to create open source alternatives. Importantly, as in the Google Books case, it protects scanning copyrighted datasets to develop indices, ML models, and other derivative works.

GitHub's scanning of the source code available on its servers to develop an AI model is protected fair use for this reason.

Incidental inclusion is a legal term of art from UK copyright law. In the UK, incidental inclusion protects accidentally including small bits of copyrighted material by this distinct carve-out:

A typical example of this would be a case where someone filming inadvertently captured part of a copyright work, such as some background music, or a poster that just happened to be on a wall in the background.

This specific carve-out is needed in the UK and other non-US countries where fair use protections are not as broad.

In the US, accidentally including small bits of copyrighted material is protected under the umbrella of broad fair use protections, and is referred to as the de minimis exception.

Under US fair use law, the intent, amount, and effect of an infringement determine whether it is protected by the fair use doctrine. Where the intent, amount, and effect of an infringement are minimal, it is covered by the de minimis exception.

GitHub Copilot is not intended to violate GitHub contributors' copyrights.

While there have been a handful of viral examples of verbatim reproduction of code by Copilot, GitHub has produced reports that state the actual rate of verbatim reproduction is exceedingly low. There is reason to believe the model will continue to improve and that rate will go down.

Finally, the effect of that verbatim reproduction is also minimal. GitHub Copilot is not currently capable of reproducing whole software projects, undermining other companies, or destroying open source communities.

It is an AI-assisted pair programmer that is great at filling in boilerplate code we all use and borrow from each other in free and open source software, protected by FOSS licenses and fair use.

While this is a general overview of the legal basis of GitHub Copilot, there are several valuable in-depth analyses that go into further detail:

GitHub Copilot is not infringing your copyright
Analyzing the Legal Implications of GitHub Copilot - FOSSA
The release of GitHub Copilot raises questions about potential copyright infringement and license compliance issues.

It is also worth pointing out that organizations like the Free Software Foundation do not actually dispute the legality of GitHub Copilot; they just raise similar vague concerns about it and throw in anti-Microsoft rhetoric for good measure, to appease their base. They must fundraise, after all.

Sure, GitHub’s AI-assisted Copilot writes code for you, but is it legal or ethical?
GitHub’s Copilot is an AI-based, pair-programming service. Its machine learning model raises a number of questions. Open source experts weigh in.

What Could Happen If GitHub Loses

What are some of the potential outcomes of the GitHub Copilot litigation?

  • Fair use and "incidental inclusion" in open source software becomes more restrictive.

Ever copy and paste code snippets from StackOverflow? Did you remember to properly cite and add the relevant Creative Commons license to your LICENSE.md file for that code? How about borrowing some sample code from a blog or looking at a GitHub Gist and reproducing it in your code?

We all know we should apply that attribution/license, but do we always? How much of that code is running in production in your company or open source community right now?

Thankfully, that kind of usage is likely protected under fair use. If that goes away, copying code like this could open free and open source developers up to additional liability, expensive lawsuits, and even troll lawsuits.

We could see a whole industry crop up of open source software copyright trolls, going after open source projects for minor infringements of licenses.

  • Training ML datasets on copyrighted materials becomes more restrictive.

The ability of ML developers to train their models on copyrighted datasets under fair use is dramatically accelerating AI/ML. The advances we have seen in open source AI/ML built on otherwise-copyrighted datasets are unprecedented. Just in the last 12 months, the advances in AI/ML have been extraordinary. Models and advances in model development that used to take years now take weeks, and sometimes just days.

If training ML models on copyrighted datasets becomes more restrictive, AI/ML development will slow.

For example, I know of one AI/ML project (PDF) that scraped publicly accessible webcams during COVID lockdowns to measure social distancing. Those webcam images were copyrighted and, if fair use did not apply, could not be used without obtaining written permission from thousands of webcam owners.

This will have profound impacts on medical research, science, models that improve accessibility for users, and other practical applications of AI/ML that improve the lives of humans and benefit our planet.

This means more lawyers involved in model training, which will then become more expensive, and slower.

It will also likely take ML model training out of the hands of hobbyists, open source developers, and individual researchers and limit it to big corporations who can afford those compliance costs.

Individual ML developers, like individual open source developers, will suddenly face much more legal ambiguity and exposure, if we do not defend fair use.

Conclusion

tl;dr Based on squeamish feelings that GitHub Copilot is something new and different, and gripes about Microsoft from 20 years ago, a tech lawyer has teamed up with a class action plaintiff's law firm to sue GitHub over an incredibly helpful tool that improves open source quality and output, the potential outcomes of which could include:

  • Making free and open source software harder to share
  • Making it harder to re-implement proprietary applications, hardware, and protocols as free and open source software
  • Making training AI/ML models more expensive, taking it out of the hands of hobbyists and researchers, limiting it only to big corporations with huge legal departments
  • Slowing development of real-world application of AI/ML models that will improve human life and longevity
  • Upending the current détente in the free software and open source communities over software patents

You do not have to love Microsoft or GitHub, or 'back them' in this case. But free and open source advocates who have concerns about GitHub Copilot should be just as skeptical of the GitHub Copilot plaintiffs, given what is at risk here.

]]>
<![CDATA[Microsoft's CBL-Delridge is 404, long live CBL-Mariner]]>https://boxofcables.dev/cbl-delridge-is-dead-long-live-cbl-mariner/66ac089493c6612a36399ac1Thu, 03 Nov 2022 21:46:53 GMT

CBL-Delridge, Microsoft's Debian-based Linux distribution, is no more. As pointed out to me by Mary Jo Foley, the CBL-Delridge apt package repository is now 404:


I had previously written a guide to building your own image of CBL-Delridge from that repository:

Building CBL-Delridge, Microsoft’s other Linux distro
Microsoft has another Linux distro you probably haven’t heard of. You can easily build it and even import it into WSL.

I am afraid that method will no longer work.


Mary Jo Foley also wrote about CBL-Delridge:

Surprise: There’s yet another Microsoft Linux distro, CBL-Delridge
Microsoft has been public about its CBL-Mariner Linux release, which just hit the 2.0 milestone. But did you know there’s also a Microsoft CBL-Delridge?

The only external use of CBL-Delridge by Microsoft, to my knowledge, was in Azure Cloud Shell, the shell built into Azure's web portal interface and Windows Terminal.

But you will notice if you log in to Azure Cloud Shell now, another Linux distro is powering ACS, Microsoft's CBL-Mariner.

CBL is short for "Common Base Linux", which originally appeared to apply to a whole family of Linux distros within Microsoft, but Microsoft appears to be increasingly consolidating efforts around CBL-Mariner.

Microsoft describes CBL-Mariner as "an internal Linux distribution for Microsoft’s cloud infrastructure and edge products and services" but it now powers such diverse offerings as:

  • Azure Cloud Shell
  • Azure Kubernetes Service (AKS)
  • The lightweight layer that runs above your distro of choice on Windows Subsystem for Linux (WSL)

Unlike Debian and thus .deb-based CBL-Delridge, CBL-Mariner is .rpm-based, with .spec files borrowed from VMware's Photon OS, openmamba, and the Fedora Project. The project also acknowledges Linux from Scratch, so despite being .rpm-based, CBL-Mariner is not a derivative of Red Hat Enterprise Linux, Fedora, or SUSE Linux. It is something new.

Also unlike CBL-Delridge, Microsoft makes installable .ISO images available for CBL-Mariner.

The GUI installer on the ISO is quite fast. A full installation on a VM with just 4GB of RAM took 64 seconds.


And CBL-Mariner is proud of its install speed too.

You can browse the packages available in CBL-Mariner's default repository. While there is no official GUI desktop, there are some interesting GNOME packages landing in the repository.

Development on CBL-Mariner is quite active, with 223 releases to date on both the 1.0 and 2.0 release branches.

CBL-Mariner even provides detailed instructions on how it is built, and how to build it yourself.

You can learn more from CBL-Mariner's GitHub page:

GitHub - microsoft/CBL-Mariner: Linux OS for Azure 1P services and edge appliances
Linux OS for Azure 1P services and edge appliances - GitHub - microsoft/CBL-Mariner: Linux OS for Azure 1P services and edge appliances

Or Microsoft's official documentation for CBL-Mariner:

CBL-Mariner Documentation
CBL-Mariner is an internal Linux distribution built by Microsoft for use in Azure

So, the question is, is CBL-Mariner going to be Microsoft Linux?

]]>
<![CDATA[Building custom WSL distro images with Podman]]>https://boxofcables.dev/building-wsl-distro-images-with-podman/66ac089493c6612a36399ac0Sat, 30 Jul 2022 02:50:20 GMT

Podman is a daemonless container engine for developing, managing, and running OCI Containers on Linux, but it now has support on Windows and macOS as well.

Podman is a drop-in replacement for the Docker client. If you know the Docker CLI syntax and Dockerfile format, you know Podman.

Like Docker, it is possible to generate WSL distro images with Podman. Podman is also available for your favorite Linux distribution on WSL and for Windows, using a custom WSL backend.

This guide will walk you through the process.

In PowerShell:

Enable WSL

wsl.exe --install

Install Scoop

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
irm get.scoop.sh | iex

Install Podman

scoop install podman

Initiate Podman

podman machine init

Start Podman

podman machine start

Create a base Containerfile

Containerfile is the successor to the Dockerfile. The syntax is the same, and the image format it builds has been standardized as the OCI image format.

We are going to create a very simple container image from an upstream container image of Ubuntu 22.04, ubuntu:jammy, add some useful packages, like Podman, and save it as Containerfile:

FROM ubuntu:jammy
RUN apt update && apt upgrade -y && apt install ca-certificates podman -y

Build the Container

Back in Terminal, let's build our Containerfile with Podman:

podman build --file .\Containerfile . --tag customjammy:latest

Initiate the Container

We need to run our new container, customjammy, one time to initiate it:

podman run -t customjammy:latest echo "Hello World!"

Export the Container as a .Tar File

Now we can export the container as a .tar file to import into WSL. Note the podman ps substitution assumes this is the only container; if you have others, pass the container name explicitly:

podman export --output customjammy.tar $(podman ps -a --format "{{.Names}}")

Import .Tar file into WSL

First we create a directory to store our imported distro:

mkdir C:\WSL\customjammy

Next we import our distro, now bundled into a .tar file, into WSL:

wsl --import "My-Ubuntu-Jammy" C:\WSL\customjammy .\customjammy.tar

Run Imported WSL Distro

Finally, we can run our imported distro with the wsl.exe -d command:

wsl.exe -d My-Ubuntu-Jammy

We can cat the /etc/os-release file to verify we are running Ubuntu 22.04 Jammy Jellyfish:

cat /etc/os-release
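
If you want to check the release programmatically, /etc/os-release is a simple KEY=value file the shell can source directly. A quick sketch, using a sample copy so the values here are illustrative rather than read from a real system:

```shell
# /etc/os-release is KEY=value pairs, so the shell can source it directly.
# We write a sample copy here; on the imported distro you would source /etc/os-release itself.
cat > /tmp/os-release <<'EOF'
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION_CODENAME=jammy
EOF
. /tmp/os-release
echo "$NAME $VERSION_ID ($VERSION_CODENAME)"   # prints: Ubuntu 22.04 (jammy)
```

This is handy in scripts that need to branch on which distro image they are running inside.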

When you restart Windows Terminal you should be able to see My-Ubuntu-Jammy in the drop-down menu.

You can also set your new distro as the default profile in Windows Terminal settings.

This process can be done with any upstream image available on Docker Hub, Quay, or another container registry.

You can further customize your image using standard OCI commands, like pre-installing packages and setting custom settings.

]]>
<![CDATA[Running a massively scalable CUDA-accelerated AI/ML lab on WSL 2 with Determined]]>https://boxofcables.dev/running-determined-ai-on-wsl-2/66ac089493c6612a36399abdTue, 21 Jun 2022 05:42:37 GMT

Determined is an open source platform for AI/ML model development and training at scale.

Determined handles the provisioning of machines, networking, data loading, and provides fault tolerance.


It allows AI/ML engineers to pool and share computing resources, track experiments, and supports deep learning frameworks like PyTorch, TensorFlow, and Keras.

I am still learning about AI/ML. My interest was piqued after GPU compute arrived on Windows Subsystem for Linux (WSL), starting with CUDA.

Determined seems like a very cool and easy-to-use platform to learn on: it offers a web-based dashboard and includes a built-in AI/ML IDE.

There are several ways to deploy Determined, including via pip, and several ways to use it, including the det command-line tool.

My preference is a more cloud-native approach: deploying with containers and interacting through the web-based dashboard.

This guide will cover setting up a local Determined deployment on WSL 2 with Docker Desktop.

We will:

  • Verify a working GPU setup on WSL
  • Deploy a database backend container
  • Deploy a Determined master node container connected to the database
  • Deploy and connect a Determined agent node container to the Determined master node
  • Launch the JupyterLab IDE in the Determined web interface

Requirements for this tutorial:

  • Windows 11 (recommended) or Windows 10 21H2
  • Windows Subsystem for Linux Preview from the Microsoft Store (recommended) or the standard Windows Subsystem for Linux feature but run wsl.exe --update to make sure you have the latest WSL kernel
  • The latest NVIDIA GPU drivers directly from NVIDIA, not just Windows Update drivers
  • Any WSL distro
  • Docker Desktop 4.9+ installed with WSL integration enabled for the WSL distro you are going to be working in
  • A CUDA-enabled NVIDIA GPU, e.g. GeForce GTX 1080 or higher*

*This workflow does work without a CUDA-enabled NVIDIA GPU but will default to CPU-only if no GPU is available.

Basics

Verify that Docker Desktop is accessible from WSL 2:

docker --version

This should not be docker-ce or an equivalent installed in WSL, but the aliases Docker Desktop places using WSL integration.

Verify that GPU support is working in Docker and WSL 2:

docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

Note my NVIDIA GeForce RTX 2070 Super is visible in nvidia-smi output.

Set up PostgreSQL

Start an instance of PostgreSQL:

docker run -d --name determined-db -p 5432:5432 -v determined_db:/var/lib/postgresql/data -e POSTGRES_DB=determined -e POSTGRES_PASSWORD=password postgres:10

I recommend changing your password to anything besides password.


Get your WSL IP address

Grab your WSL instance's eth0 IP address from ip, parse it using sed, and stash it as an environment variable $WSLIP:

WSLIP=$(ip -f inet addr show eth0 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')
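
To see what that sed expression is doing, here is a sketch run against a single sample line of ip output (the address is made up for illustration):

```shell
# A sample line of `ip -f inet addr show eth0` output (address is illustrative)
sample='    inet 172.29.112.5/20 brd 172.29.127.255 scope global eth0'

# Same sed expression as above: -E enables extended regex, -n suppresses
# default output, and the capture group keeps only the dotted quad after "inet"
WSLIP=$(printf '%s\n' "$sample" | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')
echo "$WSLIP"   # prints: 172.29.112.5
```

The trailing /p prints only lines where the substitution matched, so non-matching lines of ip output are silently dropped.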

Start the Determined Master Node

Start up an instance of the determined-master image, connected to the PostgreSQL determined database we spun up on port 5432:

docker run -d --name determined-master -p 8080:8080 -e DET_DB_HOST=$WSLIP -e DET_DB_NAME=determined -e DET_DB_PORT=5432 -e DET_DB_USER=postgres -e DET_DB_PASSWORD=password determinedai/determined-master:latest 

Launch the Determined Master Node web dashboard:

powershell.exe /c start http://$WSLIP:8080

Use the default admin account, no password, to log in.

Now you have access to the Determined dashboard.


But we do not have any agents connected to run experiments on.


Attach a Determined Agent Node

Start up an instance of the determined-agent image, pointed at our Determined Master host IP ($WSLIP) and port (8080):

docker run -d --gpus all -v /var/run/docker.sock:/var/run/docker.sock --name determined-agent -e DET_MASTER_HOST=$WSLIP -e DET_MASTER_PORT=8080 -e NVIDIA_DRIVER_CAPABILITIES=compute,utility determinedai/determined-agent:latest

Note:

  • --gpus all passes through our NVIDIA GPU to the determined-agent container.
  • Set NVIDIA_DRIVER_CAPABILITIES to also include compute, overriding the determined-agent default of just utility. This enables the agent to detect the pass-through CUDA GPU. This issue was documented and I submitted a PR.
  • If you do not have a CUDA-enabled GPU and wish to use CPU only, use:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --name determined-agent -e DET_MASTER_HOST=$WSLIP -e DET_MASTER_PORT=8080 determinedai/determined-agent:latest

Return to the Determined dashboard to see our clusters:

powershell.exe /c start http://$WSLIP:8080/det/clusters

We can now see 1 connected agent and 0/1 CUDA slots allocated, ready for training deep learning models.

Click Launch JupyterLab to spin up a web-based Python IDE for notebooks, code, and data.

And our available CUDA GPU will be automatically assigned. You can see how it is provisioned and visible in the Determined dashboard.

And now we have a CUDA-accelerated JupyterLab Python AI/ML IDE.

We can even start up additional CPU-only Determined worker agents:

docker run -d -v /var/run/docker.sock:/var/run/docker.sock --name determined-agent-2 -e DET_MASTER_HOST=$WSLIP -e DET_MASTER_PORT=8080 determinedai/determined-agent:latest

Note we tweaked the container name to determined-agent-2.

And see those resources available in the Determined web dashboard.

Notes

  • When stopping determined-agent, be sure to stop determined-fluent too.
]]>
<![CDATA[Creating A Lightweight Windows Container Dev Environment without Docker Desktop]]>https://boxofcables.dev/a-lightweight-windows-container-dev-environment/66ac089493c6612a36399abcMon, 28 Feb 2022 20:57:46 GMT

Introduction

If you are getting started with Windows Container development, one option is to install Docker Desktop.

Docker Desktop gives you access to both Windows Containers and Linux containers, by leveraging WSL 2.

Another option may eventually be Rancher Desktop if they add Windows support, but it is currently limited to Linux containers.

But if you prefer a lighter, command line approach to working with Windows Containers, it is possible to install and use Docker static binaries without Docker Desktop.

The Docker static binaries are distributed under the Apache 2 license and do not require a Docker Desktop subscription, even for commercial use.

The downside to this approach is that Docker static binaries on Windows do not support Linux containers, buildx, docker scan, or docker compose functionality.

The flip side is that if you prefer minimal command-line interfaces, you can also install 'native' Linux Docker on WSL 2 without Docker Desktop and switch back and forth as needed.

Requirements

Windows Containers requires Windows 10/11 Pro or Enterprise version 1607 or higher.

To get started, in Windows Features enable:

  • Containers
  • Hyper-V

and reboot if necessary.


Alternatively, you can open PowerShell as Administrator and run:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
Enable-WindowsOptionalFeature -Online -FeatureName Containers

Install Scoop

Open PowerShell as your normal user, ideally in the new Windows Terminal, and run:

Invoke-Expression (New-Object System.Net.WebClient).DownloadString('https://get.scoop.sh')

You may get an error about PowerShell script execution policy.

You need to change the execution policy with:

Set-ExecutionPolicy RemoteSigned -scope CurrentUser

Then re-run:

Invoke-Expression (New-Object System.Net.WebClient).DownloadString('https://get.scoop.sh')

Install Useful Scoop Tools

In PowerShell use Scoop to install tools that improve the use of Scoop, specifically git and aria2. git enables Scoop to update itself. aria2 speeds up downloads.

scoop install git aria2

Install Docker Binaries

In PowerShell use Scoop to install the Docker static binaries:

scoop install docker

Enable the Docker Service in Windows

We now need to enable and start the Docker Service in Windows. This requires a PowerShell instance with elevated privileges as Administrator. In PowerShell start an elevated shell with:

Start-Process PowerShell -verb RunAs

Approve the UAC prompt to allow the elevated PowerShell to make changes. Then in the elevated PowerShell run:

dockerd --register-service ; Start-Service docker ; exit

This will register the service, start it, and then exit the elevated Administrator shell. If you open Services, you should now see the Docker Engine listed.

It will start automatically on Windows boot. If desired, you can configure it using Services to only start it manually.


Enable User Access To Docker Service

By default, non-privileged Windows users cannot reach the Docker Service. At this point if you run docker run hello-world:nanoserver as a non-privileged user, you will encounter the following error:

docker: error during connect: This error may indicate that the docker daemon is not running.: Post "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/create": open //./pipe/docker_engine: Access is denied.
See 'docker run --help'.

There are two solutions.

One is to always use an elevated PowerShell to work with Docker. This is quick and easy but is not advised. One mistake and you can cause irreparable damage to your Windows installation. You can even configure this in Windows Terminal.

The second, my recommended method, is to use dockeraccesshelper to enable and configure access to the Docker Service for non-privileged users.

Install dockeraccesshelper

dockeraccesshelper is an open source PowerShell module to allow non-privileged users to connect to the Docker Service.

To configure the dockeraccesshelper module, open another elevated PowerShell:

Start-Process PowerShell -verb RunAs

Approve the UAC prompt to allow the elevated PowerShell to make changes. Then in the elevated PowerShell install dockeraccesshelper with:

Install-Module -Name dockeraccesshelper

Accept all the prompts.


Import the dockeraccesshelper module with:

Import-Module dockeraccesshelper

Note, you may encounter another PowerShell script execution policy error here.

Run the following to enable execution of remote signed PowerShell scripts for the current user:

Set-ExecutionPolicy RemoteSigned -scope CurrentUser

Then re-run:

Import-Module dockeraccesshelper

Finally, we need to configure dockeraccesshelper by running:

Add-AccountToDockerAccess DOMAIN\USERNAME

Substituting DOMAIN and USERNAME for the domain and username of your non-privileged user.


If you are not sure what your domain and username are, you can use the whoami command in the PowerShell shell of your non-privileged user, then copy and paste it into the elevated PowerShell.

Then exit your elevated PowerShell and return to your non-privileged PowerShell with exit.

If we return to the non-privileged PowerShell, we can re-run docker run hello-world:nanoserver.

And with everything right, the hello-world container runs and prints its greeting.

You now have a lightweight environment configured for working with Windows containers using Docker from PowerShell.

At rest, dockerd uses about 20MB of RAM.

Build Example Windows Container

I have a Dockerfile that builds a Windows container with a development environment for the Nim programming language.

You can clone the repository with:

git clone https://github.com/sirredbeard/nim-windows-container

Drop down into the directory:

cd .\nim-windows-container\

And build it locally with:

docker build . --file Dockerfile --tag nimstable-ltsc2022

Or, alternatively, pull it directly from the GitHub package repository with:

docker pull ghcr.io/sirredbeard/nim-windows-container/nimstable-ltsc2022:latest

To start playing with it and see how Windows Containers are built.

Maintenance

To update Scoop, run:

scoop update

To update Docker using Scoop, run:

scoop update docker

Additional Windows Container Documentation

]]>
<![CDATA[Building CBL-Delridge, Microsoft's other Linux distro]]>https://boxofcables.dev/building-cbl-d-microsofts-other-linux-distro/66ac089493c6612a36399abbWed, 02 Feb 2022 23:17:45 GMT

Microsoft maintains a handful of Linux distributions. You may have heard of CBL-Mariner:

GitHub - microsoft/CBL-Mariner: Linux OS for Azure 1P services and edge appliances
Linux OS for Azure 1P services and edge appliances - GitHub - microsoft/CBL-Mariner: Linux OS for Azure 1P services and edge appliances
CBL-Mariner: Microsoft’s internal Linux distribution for Azure first-party services and edge appliances | ZDNet
The Linux Systems Group at Microsoft has developed a Linux distribution for internal use and made it available on GitHub.

CBL-Mariner is used across Microsoft, including Azure and to power WSLg on WSL 2.

You may also have heard of Microsoft's earlier networking Linux distro, SONiC:

SONiC (operating system) - Wikipedia
Microsoft has crafted a switch OS on Debian Linux. Repeat, a switch OS on Debian Linux
Toolkit for wrangling networks released

Did you know Microsoft also maintains yet another Linux distribution, CBL-Delridge?

Searches for it on Google bring up...CBL-Mariner and a mention of CBL-Delridge on my Microsoft Open Source timeline.

CBL-Delridge is used to power Azure's Cloud Shell.

Unlike CBL-Mariner, which is built from scratch, CBL-Delridge is based on Debian, specifically version 10, codenamed Buster.

The current version of CBL-Delridge, or CBL-D for short, is coincidentally also version 10, codenamed Quinault.

The apt package repositories for CBL-D are public and therefore it is possible to build our own image of CBL-D for our own purposes and even import it into WSL.

First, we need debootstrap. debootstrap allows you to bootstrap Debian-family distributions from apt repositories. debootstrap can be found in most other distro package repositories, including openSUSE.

On openSUSE install debootstrap with:

sudo zypper in -y debootstrap

Next we need to create a chroot, a new root directory folder to bootstrap the CBL-D image into:

mkdir /tmp/chroot

Then we need to give debootstrap some very basic information about CBL-D. debootstrap includes this info for common Debian family distributions, including Debian, Kali, Ubuntu, and Devuan.

No Quinault here though. But Quinault is based on Debian Buster, so we can just copy the debootstrap script for Buster and debootstrap will find it and use it:

sudo cp /usr/share/debootstrap/scripts/buster /usr/share/debootstrap/scripts/quinault

Next, we fire up debootstrap with:

sudo debootstrap --arch "amd64" --include=gnupg,sudo,nano quinault /tmp/chroot https://packages.microsoft.com/repos/cbl-d/

Our command specifies:

--arch "amd64" because we are building for amd64. It is possible to cross-compile with debootstrap but it can be messy.

--include=gnupg,sudo,nano to include these additional packages on top of the base Quinault image.

quinault is the version name of our distro and the reason we copied the debootstrap script before.

/tmp/chroot is where we want to bootstrap the image, into the new root filesystem.

https://packages.microsoft.com/repos/cbl-d/ is the location of the CBL-D apt repository.


If successful, debootstrap completes and we land back at our shell prompt.

The contents of /tmp/chroot look like the root folder of any Linux distro.

We have a few more steps to complete though to make this a functional distro image we can use.

Our base image includes https://packages.microsoft.com/repos/cbl-d in its apt sources.list file, but there is an additional repository for CBL-D published by Microsoft that contains Go, Python, and a handful of utilities.

Add the additional repository to sources.list with:

echo 'deb https://packages.microsoft.com/repos/cbl-d quinault-universe main' | sudo tee -a /tmp/chroot/etc/apt/sources.list > /dev/null

The contents of /tmp/chroot/etc/apt/sources.list should now include:

deb https://packages.microsoft.com/repos/cbl-d quinault main
deb https://packages.microsoft.com/repos/cbl-d quinault-universe main
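
The | sudo tee -a … > /dev/null pattern is worth a note: it appends with root privileges where a plain >> redirection would fail, because the redirection would be performed by your unprivileged shell. A small sketch on a throwaway file (sudo omitted, since the temp file is our own):

```shell
# Throwaway stand-in for the chroot's sources.list
f=$(mktemp)
echo 'deb https://packages.microsoft.com/repos/cbl-d quinault main' > "$f"

# tee -a appends to the file; > /dev/null suppresses tee echoing the line back
echo 'deb https://packages.microsoft.com/repos/cbl-d quinault-universe main' | tee -a "$f" > /dev/null
cat "$f"
```

With a root-owned file, only the tee process needs sudo; the echo on the left of the pipe still runs as your normal user.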

Finally, we need to add the security keys for the Microsoft CBL-D apt repository, otherwise apt cannot verify packages or perform upgrades. We do this by using the chroot command to run apt-key from inside our new chroot filesystem:

sudo chroot /tmp/chroot/ /usr/bin/apt-key adv --keyserver hkps://keyserver.ubuntu.com:443 --recv-keys EB3E94ADBE1229CF

This runs /usr/bin/apt-key inside our chroot at /tmp/chroot as if /tmp/chroot was the root folder. It essentially runs the command on CBL-D inside CBL-D, even though it is just a folder on openSUSE.


Once we have imported the keys to apt to authenticate with the Microsoft repositories, we have a working image.

Now, we can play with CBL-D.

The easiest way to get started is to launch bash in the CBL-D chroot folder with:

sudo chroot /tmp/chroot/ /bin/bash

This is similar to how we used chroot to run the apt-key command, except this time spawning a shell we can explore with.


Now you are running as if /tmp/chroot was the root directory, so running apt update is going to check the Microsoft repository for upgrades for CBL-D, even here in a folder on openSUSE. You can exit the chroot with exit to go back to your host distro.

If you want to bundle up your chroot environment to import into WSL to play with, then we need to make a .tar.gz archive and then import with wsl.exe --import. Make sure if you used the command above you exit the chroot and go back to your host distro.

We create a tar of the chroot by:

sudo tar -cpzf cbld-quinault.tar.gz -C /tmp/chroot/ .

sudo is necessary here because of the permissions on some of the files in the bootstrapped file system.
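
The -C /tmp/chroot/ . part matters too: it makes the archive paths relative to the chroot root, which is the layout wsl.exe --import expects. A sketch with a stand-in directory:

```shell
# Stand-in for /tmp/chroot with a single file in it
demo=$(mktemp -d)
mkdir -p "$demo/etc"
echo 'demo' > "$demo/etc/os-release"

# -C changes into the directory first; "." then archives its contents
# relative to that root, so members are stored as ./etc/... not /tmp/...
tar -cpzf /tmp/demo-rootfs.tar.gz -C "$demo" .

# Listing the archive shows paths relative to the chroot root
tar -tzf /tmp/demo-rootfs.tar.gz
```

Without -C you would capture the absolute /tmp/chroot prefix inside the archive, and the imported distro's filesystem would be nested one level too deep.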


Then we can import it as a WSL distro, from inside WSL or separate PowerShell, with:

wsl.exe --import CBL-D "C:\WSL\CBL-D" cbld-quinault.tar.gz --version 2

This command tells WSL to import a distro with:

CBL-D as the name, which you can call with wsl.exe -d CBL-D or visible in Windows Terminal.

With the WSL files stored at C:\WSL\CBL-D.

From our cbld-quinault.tar.gz root filesystem archive.

As WSL 2 with --version 2.


Then you can launch CBL-D on WSL with:

wsl.exe -d CBL-D

Or re-open Windows Terminal and it will be visible in the drop-down menu.

CBL-D is an abbreviated version of Debian, so all of your favorite packages might not be present.

There are about 1,554 packages as of writing, compared to 28,392 in Debian 10. I have posted a list here; almost all are console tools and the most basic X utilities.

]]>
<![CDATA[February 1 Live Stream: Trying (and failing) to install openSUSE and SmartOS on Vultr]]>]]>https://boxofcables.dev/february-1-live-stream-trying-and-failing-to-install-opensuse-and-smartos-on-vultr/66ac089493c6612a36399abaWed, 02 Feb 2022 01:43:16 GMT]]><![CDATA[How Oracle saved rpm on WSL 1]]>https://boxofcables.dev/how-oracle-saved-rpm-on-wsl-1/66ac089493c6612a36399ab9Tue, 01 Feb 2022 20:57:06 GMT

Oracle Linux has been published on the Microsoft Store for WSL. Oracle Linux is compiled from the sources of Red Hat Enterprise Linux (RHEL). On WSL 2 you are going to get the Microsoft WSL 2 kernel by default. But notably Oracle Linux for bare metal and VMs is available with a choice of two Linux kernels, one that is guaranteed to be compatible with RHEL and one that is Oracle's custom kernel, called the Unbreakable Enterprise Kernel (UEK). Oracle Linux competes in the enterprise Linux space with RHEL and SUSE Enterprise Linux (SLES).

Unlike RHEL and SLES, Oracle Linux and its package repositories are completely free; no subscription is required. Oracle even publishes a handy script to switch from CentOS directly to Oracle Linux. This makes it an option for administrators running CentOS who want to remain downstream of RHEL and not go to CentOS Stream.

Oracle Linux can then be managed alongside RHEL, Ubuntu, and SLES with SUSE Manager.

Oracle Linux officially landing on WSL reminds me of a story about Oracle and WSL from 2019 I wanted to share.

rpm, the Red Hat package manager, was very flaky on WSL 1. It saw frequent rpmdb corruption and would segfault regularly, with issues reported as far back as early 2018.


This issue affected Red Hat Enterprise Linux and all related distributions, including CentOS, Oracle Linux, Scientific Linux, and, to a lesser extent, Fedora. It occurred on Windows builds 17134 (1803) through 18363 (1909).

Ironically, it did not affect openSUSE or SUSE Linux Enterprise: even though zypper is based on rpm, SUSE ships a patched rpm, and those patches happened to cover the code path that triggered this bug on WSL 1.

At Whitewater Foundry, we wanted to ship Fedora Remix for WSL and Pengwin Enterprise, but this rpm bug held back our progress. Pengwin Enterprise shipped with Scientific Linux in the Microsoft Store but could be custom built with RHEL for enterprise customers with valid RHEL subscriptions.

I reached out to Red Hat, with whom we were partners, for assistance in isolating the rpm bug on WSL 1. We had narrowed the bug down to rpm's database code, which appeared to be tripping over broken mmap behavior.

Unfortunately our relationship with Red Hat at the time was...complicated.

While the Red Hat partner and sales teams were excited about working with us, prominent figures in engineering were not happy at all. This meant Red Hat engineering would not help us with the rpm bug, even as mutual customers asked for help.

While Red Hat still has not shipped Red Hat Enterprise Linux for WSL, they have since indicated they plan to ship a Podman app for Windows based on WSL. That is progress.

Eventually I reached out to Oracle, as a long shot, for assistance. Much to my surprise, Oracle was incredibly responsive to my inquiry. I detailed the issue and our progress on the bug.

Oracle assigned an experienced engineer to the project who replied with sample C code that reproduced the WSL 1 bug, provided below, confirming a bug in mmap handling on WSL 1.

Working with the WSL team at Microsoft, a fix for the WSL 1 bug was then tested in Windows Insider build 18890 and shipped in Windows 10 19041 (20H1).

Oracle provided their assistance no questions asked, even though we were officially partnered with Red Hat, and improved the experience of running all RHEL-based distros on WSL 1.

That is how Oracle helped save rpm on WSL 1.

Additional Reading

Sample C code from Oracle

/*
 * Author: Lauren Foutz
 * This program demonstrates a bug in extending an mmapped file in WSL, 
 * where the mappings to the extended part end up affecting mappings to
 * the beginning of the file.
 *
 * Output Should Be:
 *   Fill the first 64K bytes with 'A'.
 *   Extend the file to 1MB
 *   Now fill the extended part of the file with 'B'.
 *   First byte of the extended part of the file at address 0x7f03a50d6000, should be B: B
 *   First byte of the file at mapped address 0x7f03a50c6000, should be A: A
 *   Address 0x7f03a50d6000 is now: C, address 0x7f03a50c6000 should still be A, is now: A 
 *   Reopen the file and read the first 5 bytes.
 *   First five bytes of the file should be 'AAAAA': AAAAA
 *
 * On WSL the output is:
 *   Fill the first 64K bytes with 'A'.
 *   Extend the file to 1MB
 *   Now fill the extended part of the file with 'B'.
 *   First byte of the extended part of the file at address 0x7fe1db720000, should be B: B
 *   First byte of the file at mapped address 0x7fe1db710000, should be A: B
 *   Address 0x7fe1db720000 is now: C, address 0x7fe1db710000 should still be A, is now: C
 *   Reopen the file and read the first 5 bytes.
 *   First five bytes of the file should be 'AAAAA': CBBBB
 */

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mman.h>

#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	    const char *io_file = "tmp_mmap4_f";
	    int fid = 0, i = 0;
	    void *addr1 = NULL;
	    int open_flags = 0, mode = 0, total_size = 0;
	    char buf[256], *caddr;

	    unlink(io_file);

	    open_flags = O_CREAT | O_TRUNC | O_RDWR;
	    mode = S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH;

	    /* Open a file. */
	    fid = open(io_file, open_flags, mode);

	    total_size = 1024 * 1024;

	    /* Fill the first 64K bytes with 'A'*/
	    printf("Fill the first 64K bytes with 'A'.\n");
	    memset(buf, (int)'A', sizeof(buf));
	    for (i = 0; i < 256; i++) {
		lseek(fid, i * 256, SEEK_SET);
		write(fid, buf, sizeof(buf));
	    }

	    /* mmap the file */
	    addr1 = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fid, 0);

	    /* Extend the file, write the last byte */
	    printf("Extend the file to 1MB\n");
	    lseek(fid, total_size - 1, SEEK_SET);
	    write(fid, buf, sizeof(buf[0]));

	    printf("Now fill the extended part of the file with 'B'.\n");
	    caddr = (char *)(addr1);
	    for (i = 256 * 256; i < total_size; i++) {
		caddr[i] = 'B';
	    }   
	    caddr = caddr + (256 * 256);
	    printf("First byte of the extended part of the file at address %p, should be B: %c\n", caddr,  caddr[0]);
	    printf("First byte of the file at mapped address %p, should be A: %c\n", addr1, *(char *)addr1);
	    caddr[0] = 'C';
	    printf("Address %p is now: %c, address %p should still be A, is now: %c \n", caddr, caddr[0], addr1, *(char *)addr1);

	    munmap(addr1, total_size);
	    fsync(fid);
	    close(fid);

	    /* Reopen the file and read the first 5 bytes. */
	    printf("Reopen the file and read the first 5 bytes.\n");
	    fid = open(io_file, O_RDWR, mode);
	    read(fid, buf, sizeof(buf[0]) * 5);
	    close(fid);
	    buf[5] = '\0';
	    printf("First five bytes of the file should be 'AAAAA': %s\n", buf);

	    return 0;
}
]]>
<![CDATA[Installing Retro Home on the Raspberry Pi 400]]>https://boxofcables.dev/installing-retro-home-on-the-raspberry-pi-400/66ac089493c6612a36399ab8Thu, 13 Jan 2022 20:35:36 GMT]]><![CDATA[Installing RISC OS on the Raspberry Pi 400]]>Let's learn about the origins of ARM and RISC OS at Acorn, download and flash a RISC OS image for the Pi 400 using another ARM device ironically, tangent into BeOS and Haiku, and then explore the unique UI of RISC OS.

]]>
https://boxofcables.dev/installing-risc-os-on-the-raspberry-pi-400/66ac089493c6612a36399ab7Fri, 31 Dec 2021 00:53:05 GMT

Let's learn about the origins of ARM and RISC OS at Acorn, download and flash a RISC OS image for the Pi 400 using another ARM device ironically, tangent into BeOS and Haiku, and then explore the unique UI of RISC OS.

]]>
<![CDATA[Build An Accelerated KVM Guest Custom Kernel for WSL 2 - 2022 Edition]]>https://boxofcables.dev/kvm-optimized-custom-kernel-wsl2-2022/66ac089493c6612a36399ab6Wed, 15 Dec 2021 23:04:03 GMT

This guide walks you through the process of building a basic accelerated KVM custom kernel for WSL 2 that can be used with any WSL 2 distro you have installed.

This guide has been updated and simplified.

Requirements

Windows 11, build 22000+
WSL 2
openSUSE Tumbleweed
This guide works with Intel and AMD processors

Update openSUSE and install build dependencies

Run:

sudo zypper -n up

sudo bash -c "zypper in -y -t pattern devel_basis && zypper in -y bc openssl openssl-devel dwarves rpm-build libelf-devel aria2 jq"

Get the Microsoft WSL 2 kernel sources

Run:

curl -s https://api.github.com/repos/microsoft/WSL2-Linux-Kernel/releases/latest | jq -r '.name' | sed 's/$/.tar.gz/' | sed 's#^#https://github.com/microsoft/WSL2-Linux-Kernel/archive/refs/tags/#' | aria2c -i -

tar -xf *.tar.gz
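
If the download pipeline above looks opaque: it asks the GitHub API for the latest release name, then builds the tarball URL with two sed substitutions before handing it to aria2c. A sketch of just the URL construction, using a hypothetical release name:

```shell
# Hypothetical release name, as returned by `jq -r '.name'`.
name="linux-msft-wsl-5.15.90.1"

# Append the tarball suffix, then prepend the download prefix.
url="$(printf '%s\n' "$name" \
  | sed 's/$/.tar.gz/' \
  | sed 's#^#https://github.com/microsoft/WSL2-Linux-Kernel/archive/refs/tags/#')"

echo "$url"
# → https://github.com/microsoft/WSL2-Linux-Kernel/archive/refs/tags/linux-msft-wsl-5.15.90.1.tar.gz
```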

Change to the kernel directory

Run:

cd "$(find -type d -name "WSL2-Linux-Kernel-linux-msft-wsl-*")"

Copy the default Microsoft kernel configuration

Run:

cp Microsoft/config-wsl .config

Tweak the default Microsoft kernel configuration for KVM guests

The following tweaks are based on kernel version 5.14.

Run:

sed -i 's/# CONFIG_KVM_GUEST is not set/CONFIG_KVM_GUEST=y/g' .config

sed -i 's/# CONFIG_ARCH_CPUIDLE_HALTPOLL is not set/CONFIG_ARCH_CPUIDLE_HALTPOLL=y/g' .config

sed -i 's/# CONFIG_HYPERV_IOMMU is not set/CONFIG_HYPERV_IOMMU=y/g' .config

sed -i '/^# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set/a CONFIG_PARAVIRT_CLOCK=y' .config

sed -i '/^# CONFIG_CPU_IDLE_GOV_TEO is not set/a CONFIG_CPU_IDLE_GOV_HALTPOLL=y' .config

sed -i '/^CONFIG_CPU_IDLE_GOV_HALTPOLL=y/a CONFIG_HALTPOLL_CPUIDLE=y' .config

sed -i 's/CONFIG_HAVE_ARCH_KCSAN=y/CONFIG_HAVE_ARCH_KCSAN=n/g' .config

sed -i '/^CONFIG_HAVE_ARCH_KCSAN=n/a CONFIG_KCSAN=n' .config
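
Each of these substitutions simply rewrites one line of the kernel config in place. A minimal sketch of the mechanism on a throwaway file (the /tmp path is arbitrary; the real commands operate on .config):

```shell
# A disabled option appears in the config as a comment line.
printf '# CONFIG_KVM_GUEST is not set\n' > /tmp/demo.config

# The same substitution used above rewrites it to enabled.
sed -i 's/# CONFIG_KVM_GUEST is not set/CONFIG_KVM_GUEST=y/g' /tmp/demo.config

cat /tmp/demo.config   # prints: CONFIG_KVM_GUEST=y
```

After running the real commands, you can spot-check the result with `grep CONFIG_KVM_GUEST .config`.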

Build the kernel

Run:

make -j 8

Copy the built kernel to your Windows user's home folder

From WSL:

powershell.exe /C 'Copy-Item .\arch\x86\boot\bzImage $env:USERPROFILE'

Point to your custom kernel in .wslconfig

From WSL:

powershell.exe /C 'Write-Output [wsl2]`nkernel=$env:USERPROFILE\bzImage | % {$_.replace("\","\\")} | Out-File $env:USERPROFILE\.wslconfig -encoding ASCII'
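
The one-liner above doubles the backslashes (.wslconfig requires escaped path separators) and writes a file that should look roughly like this, with your-user standing in for your actual Windows username:

```ini
[wsl2]
kernel=C:\\Users\\your-user\\bzImage
```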

Restart WSL

Open a PowerShell terminal and run:

wsl.exe --shutdown

Confirm you are booting your custom kernel

Confirm you are running your new kernel by checking the build timestamp in the version string, which should be very recent. Run:

wsl.exe uname -a

Congrats, your WSL 2 kernel is now optimized for KVM guests.

Now...what are you going to run with that?

]]>
<![CDATA[Bill Gates was right]]>https://boxofcables.dev/bill-gates-was-right/66ac089493c6612a36399ab5Wed, 08 Dec 2021 21:11:37 GMT

The Homebrew Computer Club was a personal computing hobbyist group in Menlo Park, California, which began in 1975 and met until 1986.

It was an informal gathering of computing enthusiasts to swap parts, circuits, software, and served as a forum to share information about DIY projects, when most personal computing was, in fact, DIY.

Its members included Steve Jobs, Steve Wozniak, Jerry Lawson, Adam Osborne, and other early personal computing pioneers, hackers, and entrepreneurs, many of whom would go on to start companies, launch revolutionary products, and become tech legends.

There is no denying that the Homebrew Computer Club and the exchange of ideas there was a critical incubator in the growth of personal computing.

But in open source lore, the swapping of software at the Homebrew Computer Club, on paper tape at the time, is often cited as an antecedent to open source. I take issue with that notion.

In 1976, Bill Gates, co-founder of what was then 'Micro-Soft', published an open letter to hobbyists in the Homebrew Computer Club newsletter, decrying rampant piracy of Altair BASIC among the hobbyist community, including at meetings of the Homebrew Computer Club.

In his letter, Gates argued that widespread piracy of Altair BASIC made development of software for the burgeoning hobbyist market unsustainable:

The feedback we have gotten from the hundreds of people who say they are using BASIC has all been positive. Two surprising things are apparent, however, 1) Most of these "users" never bought BASIC (less than 10% of all Altair owners have bought BASIC), and 2) The amount of royalties we have received from sales to hobbyists makes the time spent on Altair BASIC worth less than $2 an hour.

For many, this letter was the beginning of an ideological war between what would become open source and proprietary software, of free software vs. Microsoft.

For example, in the otherwise excellent documentary "Revolution OS" at 7:15, Gates' letter was cast as a declaration of war on computing freedom:

Or in this excerpt from "Rebel Code":

In the light of his Open letter to Hobbyists, the open source movement emerges as Bill Gates' worst nightmare magnified a thousand times. Not just a few hobbyists who "steal", but a thriving community that writes its own - excellent - code, and then gives it away. Because their actions patently do not "prevent good software from being written," they implicitly call into question the very basis of the Microsoft Empire: If good software can be written and given away like this, who needs Microsoft or companies like it?

Excerpt from Rebel Code by Glyn Moody. Published by Allen Lane. Copyright © 2001 Glyn Moody.

This all makes for a compelling apocryphal story. But it gets it wrong.

Piracy is not the foundation of open source. The exchange of pirated copies of Altair BASIC among hobbyists is, at best, an antecedent to warez BBS and boards in the 80s and 90s. That is a vastly different story than the story of open source.

Open source is a social contract between the developer and users.

That social contract relies on developers having the right to dictate the terms of distribution of their software and users adhering, in most cases voluntarily, to those terms.

At a minimum that social contract includes respecting the applied license. Under licenses such as the GPL, that contract also includes sharing derivative code.

The social contract between developers of open source and users can go further than just the license, of course. It can include contributing upstream, fostering community, and financial support, even if not explicitly required by the terms of the license.

It is this social contract and adherence to the developer's license that allows open source to work, mostly because we just all agree this is how it should work.

The notion, put forward by Gates in his open letter, that developers have the right to license software as they see fit and have that license respected, is not the antithesis of open source, it is the foundation of it.

It was a revolutionary idea at the time, that developers set their terms and users should respect them, upon which the free software and later open source movements were entirely based.

Gates was right.

It would be seven years before courts would recognize that software code could be copyrighted.

The open letter was not a declaration of war against open source. Without respect for licensing, that unique social contract between developer and user, the GPL and open source would not function, nor would we have the benefit of all the quality open source software upon which modern computing is based.

The open source movement owes a nod to the idea Gates put forward in his open letter. The open letter deserves its proper context in the development of open source.

More Reading

]]>