Network Infrastructure and Demon-Slaying: Virtualization Expands What a Desktop Can Do

The original DOOM is famously portable — any computer made within the last two decades or so, including those in printers, heart monitors, passenger vehicles, and routers, is almost guaranteed to have a port of the iconic 1993 shooter. The more modern iterations in the series are a little trickier to port, though. Multi-core processors, discrete graphics cards, and gigabytes of memory are generally needed, and it’ll be a long time before something like an off-the-shelf router has all of those components.

But with Proxmox, a specialized distribution of Debian Linux, and a healthy amount of configuration, it’s possible to flip this idea on its head: getting a desktop computer capable of playing modern video games to also take over the network infrastructure for a LAN, all with minimal impact on the overall desktop experience. In effect, it’s possible to have a router that can not only play DOOM but play 2020’s DOOM Eternal, likely with hardware most of us already have on hand.

The key that makes a setup like this work is virtualization. Although modern software often makes it seem otherwise, not every piece of software needs an eight-core processor and 32 GB of memory. With that in mind, virtualization software splits a modern multi-core processor into groups of cores which can act as if they were independent computers. Each of these virtual computers, or virtual machines (VMs), can be given exclusive use of one or more processor cores, a reserved portion of memory, and even other hardware like peripherals and disk drives.

Proxmox itself is a version of Debian bundled with a number of tools that streamline this process, and it installs on a PC in essentially the same way as any other Linux distribution would. Once installed, tools like LXC for containerization, KVM for full-fledged virtual machines, and an intuitive web interface allow containers and VMs to be quickly set up, deployed, backed up, removed, and even sent to other Proxmox installations.
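
To give a sense of how little friction there is, here’s roughly what creating a guest looks like from the Proxmox shell; the IDs, names, and template filename below are hypothetical placeholders rather than anything from my actual setup:

# create a KVM virtual machine: ID 100, four cores, 8 GB of RAM, one virtual NIC on bridge vmbr0
qm create 100 --name router --cores 4 --memory 8192 --net0 virtio,bridge=vmbr0

# create an LXC container from a previously downloaded Debian template (filename is illustrative)
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst --hostname pihole --cores 2 --memory 1024

The same operations are a few clicks in the web interface, which is how most people will use them day to day.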

Desktop to Server

The hardware I’m using for Proxmox is one of two desktop computers that I put together shortly after writing this article. Originally this one was set up as a gaming rig and general-purpose desktop computer running Debian. With its hardware slowly aging and my router not having received a software update in half a decade, my first thought was to relegate the over-powered ninth-generation Intel Core i7 with 32 GB of RAM to running the OPNsense router operating system on bare metal while building a more modern desktop to replace it. But that approach would have been expensive, in computing resources as well as in actual cost, so I began investigating ways to use this aging desktop’s resources more efficiently. This is where Proxmox comes in.

By installing Proxmox and then allocating four of my eight cores to an OPNsense virtual machine, in theory the desktop could function as a router while having resources left over for other uses, like demon-slaying. Luckily my motherboard already has two network interfaces, so both the connection to the modem and the one out to the LAN could be accommodated without needing to purchase and install more hardware. And this is where Proxmox’s virtualization tools start to shine. Not only can processor cores and chunks of memory be passed through to VMs directly, but other hardware can be sectioned off and passed through as well.

So I assigned one network card to pass straight through to OPNsense, which connects to my modem and receives an IP address from my ISP like a normal router would. The other network interface stays with the Proxmox host, where it is attached to an internal network bridge that gives the other VMs their network access. With this setup, all VMs and containers I create on the Proxmox machine can access the LAN through the bridge, and since the second networking card is assigned to this bridge as well, any other physical machines (including my WiFi access point) can access this LAN too.
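
Under the hood, a Proxmox bridge is ordinary Debian networking. A minimal /etc/network/interfaces stanza for an internal bridge like this might look something like the sketch below; the interface name and addresses are assumptions, not my actual values:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0

Here enp4s0 would be the second physical NIC that stays with the host, and the gateway would be the LAN-side address of the OPNsense VM.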

Not All VMs are Equal

Another excellent virtualization feature that Proxmox makes easily accessible is the idea of “CPU units”. In my setup, having four cores available for a router might seem like overkill, and indeed it is until my network gets fully upgraded to 10 Gigabit Ethernet. Until then, it might seem like these cores are wasted.

However, using CPU units the Proxmox host can assign unused or underutilized cores to other machines on the fly. This also lets a user “over-assign” cores, with the CPU units value acting as a sort of priority list. My ninth-generation Intel Core i7 has eight cores, so in this simple setup I can assign four cores to OPNsense with a very high value for CPU units and then assign six cores to a Debian 12 VM with a lower CPU unit value. This scheduling trick makes it seem as though my eight-core machine is actually a ten-core machine, where the Debian 12 VM can use all six of its cores unless the OPNsense VM needs them. However, this won’t get around the physical eight-core reality: do something like play a resource-intensive video game while there’s a large network load, and the reassignment of cores back to the router’s VM could certainly impact in-game performance.
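
Expressed on the command line, that weighting is just a pair of flags per guest. A sketch of the arrangement above, assuming hypothetical guest IDs of 100 for OPNsense and 101 for the Debian VM:

# the router gets four cores at a very high scheduling weight
qm set 100 --cores 4 --cpuunits 10000
# the desktop gets six cores at a much lower weight, yielding when the router is busy
qm set 101 --cores 6 --cpuunits 100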

A list of VMs and containers running on Proxmox making up a large portion of my LAN, as well as storage options for my datacenter.

Of course, if I’m going to install DOOM Eternal on my Debian 12 VM, it’s going to need a graphics card and some peripherals as well. Passing through USB devices like a mouse and keyboard is straightforward. Passing through a graphics card isn’t much different, with some caveats.

The motherboard, chipset, and processor must support IOMMU to start. From there, hardware that’s passed through to a VM won’t be available to anything else, including the host, so with the graphics card assigned to a VM, the display for the host won’t be available anymore. This can be a problem if something goes wrong with the Proxmox machine and the network at the same time (not out of the question, since the router is running in Proxmox too), rendering both the display and the web UI unavailable simultaneously.

To mitigate this, I went into the UEFI settings for the motherboard and set the default display to the integrated Intel graphics card on the i7. When Proxmox boots it’ll grab the integrated graphics card, saving the powerful Radeon card for whichever VM needs it.
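
On the software side, enabling IOMMU is usually a kernel command-line tweak plus a reboot. On an Intel system booting with GRUB, the steps look roughly like the following (AMD systems use amd_iommu=on instead); treat this as a sketch, since boards and boot setups differ:

# in /etc/default/grub, add the IOMMU flags to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# regenerate the boot configuration and reboot
update-grub

# after rebooting, verify that the IOMMU is active
dmesg | grep -e DMAR -e IOMMU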

At this point I’ve solved my initial set of problems, and effectively have a router that can also play many modern PC games. Most importantly, I haven’t actually spent any money yet either. But with the ability to over-assign processor cores and to pass arbitrary bits of the computer through to various VMs, there’s plenty more I found for this machine to do besides these two tasks.

Containerized Applications

The ninth-gen Intel isn’t the only machine I have from this era. I also have an eighth-generation machine (with the IME disabled) that had been performing some server duties for me, including network-attached storage (NAS) and media streaming, as well as monitoring an IP security camera system. With my more powerful desktop ready for more VMs, I slowly started migrating these services over to Proxmox, freeing the eighth-gen machine for bare-metal tasks largely related to gaming and media.

The first thing to migrate was my NAS. Rather than have Debian manage a RAID array and share it over the network on its own, I used Proxmox to spin up a TrueNAS Scale VM. TrueNAS has the benefit of using ZFS as its filesystem, a much more robust setup than the standard ext4 filesystem I use in most of my other Linux installations. I installed two drives in the Proxmox machine, passed them through to this new VM, and then set up my new NAS with a mirrored configuration, making it even more robust than it previously was under Debian.
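
Whole disks can be handed to a VM by their stable device IDs so the assignment survives reboots. A sketch of the idea, with a hypothetical VM ID and made-up disk identifiers:

# attach two physical drives to the TrueNAS VM as additional SCSI disks
qm set 101 --scsi1 /dev/disk/by-id/ata-EXAMPLE_DRIVE_A
qm set 101 --scsi2 /dev/disk/by-id/ata-EXAMPLE_DRIVE_B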

The next thing to move over were some of my containerized applications. Proxmox doesn’t only support VMs; it can spin up LXC containers as well. Containers are similar to VMs in that the software they run is isolated from the rest of the machine, but instead of running their own operating system they share the host’s kernel, taking up far fewer system resources. Proxmox still allows containers to be assigned processor cores and uses the same CPU unit priority system, so for high-availability containers like Pi-hole I can assign the same number of CPU units as my OPNsense VM, but for my LXC container running Jelu (book tracking), Navidrome (streaming music), and Vikunja (task lists), I can assign a lower CPU unit value as well as only one or two cores.
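
Containers take the same scheduling knobs as VMs, just through the pct tool instead of qm. Assuming hypothetical container IDs, the priority split described above could be expressed like so:

# Pi-hole gets the same high weight as the router VM
pct set 200 --cpuunits 10000
# the low-priority apps container gets a single core at a minimal weight
pct set 201 --cores 1 --cpuunits 100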

The final containerized application I use is Zoneminder, which keeps an eye on a few security cameras at my house. It needs a bit more in the way of system resources than either of the other two, and it also gets its own hard drive for storing recordings. Unlike with TrueNAS, though, the hard drive isn’t passed through; rather, the container mounts a partition that the Proxmox host retains ultimate control over. This allows other containers to see and use it as well.
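
In Proxmox terms that’s a mount point on the container rather than device passthrough. A sketch of what that looks like, with a hypothetical container ID and host path:

# bind-mount the host directory /tank/recordings into container 202 at /var/cache/zoneminder
pct set 202 --mp0 /tank/recordings,mp=/var/cache/zoneminder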

A summary of my Proxmox installation’s resource utilization. Even with cores over-assigned, it rarely breaks a sweat unless gaming or transferring large files over the LAN.

At this point my Proxmox setup has gotten quite complex for a layperson such as myself. A hardware or system failure would mean losing not only my desktop computer but also essentially all of my home’s network infrastructure, and potentially all of my data as well. But Proxmox also makes keeping backups easy, a feature that has saved me many times.

For example, OPNsense once inexplicably failed to boot, and another time a kernel update in TrueNAS Scale caused it to kernel panic on boot. In both cases I was able to simply revert to a prior backup. I have backups scheduled for all of my VMs and containers once a week, and this has saved me many headaches. Of course, it’s handy to have a second computer or external drive for backups, as you wouldn’t want to store them on your virtualized NAS, which might end up being the very thing you need to restore.
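
The weekly schedule itself is set up through the web interface, but the underlying vzdump tool can also be run by hand. A rough example, assuming the guest IDs above and a storage target named backup-nas:

# snapshot-mode backup of guest 100, compressed with zstd, written to the backup storage
vzdump 100 --mode snapshot --compress zstd --storage backup-nas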

I do have one final VM to mention too, which is a Windows 10 installation. I essentially spun this up because I was having an impossibly difficult time getting my original version of StarCraft running in Debian and thought that it might be easier on a Windows machine. Proxmox makes it extremely easy to assign a few processor cores and some memory to test something like this out, and it turned out to work incredibly well.

So well, in fact, that I also installed BOINC in the Windows VM and now generally leave it running all the time to put any underutilized cores on this machine to work for the greater good. BOINC is also notoriously difficult to get running in Debian, especially for those using non-Nvidia GPUs, so at least while Windows 10 is still supported I’ll probably keep this setup going for the long term.

Room for Improvement

There are a few downsides to a Proxmox installation, though. As I mentioned previously, it’s probably not best practice to keep backups on the same hardware, so if this is your only physical computer, that will take some extra thought. I’ve also had considerable difficulty passing an optical drive through to VMs, which is not nearly as straightforward as passing through other hardware types, for reasons which escape me. Additionally, some software doesn’t take well to running on virtualized hardware at all. In the past I have experimented with XMR mining software as a way to benchmark hardware capabilities, and although I never let it run long enough to actually mine anything, it absolutely will not run in a virtualized environment. There are certainly other pieces of software with similar behavior.

I also had a problem that took a while to solve regarding memory use. Memory can be over-assigned like processor cores, but an important note is that if Proxmox is using ZFS for its storage, as mine does, the host OS’s ZFS cache (the ARC) will consume an enormous amount of memory by default. In my case, file transfers to or from my TrueNAS VM were causing out-of-memory conditions on some of my other VMs, leading to their abrupt termination. I still don’t fully understand this problem, and as such it took a bit of time to solve, but I eventually both limited the memory the host could use for ZFS and doubled the physical memory to 64 GB. This had the downstream effect of improving the performance of my other VMs and containers as well, so it was a win-win at a very minimal cost.
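
The ZFS cache limit comes down to a one-line module option. Something like the following caps the ARC at 8 GiB; the right number is a judgment call for your own workload:

# /etc/modprobe.d/zfs.conf -- value is in bytes (here, 8 GiB)
options zfs zfs_arc_max=8589934592

# refresh the initramfs so the option is applied at the next boot
update-initramfs -u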

The major downside for most, though, will be gaming. While it’s fully possible to run a respectable gaming rig with a setup similar to mine and play essentially any modern game available, this is only going to work out if none of those games use kernel-level anti-cheat. Valorant, Fortnite, and Call of Duty are all examples that are likely to either not run at all on a virtualized computer or to get an account flagged for cheating.

There are a number of problems with kernel-level anti-cheat, including arguments that it is a type of rootkit, that it is an attempt to stifle Linux gaming, and that it is a lazy solution to problems that could easily be solved in other ways, but the fact remains that these games will have to be played on bare metal. Personally, I’d just as soon not play them at all for any and all of these reasons, even on non-virtualized machines.

Beat On, Against the Current

The only other thing worth noting is that while Proxmox is free and open-source, there are paid enterprise subscription options available, and it can be a bit insistent about reminding the user of that fact. But that’s minor in the grand scheme of things. For me, the benefits far outweigh these downsides. In fact, I’ve found that using Proxmox has reinvigorated my PC hobby in a new way.

While restoring old Apple laptops is one thing, Proxmox has given me a much deeper understanding of computing hardware in general, and has made it easy to experiment and fiddle with different pieces of software without worrying that I’ll break my entire system. In a very real way, if I want a new computer, I can simply create a virtual one, experiment with it freely, and then throw it away if I wish. It also makes fixing mistakes easy. Additionally, most things running on my Proxmox install are more stable, more secure, and make more efficient use of system resources.

It’s saved me a ton of money, too, since I neither had to buy individual machines like a router or a NAS and its drives, nor had to build a brand-new gaming computer. In fact, the only money I spent on this was an arguably optional 32 GB memory upgrade, which is pennies compared to building a brand-new desktop. With all that in mind, I’d recommend experimenting with Proxmox to anyone with an aging computer, or a similarly flagging interest in their PC in general, especially if they still occasionally want to rip and tear.

Putting a Pi in a Container

Docker and other containerization applications have changed a lot about the way developers create new software, as well as how they maintain virtual machines. Not only does containerization reduce the system resources needed for something that might otherwise be done in a virtual machine, but it standardizes the development environment for software and dramatically reduces the complexity of deploying on different computers. It has some other tricks up its sleeve as well, and this project called PI-CI uses Docker to containerize an entire Raspberry Pi.

The Pi container emulates an entire Raspberry Pi from the ground up, allowing anyone who wants to deploy software on one to test it out without needing actual hardware. All of the configuration can be done from inside the container. When the setup is complete and the desired software is installed, the container can be converted to an .img file that can be put on a microSD card and installed on real hardware, with support for the Pi models 3, 4, and 5. There’s also support for Ansible, an automation tool that makes administering a cluster or array of computers easier.
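
Getting started appears to be a single Docker command. The invocation below is a sketch based on the project’s documentation, so check the PI-CI repository for the current image name and options:

# boot an emulated Pi, persisting its disk state to ./dist on the host
docker run --rm -it -v $PWD/dist:/dist ptrsr/pi-ci start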

Docker can be an incredibly powerful tool for developing and deploying software, and tools like this can make the process as straightforward as possible. It does have a bit of a learning curve, though, since sharing the host’s kernel instead of virtualizing hardware can take some time to wrap one’s mind around. If you’re new to the game, take a look at this guide to setting up your first Docker container.

A Guide To Running Your First Docker Container

While most of us have likely spun up a virtual machine (VM) for one reason or another, venturing into the world of containerization with software like Docker is a little trickier. While the tools Docker provides are powerful, maintain many of the benefits of virtualization, and don’t use as many system resources as a VM, it can be harder to get the hang of setting up and maintaining containers than it generally is to run a few virtual machines. If you’ve been hesitant to try it out, this guide to getting a Docker container up and running is worth a look.

The guide goes over the basics of how Docker works to share system resources between containers, including some discussion of the difference between images and containers, where containers can store files on the host system, and how they use networking resources. From there the guide touches on installing Docker on a Debian Linux system. But where it really shines is demonstrating how to use Docker Compose to configure a container and get it running. Docker Compose is a file that configures a number of containers and their options, making it fairly straightforward to deploy those containers to other machines, and understanding it is key to making your experience learning Docker a smooth one.
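
To make the shape of a Compose file concrete, here’s a minimal, hypothetical example of the format; the service name, image, and paths are placeholders, not anything from the guide itself:

# docker-compose.yml
services:
  webserver:
    image: nginx:alpine
    ports:
      - "8080:80"          # host port 8080 maps to container port 80
    volumes:
      - ./site:/usr/share/nginx/html:ro
    restart: unless-stopped

With that file in place, running docker compose up -d in the same directory starts everything the file describes in the background.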

While the guide goes through setting up a self-hosted document management program called Paperless, it’s pretty easy to extend this to other services you might want to host yourself as well. For example, the DNS-level ad-blocking software Pi-hole, which is generally run on a Raspberry Pi, can be containerized and run on a computer or server you might already have in your home, freeing up your Pi to do other things. And although it’s a little more involved, you can always build your own containers too, as our own [Ben James] discussed back in 2018.
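
As a sketch of that idea using the official pihole/pihole image, a containerized Pi-hole can be started with a single (if long) command; the timezone, ports, and volume path here are placeholders to adapt to your own network:

docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 8080:80/tcp \
  -e TZ=America/Chicago \
  -v "$(pwd)/etc-pihole:/etc/pihole" \
  --restart unless-stopped \
  pihole/pihole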

Linux Fu: Docking Made Easy

Most computer operating systems suffer from some version of “DLL hell” — a decidedly Windows term, but the concept applies across the board. Consider embedded development, which usually takes a few specialized tools. You write your embedded system code, ship it off, and forget about it for a few years. Then the end-user wants a change. Too bad the compiler you used requires some library that has since changed, so it no longer works. Oh, and the device programmer needs an older version of the USB library. The Python build tools use Python 2, but your system has moved on. If the tools you need aren’t on the computer anymore, you may have trouble finding the install media and getting it to work. Worse still if you don’t even have the right kind of computer for it anymore.

One way to address this is to encapsulate each of your development projects in a virtual machine. Then you can save the virtual machine, which includes an operating system and all the right libraries: basically a snapshot of the project as it was, one that you can reconstitute at any time and on nearly any computer.

In theory, that’s great, but it is a lot of work and a lot of storage. You need to install an operating system and all the tools. Sure, you can get an appliance image, but if you work on many projects, you will have a bunch of copies of the very same thing cluttering things up. You’ll also need to keep all those copies up to date if you need to update things, which — granted — is sort of what you are probably trying to avoid, but sometimes you must.

Docker is a bit lighter weight than a virtual machine. You still run your system’s normal kernel, but essentially you can have a virtual environment running in an instant on top of that kernel. What’s more, Docker only stores the differences between things. So if you have ten copies of an operating system, you’ll only store it once plus small differences for each instance.

The downside is that it is a bit tough to configure. You need to map storage and set up networking, among other things. I recently ran into a project called Dock that tries to make the common cases easier so you can quickly just spin up a docker instance to do some work without any real configuration. I made a few minor changes to it and forked the project, but, for now, the origin has synced up with my fork so you can stick with the original link.

Documentation

The documentation on the GitHub pages is a bit sparse, but the author has a good page of instructions and videos. On the other hand, it is very easy to get started. Create a directory and go into it (or go into an existing directory). Run “dock” and you’ll get a spun-up Docker container named after the directory. The directory itself will be mounted inside the container and you’ll have an ssh connection to the container.

By default, that container will have some nice tools in it, but I wanted different tools. No problem; you can install what you want. You can also commit an image set up how you like and name it in the configuration files as the default. You can also name a specific image file on the command line if you like. That means it is possible to have multiple setups for new machines. You might say you want this directory to have an image configured for Linux development and another one for ARM development, for example. Finally, you can also name the container if you don’t want it tied to the current directory.

Images

This requires some special Docker images that the system knows how to install automatically. There are setups for Ubuntu, Python, Perl, Ruby, Rust, and some network and database development environments. Of course, you can customize any of these and commit them to a new image that you can use as long as you don’t mess up the things the tool depends on (that is, an SSH server, for example).

If you want to change the default image, you can do that in ~/.dockrc. That file also contains a prefix that the system removes from the container names. That way, a directory named Hackaday won’t wind up with a container named Hackaday.alw.home, but will simply be Hackaday. For example, since I have all my work in /home/alw/projects, I should use that as a prefix so I don’t have the word projects in each container name, but — as you can see in the accompanying screenshot — I haven’t, so the container winds up as Hackaday.projects.

Options and Aliases

You can see the options available on the help page. You can select a user, mount additional storage volumes, set a few container options, and more. I haven’t tried it, but it looks like there’s also a $DEFAULT_MOUNT_OPTIONS variable to add other directories to all containers.

My fork adds a few extra options that aren’t absolutely necessary. For one, -h will give you a short help screen, while -U will give you a longer help screen. In addition, unknown options trigger a help message. I also added a -I option to write out a source line suitable for adding to your shell profile to get the optional aliases.

These optional aliases are useful for you, but Dock doesn’t use them so you don’t have to install them. These do things like list docker images or make a commit without having to remember the full Docker syntax. Of course, you can still use regular Docker commands, if you prefer.

Try It!

To start, you need to install Docker. By default, only root can use Docker, but some setups have a particular group you can join if you want to use it from your own user ID. That’s easy to set up if you like. For example:

# add your user to the docker group so you can use Docker without sudo
sudo usermod -aG docker $(whoami)
# pick up the new group membership in the current shell
newgrp docker
# make sure the Docker units aren't masked, then start the daemon
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service

From there, follow the setup on the project page and edit your ~/.dockrc file, making sure the DOCK_PATH, IGNORED_PREFIX_PATH, and DEFAULT_IMAGE_NAME variables are set correctly, among other things.
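
For illustration only, a ~/.dockrc along these lines matches the variables named above; the values are placeholders, so substitute your own paths and preferred default image:

# ~/.dockrc
DOCK_PATH=$HOME/.dock
IGNORED_PREFIX_PATH=/home/alw/projects
DEFAULT_IMAGE_NAME=dock-ubuntu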

Once set up, create a test directory, type dock and enjoy your new sort-of virtual machine. If you’ve set up the aliases, use dc to show the containers you have. You can use dcs or dcr to shut down or remove a “virtual machine.” If you want to save the current container as an image, try dcom to commit the container.

Sometimes you want to enter the fake machine as root. You can use dock-r as a shorthand for dock -u root assuming you installed the aliases.

It is hard to imagine how this could be much easier. Since the whole thing is written as a Bash script, it’s easy to add options (like I did). It looks like it would be pretty easy to adapt existing Docker images to be compatible with Dock, too. Don’t forget that you can commit a container to use it as a template for future containers.

If you want more background on Docker, [Ben James] has a good write-up. You can even use Docker to simplify retrocomputing.

Field Guide to Shipping Containers

In the 1950s, trucking magnate Malcom McLean changed the world when he got frustrated enough with the speed of trucking and traffic to start a commercial shipping company in order to move goods up and down the eastern seaboard a little faster. Within ten years, containers were standardized, and the first international container ship set sail in 1966. The cargo? Whisky for the U.S. and guns for Europe. What was once a slow and unreliable method of moving all kinds of whatever in barrels, bags, and boxes became a streamlined operation — one that now moves millions of identical containers full of unfathomable miscellany each year.

When I started writing this, there was a container ship stuck in the Suez Canal that had been blocking it for days. Just like that, a vital passage became completely clogged, halting the shipping schedule of everything from oil and weapons to ESP8266 boards and high-waist jeans. The incident really highlights the fragility of the whole intermodal system and makes us wonder if anything will change.

A rainbow of dry storage containers. Image via xChange

Setting the Standard

We are all used to seeing the standard shipping container that’s either a 10′, 20′, or 40′ long box made of steel or aluminum with doors on one end. These are by far the most common type, and are probably what comes to mind whenever shipping containers are mentioned.

These are called dry storage containers, and per ISO container standards, they are all 8′ wide and 8′ 6″ tall. There are also ‘high cube’ containers that are a foot taller, but otherwise share the same dimensions. Many of these containers end up as some type of housing, either as stylish studios, post-disaster survivalist shelters, or construction site offices. As the pandemic wears on, they have become so much in demand that prices have surged in the last few months.

Although Malcom McLean did not invent container shipping, the strict containerization standards that followed in his wake prevent issues during stacking, shipping, and storing, and allow any container to be handled safely at any port in the world or loaded onto any rail car with ease. Every bit of the container is standardized, from the dimensions to the way the container’s information is displayed on the end. At most, the difference between any two otherwise identical containers is the number, the paint job, and maybe a few millimeters in one dimension.

Standard as they may be, these containers don’t work for every type of cargo. There are quite a few more types of shipping containers out there that serve different needs. Let’s take a look at some of them, shall we?

Flat rack with a bus. Image via Alconet Containers

Flat Rack Container

Flat rack containers are basically platforms with no walls or roofs that are used to transport things like pipes, machinery, timber, buses, and boats. In other words, anything large or bulky that needs to be loaded and unloaded from the top.

Some flat rack containers have collapsible sides that make the cargo easy to remove, and others’ sides are fixed in place. Collapsible flat racks can be stacked together — a stack of four is about the size of a dry storage container. Flat racks come in both 20- and 40-foot sizes, though the widths and heights are often similar between the two.

 

The biggest tires ever? Image via xChange

Open Top Container

These are a lot like dry storage containers, except they have either a tarp on the top or a convertible lid that can be taken off completely to accommodate shipments of any height, including massive tires.

The short sides double as doors, so there are a couple of options when it comes to loading and unloading cargo.


Image via Florida Container Depot

Tunnel Container

Also called double-door containers, tunnel containers are basically dry storage containers that open at both ends instead of just one. This makes it really easy to load and unload the shipment, or if the container is being used as a temporary warehouse, to get to a piece of stored cargo that would otherwise be stuck in the back of the container.

As you might expect, tunnel containers come in 20- and 40-foot lengths. They are usually made of steel and have plywood floors.

 

Image via xChange

Open Side Storage Container

These are a lot like tunnel containers except that only one of the short sides has doors, and one of the long sides can be opened to accommodate wide things. This design is also useful for helping to locate specific cargo without having to unload too much of the container.

Many types of containers have forklift pockets so they can be easily moved around when empty. You can clearly see the forklift pockets around the bottom of this open side storage container.

Bananas like to travel in a certain way. Image via Refrigerated and Frozen Foods

Refrigerated or “Reefer” Container

Reefer containers are used for shipping things like produce and other perishable items at consistent temperatures over long distances. They are typically air-cooled or water-cooled, though some of them come with a generator.

These containers are the reason that I can get oranges and apples year round, despite living in the Midwest. They’re also used for items like fresh flowers and pharmaceuticals.

The container pictured is designed to transport bananas, which must be kept away from oxygen lest they begin to ripen. It has a controlled atmosphere system that can keep bananas green for up to 45 days.

Image via Marine Insight

Insulated Container

Not all perishables have to be actively refrigerated during shipping, but they do need to be kept within a specific temperature range. Some things like apples or certain types of pharmaceuticals will do fine in an insulated or thermal container.

Insulated containers also safeguard the cargo from outside air contaminants by running incoming air through a filter first. Some insulated containers have dual walls like a Thermos, and others are simply lined with thermal blankets.

 

Image via Maritime Manual

Cargo Storage Roll Container

These are specialized containers for transporting sets or stacks of things. They have rollers on the bottom that make them easier to move around, and the whole thing can fold up for storing and stacking.

Cargo storage roll containers are usually made from strong wire mesh and come in different colors.

 

Image via Florida Container Depot

Half-Height Container

Half-height containers are basically dry storage containers that are half as tall, and can be either open-topped or closed.

These are usually used to move things like coal, stones, and other heavy, pourable cargo that calls for easy loading and unloading. They’re also used for vehicles, heavy equipment, or anything else that fits inside. The lower center of gravity makes them quite useful for heavy loads.

 

Image via Conexwest

Car Carriers

Yep, you guessed it — these are for transporting cars and other vehicles. If you buy a car overseas, it has to get to you somehow.

Many car carriers have a ramp and two levels of racks so they can be stuffed with cars without wasting any space.


Image via More Than Shipping

Tanks

Tank containers are giant steel cylindrical tanks with frames built around them. They’re used for shipping all kinds of liquids from molasses to gasoline.

Each tank container can carry 21,000 to 40,000 liters of whatever type of liquid it is designed to store.


IBCs come in several styles. Image via Wikipedia

Intermediate Bulk Containers

These are a class of specialized container that holds everything from liquids to solids. They are typically used for shipping goods like chemicals, food syrups, paints, and raw materials. IBCs are called intermediate because they are smaller than tanks but larger than drums.

Some containers are rigid, and others are flexible and fold up for storage when empty. IBC containers have only been around for about thirty years.

 

Image via Air Sea Containers

Drums

We’ve all seen these before, though there are many types beyond the 55-gallon drum. Drums are usually made from steel, hard plastic, or dense paperboard, depending on the intended use.

Drums are handy containers for many liquids and powders because they can be rolled, moved around with a hand truck, or stacked together on pallets for easily shifting groups of them around with a forklift.

 

Image via Maritime Manual

Special Purpose Containers

These are unique, sometimes one-off containers that are often used for high-profile shipments like weapons and military cargo. Because of this, they are often heavily secured.

Unlike most other standardized containers, these come in many shapes and sizes and are made of whatever materials suit the special purpose.

 

Image via Wikipedia

Swap Bodies

These are mostly used in Europe and have a strong bottom with a convertible top, so they can ship many kinds of items.

Swap body containers are typically only used on trucks and trains and don’t ride on container ships. Instead of a sturdy base with forklift pockets, these have spindly folding legs on the corners to support the container at a dock or between truck and train.

 

A Container for Everything

These are some of the most common types of shipping containers out there aside from the metal box we’re all familiar with. Lots of cargo has special needs, but it can all be containerized one way or another.

So what is it like to receive a large shipment via container ship? Our own [Bob Baddeley] has firsthand experience and told us all about it a while back. Do you have any experience with shipping at this scale, or have you ever repurposed a shipping container? Let us know in the comments!

Lightweight OS For Any Platform

Linux has come a long way from its roots, where users had to compile the kernel and all of the other source code from scratch, often without any internet connection at all to help with documentation. It was the wild west of Linux, and while we can all rely on an easy-to-install Ubuntu distribution if we need it, there are still distributions out there that require some discovery of those old roots. Meet SkiffOS, a lightweight Linux distribution which compiles on almost any hardware but also opens up a whole world of opportunity in containerization.

The operating system is intended to be able to compile itself on any Linux-compatible board (with some input) and yet still be lightweight. It can run on Raspberry Pis, Nvidia Jetsons, and x86 machines, to name a few, and focuses on hosting containerized applications independent of the hardware it is installed on. One of the goals of this OS is to separate the hardware support from the applications while still being able to support real-time tasks, such as applications in robotics. It also makes upgrading the base OS easy without disrupting the programs running in the containers, and of course has all of the other benefits of containerization as well.
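
The build process is driven by make, with the target hardware selected through an environment variable. Roughly, and going from the project’s README at the time of writing (so treat the exact configuration names as unverified):

# select a configuration (here, a hypothetical Raspberry Pi 4 target) and build an image
SKIFF_CONFIG=pi/4 make configure compile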

It does seem like containerization is the way of the future, and while it has obviously been put to great use in web hosting and other network applications, it’s interesting to see it expand into a real-time arena. Presumably an approach like this would have many other applications as well since it isn’t hardware-specific, and we’re excited to see the future developments as people adopt this type of operating system for their specific needs.

Thanks to [Christian] for the tip!
