Development containers
Development containers documentation and specification page.
Feed: https://devcontainers.github.io/feed.xml

General Availability of Dependabot Integration
2024-01-23 · https://devcontainers.github.io/guide/dependabot

We are excited to announce that starting today, in collaboration with the Dependabot team, the devcontainers package ecosystem is now generally available! Dependabot will now be able to update your public Dev Container Features, keeping them up to date with the latest published versions.

To opt-in, add a .github/dependabot.yml to a repository containing one or more devcontainer.json configuration files:

# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: "devcontainers" # See documentation for possible values
    directory: "/"
    schedule:
      interval: weekly

Once configured, Dependabot will begin to create pull requests to update your Dev Container Features:

Dependabot PR

An example diff generated by Dependabot is shown below:

---
 .devcontainer-lock.json              | 8 ++++----
 .devcontainer.json                   | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/.devcontainer-lock.json b/.devcontainer-lock.json
index 324582b..a3868d9 100644
--- a/.devcontainer-lock.json
+++ b/.devcontainer-lock.json
@@ -1,9 +1,9 @@
 {
   "features": {
-    "ghcr.io/devcontainers/features/docker-in-docker:1": {
-      "version": "1.0.9",
-      "resolved": "ghcr.io/devcontainers/features/docker-in-docker@sha256:b4c04ba88371a8ec01486356cce10eb9fe8274627d8d170aaec87ed0d333080d",
-      "integrity": "sha256:b4c04ba88371a8ec01486356cce10eb9fe8274627d8d170aaec87ed0d333080d"
+    "ghcr.io/devcontainers/features/docker-in-docker:2": {
+      "version": "2.7.1",
+      "resolved": "ghcr.io/devcontainers/features/docker-in-docker@sha256:f6a73ee06601d703db7d95d03e415cab229e78df92bb5002e8559bcfc047fec6",
+      "integrity": "sha256:f6a73ee06601d703db7d95d03e415cab229e78df92bb5002e8559bcfc047fec6"
     }
   }
 }
\ No newline at end of file
diff --git a/.devcontainer.json b/.devcontainer.json
index e9d9af5..9eb9165 100644
--- a/.devcontainer.json
+++ b/.devcontainer.json
@@ -1,6 +1,6 @@
 {
     "image": "mcr.microsoft.com/devcontainers/base:jammy",
     "features": {
-        "ghcr.io/devcontainers/features/docker-in-docker:1": {}
+        "ghcr.io/devcontainers/features/docker-in-docker:2": {}
     }
 }

This updater ensures publicly-accessible Features are pinned to the latest version in the associated devcontainer.json file. If a dev container has an associated lockfile, that file will also be updated. For more information on lockfiles, see this specification.

Features in any valid dev container location will be updated in a single pull request.

Dependabot version updates are free to use for all repositories on GitHub.com. For more information, see the Dependabot version update documentation.

["@joshspicer"]

Speed Up Your Workflow with Prebuilds
2023-08-22 · https://devcontainers.github.io/guide/prebuild

Getting dev containers up and running for your projects is exciting - you’ve unlocked environments that include all the dependencies your projects need to run, and you can spend so much more time on coding rather than configuration.

Once your dev container has everything it needs, you might start thinking more about ways to optimize it. For instance, it might take a while to build. Maybe it takes 5 minutes. Maybe it takes an hour!

You can get back to working fast and productively after that initial container build, but what if you need to work on another machine and build the container again? Or what if some of your teammates want to use the container on their machines and will need to build it too? It’d be great to make the build time faster for everyone, every time.

After configuring your dev container, a great next step is to prebuild your image.

In this guide, we’ll explore what it means to prebuild an image and the benefits of doing so, such as speeding up your workflow, simplifying your environment, and pinning to specific versions of tools.

We have a variety of tools designed to help you with prebuilds. In this guide, we’ll explore two different repos as examples of how our team uses different combinations of these tools: Craig’s fork of the Kubernetes repo, and our devcontainers/images repo.

What is prebuilding?

We should first define: What is prebuilding?

If you’re already using dev containers, you’re likely already familiar with the idea of building a container, where you package everything your app needs to run into a single unit.

You need to build your container once it has all the dependencies it needs, and rebuild anytime you add new dependencies. Since you may not need to rebuild often, it might be alright if it takes a while for that initial build. But if you or your teammates need to use that container on another machine, you’ll need to wait for it to build again in those new environments.

Note: The dev container CLI doc is another great resource on prebuilding.

Prebuilt Codespaces

You may have heard of (or will hear about) GitHub Codespaces prebuilds. Codespaces prebuilds are similar to prebuilt container images, with some additional focus on the other code in your repo.

GitHub Codespaces prebuilds help to speed up the creation of new codespaces for large or complex repositories. A prebuild assembles the main components of a codespace for a particular combination of repository, branch, and devcontainer.json file.

By default, whenever you push changes to your repository, GitHub Codespaces uses GitHub Actions to automatically update your prebuilds.

You can learn more about codespaces prebuilds and how to manage them in the codespaces docs.

How do I prebuild my image?

We try to make prebuilding an image and using a prebuilt image as easy as possible. Let’s walk through the couple of steps to get started.

Prebuilding an image:

  • Create the dev container configuration you’d like to prebuild
  • Build the image and push it to a registry, for example with the dev container CLI or the dev container GitHub Action

Using a prebuilt image:

  • Determine the published URL of the prebuilt image you want to use
  • Reference it in your devcontainer.json, Dockerfile, or Docker Compose file
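For the build-and-push step, the dev container CLI can handle both in a single command (the image name below is illustrative; substitute your own registry and repository):

```shell
# Build the dev container defined under .devcontainer/ in the current folder
# and push the resulting image to a registry in one step.
devcontainer build --workspace-folder . --push true --image-name ghcr.io/my-org/my-prebuilt-devcontainer:latest
```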

Prebuild Examples

As mentioned above, let’s walk through a couple of examples of these steps: one using Craig’s Kubernetes repo, and the other using our devcontainers/images repo.

Kubernetes

  • It’s a fork of the main Kubernetes repo and contributes a prebuilt dev container for use in the main Kubernetes repo or any other forks
  • The dev container it’s prebuilding is defined in the .github/.devcontainer folder
  • Any time a change is made to the dev container, the repo currently uses the dev container GitHub Action to build the image and push it to GHCR
    • You can check out its latest prebuilt image in the Packages tab of its GitHub Repo. In this tab, you can see its GHCR URL is ghcr.io/craiglpeters/kubernetes-devcontainer:latest
  • The main Kubernetes repo and any fork of it can now define a .devcontainer folder and reference this prebuilt image through: "image": "ghcr.io/craiglpeters/kubernetes-devcontainer:latest"

Dev container spec images

  • This repo prebuilds a variety of dev containers, each of which is defined in their individual folders in the src folder
  • Any time a change is made to the dev container, the repo uses a GitHub Action to build the image and push it to MCR
    • Using the Python image as an example, its MCR URL is mcr.microsoft.com/devcontainers/python
  • Any projects can now reference this prebuilt image through: "image": "mcr.microsoft.com/devcontainers/python"
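Put together, a consuming project’s entire devcontainer.json can be as small as that image reference:

```json
{
    "image": "mcr.microsoft.com/devcontainers/python"
}
```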

Where do the dependencies come from?

If your devcontainer.json is as simple as just an image property referencing a prebuilt image, you may wonder: How can I tell what dependencies will be installed for my project? And how can I modify them?

Let’s walk through the Kubernetes prebuild as an example of how you can determine which dependencies are installed and where:

  • Start at your end user dev container
    • We start at the .devcontainer/devcontainer.json designed for end use in the Kubernetes repo and other forks of it
      • It sets a few properties, such as hostRequirements, onCreateCommand, and otherPortsAttributes
      • We see it references a prebuilt image, which will include dependencies that don’t need to be explicitly mentioned in this end user dev container. Let’s next go explore the dev container defining this prebuilt image
  • Explore the dev container defining your prebuilt image
  • Explore content in the prebuilt dev container’s config
    • Each Feature defines additional functionality
    • We can explore what each of them installs in their associated repo. Most appear to be defined in the devcontainers/features repo as part of the dev container spec
  • Modify and rebuild as desired
    • If I’d like to add more content to my dev container, I can either modify my end user dev container (i.e. the one designed for the main Kubernetes repo), or modify the config defining the prebuilt image (i.e. the content in Craig’s dev container)
      • For universal changes that anyone using the prebuilt image should get, update the prebuilt image
      • For more project or user specific changes (i.e. a language I need in my project but other forks won’t necessarily need, or user settings I prefer for my editor environment), update the end user dev container
    • Features are a great way to add dependencies in a clear, easily packaged way

Benefits

There are a variety of benefits (some of which we’ve already explored) to creating and using prebuilt images:

  • Faster container startup
    • Pull an already-built image rather than having to build the container freshly on any new machine
  • Simpler configuration
    • Your devcontainer.json can be as simple as just an image property
  • Pin to a specific version of tools
    • This can improve supply-chain security and avoid breaks

Tips and Tricks

  • We explored the prebuilt images we host as part of the spec in devcontainers/images. These can form a great base for other dev containers you’d like to create for more complex scenarios
  • The spec has a concept of Development container “Templates” which are source files packaged together that encode configuration for a complete development environment
  • You can include Dev Container configuration and Feature metadata in prebuilt images via image labels. This makes the image self-contained since these settings are automatically picked up when the image is referenced - whether directly, in a FROM in a referenced Dockerfile, or in a Docker Compose file. You can learn more in our reference docs
  • You can use multi-stage Dockerfiles to create a prod container from your dev container
    • You’d typically start with your prod image, then add to it
    • Features provide a quick way to add development and CI specific layers that you wouldn’t use in production
    • For more information and an example, check out our discussion on multi-stage builds
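As a sketch of the image-label approach mentioned above (the metadata values here are illustrative; see the reference docs for the exact format of the devcontainer.metadata label):

```dockerfile
FROM mcr.microsoft.com/devcontainers/base:ubuntu

# Embed Dev Container settings in the image itself; tools that reference this
# image pick them up automatically (values below are illustrative).
LABEL devcontainer.metadata='[{"remoteUser": "vscode", "customizations": {"vscode": {"extensions": ["ms-python.python"]}}}]'
```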

Feedback and Closing

We hope this guide will help you optimize your dev container workflows! We can’t wait to hear your tips, tricks, and feedback. How are you prebuilding your images? Would anything in the spec or tooling make the process easier for you?

If you haven’t already, we recommend joining our dev container community Slack channel where you can connect with the dev container spec maintainers and community at large. If you have any feature requests or experience any issues as you use the above tools, please feel free to also open an issue in the corresponding repo in the dev containers org on GitHub.

["@bamurtaugh", "@craiglpeters"]

Best Practices: Authoring a Dev Container Feature
2023-06-14 · https://devcontainers.github.io/guide/feature-authoring-best-practices

Last November I wrote about the basics around authoring a Dev Container Feature. Since then, hundreds of Features have been written by the community. The flexibility of Features has enabled a wide variety of use cases, from installing a single tool to setting up specific aspects of a project’s development environment that can be shared across repositories. To that end, many different patterns for Feature authorship have emerged, and the core team has learned a lot about what works well and what doesn’t.

Utilize the test command

Bundled with the devcontainer CLI is the devcontainer features test command. This command is designed to help Feature authors test their Feature in a variety of scenarios. It is highly recommended that Feature authors use this command to test their Feature before publishing. Some documentation on the test command can be found here, and an example can be found in the Feature quick start repo. That repo is updated periodically as new functionality is added to the reference implementation.

Feature idempotency

The most useful Features are idempotent. This means that if a Feature is installed multiple times with different options (something that will come into play with Feature Dependencies), the Feature should be able to handle this gracefully. This is especially important for option-rich Features that you anticipate others may depend on in the future.

🔧 There is an open spec proposal for installing the same Feature twice in a given devcontainer.json (devcontainers/spec#44). While the syntax to do so in a given devcontainer.json is not yet defined, Feature dependencies will effectively allow for this.

For Features that install a versioned tool (e.g., version x of go and version y of ruby), a robust Feature should be able to install multiple versions of the tool. If your tool has a version manager (Java’s SDKMAN, Ruby’s rvm), it is usually as simple as installing the version manager and then running a command to install the desired version of that tool.

For instances where there isn’t an existing version manager available, a well-designed Feature should consider installing distinct versions of itself to a well-known location. A pattern that many Features use successfully is writing each version of each tool to a central folder and symlinking the “active” version to a folder on the PATH.

Features can redefine the PATH variable with containerEnv, like so:

devcontainer-feature.json (fragment):

"containerEnv": {
    "PATH": "/usr/local/myTool/bin:${PATH}"
}
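As a minimal sketch of the symlink pattern described above (the tool name, paths, and VERSION option are hypothetical, not part of the spec):

```shell
#!/bin/sh
set -e

# Illustrative only: install each requested version side by side, then point
# a stable "current" symlink at the active one.
TOOL_ROOT="${TOOL_ROOT:-/usr/local/myTool}"
VERSION="${VERSION:-1.2.3}"

# Each version gets its own folder, so repeated installs with different
# options can coexist.
mkdir -p "${TOOL_ROOT}/${VERSION}/bin"
# (download/extract the tool into "${TOOL_ROOT}/${VERSION}" here)

# Atomically repoint the "active" symlink at the requested version.
ln -sfn "${TOOL_ROOT}/${VERSION}" "${TOOL_ROOT}/current"
```

A containerEnv PATH entry pointing at ${TOOL_ROOT}/current/bin would then keep the active version on the PATH across reinstalls.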

🔧 A spec proposal is open for simplifying the process of adding a path to the $PATH variable: (devcontainers/spec#251).

To make testing for idempotency easy, this change to the reference implementation introduces a new mode to the devcontainer features test command that will attempt to install a Feature multiple times. This is useful for testing that a Feature is idempotent, and also for testing that a Feature is able to logically “juggle” multiple versions of a tool.

Writing your install script

🔧 Many of the suggestions in this section may benefit from the Feature library/code reuse proposal.

This section includes some tips for the contents of the install.sh entrypoint script.

Detect Platform/OS

🔧 A spec proposal is open for detecting the platform/OS and providing better warnings (devcontainers/spec#58).

Features are often designed to work on a subset of possible base images. For example, the majority of Features in the devcontainers/features repo are designed to work broadly with debian-derived images. The limitation is often simply due to the wide array of base images available, and the fact that many Features will use an OS-specific package manager. To make it easy for users to understand which base images a Feature is designed to work with, it is recommended that Features include a check for the OS and provide a helpful error message if the OS is not supported.

One possible way to implement this check is shown below.

# Source /etc/os-release to get OS info
# Looks something like:
#     PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
#     NAME="Debian GNU/Linux"
#     VERSION_ID="11"
#     VERSION="11 (bullseye)"
#     VERSION_CODENAME=bullseye
#     ID=debian
#     HOME_URL="https://www.debian.org/"
#     SUPPORT_URL="https://www.debian.org/support"
#     BUG_REPORT_URL="https://bugs.debian.org/"
. /etc/os-release
# Store host architecture
architecture="$(dpkg --print-architecture)"

# Simple error helper, defined here so the snippet is self-contained
print_error() {
    echo "(!) $*" >&2
}

DOCKER_MOBY_ARCHIVE_VERSION_CODENAMES="buster bullseye focal bionic xenial"
if [[ "${DOCKER_MOBY_ARCHIVE_VERSION_CODENAMES}" != *"${VERSION_CODENAME}"* ]]; then
    print_error "Unsupported distribution version '${VERSION_CODENAME}'. To resolve, either: (1) set feature option '\"moby\": false', or (2) choose a compatible OS distribution"
    print_error "Supported distributions include: ${DOCKER_MOBY_ARCHIVE_VERSION_CODENAMES}"
    exit 1
fi

If you are targeting distros that may not have your desired scripting language installed (e.g., bash is often not installed on Alpine images), you can either use plain /bin/sh, which is available virtually everywhere, or verify (and install) the scripting language in a small bootstrap script as shown below.

#!/bin/sh 

# ... 
# ...

if [ "$(id -u)" -ne 0 ]; then
    echo 'Script must be run as root. Use sudo, su, or add "USER root" to your Dockerfile before running this script.'
    exit 1
fi

# If we're using Alpine, install bash before executing
. /etc/os-release
if [ "${ID}" = "alpine" ]; then
    apk add --no-cache bash
fi

exec /bin/bash "$(dirname "$0")/main.sh" "$@"
exit $?

Validating functionality against several base images can be done by using the devcontainer features test command with the --base-image flag, or with a scenario. For example, one could add a workflow like this to their repo.

name: "Test Features matrixed with a set of base images"
on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  test:
    runs-on: ubuntu-latest
    continue-on-error: true
    strategy:
      matrix:
        features: [
            "anaconda",
            "aws-cli",
            "azure-cli",
            # ...
        ]
        baseImage:
          [
            "ubuntu:bionic",
            "ubuntu:focal",
            "ubuntu:jammy",
            "debian:11",
            "debian:12",
            "mcr.microsoft.com/devcontainers/base:ubuntu",
            "mcr.microsoft.com/devcontainers/base:debian",
          ]
    steps:
      - uses: actions/checkout@v3

      - name: "Install latest devcontainer CLI"
        run: npm install -g @devcontainers/cli
        
      - name: "Generating tests for '${{ matrix.features }}' against '${{ matrix.baseImage }}'"
        run: devcontainer features test --skip-scenarios -f ${{ matrix.features }} -i ${{ matrix.baseImage }}
         

Detect the non-root user

Feature installation scripts are run as root. In contrast, many dev containers have a remoteUser set (either implicitly through image metadata or directly in the devcontainer.json). In a Feature’s installation script, one should be mindful of the final user and account for instances where the user is not root.

Feature authors should take advantage of the _REMOTE_USER and similar variables injected during the build.

# Install the tool into the effective remoteUser's bin folder
mkdir -p "$_REMOTE_USER_HOME/bin"
curl -fsSL "$TOOL_DOWNLOAD_LINK" -o "$_REMOTE_USER_HOME/bin/$TOOL"
chown "$_REMOTE_USER:$_REMOTE_USER" "$_REMOTE_USER_HOME/bin/$TOOL"
chmod 755 "$_REMOTE_USER_HOME/bin/$TOOL"

Implement redundant paths/strategies

Most Features in the index today have some external or upstream dependency. Very often these upstream dependencies change (e.g., a versioning pattern changes, or a GPG key is rotated) in ways that can cause a Feature to fail to install. To mitigate this, one strategy is to implement multiple paths to install a given tool (where available). For example, a Feature that installs go might try the upstream package manager first, and fall back to a GitHub release if that fails.

Writing several scenario tests that force the Feature to go down distinct installation paths will help you catch cases where a given path no longer works.
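For example, a scenarios.json can pin each installation path to its own scenario (the Feature id go-tool and the installMethod option are purely illustrative; each scenario name also needs a matching test script):

```json
{
    "install_via_package_manager": {
        "image": "ubuntu:jammy",
        "features": {
            "go-tool": { "installMethod": "package-manager" }
        }
    },
    "install_via_github_release": {
        "image": "ubuntu:jammy",
        "features": {
            "go-tool": { "installMethod": "github-release" }
        }
    }
}
```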

["@joshspicer"]

Working with GitLab CI
2023-02-15 · https://devcontainers.github.io/guide/gitlab-ci

For simple use cases you can use your development container (dev container) for CI without much issue. Once you begin using more advanced dev container functionality such as Features, you will need dev container tooling in your CI pipeline. While GitHub Actions has the devcontainers/ci action, there is no such analog in GitLab CI. To achieve the goal of using your dev container in GitLab CI, the container must be pre-built.

This document will guide you on how to build a dev container with GitLab CI, push that dev container to the GitLab Container Registry, and finally reference that dev container in your main project for both local development and GitLab CI.

For the purpose of this document, we will assume the main project is named my-project and lives under the my-user path. The example here uses a few Features, which is what forces the container to be pre-built.

The Development Container GitLab project

Create a project in GitLab where the stand-alone dev container project will live. As the main project is assumed to be named my-project, let’s assume the dev container project name will be my-project-dev-container.

Development Container .devcontainer/devcontainer.json

The example here is a CDK project for Python that makes use of both the AWS CLI Feature and the community-maintained AWS CDK Feature.

.devcontainer/devcontainer.json:

{
  "build": {
    "context": "..",
    "dockerfile": "Dockerfile"
  },
  "features": {
    "ghcr.io/devcontainers/features/aws-cli:1": {},
    "ghcr.io/devcontainers-contrib/features/aws-cdk:2": {}
  },
  "customizations": {
    "vscode": {
      "settings": {
        "python.formatting.provider": "black"
      }
    }
  }
}

Development Container Dockerfile

As this is a Python project working with CDK, the Dockerfile will begin by using the latest Python dev container image and then install some basic packages via pip.

Dockerfile:

FROM mcr.microsoft.com/devcontainers/python:latest

RUN pip3 --disable-pip-version-check --no-cache-dir install aws_cdk_lib constructs jsii pylint

Development Container .gitlab-ci.yml

Since there is no GitLab CI equivalent to the devcontainers/ci GitHub Action, we will need to install the devcontainers CLI manually. The following will:

  1. Install the packages that the devcontainers CLI requires
  2. Install the devcontainers CLI itself
  3. Log in to the GitLab Container Registry
  4. Build the dev container and push it to the GitLab Container Registry

.gitlab-ci.yml:

image: docker:latest

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:dind

before_script:
  - apk add --update nodejs npm python3 make g++
  - npm install -g @devcontainers/cli

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
    - devcontainer build --workspace-folder . --push true --image-name ${CI_REGISTRY_IMAGE}:latest

The Main GitLab project

Main .devcontainer/devcontainer.json

.devcontainer/devcontainer.json:

{
  "image": "registry.gitlab.com/my-user/my-project-dev-container"
}

Main .gitlab-ci.yml

Assuming the dev container project name is based off the main project name, the ${CI_REGISTRY_IMAGE} variable can be used with a -dev-container suffix. This configuration performs some basic sanity checks and linting once merge requests are submitted.

.gitlab-ci.yml:

image: ${CI_REGISTRY_IMAGE}-dev-container:latest

before_script:
  - python --version
  - cdk --version

stages:
  - Build
  - Lint

py_compile:
  stage: Build
  script:
    - find . -type f -name "*.py" -print | xargs -n1 python3 -m py_compile
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

cdk synth:
  stage: Build
  script:
    - JSII_DEPRECATED=fail cdk --app "python3 app.py" synth
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

Pylint:
  stage: Lint
  script:
    - pylint *
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

Black code format:
  stage: Lint
  script:
    - black --check --diff .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

Conclusion

It’s worth noting that the best practice would be to pin the versions of the various packages installed by pip, apk, npm and the like. Version pinning was omitted from this guide so that it can be executed as-is without issue.

The above provides a starting point for a dev container that’s used for both local development and in GitLab CI. It can easily be customized for other languages and tool chains. Take it and make it your own, happy coding!

["@raginjason"]

Using Images, Dockerfiles, and Docker Compose
2022-12-16 · https://devcontainers.github.io/guide/dockerfile

When creating a development container, you have a variety of different ways to customize your environment, like “Features” or lifecycle scripts. However, if you are familiar with containers, you may want to use a Dockerfile or Docker Compose to customize your environment. This article will walk through how to use these formats with the Dev Container spec.

Using a Dockerfile

To keep things simple, many Dev Container Templates use container image references.

{
    "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}

However, Dockerfiles are a great way to extend images, add additional native OS packages, or make minor edits to the OS image. You can reuse any Dockerfile, but let’s walk through how to create one from scratch.

First, add a file named Dockerfile next to your devcontainer.json. For example:

FROM mcr.microsoft.com/devcontainers/base:ubuntu
# Install the xz-utils package
RUN apt-get update && apt-get install -y xz-utils

Next, remove the image property from devcontainer.json (if it exists) and add the build and dockerfile properties instead:

{
    "build": {
        // Path is relative to the devcontainer.json file.
        "dockerfile": "Dockerfile"
    }
}

That’s it! When you start up your Dev Container, the Dockerfile will be automatically built with no additional work. See Dockerfile scenario reference for more information on other related devcontainer.json properties.

Iterating on an image that includes Dev Container metadata

Better yet, you can use a Dockerfile as part of authoring an image you can share with others. You can even add Dev Container settings and metadata right into the image itself. This avoids having to duplicate config and settings in multiple devcontainer.json files and keeps them in sync with your images!

See the guide on pre-building to learn more!

Using Docker Compose

Docker Compose is a great way to define a multi-container development environment. Rather than adding things like databases or Redis to your Dockerfile, you can reference existing images for these services and focus your Dev Container’s content on the tools and utilities you need for development.

Using an image with Docker Compose

As mentioned in the Dockerfile section, to keep things simple, many Dev Container Templates use container image references.

{
    "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}

Let’s create a docker-compose.yml file next to your devcontainer.json that references the same image and includes a PostgreSQL database:

version: '3.8'
services:
  devcontainer:
    image: mcr.microsoft.com/devcontainers/base:ubuntu
    volumes:
      - ../..:/workspaces:cached
    network_mode: service:db
    command: sleep infinity

  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: postgres

volumes:
  postgres-data:

In this example:

  • ../..:/workspaces:cached mounts the workspace folder from the local source tree into the Dev Container.
  • network_mode: service:db puts the Dev Container on the same network as the database, so that it can access it on localhost.
  • The db section uses the Postgres image with a few settings.

Next, let’s configure devcontainer.json to use it.

{
    "dockerComposeFile": "docker-compose.yml",
    "service": "devcontainer",
    "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}"
}

In this example:

  • service indicates which service in the docker-compose.yml file is the Dev Container.
  • dockerComposeFile indicates where to find the docker-compose.yml file.
  • workspaceFolder indicates where to mount the workspace folder. This corresponds to a sub-folder under the mount point from ../..:/workspaces:cached in the docker-compose.yml file.

That’s it!

Using a Dockerfile with Docker Compose

You can also combine these scenarios and use a Dockerfile with Docker Compose. This time, we’ll update docker-compose.yml to reference the Dockerfile by replacing image with a similar build section:

version: '3.8'
services:
  devcontainer:
    build: 
      context: .
      dockerfile: Dockerfile
    volumes:
      - ../..:/workspaces:cached      
    network_mode: service:db
    command: sleep infinity

  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: postgres

volumes:
  postgres-data:

Finally, as in the Dockerfile example, you can use this same setup to create a Dev Container image that you can share with others. You can also add Dev Container settings and metadata right into the image itself.

See the guide on pre-building to learn more!

["@chuxel"]

Authoring a Dev Container Feature
2022-11-01 · https://devcontainers.github.io/guide/author-a-feature

Development container “Features” are self-contained, shareable units of installation code and development container configuration. We define a pattern for authoring and self-publishing Features.

In this document, we’ll outline a “quickstart” to help you get up-and-running with creating and sharing your first Feature. You may review an example along with guidance in our devcontainers/feature-starter repo as well.

Note: While this walkthrough will illustrate the use of GitHub and the GitHub Container Registry, you can use your own source control system and publish to any container registry that supports OCI Artifacts instead.

Create a repo

Start off by creating a repository to host your Feature. In this guide, we’ll use a public GitHub repository.

For the simplest getting started experience, you may use our example feature-starter repo. You may select the green Use this template button on the repo’s page.

You may also create your own repo on GitHub if you’d prefer.

Create a folder

Once you’ve created a repository from the feature-starter template (or created your own), you’ll want to create a folder for your Feature. You may create one within the src folder.

If you’d like to create multiple Features, you may add multiple folders within src.

Add files

At a minimum, a Feature will include a devcontainer-feature.json and an install.sh entrypoint script.

There are many possible properties for devcontainer-feature.json, which you may review in the Features spec.

Below is a hello world example devcontainer-feature.json and install.sh. You may review the devcontainers/features repo for more examples.

devcontainer-feature.json:

{
    "name": "Hello, World!",
    "id": "hello",
    "version": "1.0.2",
    "description": "A hello world feature",
    "options": {
        "greeting": {
            "type": "string",
            "proposals": [
                "hey",
                "hello",
                "hi",
                "howdy"
            ],
            "default": "hey",
            "description": "Select a pre-made greeting, or enter your own"
        }
    }
}

install.sh:

#!/bin/sh
set -e

echo "Activating feature 'hello'"

GREETING=${GREETING:-undefined}
echo "The provided greeting is: $GREETING"

cat > /usr/local/bin/hello \
<< EOF
#!/bin/sh
RED='\033[0;91m'
NC='\033[0m' # No Color
echo "\${RED}${GREETING}, \$(whoami)!\${NC}"
EOF

chmod +x /usr/local/bin/hello

Publishing

The feature-starter repo contains a GitHub Actions workflow that will publish each Feature to GHCR. By default, each Feature will be published under the <owner>/<repo> namespace. Using the hello world example from above, it can be referenced in a devcontainer.json with: ghcr.io/devcontainers/feature-starter/hello:1.
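Once published (and made public), the Feature can be consumed from any devcontainer.json; for example, using the hello Feature defined above:

```json
{
    "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
    "features": {
        "ghcr.io/devcontainers/feature-starter/hello:1": {
            "greeting": "howdy"
        }
    }
}
```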

Note: You can use the devcontainer features publish command from the Dev Container CLI if you are not using GitHub Actions.

The provided GitHub Action will also publish an additional “metadata” package with just the namespace, e.g. ghcr.io/devcontainers/feature-starter. This is useful for supporting tools to crawl metadata about available Features in the collection without downloading each Feature individually.

By default, GHCR packages are marked as private. To stay within the free tier, Features need to be marked as public.

This can be done by navigating to the Feature’s “package settings” page in GHCR, and setting the visibility to public. The URL may look something like:

https://github.com/users/<owner>/packages/container/<repo>%2F<featureName>/settings

Changing package visibility to public

Adding Features to the Index

If you’d like your Features to appear in our public index so that other community members can find them, you can add an entry for your collection to the index on the containers.dev site.

Note: Add a single entry per repository, regardless of how many Features it contains. The ociReference should be the collection namespace root, e.g. ghcr.io/<owner>/<repo>, not a path to an individual Feature (avoid ghcr.io/<owner>/<repo>/<featureName>). The site will automatically discover all Features in the collection from that one entry. The ociReference value must not include a URL scheme such as http:// or https://.

Feature collections are scanned to populate a Feature index on the containers.dev site and allow them to appear in Dev Container creation UX in supporting tools like VS Code Dev Containers and GitHub Codespaces.

["@joshspicer"]