Example workflows

This page showcases different examples of how you can customize and use the Docker GitHub Actions in your CI pipelines.

Push to multiple registries

The following workflow signs in to Docker Hub and GitHub Container Registry, and pushes the image to both registries:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            user/app:latest
            user/app:1.0.0
            ghcr.io/user/app:latest
            ghcr.io/user/app:1.0.0

Manage tags and labels

If you want automatic tag management and labels that follow the OCI Image Format Specification, you can handle them in a dedicated setup step. The following workflow uses the Docker Metadata Action to generate tags and labels based on GitHub Actions events and Git metadata:

name: ci

on:
  schedule:
    - cron: "0 10 * * *"
  push:
    branches:
      - "**"
    tags:
      - "v*.*.*"
  pull_request:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Docker meta
        id: meta
        uses: docker/metadata-action@v4
        with:
          # list of Docker images to use as base name for tags
          images: |
            name/app
            ghcr.io/username/app
          # generate Docker tags based on the following events/attributes
          tags: |
            type=schedule
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GHCR
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

Multi-platform images

You can build multi-platform images using the platforms option, as described in the following example.

Note

  • For a list of available platforms, see the Docker Setup Buildx action.
  • If you want support for more platforms, you can use QEMU with the Docker Setup QEMU action.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: user/app:latest

Cache

This page contains examples of using the cache storage backends with GitHub Actions.

Note

See Cache storage backends for more details about cache storage backends.

Inline cache

In most cases you want to use the inline cache exporter. However, note that the inline cache exporter only supports min cache mode. To use max cache mode, push the image and the cache separately using the registry cache exporter with the cache-to option, as shown in the registry cache example.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=registry,ref=user/app:latest
          cache-to: type=inline

Registry cache

You can import/export cache from a cache manifest or (special) image configuration on the registry with the registry cache exporter.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=registry,ref=user/app:buildcache
          cache-to: type=registry,ref=user/app:buildcache,mode=max

GitHub cache

Cache backend API

Warning

This cache exporter is experimental. Please provide feedback on the BuildKit repository if you experience any issues.

The GitHub Actions cache exporter backend uses the GitHub Cache API to fetch and upload cache blobs. That’s why you should only use this cache backend in a GitHub Actions workflow, as the URL ($ACTIONS_CACHE_URL) and token ($ACTIONS_RUNTIME_TOKEN) attributes only get populated in a workflow context.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

Local cache

Warning

At the moment, old cache entries aren’t deleted, so the cache size keeps growing. The following example uses the Move cache step as a workaround (see moby/buildkit#1896 for more info).

You can also leverage the GitHub Actions cache using the actions/cache action together with the local cache exporter:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Cache Docker layers
        uses: actions/cache@v3
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
      -
        # Temp fix
        # https://github.com/docker/build-push-action/issues/252
        # https://github.com/moby/buildkit/issues/1896
        name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache

Secrets

The following example uses and exposes the GITHUB_TOKEN secret provided by GitHub in your workflow.

First, create a Dockerfile that uses the secret:

# syntax=docker/dockerfile:1
FROM alpine
RUN --mount=type=secret,id=github_token \
  cat /run/secrets/github_token

In this example, the secret name is github_token . The following workflow exposes this secret using the secrets input:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          tags: user/app:latest
          secrets: |
            "github_token=${{ secrets.GITHUB_TOKEN }}"

Note

You can also expose a secret file to the build with the secret-files input:

secret-files: |
  "MY_SECRET=./secret.txt"

If you’re using GitHub secrets and need to handle multi-line values, you will need to place the key-value pair between quotes:

secrets: |
  "MYSECRET=${{ secrets.GPG_KEY }}"
  GIT_AUTH_TOKEN=abcdefghi,jklmno=0123456789
  "MYSECRET=aaaaaaaa
  bbbbbbb
  ccccccccc"
  FOO=bar
  "EMPTYLINE=aaaa

  bbbb
  ccc"
  "JSON_SECRET={""key1"":""value1"",""key2"":""value2""}"

This results in the following secrets being available to the build:

Key             Value
MYSECRET        ***********************
GIT_AUTH_TOKEN  abcdefghi,jklmno=0123456789
MYSECRET        aaaaaaaa\nbbbbbbb\nccccccccc
FOO             bar
EMPTYLINE       aaaa\n\nbbbb\nccc
JSON_SECRET     {"key1":"value1","key2":"value2"}

Note

Quote signs inside a value need to be double-escaped, as in the JSON_SECRET example above.

Export image to Docker

You may want your build result to be available in the Docker client through docker images to be able to use it in another step of your workflow:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          load: true
          tags: myimage:latest
      -
        name: Inspect
        run: |
          docker image inspect myimage:latest

Test your image before pushing it

In some cases, you might want to validate that the image works as expected before pushing it.

The following workflow implements several steps to achieve this:

  • Build and export the image to Docker
  • Test your image
  • Multi-platform build and push the image

name: ci

on:
  push:
    branches:
      - "main"

env:
  TEST_TAG: user/app:test
  LATEST_TAG: user/app:latest

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and export to Docker
        uses: docker/build-push-action@v3
        with:
          context: .
          load: true
          tags: ${{ env.TEST_TAG }}
      -
        name: Test
        run: |
          docker run --rm ${{ env.TEST_TAG }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ env.LATEST_TAG }}

Note

This workflow doesn’t actually build the linux/amd64 image twice. The image is built once, and the following steps use the internal cache from the first Build and push step. The second Build and push step only builds linux/arm64.

Local registry

For testing purposes you may need to create a local registry to push images into:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver-opts: network=host
      -
        name: Build and push to local registry
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: localhost:5000/name/app:latest
      -
        name: Inspect
        run: |
          docker buildx imagetools inspect localhost:5000/name/app:latest

Share built image between jobs

As each job is isolated in its own runner, you can’t use your built image between jobs, except if you’re using self-hosted runners. However, you can pass data between jobs in a workflow using the actions/upload-artifact and actions/download-artifact actions:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and export
        uses: docker/build-push-action@v3
        with:
          context: .
          tags: myimage:latest
          outputs: type=docker,dest=/tmp/myimage.tar
      -
        name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: myimage
          path: /tmp/myimage.tar

  use:
    runs-on: ubuntu-latest
    needs: build
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Download artifact
        uses: actions/download-artifact@v3
        with:
          name: myimage
          path: /tmp
      -
        name: Load image
        run: |
          docker load --input /tmp/myimage.tar
          docker image ls -a

Named contexts

You can define additional build contexts and access them in your Dockerfile with FROM name or --from=name. When the Dockerfile defines a stage with the same name, that stage is overwritten.

This can be useful with GitHub Actions to reuse results from other builds or pin an image to a specific tag in your workflow.
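
As a quick illustration of the --from=name form, the following sketch copies files from a hypothetical additional build context named assets (for example, one declared in the workflow with build-contexts: assets=./public):

# syntax=docker/dockerfile:1
FROM nginx:alpine
# "assets" refers to the extra build context, not a stage defined in this Dockerfile
COPY --from=assets . /usr/share/nginx/html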

Pin image to a tag

Replace alpine:latest with a pinned one:

# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello World"

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          build-contexts: |
            alpine=docker-image://alpine:3.16
          tags: myimage:latest

Use image in subsequent steps

By default, the Docker Setup Buildx action uses docker-container as a build driver, so built Docker images aren’t loaded automatically.

With named contexts you can reuse the built image:

# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello World"

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build base image
        uses: docker/build-push-action@v3
        with:
          context: base
          load: true
          tags: my-base-image:latest
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          build-contexts: |
            alpine=docker-image://my-base-image:latest
          tags: myimage:latest

Copy images between registries

Multi-platform images built using Buildx can be copied from one registry to another using the buildx imagetools create command:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            user/app:latest
            user/app:1.0.0
      -
        name: Push image to GHCR
        run: |
          docker buildx imagetools create \
            --tag ghcr.io/user/app:latest \
            --tag ghcr.io/user/app:1.0.0 \
            user/app:latest

Update Docker Hub repository description

You can update the Docker Hub repository description using a third-party action called Docker Hub Description:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
      -
        name: Update repo description
        uses: peter-evans/dockerhub-description@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
          repository: user/app

Introduction to GitHub Actions

GitHub Actions is a popular CI/CD platform for automating your build, test, and deployment pipeline. Docker provides a set of official GitHub Actions for you to use in your workflows. These official actions are reusable, easy-to-use components for building, annotating, and pushing images.

The following GitHub Actions are available:

  • Build and push Docker images: build and push Docker images with BuildKit.
  • Docker Login: sign in to a Docker registry.
  • Docker Setup Buildx: initiates a BuildKit builder.
  • Docker Metadata action: extracts metadata from Git reference and GitHub events.
  • Docker Setup QEMU: installs QEMU static binaries for multi-arch builds.
  • Docker Buildx Bake: enables using high-level builds with Bake.

Using Docker’s actions provides an easy-to-use interface, while still allowing flexibility for customizing build parameters.

Get started with GitHub Actions

This tutorial walks you through the process of setting up and using Docker GitHub Actions for building Docker images, and pushing images to Docker Hub. You will complete the following steps:

  1. Create a new repository on GitHub.
  2. Define the GitHub Actions workflow.
  3. Run the workflow.

To follow this tutorial, you need a Docker ID and a GitHub account.

Step one: Create the repository

Create a GitHub repository and configure the Docker Hub secrets.

  1. Create a new GitHub repository using this template repository.

    The repository contains a simple Dockerfile, and nothing else. Feel free to use another repository containing a working Dockerfile if you prefer.

  2. Open the repository Settings , and go to Secrets > Actions .

  3. Create a new secret named DOCKER_HUB_USERNAME, with your Docker ID as the value.

  4. Create a new Personal Access Token (PAT) for Docker Hub. You can name this token clockboxci .

  5. Add the PAT as a second secret in your GitHub repository, with the name DOCKER_HUB_ACCESS_TOKEN .

With your repository created, and secrets configured, you’re now ready for action!

Step two: Set up the workflow

Set up your GitHub Actions workflow for building and pushing the image to Docker Hub.

  1. Go to your repository on GitHub and then select the Actions tab.
  2. Select set up a workflow yourself .

    This takes you to a page for creating a new GitHub Actions workflow file in your repository, under .github/workflows/main.yml by default.

  3. In the editor window, copy and paste the following YAML configuration.

    name: ci
    
    on:
      push:
        branches:
          - "main"
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
    • name : the name of this workflow.
    • on.push.branches : specifies that this workflow should run on every push event for the branches in the list.
    • jobs : creates a job ID ( build ) and declares the type of machine that the job should run on.

For more information about the YAML syntax used here, see Workflow syntax for GitHub Actions.

Step three: Define the workflow steps

Now the essentials: what steps to run, and in what order to run them.

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/clockbox:latest

The previous YAML snippet contains a sequence of steps that:

  1. Checks out the repository on the build machine.
  2. Signs in to Docker Hub, using the Docker Login action and your Docker Hub credentials.
  3. Creates a BuildKit builder instance using the Docker Setup Buildx action.
  4. Builds the container image and pushes it to the Docker Hub repository, using Build and push Docker images.

    The with key lists a number of input parameters that configure the step:

    • context : the build context.
    • file : filepath to the Dockerfile.
    • push : tells the action to upload the image to a registry after building it.
    • tags : tags that specify where to push the image.

Add these steps to your workflow file. The full workflow configuration should look as follows:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/clockbox:latest

Run the workflow

Save the workflow file and run the job.

  1. Select Start commit and push the changes to the main branch.

    After pushing the commit, the workflow starts automatically.

  2. Go to the Actions tab. It displays the workflow.

    Selecting the workflow shows you the breakdown of all the steps.

  3. When the workflow is complete, go to your repositories on Docker Hub.

    If you see the new repository in that list, it means the GitHub Actions workflow successfully pushed the image to Docker Hub!

Next steps

This tutorial has shown you how to create a simple GitHub Actions workflow, using the official Docker actions, to build and push an image to Docker Hub.

There are many more things you can do to customize your workflow to better suit your needs. To learn more about some of the more advanced use cases, take a look at the advanced examples, such as building multi-platform images or using cache storage backends, and learn how to configure your builder.

Continuous integration with Docker

Continuous Integration (CI) is the part of the development process where you’re looking to get your code changes merged with the main branch of the project. At this point, development teams run tests and builds to vet that the code changes don’t cause any unwanted or unexpected behaviors.

(Figure: Git branches about to get merged)

There are several uses for Docker at this stage of development, even if you don’t end up packaging your application as a container image.

Docker as a build environment

Containers are reproducible, isolated environments that yield predictable results. Building and testing your application in a Docker container makes it easier to prevent unexpected behaviors from occurring. Using a Dockerfile, you define the exact requirements for the build environment, including programming runtimes, operating system, binaries, and more.

Using Docker to manage your build environment also eases maintenance. For example, updating to a new version of a programming runtime can be as simple as changing a tag or digest in a Dockerfile. No need to SSH into a pet VM to manually reinstall a newer version and update the related configuration files.
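
For example, a build-environment Dockerfile might pin its toolchain like this (a minimal sketch; the runtime image and tag are purely illustrative):

# syntax=docker/dockerfile:1
# Updating the build toolchain is a one-line change: bump the tag (or digest) below
FROM golang:1.20-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .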

Additionally, just as you expect third-party open source packages to be secure, the same should go for your build environment. You can scan and index a builder image, just like you would for any other containerized application.

The following links provide instructions for how you can get started using Docker for building your applications in CI:

  • GitHub Actions
  • GitLab
  • Circle CI
  • Render

Docker in Docker

You can also use a Dockerized build environment to build container images using Docker. That is, your build environment runs inside a container which itself is equipped to run Docker builds. This method is referred to as “Docker in Docker”.

Docker provides an official Docker image that you can use for this purpose.
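
A minimal sketch of the idea, independent of any particular CI product, using the official docker image (the network and names are hypothetical, and TLS is disabled here only for brevity):

$ docker network create ci
$ docker run -d --privileged --network ci --network-alias docker \
  -e DOCKER_TLS_CERTDIR="" docker:dind
$ docker run --rm --network ci -e DOCKER_HOST=tcp://docker:2375 \
  docker:cli docker version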

What’s next

Docker maintains a set of official GitHub Actions that you can use to build, annotate, and push container images on the GitHub Actions platform. See Introduction to GitHub Actions to learn more and get started.

Docker container driver

The buildx Docker container driver allows creation of a managed and customizable BuildKit environment in a dedicated Docker container.

Using the Docker container driver has a couple of advantages over the default Docker driver. For example:

  • Specify custom BuildKit versions to use.
  • Build multi-arch images; see QEMU.
  • Advanced options for cache import and export.

Synopsis

Run the following command to create a new builder, named container , that uses the Docker container driver:

$ docker buildx create \
  --name container \
  --driver=docker-container \
  --driver-opt=[key=value,...]
container

The following table describes the available driver-specific options that you can pass to --driver-opt :

Parameter      Type    Default         Description
image          String                  Sets the image to use for running BuildKit.
network        String                  Sets the network mode for running the BuildKit container.
cgroup-parent  String  /docker/buildx  Sets the cgroup parent of the BuildKit container if Docker is using the cgroupfs driver.
env.<key>      String                  Sets the environment variable key to the specified value in the BuildKit container.

Usage

When you run a build, Buildx pulls the specified image (by default, moby/buildkit). When the container has started, Buildx submits the build to the containerized build server.

$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...

Loading to local image store

Unlike when using the default docker driver, images built with the docker-container driver must be explicitly loaded into the local image store. Use the --load flag:

$ docker buildx build --load -t <image> --builder=container .
...
 => exporting to oci image format                                                                                                      7.7s
 => => exporting layers                                                                                                                4.9s
 => => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3                                      0.0s
 => => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f                                        0.0s
 => => sending tarball                                                                                                                 2.8s
 => importing to docker

The image becomes available in the image store when the build finishes:

$ docker image ls
REPOSITORY                       TAG               IMAGE ID       CREATED             SIZE
<image>                          latest            adf3eec768a1   2 minutes ago       197MB

Cache persistence

The docker-container driver supports cache persistence, as it stores all the BuildKit state and related cache into a dedicated Docker volume.

To persist the docker-container driver’s cache, even after recreating the driver using docker buildx rm and docker buildx create, remove the builder using the --keep-state flag:

For example, to create a builder named container and then remove it while persisting state:

# setup a builder
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT              STATUS   BUILDKIT PLATFORMS
container *     docker-container
  container0    desktop-linux                running  v0.10.5  linux/amd64
$ docker volume ls
DRIVER    VOLUME NAME
local     buildx_buildkit_container0_state

# remove the builder while persisting state
$ docker buildx rm --keep-state container
$ docker volume ls
DRIVER    VOLUME NAME
local     buildx_buildkit_container0_state

# the newly created driver with the same name will have all the state of the previous one!
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container

QEMU

The docker-container driver supports using QEMU (user mode) to build non-native platforms. Use the --platform flag to specify which architectures you want to build for.

For example, to build a Linux image for amd64 and arm64 :

$ docker buildx build \
  --builder=container \
  --platform=linux/amd64,linux/arm64 \
  -t <registry>/<image> \
  --push .

Warning

QEMU performs full-system emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.

Custom network

You can customize the network that the builder container uses. This is useful if you need to use a specific network for your builds.

For example, let’s create a network named foonet :

$ docker network create foonet

Now create a docker-container builder that will use this network:

$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "network=foonet"

Boot and inspect mybuilder :

$ docker buildx inspect --bootstrap

Inspect the builder container and see what network is being used:

$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]

Further reading

For more information on the Docker container driver, see the buildx reference.

Docker driver

The Buildx Docker driver is the default driver. It uses the BuildKit server components built directly into the Docker engine. The Docker driver requires no configuration.

Unlike the other drivers, builders using the Docker driver can’t be manually created. They’re only created automatically from the Docker context.

Images built with the Docker driver are automatically loaded to the local image store.

Synopsis

# The Docker driver is used by buildx by default
docker buildx build .

It’s not possible to configure which BuildKit version to use, or to pass any additional BuildKit parameters to a builder using the Docker driver. The BuildKit version and parameters are preset by the Docker engine internally.

If you need additional configuration and flexibility, consider using the Docker container driver.
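
As a sketch of that alternative (the builder name and BuildKit tag here are illustrative), a docker-container builder lets you pin a specific BuildKit release:

$ docker buildx create --name pinned-buildkit \
  --driver docker-container \
  --driver-opt image=moby/buildkit:v0.12.5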

Further reading

For more information on the Docker driver, see the buildx reference.

Drivers overview

Buildx drivers are configurations for how and where the BuildKit backend runs. Driver settings are customizable and allow fine-grained control of the builder. Buildx supports the following drivers:

  • docker : uses the BuildKit library bundled into the Docker daemon.
  • docker-container : creates a dedicated BuildKit container using Docker.
  • kubernetes : creates BuildKit pods in a Kubernetes cluster.
  • remote : connects directly to a manually managed BuildKit daemon.

Different drivers support different use cases. The default docker driver prioritizes simplicity and ease of use. It has limited support for advanced features like caching and output formats, and isn’t configurable. Other drivers provide more flexibility and are better at handling advanced scenarios.

The following table outlines some differences between drivers.

Feature                    docker       docker-container  kubernetes  remote
Automatically load image   ✓
Cache export               Inline only  ✓                 ✓           ✓
Tarball output                          ✓                 ✓           ✓
Multi-arch images                       ✓                 ✓           ✓
BuildKit configuration                  ✓                 ✓           Managed externally

List available builders

Use docker buildx ls to see builder instances available on your system, and the drivers they’re using.

$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT      STATUS   BUILDKIT PLATFORMS
default         docker
  default       default              running  20.10.17 linux/amd64, linux/386

Depending on your setup, you may find multiple builders in your list that use the Docker driver. For example, on a system that runs both a manually installed version of dockerd and Docker Desktop, you might see the following output from docker buildx ls:

NAME/NODE       DRIVER/ENDPOINT STATUS  BUILDKIT PLATFORMS
default         docker
  default       default         running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
  desktop-linux desktop-linux   running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6

This is because the Docker driver builders are automatically pulled from the available Docker Contexts. When you add new contexts using docker context create , these will appear in your list of buildx builders.
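
For example, creating a context that points at a remote Docker Engine (the host name here is hypothetical) also makes it appear as a Docker driver builder:

$ docker context create remote-box --docker "host=ssh://user@remote-box"
$ docker buildx ls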

The asterisk ( * ) next to the builder name indicates that this is the selected builder, which is used by default unless you specify a builder using the --builder option.

Create a new builder

Use the docker buildx create command to create a builder, and specify the driver using the --driver option.

$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>

This creates a new builder instance with a single build node. After creating a new builder you can also append new nodes to it.

To use a remote node for your builders, you can set the DOCKER_HOST environment variable or provide a remote context name when creating the builder.
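
A minimal sketch of both ideas (the builder and host names are hypothetical): create a builder, then append a second node that lives on a remote machine reachable over SSH:

$ docker buildx create --name multibuilder --driver docker-container
$ docker buildx create --name multibuilder --append ssh://docker@remote-arm64-host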

Switch between builders

To switch between different builders, use the docker buildx use <name> command. After running this command, the build commands will automatically use this builder.
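
For example, using the hypothetical multibuilder builder from the sketch above:

$ docker buildx use multibuilder
$ docker buildx build .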

What’s next

Read about each of the Buildx drivers to learn about how they work and how to use them:

  • Docker driver
  • Docker container driver
  • Kubernetes driver
  • Remote driver