Amazon S3 cache

Warning

This cache backend is unreleased. You can use it today by using the moby/buildkit:master image in your Buildx driver.
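
For example, a builder running the development BuildKit image can be created with the docker-container driver's image option (a sketch; the image option is a standard docker-container driver option, and the tag is the one mentioned above):

$ docker buildx create --use --driver=docker-container \
  --driver-opt image=moby/buildkit:master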

The s3 cache storage uploads your resulting build cache to the Amazon S3 object storage service, into a specified bucket.

Note

This cache storage backend requires using a driver other than the default docker driver. See the documentation on selecting a driver for more information. To create a new builder (which can act as a simple drop-in replacement):

$ docker buildx create --use --driver=docker-container

Synopsis

$ docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .

The following table describes the available CSV parameters that you can pass to --cache-to and --cache-from.

Name               Option                Type      Default  Description
region             cache-to, cache-from  String             Geographic location.
bucket             cache-to, cache-from  String             Name of the S3 bucket used for caching.
name               cache-to, cache-from  String             Name of the cache image.
access_key_id      cache-to, cache-from  String             See authentication.
secret_access_key  cache-to, cache-from  String             See authentication.
session_token      cache-to, cache-from  String             See authentication.
mode               cache-to              min, max  min      Cache layers to export (see cache mode).

Authentication

access_key_id, secret_access_key, and session_token, if left unspecified, are read from environment variables on the BuildKit server, following the scheme for the AWS Go SDK. The environment variables are read from the server, not the Buildx client.
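
Putting it together, a build that passes credentials explicitly as cache parameters might look like this (a sketch; the region, bucket, and cache name are illustrative):

$ docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=eu-west-1,bucket=my-cache-bucket,name=my-app,mode=max,access_key_id=<key>,secret_access_key=<secret> \
  --cache-from type=s3,region=eu-west-1,bucket=my-cache-bucket,name=my-app,access_key_id=<key>,secret_access_key=<secret> .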

Further reading

For an introduction to caching see Optimizing builds with cache.

For more information on the s3 cache backend, see the BuildKit README.

Garbage collection

Unlike the docker builder prune and docker buildx prune commands, which run once when invoked, garbage collection runs periodically and follows an ordered list of prune policies.

Garbage collection runs in the BuildKit daemon. The daemon clears the build cache when the cache size becomes too big, or when the cache age expires. The following sections describe how you can configure both the size and age parameters by defining garbage collection policies.
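
You can check how much cache the daemon currently holds and trigger a one-off cleanup yourself. A minimal sketch using standard Buildx commands (garbage collection still runs on its own schedule regardless):

$ docker buildx du                          # show build cache usage
$ docker buildx prune --keep-storage=10GB   # one-off prune down to roughly 10GB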

Configuration

Depending on the driver used by your builder instance, garbage collection uses a different configuration file.

If you’re using the docker driver, garbage collection can be configured in the Docker daemon configuration file:

{
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "10GB",
      "policy": [
          {"keepStorage": "10GB", "filter": ["unused-for=2200h"]},
          {"keepStorage": "50GB", "filter": ["unused-for=3300h"]},
          {"keepStorage": "100GB", "all": true}
      ]
    }
  }
}
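
Changes to the daemon configuration only take effect after the daemon restarts, for example (assuming a systemd-based host):

$ sudo systemctl restart docker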

For other drivers, garbage collection can be configured using the BuildKit configuration file:

[worker.oci]
  gc = true
  gckeepstorage = 10000
  [[worker.oci.gcpolicy]]
    keepBytes = 512000000
    keepDuration = 172800
    filters = [ "type==source.local", "type==exec.cachemount", "type==source.git.checkout"]
  [[worker.oci.gcpolicy]]
    all = true
    keepBytes = 1024000000
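
For builders using the docker-container driver, you can pass such a BuildKit configuration file when creating the builder (a sketch; the file path is illustrative):

$ docker buildx create --use --driver=docker-container \
  --config /etc/buildkitd.toml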

Default policies

Default garbage collection policies are applied to all builders if not already set:

GC Policy rule#0:
        All:            false
        Filters:        type==source.local,type==exec.cachemount,type==source.git.checkout
        Keep Duration:  48h0m0s
        Keep Bytes:     512MB
GC Policy rule#1:
        All:            false
        Keep Duration:  1440h0m0s
        Keep Bytes:     26GB
GC Policy rule#2:
        All:            false
        Keep Bytes:     26GB
GC Policy rule#3:
        All:            true
        Keep Bytes:     26GB
  • rule#0 : if the build cache uses more than 512MB, delete the most easily reproducible data after it hasn't been used for 2 days.
  • rule#1 : remove any data not used for 60 days.
  • rule#2 : keep the unshared build cache under the cap.
  • rule#3 : if the previous policies were insufficient, start deleting internal data to keep the build cache under the cap.

Note

“Keep bytes” defaults to 10% of the size of the disk. If the disk size cannot be determined, it defaults to 2GB.

Optimizing builds with cache management

You will likely find yourself rebuilding the same Docker image over and over again, whether it's for the next release of your software or locally during development. Because building images is a common task, Docker provides several tools that speed up builds.

The most important feature for improving build speeds is Docker’s build cache.

How does the build cache work?

Understanding Docker’s build cache helps you write better Dockerfiles that result in faster builds.

Have a look at the following example, which shows a simple Dockerfile for a program written in C.

# syntax=docker/dockerfile:1
FROM ubuntu:latest

RUN apt-get update && apt-get install -y build-essential
COPY main.c Makefile /src/
WORKDIR /src/
RUN make build

Each instruction in this Dockerfile translates (roughly) to a layer in your final image. You can think of image layers as a stack, with each layer adding more content on top of the layers that came before it:

[Image: layer diagram showing the above commands chained together one after the other]

Whenever a layer changes, that layer will need to be re-built. For example, suppose you make a change to your program in the main.c file. After this change, the COPY command will have to run again in order for those changes to appear in the image. In other words, Docker will invalidate the cache for this layer.

[Image: layer diagram with the link between COPY and WORKDIR marked as invalid]

If a layer changes, all other layers that come after it are also affected. When the layer with the COPY command gets invalidated, all layers that follow will need to run again, too:

[Image: layer diagram with all links after COPY marked as invalid]

And that's the Docker build cache in a nutshell. Once a layer changes, all downstream layers need to be rebuilt as well. Even if they wouldn't build anything differently, they still need to re-run.

Note

Suppose you have a RUN apt-get update && apt-get upgrade -y step in your Dockerfile to upgrade all the software packages in your Debian-based image to the latest version.

This doesn't mean that the images you build are always up to date. Rebuilding the image on the same host one week later will still get you the same packages as before. The only way to force a rebuild is by making sure that a layer before it has changed, or by clearing the build cache using docker builder prune.
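
If you need a single fully fresh build without clearing the whole cache, you can also bypass the cache for one invocation with the standard --no-cache flag:

$ docker build --no-cache -t myimage .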

How can I use the cache efficiently?

Now that you understand how the cache works, you can begin to use the cache to your advantage. While the cache will automatically work on any docker build that you run, you can often refactor your Dockerfile to get even better performance. These optimizations can save precious seconds (or even minutes) off of your builds.

Order your layers

Putting the commands in your Dockerfile into a logical order is a great place to start. Because a change causes a rebuild for steps that follow, try to make expensive steps appear near the beginning of the Dockerfile. Steps that change often should appear near the end of the Dockerfile, to avoid triggering rebuilds of layers that haven’t changed.

Consider the following example, a Dockerfile snippet that runs a JavaScript build from the source files in the current directory:

# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY . .          # Copy over all files in the current directory
RUN npm install   # Install dependencies
RUN npm build     # Run build

This Dockerfile is rather inefficient. Updating any file causes a reinstall of all dependencies every time you build the Docker image, even if the dependencies didn't change since last time!

Instead, the COPY command can be split in two. First, copy over the package management files (in this case, package.json and yarn.lock). Then, install the dependencies. Finally, copy over the project source code, which is subject to frequent change.

# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY package.json yarn.lock .    # Copy package management files
RUN npm install                  # Install dependencies
COPY . .                         # Copy over project files
RUN npm build                    # Run build

By installing dependencies in earlier layers of the Dockerfile, there is no need to rebuild those layers when a project file has changed.

Keep layers small

One of the best things you can do to speed up image building is to put less stuff into your build. Fewer parts mean the cache stays smaller, and there are fewer things that could be out of date and need rebuilding.

To get started, here are a few tips and tricks:

Don’t include unnecessary files

Be considerate of what files you add to the image.

Running a command like COPY . /src will copy your entire build context into the image. If you've got logs, package manager artifacts, or even previous build results in your current directory, those will be copied over too. This could make your image larger than it needs to be, especially as those files are usually not useful.

Avoid adding unnecessary files to your builds by explicitly stating the files or directories you intend to copy over. For example, you might only want to add a Makefile and your src directory to the image filesystem. In that case, consider adding this to your Dockerfile:

COPY ./src ./Makefile /src

As opposed to this:

COPY . /src

You can also create a .dockerignore file, and use that to specify which files and directories to exclude from the build context.
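
A minimal sketch of such a file (the entries are illustrative; adjust them to your project):

# .dockerignore
.git
node_modules
*.log
build/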

Use your package manager wisely

Most Docker image builds involve using a package manager to help install software into the image. Debian has apt, Alpine has apk, Python has pip, Node.js has npm, and so on.

When installing packages, be considerate. Make sure to only install the packages that you need. If you’re not going to use them, don’t install them. Remember that this might be a different list for your local development environment and your production environment. You can use multi-stage builds to split these up efficiently.
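
For example, on a Debian-based image you might skip recommended-but-unneeded packages and clean up the package lists in the same layer. A sketch of this common apt pattern (not specific to this guide):

RUN apt-get update && \
    apt-get install -y --no-install-recommends git && \
    rm -rf /var/lib/apt/lists/*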

Use the dedicated RUN cache

The RUN command supports a specialized cache, which you can use when you need a more fine-grained cache between runs. For example, when installing packages, you don’t always need to fetch all of your packages from the internet each time. You only need the ones that have changed.

To solve this problem, you can use RUN --mount type=cache . For example, for your Debian-based image you might use the following:

RUN \
    --mount=type=cache,target=/var/cache/apt \
    apt-get update && apt-get install -y git

Using the explicit cache with the --mount flag keeps the contents of the target directory preserved between builds. When this layer needs to be rebuilt, it'll use the apt cache in /var/cache/apt.
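
The same pattern works for other package managers. For instance, a sketch for npm (the target assumes npm's default cache directory for the root user):

RUN --mount=type=cache,target=/root/.npm \
    npm install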

Minimize the number of layers

Keeping your layers small is a good first step, and the logical next step is to reduce the number of layers that you have. Fewer layers mean there is less to rebuild when something in your Dockerfile changes, so your build completes faster.

The following sections outline some tips you can use to keep the number of layers to a minimum.

Use an appropriate base image

Docker provides over 170 pre-built official images for almost every common development scenario. For example, if you’re building a Java web server, use a dedicated image such as openjdk . Even when there’s not an official image for what you might want, Docker provides images from verified publishers and open source partners that can help you on your way. The Docker community often produces third-party images to use as well.

Using official images saves you time and ensures you stay up to date and secure by default.
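
For instance, a minimal sketch of a Java service image built on the official openjdk image (the tag and jar path are illustrative):

# syntax=docker/dockerfile:1
FROM openjdk:17-jdk-slim
COPY target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]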

Use multi-stage builds

Multi-stage builds let you split up your Dockerfile into multiple distinct stages. Each stage completes a step in the build process, and you can bridge the different stages to create your final image at the end. The Docker builder will work out dependencies between the stages and run them using the most efficient strategy. This even allows you to run multiple builds concurrently.

Multi-stage builds use two or more FROM commands. The following example illustrates building a simple web server that serves HTML from your docs directory in Git:

# syntax=docker/dockerfile:1

# stage 1
FROM alpine as git
RUN apk add git

# stage 2
FROM git as fetch
WORKDIR /repo
RUN git clone https://github.com/your/repository.git .

# stage 3
FROM nginx as site
COPY --from=fetch /repo/docs/ /usr/share/nginx/html

This build has three stages: git, fetch, and site. In this example, git is the base for the fetch stage. The final site stage uses the COPY --from flag to copy the data from the docs/ directory into the Nginx server directory.

Each stage has only a few instructions, and when possible, Docker will run these stages in parallel. Only the instructions in the site stage will end up as layers in the final image. The entire git history doesn’t get embedded into the final result, which helps keep the image small and secure.

Combine commands together wherever possible

Most Dockerfile commands, and RUN commands in particular, can often be joined together. For example, instead of using RUN like this:

RUN echo "the first command"
RUN echo "the second command"

It's possible to run both of these commands inside a single RUN, which means that they will share the same cache! This is achievable using the && shell operator to run one command after another:

RUN echo "the first command" && echo "the second command"
# or to split to multiple lines
RUN echo "the first command" && \
    echo "the second command"

Another shell feature that allows you to simplify and concatenate commands in a neat way is the heredoc. It enables you to create multi-line scripts with good readability:

RUN <<EOF
set -e
echo "the first command"
echo "the second command"
EOF

(Note the set -e command to exit immediately after any command fails, instead of continuing.)

Other resources

For more information on using cache to do efficient builds, see:

  • Garbage collection
  • Cache storage backends
Configuring your builder

This page contains instructions on configuring your BuildKit instances when using our Setup Buildx Action.

BuildKit container logs

To display BuildKit container logs when using the docker-container driver, you must either enable step debug logging, or set the --debug buildkitd flag in the Docker Setup Buildx action:

name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          buildkitd-flags: --debug
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .

Logs will be available at the end of a job:

[Image: BuildKit container logs shown in the job output]

Daemon configuration

You can provide a BuildKit configuration to your builder if you're using the docker-container driver (the default) with the config or config-inline inputs:

Registry mirror

You can configure a registry mirror using an inline block directly in your workflow with the config-inline input:

name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          config-inline: |
            [registry."docker.io"]
              mirrors = ["mirror.gcr.io"]

For more information about using a registry mirror, see Registry mirror.

Max parallelism

You can limit the parallelism of the BuildKit solver which is particularly useful for low-powered machines.

You can use the config-inline input as in the previous example, or you can point the config input at a dedicated BuildKit config file in your repository:

# .github/buildkitd.toml
[worker.oci]
  max-parallelism = 4

name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          config: .github/buildkitd.toml

Append additional nodes to the builder

Buildx supports running builds on multiple machines. This is useful for building multi-platform images on native nodes for more complicated cases that aren’t handled by QEMU. Building on native nodes generally has better performance, and allows you to distribute the build across multiple machines.

You can append nodes to the builder you're creating using the append option. It takes its input as a YAML string document, to work around a limitation intrinsic to GitHub Actions: input fields can only contain strings:

Name             Type    Description
name             String  Name of the node. If empty, it's the name of the builder it belongs to, with an index number suffix. Setting this is useful if you want to modify or remove a node in a later step of your workflow.
endpoint         String  Docker context or endpoint of the node to add to the builder.
driver-opts      List    List of additional driver-specific options.
buildkitd-flags  String  Flags for the buildkitd daemon.
platforms        String  Fixed platforms for the node. If not empty, values take priority over the detected ones.

Here is an example using remote nodes with the remote driver and TLS authentication:

name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: remote
          endpoint: tcp://oneprovider:1234
          append: |
            - endpoint: tcp://graviton2:1234
              platforms: linux/arm64
            - endpoint: tcp://linuxone:1234
              platforms: linux/s390x
        env:
          BUILDER_NODE_0_AUTH_TLS_CACERT: ${{ secrets.ONEPROVIDER_CA }}
          BUILDER_NODE_0_AUTH_TLS_CERT: ${{ secrets.ONEPROVIDER_CERT }}
          BUILDER_NODE_0_AUTH_TLS_KEY: ${{ secrets.ONEPROVIDER_KEY }}
          BUILDER_NODE_1_AUTH_TLS_CACERT: ${{ secrets.GRAVITON2_CA }}
          BUILDER_NODE_1_AUTH_TLS_CERT: ${{ secrets.GRAVITON2_CERT }}
          BUILDER_NODE_1_AUTH_TLS_KEY: ${{ secrets.GRAVITON2_KEY }}
          BUILDER_NODE_2_AUTH_TLS_CACERT: ${{ secrets.LINUXONE_CA }}
          BUILDER_NODE_2_AUTH_TLS_CERT: ${{ secrets.LINUXONE_CERT }}
          BUILDER_NODE_2_AUTH_TLS_KEY: ${{ secrets.LINUXONE_KEY }}

Authentication for remote builders

The following examples show how to handle authentication for remote builders, using SSH or TLS.

SSH authentication

To be able to connect to an SSH endpoint using the docker-container driver, you have to set up the SSH private key and configuration on the GitHub Runner:

name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up SSH
        uses: MrSquaare/ssh-setup-action@523473d91581ccbf89565e12b40faba93f2708bd # v1.1.0
        with:
          host: graviton2
          private-key: ${{ secrets.SSH_PRIVATE_KEY }}
          private-key-name: aws_graviton2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          endpoint: ssh://me@graviton2

TLS authentication

You can also set up a remote BuildKit instance using the remote driver. To ease the integration in your workflow, you can use environment variables that set up authentication using the BuildKit client certificates for the tcp:// endpoint:

  • BUILDER_NODE_<idx>_AUTH_TLS_CACERT
  • BUILDER_NODE_<idx>_AUTH_TLS_CERT
  • BUILDER_NODE_<idx>_AUTH_TLS_KEY

The <idx> placeholder is the position of the node in the list of nodes.

name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: remote
          endpoint: tcp://graviton2:1234
        env:
          BUILDER_NODE_0_AUTH_TLS_CACERT: ${{ secrets.GRAVITON2_CA }}
          BUILDER_NODE_0_AUTH_TLS_CERT: ${{ secrets.GRAVITON2_CERT }}
          BUILDER_NODE_0_AUTH_TLS_KEY: ${{ secrets.GRAVITON2_KEY }}

Standalone mode

If you don’t have the Docker CLI installed on the GitHub Runner, the Buildx binary gets invoked directly, instead of calling it as a Docker CLI plugin. This can be useful if you want to use the kubernetes driver in your self-hosted runner:

name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: kubernetes
      -
        name: Build
        run: |
          buildx build .

Isolated builders

The following example shows how you can select different builders for different jobs.

An example scenario where this might be useful is when you are using a monorepo, and you want to pin different packages to specific builders. For example, some packages may be particularly resource-intensive to build and require more compute. Or they may require a builder equipped with a particular capability or hardware.

For more information about remote builders, see the remote driver documentation and the append builder nodes example.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        uses: docker/setup-buildx-action@v2
        id: builder1
      -
        uses: docker/setup-buildx-action@v2
        id: builder2
      -
        name: Builder 1 name
        run: echo ${{ steps.builder1.outputs.name }}
      -
        name: Builder 2 name
        run: echo ${{ steps.builder2.outputs.name }}
      -
        name: Build against builder1
        uses: docker/build-push-action@v3
        with:
          builder: ${{ steps.builder1.outputs.name }}
          context: .
          target: mytarget1
      -
        name: Build against builder2
        uses: docker/build-push-action@v3
        with:
          builder: ${{ steps.builder2.outputs.name }}
          context: .
          target: mytarget2
Example workflows

This page showcases different examples of how you can customize and use the Docker GitHub Actions in your CI pipelines.

Push to multi-registries

The following workflow will connect you to Docker Hub and GitHub Container Registry and push the image to both registries:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            user/app:latest
            user/app:1.0.0
            ghcr.io/user/app:latest
            ghcr.io/user/app:1.0.0

Manage tags and labels

If you want automatic tag management and OCI Image Format Specification labels, you can handle them in a dedicated setup step. The following workflow uses the Docker Metadata Action to manage tags and labels based on GitHub Actions events and Git metadata:

name: ci

on:
  schedule:
    - cron: "0 10 * * *"
  push:
    branches:
      - "**"
    tags:
      - "v*.*.*"
  pull_request:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Docker meta
        id: meta
        uses: docker/metadata-action@v4
        with:
          # list of Docker images to use as base name for tags
          images: |
            name/app
            ghcr.io/username/app
          # generate Docker tags based on the following events/attributes
          tags: |
            type=schedule
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GHCR
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

Multi-platform images

You can build multi-platform images using the platforms option, as described in the following example.

Note

  • For a list of available platforms, see the Docker Setup Buildx action.
  • If you want support for more platforms, you can use QEMU with the Docker Setup QEMU action.
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: user/app:latest

Cache

This page contains examples of using the cache storage backends with GitHub Actions.

Note

See Cache storage backends for more details about cache storage backends.

Inline cache

In most cases you want to use the inline cache exporter. However, note that the inline cache exporter only supports min cache mode. To use max cache mode, push the image and the cache separately using the registry cache exporter with the cache-to option, as shown in the registry cache example.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=registry,ref=user/app:latest
          cache-to: type=inline

Registry cache

You can import/export cache from a cache manifest or (special) image configuration on the registry with the registry cache exporter.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=registry,ref=user/app:buildcache
          cache-to: type=registry,ref=user/app:buildcache,mode=max

GitHub cache

Cache backend API

Warning

This cache exporter is experimental. Please provide feedback on BuildKit repository if you experience any issues.

The GitHub Actions cache exporter backend uses the GitHub Cache API to fetch and upload cache blobs. That's why you should only use this cache backend in a GitHub Actions workflow, as the url ($ACTIONS_CACHE_URL) and token ($ACTIONS_RUNTIME_TOKEN) attributes only get populated in a workflow context.

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

Local cache

Warning

At the moment, old cache entries aren’t deleted, so the cache size keeps growing. The following example uses the Move cache step as a workaround (see moby/buildkit#1896 for more info).

You can also leverage the GitHub cache by combining the actions/cache action with the local cache exporter:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Cache Docker layers
        uses: actions/cache@v3
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
      -
        # Temp fix
        # https://github.com/docker/build-push-action/issues/252
        # https://github.com/moby/buildkit/issues/1896
        name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache

Secrets

The following example uses and exposes the GITHUB_TOKEN secret as provided by GitHub in your workflow.

First, create a Dockerfile that uses the secret:

# syntax=docker/dockerfile:1
FROM alpine
RUN --mount=type=secret,id=github_token \
  cat /run/secrets/github_token

In this example, the secret name is github_token . The following workflow exposes this secret using the secrets input:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          tags: user/app:latest
          secrets: |
            "github_token=${{ secrets.GITHUB_TOKEN }}"

Note

You can also expose a secret file to the build with the secret-files input:

secret-files: |
  "MY_SECRET=./secret.txt"

If you're using GitHub secrets and need to handle multi-line values, you will need to place the key-value pair between quotes:

secrets: |
  "MYSECRET=${{ secrets.GPG_KEY }}"
  GIT_AUTH_TOKEN=abcdefghi,jklmno=0123456789
  "MYSECRET=aaaaaaaa
  bbbbbbb
  ccccccccc"
  FOO=bar
  "EMPTYLINE=aaaa

  bbbb
  ccc"
  "JSON_SECRET={""key1"":""value1"",""key2"":""value2""}"
Key             Value
MYSECRET        ***********************
GIT_AUTH_TOKEN  abcdefghi,jklmno=0123456789
MYSECRET        aaaaaaaa\nbbbbbbb\nccccccccc
FOO             bar
EMPTYLINE       aaaa\n\nbbbb\nccc
JSON_SECRET     {"key1":"value1","key2":"value2"}

Note

Double escapes are needed for quote signs.

Export image to Docker

You may want your build result to be available in the Docker client through docker images, so you can use it in another step of your workflow:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          load: true
          tags: myimage:latest
      -
        name: Inspect
        run: |
          docker image inspect myimage:latest

Test your image before pushing it

In some cases, you might want to validate that the image works as expected before pushing it.

The following workflow implements several steps to achieve this:

  • Build and export the image to Docker
  • Test your image
  • Multi-platform build and push the image
name: ci

on:
  push:
    branches:
      - "main"

env:
  TEST_TAG: user/app:test
  LATEST_TAG: user/app:latest

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and export to Docker
        uses: docker/build-push-action@v3
        with:
          context: .
          load: true
          tags: ${{ env.TEST_TAG }}
      -
        name: Test
        run: |
          docker run --rm ${{ env.TEST_TAG }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ env.LATEST_TAG }}

Note

This workflow doesn't actually build the linux/amd64 image twice. The image is built once, and the subsequent Build and push step uses the internal cache from the Build and export to Docker step. The Build and push step then only builds linux/arm64.

Local registry

For testing purposes you may need to create a local registry to push images into:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver-opts: network=host
      -
        name: Build and push to local registry
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: localhost:5000/name/app:latest
      -
        name: Inspect
        run: |
          docker buildx imagetools inspect localhost:5000/name/app:latest

Share built image between jobs

As each job is isolated in its own runner, you can't use your built image between jobs, unless you're using self-hosted runners. However, you can pass data between jobs in a workflow using the actions/upload-artifact and actions/download-artifact actions:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and export
        uses: docker/build-push-action@v3
        with:
          context: .
          tags: myimage:latest
          outputs: type=docker,dest=/tmp/myimage.tar
      -
        name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: myimage
          path: /tmp/myimage.tar

  use:
    runs-on: ubuntu-latest
    needs: build
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Download artifact
        uses: actions/download-artifact@v3
        with:
          name: myimage
          path: /tmp
      -
        name: Load image
        run: |
          docker load --input /tmp/myimage.tar
          docker image ls -a

Named contexts

You can define additional build contexts, and access them in your Dockerfile with FROM name or --from=name. When the Dockerfile defines a stage with the same name, that stage is overwritten.

This can be useful with GitHub Actions to reuse results from other builds or pin an image to a specific tag in your workflow.

Pin image to a tag

Replace alpine:latest with a pinned one:

# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello World"
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          build-contexts: |
            alpine=docker-image://alpine:3.16
          tags: myimage:latest

Use image in subsequent steps

By default, the Docker Setup Buildx action uses docker-container as a build driver, so built Docker images aren’t loaded automatically.

With named contexts you can reuse the built image:

# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello World"
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build base image
        uses: docker/build-push-action@v3
        with:
          context: base
          load: true
          tags: my-base-image:latest
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          build-contexts: |
            alpine=docker-image://my-base-image:latest
          tags: myimage:latest

Copy images between registries

Multi-platform images built using Buildx can be copied from one registry to another using the buildx imagetools create command:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            user/app:latest
            user/app:1.0.0
      -
        name: Push image to GHCR
        run: |
          docker buildx imagetools create \
            --tag ghcr.io/user/app:latest \
            --tag ghcr.io/user/app:1.0.0 \
            user/app:latest

Update Docker Hub repository description

You can update the Docker Hub repository description using the third-party Docker Hub Description action:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
      -
        name: Update repo description
        uses: peter-evans/dockerhub-description@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
          repository: user/app
Introduction to GitHub Actions

GitHub Actions is a popular CI/CD platform for automating your build, test, and deployment pipeline. Docker provides a set of official GitHub Actions for you to use in your workflows. These official actions are reusable, easy-to-use components for building, annotating, and pushing images.

The following GitHub Actions are available:

  • Build and push Docker images: build and push Docker images with BuildKit.
  • Docker Login: sign in to a Docker registry.
  • Docker Setup Buildx: initiates a BuildKit builder.
  • Docker Metadata action: extracts metadata from Git reference and GitHub events.
  • Docker Setup QEMU: installs QEMU static binaries for multi-arch builds.
  • Docker Buildx Bake: enables using high-level builds with Bake.

Using Docker’s actions provides an easy-to-use interface, while still allowing flexibility for customizing build parameters.

Get started with GitHub Actions

This tutorial walks you through the process of setting up and using Docker GitHub Actions for building Docker images, and pushing images to Docker Hub. You will complete the following steps:

  1. Create a new repository on GitHub.
  2. Define the GitHub Actions workflow.
  3. Run the workflow.

To follow this tutorial, you need a Docker ID and a GitHub account.

Step one: Create the repository

Create a GitHub repository and configure the Docker Hub secrets.

  1. Create a new GitHub repository using this template repository.

    The repository contains a simple Dockerfile, and nothing else. Feel free to use another repository containing a working Dockerfile if you prefer.

  2. Open the repository Settings , and go to Secrets > Actions .

Create a new secret named DOCKER_HUB_USERNAME, with your Docker ID as the value.

  4. Create a new Personal Access Token (PAT) for Docker Hub. You can name this token clockboxci .

  5. Add the PAT as a second secret in your GitHub repository, with the name DOCKER_HUB_ACCESS_TOKEN .

With your repository created, and secrets configured, you’re now ready for action!

Step two: Set up the workflow

Set up your GitHub Actions workflow for building and pushing the image to Docker Hub.

  1. Go to your repository on GitHub and then select the Actions tab.
  2. Select set up a workflow yourself .

This takes you to a page for creating a new GitHub Actions workflow file in your repository, under .github/workflows/main.yml by default.

  3. In the editor window, copy and paste the following YAML configuration.

    name: ci
    
    on:
      push:
        branches:
          - "main"
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
    • name : the name of this workflow.
    • on.push.branches : specifies that this workflow should run on every push event for the branches in the list.
    • jobs : creates a job ID ( build ) and declares the type of machine that the job should run on.

For more information about the YAML syntax used here, see Workflow syntax for GitHub Actions.

Step three: Define the workflow steps

Now the essentials: what steps to run, and in what order to run them.

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/clockbox:latest

The previous YAML snippet contains a sequence of steps that:

  1. Checks out the repository on the build machine.
  2. Signs in to Docker Hub, using the Docker Login action and your Docker Hub credentials.
  3. Creates a BuildKit builder instance using the Docker Setup Buildx action.
  4. Builds the container image and pushes it to the Docker Hub repository, using Build and push Docker images.

The with key lists a number of input parameters that configure the step:

    • context : the build context.
    • file : filepath to the Dockerfile.
    • push : tells the action to upload the image to a registry after building it.
    • tags : tags that specify where to push the image.

Add these steps to your workflow file. The full workflow configuration should look as follows:

name: ci

on:
  push:
    branches:
      - "main"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/clockbox:latest

Run the workflow

Save the workflow file and run the job.

  1. Select Start commit and push the changes to the main branch.

    After pushing the commit, the workflow starts automatically.

  2. Go to the Actions tab. It displays the workflow.

    Selecting the workflow shows you the breakdown of all the steps.

  3. When the workflow is complete, go to your repositories on Docker Hub.

If you see the new repository in that list, it means the GitHub Actions workflow successfully pushed the image to Docker Hub!

Next steps

This tutorial has shown you how to create a simple GitHub Actions workflow, using the official Docker actions, to build and push an image to Docker Hub.

There are many more things you can do to customize your workflow to better suit your needs. To learn more about some of the more advanced use cases, take a look at the advanced examples, such as building multi-platform images or using cache storage backends, and learn how to configure your builder.
