Note
Atomist is currently in Early Access. Features and APIs are subject to change.
With no configuration required, Atomist already draws vulnerability data from several public advisories. You can extend this by adding your own, custom advisories if you wish.
To add your own advisories:

1. Create a repository called atomist-advisories in the GitHub account where you've installed the Atomist GitHub app.
2. In the default branch of the repository, add a new JSON file called <source>/<source-id>.json, where:
   - source is the name of your company
   - source-id is a unique ID for the advisory within source

The JSON file must follow the schema defined in the Open Source Vulnerability format. Refer to the GitHub Advisory Database for examples of advisories.
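For illustration, a minimal advisory file, saved for example as acme/ACME-2022-0001.json (both names are hypothetical), might look like the following. The field values are made up; consult the Open Source Vulnerability schema for the full list of required and optional fields.

{
  "schema_version": "1.4.0",
  "id": "ACME-2022-0001",
  "modified": "2022-06-01T00:00:00Z",
  "summary": "Hard-coded credentials in internal-lib (example data)",
  "affected": [
    {
      "package": {
        "ecosystem": "npm",
        "name": "@acme/internal-lib"
      },
      "ranges": [
        {
          "type": "SEMVER",
          "events": [
            { "introduced": "1.0.0" },
            { "fixed": "1.2.4" }
          ]
        }
      ]
    }
  ]
}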
Delete an advisory from the database by removing the corresponding JSON advisory
file from the atomist-advisories
repository.
Note
Atomist only considers additions, changes, and removals of JSON advisory files in the repository's default branch.
Note
Atomist is currently in Early Access. Features and APIs are subject to change.
This page describes the configurable settings for Atomist. Enabling any of these settings instructs Atomist to carry out an action whenever a specific Git event occurs. These features require that you install the Atomist GitHub app in your GitHub organization.
To view and manage these settings, go to the settings page on the Atomist website.
Extract the software bill of materials from container images, and match packages against data from vulnerability advisories. Identify when new vulnerabilities are introduced, and display them as a GitHub status check on the pull request that introduces them.
Pin base image tags to digests in Dockerfiles, and check for supported tags on Docker Official Images. Atomist automatically creates a pull request that pins the Dockerfile to the latest digest for the base image tag used.
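For illustration, the effect of such a pull request on a Dockerfile might look like the following. The image name is arbitrary and the digest is a placeholder, not a real value:

# Before: a mutable tag that can point to different images over time
FROM golang:1.19

# After pinning: the same tag plus an immutable digest
FROM golang:1.19@sha256:<digest-resolved-at-pinning-time>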
Note
Atomist is currently in Early Access. Features and APIs are subject to change.
To get started with Atomist, you'll need to connect your container registry and link your images to their Git source, as described in the following sections.
Before you can begin the setup, you need a Docker ID. If you don't already have one, you can register here.
This section describes how to integrate Atomist with your container registry. Follow the applicable instructions depending on the type of container registry you use. After completing this setup, Atomist will have read-only access to your registry, and gets notified about pushed or deleted images.
Using Docker Hub?
If you are using Docker Hub as your container registry, you can skip this step and go straight to linking images to Git source. Atomist integrates seamlessly with your Docker Hub organizations.
When setting up an Amazon Elastic Container Registry (ECR) integration with Atomist, the following AWS resources are required:
This procedure uses a pre-defined CloudFormation template to create the necessary IAM role and Amazon EventBridge resources. The template protects you from confused deputy attacks by ensuring a unique ExternalId, along with the appropriate condition on the IAM role statement.
Fill out all the fields, except Trusted Role ARN. The trusted role identity is known only after applying the CloudFormation template.
Choose basic auth credentials to protect the endpoint that AWS uses to notify Atomist. The URL and the basic auth credentials are parameters to the CloudFormation template.
Now create the CloudFormation stack. Before creating the stack, AWS asks you to enter three parameters:

- Url: the API endpoint copied from Atomist.
- Username, Password: basic authentication credentials for the endpoint. Must match what you entered in the Atomist workspace.

Use the following Launch Stack buttons to start reviewing the details in your AWS account.
Note
Before creating the stack, AWS will ask for acknowledgement that creating this stack requires a capability. This stack creates a role that will grant Atomist read-only access to ECR resources.
After creating the stack, copy the Value for the AssumeRoleArn key from the Outputs tab in AWS.
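If you prefer the command line, you can also read the output with the AWS CLI. A sketch, assuming the stack is named atomist-ecr-integration:

aws cloudformation describe-stacks \
--stack-name atomist-ecr-integration \
--query "Stacks[0].Outputs[?OutputKey=='AssumeRoleArn'].OutputValue" \
--output text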
Paste the copied AssumeRoleArn value into the Trusted Role ARN field on the Atomist configuration page.
Select Save Configuration.
Atomist tests the connection with your ECR registry. A green check mark displays beside the integration if a successful connection is made.
To integrate Atomist with GitHub Container Registry, connect your GitHub account, and enter a personal access token for Atomist to use when pulling container images.
Fill out the fields and select Save Configuration.
Atomist requires the Personal access token for connecting images to
private repositories. The token must have the
read:packages
scope.
Leave the Personal access token field blank if you only want to index images in public repositories.
Setting up an Atomist integration with Google Container Registry and Google Artifact Registry involves:

- Creating a service account that gives Atomist read-only access to the registry.
- Creating a PubSub subscription on the gcr topic to watch for activity in the registry.

To complete the following procedure, you need administrator permissions in the project.
Set the following environment variables. You will use them in the next steps
when configuring the Google Cloud resources, using the gcloud
CLI.
export SERVICE_ACCOUNT_ID="atomist-integration" # can be anything you like
export PROJECT_ID="YOUR_GCP_PROJECT_ID"
Create the service account.
gcloud iam service-accounts create ${SERVICE_ACCOUNT_ID} \
--project ${PROJECT_ID} \
--description="Atomist Integration Service Account" \
--display-name="Atomist Integration"
Grant the service account read-only access to the artifact registry.
The role name differs depending on whether you use Artifact Registry or Container Registry:
- roles/artifactregistry.reader for Google Artifact Registry
- roles/storage.objectViewer for Google Container Registry

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--project ${PROJECT_ID} \
--member="serviceAccount:${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/artifactregistry.reader" # change this if you use GCR
Grant Atomist access to the service account.
gcloud iam service-accounts add-iam-policy-binding "${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com" \
--project ${PROJECT_ID} \
--member="serviceAccount:atomist-bot@atomist.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountTokenCreator"
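Optionally, verify that the binding is in place. A sketch using the same variables as above:

gcloud iam service-accounts get-iam-policy \
"${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com" \
--project ${PROJECT_ID}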
Fill out the following fields:

- Project ID: the PROJECT_ID used in earlier steps.

Select Save Configuration. Atomist will test the connection. Green check marks indicate a successful connection.
Next, create a new PubSub subscription on the gcr topic in the registry. This subscription notifies Atomist about new or deleted images in the registry.
Copy the URL in the Events Webhook field to your clipboard. This will be
the PUSH_ENDPOINT_URI
for the PubSub subscription.
Define the following three variable values, in addition to the PROJECT_ID and SERVICE_ACCOUNT_ID from earlier:

- PUSH_ENDPOINT_URI: the webhook URL copied from the Atomist workspace.
- SERVICE_ACCOUNT_EMAIL: the service account address; a combination of the service account ID and project ID.
- SUBSCRIPTION: the name of the PubSub subscription (can be anything).

PUSH_ENDPOINT_URI={COPY_THIS_FROM_ATOMIST}
SERVICE_ACCOUNT_EMAIL="${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com"
SUBSCRIPTION="atomist-integration-subscription"
Create the PubSub subscription for the gcr topic.
gcloud pubsub subscriptions create ${SUBSCRIPTION} \
--topic='gcr' \
--push-auth-token-audience='atomist' \
--push-auth-service-account="${SERVICE_ACCOUNT_EMAIL}" \
--push-endpoint="${PUSH_ENDPOINT_URI}"
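Optionally, confirm that the subscription points at the expected push endpoint. A sketch:

gcloud pubsub subscriptions describe ${SUBSCRIPTION} \
--project ${PROJECT_ID}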
When the first image push is successfully detected, a green check mark on the integration page will indicate that the integration works.
Atomist can index images in a JFrog Artifactory repository by means of a monitoring agent.
The agent scans configured repositories at regular intervals, and sends newly discovered images' metadata to the Atomist data plane.
In the following example, https://hal9000.docker.com is a private registry only visible on an internal network.
docker run -ti atomist/docker-registry-broker:latest \
index-image remote \
--workspace AQ1K5FIKA \
--api-key team::6016307E4DF885EAE0579AACC71D3507BB38E1855903850CF5D0D91C5C8C6DC0 \
--artifactory-url https://hal9000.docker.com \
--artifactory-repository atomist-docker-local \
--container-registry-host atomist-docker-local.hal9000.docker.com \
--username admin \
--password password
| Parameter | Description |
|---|---|
| workspace | ID of your Atomist workspace. |
| api-key | Atomist API key. |
| artifactory-url | Base URL of the Artifactory instance. Must not contain trailing slashes. |
| artifactory-repository | The name of the container registry to watch. |
| container-registry-host | The hostname associated with the Artifactory repository containing images, if different from artifactory-url. |
| username | Username for HTTP basic authentication with Artifactory. |
| password | Password for HTTP basic authentication with Artifactory. |
Knowing the source repository of an image is a prerequisite for Atomist to interact with the Git repository. For Atomist to be able to link scanned images back to a Git repository, you must annotate the image at build time.
The image labels that Atomist requires are:
| Label | Value |
|---|---|
| org.opencontainers.image.revision | The commit revision that the image is built for. |
| com.docker.image.source.entrypoint | Path to the Dockerfile, relative to the project root. |
For more information about pre-defined OCI annotations, see the specification document on GitHub.
You can add these labels to images using the built-in Git provenance feature of Buildx, or set them manually using the --label CLI argument.
Beta
Git provenance labels in Buildx is a Beta feature.
To add the image labels using Docker Buildx, set the environment variable BUILDX_GIT_LABELS=1. Buildx then creates the labels automatically when building the image.
export BUILDX_GIT_LABELS=1
docker buildx build . -f docker/Dockerfile
Assign image labels using the --label
argument for docker build
.
docker build . -f docker/Dockerfile -t $IMAGE_NAME \
--label "org.opencontainers.image.revision=10ac8f8bdaa343677f2f394f9615e521188d736a" \
--label "com.docker.image.source.entrypoint=docker/Dockerfile"
Images built in a CI/CD environment can leverage the built-in environment variables when setting the Git revision label:
| Build tool | Environment variable |
|---|---|
| GitHub Actions | ${{ github.sha }} |
| GitHub Actions, pull requests | ${{ github.event.pull_request.head.sha }} |
| GitLab CI/CD | $CI_COMMIT_SHA |
| Docker Hub automated builds | $SOURCE_COMMIT |
| Google Cloud Build | $COMMIT_SHA |
| AWS CodeBuild | $CODEBUILD_RESOLVED_SOURCE_VERSION |
| Manually | $(git rev-parse HEAD) |
Consult the documentation for your CI/CD platform to learn which variables to use.
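For example, in a GitLab CI/CD job, the build command from the previous example might look like the following (IMAGE_NAME is a placeholder for your image reference):

docker build . -f docker/Dockerfile -t $IMAGE_NAME \
--label "org.opencontainers.image.revision=$CI_COMMIT_SHA" \
--label "com.docker.image.source.entrypoint=docker/Dockerfile"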
Atomist is now tracking bill of materials, packages, and vulnerabilities for your images! You can view your image scan results on the images overview page.
Teams use Atomist to protect downstream workloads from new vulnerabilities. It also helps teams track and remediate new vulnerabilities that impact existing workloads. The following sections describe how to integrate and configure Atomist further, for example to gain visibility into container workload systems like Kubernetes.
Note
Atomist is currently in Early Access. Features and APIs are subject to change.
Atomist is a data and automation platform for managing the software supply chain. It extracts metadata from container images, evaluates the data, and helps you understand the state of the image.
Integrating Atomist into your systems and repositories grants you essential information about the images you build, and the containers running in production. Beyond collecting and visualizing information, Atomist can help you further by giving you recommendations, notifications, validation, and more.
Example capabilities made possible with Atomist include vulnerability scanning for images, deployment tracking, and automated base image updates.
Atomist monitors your container registry for new images. When it finds a new image, it analyzes and extracts metadata about the image contents and any base images used. The metadata is uploaded to an isolated partition in the Atomist data plane where it's securely stored.
The Atomist data plane is a combination of metadata and a large knowledge graph of public software and vulnerability data. Atomist determines the state of your container by overlaying the image metadata with the knowledge graph.
Head over to the try atomist page for instructions on how to run Atomist, locally and with no strings attached.
Note
Atomist is currently in Early Access. Features and APIs are subject to change.
By integrating Atomist with a runtime environment, you can track vulnerabilities for deployed containers. This gives you context on whether your security debt is increasing or decreasing.
There are several options for how you could implement deployment tracking:
Each Atomist workspace exposes an API endpoint. Submitting a POST request to the endpoint updates Atomist about what image you are running in your environments. This lets you compare data for images you build against images of containers running in staging or production.
You can find the API endpoint URL on the Integrations page. Using this API requires an API key.
The most straightforward use is to post to this endpoint using a webhook. When deploying a new image, submit an automated POST request (using curl, for example) as part of your deployment pipeline.
$ curl <api-endpoint-url> \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <api-token>" \
-d '{"image": {"url": "<image-url>@<sha256-digest>"}}'
The API supports the following parameters in the request body:
{
"image": {
"url": "string",
"name": "string"
},
"environment": {
"name": "string"
},
"platform": {
"os": "string",
"architecture": "string",
"variant": "string"
}
}
| Parameter | Mandatory | Default | Description |
|---|---|---|---|
| image.url | Yes |  | Fully qualified reference name of the image, plus version (digest). You must specify the image version by digest. |
| image.name | No |  | Optional identifier. If you deploy many containers from the same image in any one environment, each instance must have a unique name. |
| environment.name | No | deployed | Use custom environment names to track different image versions in environments, like staging and production. |
| platform.os | No | linux | Image operating system. |
| platform.architecture | No | amd64 | Instruction set architecture. |
| platform.variant | No |  | Optional variant label. |
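As an illustration, a request that also sets a custom environment name could look like the following. The endpoint URL, API token, and image reference are placeholders, and the environment name production is just an example:

$ curl <api-endpoint-url> \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <api-token>" \
-d '{"image": {"url": "<image-url>@<sha256-digest>"}, "environment": {"name": "production"}}'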
Note
Atomist is currently in Early Access. Features and APIs are subject to change.
When installed for a GitHub organization, the Atomist GitHub app links repository activity to images. This enables Atomist to relate image tags and digests directly to specific commits in the source repository. It also opens up the possibility of incorporating image analysis into your Git workflow, for example by adding analysis checks to pull requests, or automatically raising pull requests to update and pin base image versions.
Install the GitHub app in the organization that contains the source code repositories for your Docker images.
Select Connect to GitHub and follow the authorization flow. This installs the Atomist GitHub App.
Install the app.
Note
If your GitHub account is a member of one or more organizations, GitHub prompts you to choose which account to install the app into. Select the account that contains the source repositories for your images.
After installing the app, GitHub redirects you back to Atomist.
In the repository selection menu, select what repositories you want Atomist to start watching.
If you are just looking to evaluate Atomist, start by selecting a few repositories. Once you are comfortable using Atomist, you can switch on the integration for all repositories. Selecting All repositories also includes any repository created in the future.
Important
If Atomist detects FROM commands in Dockerfiles in the selected repositories, it begins raising automated pull requests. The pull requests update the Dockerfile FROM line to specify the image version (as a digest).
Atomist is now connected with your GitHub repositories and is able to link image analyses with Git commits.
If you wish to add or remove repository access for Atomist, go to the Repositories page.
You might want to disconnect from GitHub when:

- You want to change which GitHub organization or account is connected to your Atomist workspace. To do so, disconnect the old GitHub organization or account first, then follow the instructions for connecting to GitHub for the new GitHub organization or account.
- You no longer use Atomist and want to remove its access to a GitHub organization or account.
To disconnect a GitHub account, go to the GitHub Applications settings page, then:

- Select Configure, then Uninstall. This removes the installation of the Atomist GitHub App from your GitHub organization or account.
- Select Revoke. This removes the authorization of the Atomist GitHub App from your GitHub organization or account.
Note
Atomist is currently in Early Access. Features and APIs are subject to change.
The quickest way to try Atomist is to run it on your local images, as a CLI tool. Trying it locally eliminates the need to integrate with and connect to a remote container registry. The CLI uses your local Docker daemon directly to upload the Software Bill of Materials (SBOM) to the Atomist control plane for analysis.
Before you can begin the setup, you need a Docker ID. If you don't already have one, you can register here.
Note
Only use this CLI-based method of indexing images for testing or trial purposes. For further evaluation or production use, integrate Atomist with your container registry. See get started.
In your terminal of choice, invoke the Atomist CLI tool using docker run. Update the following values:

- --workspace: the workspace ID found on the Integrations page on the Atomist website.
- --api-key: the API key you just created.
- --image: the Docker image that you want to index.

docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-ti atomist/docker-registry-broker:latest \
index-image local \
--workspace AQ1K5FIKA \
--api-key team::6016307E4DF885EAE0579AACC71D3507BB38E1855903850CF5D0D91C5C8C6DC0 \
--image docker.io/david/myimage:latest
Note
The image must have a tag (for example, myimage:latest) so that you are able to identify the image later.
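If your local image doesn't have a tag yet, you can add one before indexing. A sketch with placeholder names:

docker tag <image-id> docker.io/<your-docker-id>/myimage:latest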
The output should be similar to the following:
[info] Starting session with correlation-id c12e08d3-3bcc-4475-ab21-7114da599eaf
[info] Starting atomist/docker-vulnerability-scanner-skill 'index_image' (1f99caa) atomist/skill:0.12.0-main.44 (fe90e3c) nodejs:16.15.0
[info] Indexing image python:latest
[info] Downloading image
[info] Download completed
[info] Indexing completed
[info] Mapped packages to layers
[info] Indexing completed successfully
[info] Transacting image manifest for docker.io/david/myimage:latest with digest sha256:a8077d2b2ff4feb1588d941f00dd26560fe3a919c16a96305ce05f7b90f388f6
[info] Successfully transacted entities in team AQ1K5FIKA
[info] Image URL is https://dso.atomist.com/AQ1K5FIKA/overview/images/myimage/digests/sha256:a8077d2b2ff4feb1588d941f00dd26560fe3a919c16a96305ce05f7b90f388f6
[info] Transacting SBOM...
[info] Successfully transacted entities in team AQ1K5FIKA
[info] Transacting SBOM...
When the command exits, open the Atomist web UI, where you should see the image in the list.
Select the image name. This gets you to the list of image tags.
Since this is your first time indexing this image, the list only has one tag for now. When you integrate Atomist with your container registry, images and tags show up in this list automatically.
Select the tag name. This shows you the insights for this tag.
In this view, you can see how many vulnerabilities this image has, their severity levels, whether there is a fix version available, and more.
The tutorial ends here. Take some time to explore the different data views that Atomist presents about your image. When you're ready, head to the get started guide to learn how to start integrating Atomist in your software supply chain.
In addition to the main context key that defines the build context, each target can also define additional named contexts with a map defined with the key contexts. These values map to the --build-context flag in the build command.

Inside the Dockerfile, these contexts can be used with the FROM instruction or the --from flag.
The value can be a local source directory, a container image (with the docker-image:// prefix), a Git URL, an HTTP URL, or the name of another target in the Bake file (with the target: prefix).
# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello world"
# docker-bake.hcl
target "app" {
contexts = {
alpine = "docker-image://alpine:3.13"
}
}
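For comparison, building the app target with Bake is, in this example, roughly equivalent to passing the named context directly to a plain build invocation. A sketch:

$ docker buildx bake app
# roughly equivalent to:
$ docker buildx build --build-context alpine=docker-image://alpine:3.13 .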
# syntax=docker/dockerfile:1
FROM scratch AS src
FROM golang
COPY --from=src . .
# docker-bake.hcl
target "app" {
contexts = {
src = "../path/to/source"
}
}
To use the result of one target as a build context of another, specify the target name with the target: prefix.
# syntax=docker/dockerfile:1
FROM baseapp
RUN echo "Hello world"
# docker-bake.hcl
target "base" {
dockerfile = "baseapp.Dockerfile"
}
target "app" {
contexts = {
baseapp = "target:base"
}
}
Note that in most cases you should just use a single multi-stage Dockerfile with multiple targets for similar behavior. This approach is recommended only when you have multiple Dockerfiles that can't easily be merged into one.
Bake uses the compose-spec to parse a compose file and translate each service to a target.
# docker-compose.yml
services:
webapp-dev:
build: &build-dev
dockerfile: Dockerfile.webapp
tags:
- docker.io/username/webapp:latest
cache_from:
- docker.io/username/webapp:cache
cache_to:
- docker.io/username/webapp:cache
webapp-release:
build:
<<: *build-dev
x-bake:
platforms:
- linux/amd64
- linux/arm64
db:
image: docker.io/username/db
build:
dockerfile: Dockerfile.db
$ docker buildx bake --print
{
"group": {
"default": {
"targets": [
"db",
"webapp-dev",
"webapp-release"
]
}
},
"target": {
"db": {
"context": ".",
"dockerfile": "Dockerfile.db",
"tags": [
"docker.io/username/db"
]
},
"webapp-dev": {
"context": ".",
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:latest"
],
"cache-from": [
"docker.io/username/webapp:cache"
],
"cache-to": [
"docker.io/username/webapp:cache"
]
},
"webapp-release": {
"context": ".",
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:latest"
],
"cache-from": [
"docker.io/username/webapp:cache"
],
"cache-to": [
"docker.io/username/webapp:cache"
],
"platforms": [
"linux/amd64",
"linux/arm64"
]
}
}
}
Unlike the HCL format, there are some limitations with the compose format:

- The inherits service field is not supported, but you can use YAML anchors to reference other services, as in the example above.

.env file

You can declare default environment variables in an environment file named .env. This file is loaded from the current working directory, where the command is executed, and applied to compose definitions passed with -f.
# docker-compose.yml
services:
webapp:
image: docker.io/username/webapp:${TAG:-v1.0.0}
build:
dockerfile: Dockerfile
# .env
TAG=v1.1.0
$ docker buildx bake --print
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"docker.io/username/webapp:v1.1.0"
]
}
}
}
Note
System environment variables take precedence over environment variables in the .env file.
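For example, setting TAG in the shell for a single invocation overrides the value from the .env file. A sketch based on the compose file above:

$ TAG=v1.2.0 docker buildx bake --print webapp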
x-bake
Even if some fields are not (yet) available in the compose specification, you
can use the special extension
field x-bake
in your compose file to evaluate extra fields:
# docker-compose.yml
services:
addon:
image: ct-addon:bar
build:
context: .
dockerfile: ./Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
tags:
- ct-addon:foo
- ct-addon:alp
platforms:
- linux/amd64
- linux/arm64
cache-from:
- user/app:cache
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
pull: true
aws:
image: ct-fake-aws:bar
build:
dockerfile: ./aws.Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
secret:
- id=mysecret,src=./secret
- id=mysecret2,src=./secret2
platforms: linux/arm64
output: type=docker
no-cache: true
$ docker buildx bake --print
{
"group": {
"default": {
"targets": [
"aws",
"addon"
]
}
},
"target": {
"addon": {
"context": ".",
"dockerfile": "./Dockerfile",
"args": {
"CT_ECR": "foo",
"CT_TAG": "bar"
},
"tags": [
"ct-addon:foo",
"ct-addon:alp"
],
"cache-from": [
"user/app:cache",
"type=local,src=path/to/cache"
],
"cache-to": [
"type=local,dest=path/to/cache"
],
"platforms": [
"linux/amd64",
"linux/arm64"
],
"pull": true
},
"aws": {
"context": ".",
"dockerfile": "./aws.Dockerfile",
"args": {
"CT_ECR": "foo",
"CT_TAG": "bar"
},
"tags": [
"ct-fake-aws:bar"
],
"secret": [
"id=mysecret,src=./secret",
"id=mysecret2,src=./secret2"
],
"platforms": [
"linux/arm64"
],
"output": [
"type=docker"
],
"no-cache": true
}
}
}
Complete list of valid fields for x-bake
:
cache-from
cache-to
contexts
no-cache
no-cache-filter
output
platforms
pull
secret
ssh
tags
Bake supports loading build definitions from files, but sometimes you need even more flexibility to configure this definition.
For this use case, you can define variables inside the bake files that can be
set by the user with environment variables or by attribute definitions
in other bake files. If you wish to change a specific value for a single
invocation you can use the --set
flag from the command line.
You can define global scope attributes in HCL/JSON and use them for code reuse and setting values for variables. This means you can do a "data-only" HCL file with the values you want to set/override and use it in the list of regular output files.
# docker-bake.hcl
variable "FOO" {
default = "abc"
}
target "app" {
args = {
v1 = "pre-${FOO}"
}
}
You can use this file directly:
$ docker buildx bake --print app
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre-abc"
}
}
}
}
Or create an override configuration file:
# env.hcl
WHOAMI="myuser"
FOO="def-${WHOAMI}"
And invoke bake together with both of the files:
$ docker buildx bake -f docker-bake.hcl -f env.hcl --print app
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre-def-myuser"
}
}
}
}
You can also override target configurations from the command line with the
--set
flag:
# docker-bake.hcl
target "app" {
args = {
mybuildarg = "foo"
}
}
$ docker buildx bake --set app.args.mybuildarg=bar --set app.platform=linux/arm64 app --print
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"mybuildarg": "bar"
},
"platforms": [
"linux/arm64"
]
}
}
}
Pattern matching syntax defined in https://golang.org/pkg/path/#Match is also supported:
$ docker buildx bake --set foo*.args.mybuildarg=value # overrides build arg for all targets starting with "foo"
$ docker buildx bake --set *.platform=linux/arm64 # overrides platform for all targets
$ docker buildx bake --set foo*.no-cache # bypass caching only for targets starting with "foo"
Complete list of overridable fields:
args
cache-from
cache-to
context
dockerfile
labels
no-cache
output
platform
pull
secrets
ssh
tags
target
When multiple files are specified, one file can use variables defined in another file.
# docker-bake1.hcl
variable "FOO" {
default = upper("${BASE}def")
}
variable "BAR" {
default = "-${FOO}-"
}
target "app" {
args = {
v1 = "pre-${BAR}"
}
}
# docker-bake2.hcl
variable "BASE" {
default = "abc"
}
target "app" {
args = {
v2 = "${FOO}-post"
}
}
$ docker buildx bake -f docker-bake1.hcl -f docker-bake2.hcl --print app
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre--ABCDEF-",
"v2": "ABCDEF-post"
}
}
}
}