
Install Docker Buildx

Docker Desktop

Docker Buildx is included by default in Docker Desktop.

Docker Engine via package manager

Docker Linux packages also include Docker Buildx when installed using the .deb or .rpm packages.

Install using a Dockerfile

Here is how to install and use Buildx inside a Dockerfile through the docker/buildx-bin image:

# syntax=docker/dockerfile:1
FROM docker
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version

Download manually


This section describes unattended installation of the Buildx component. These instructions are mainly suitable for testing purposes. We do not recommend installing Buildx by manual download in production environments, as manually installed binaries are not updated automatically with security updates.

On Windows, macOS, and Linux workstations we recommend that you install Docker Desktop instead. For Linux servers, we recommend that you follow the instructions specific for your distribution.

You can also download the latest binary from the releases page on GitHub.

Rename the relevant binary and copy it to the destination matching your OS:

OS Binary name Destination folder
Linux docker-buildx $HOME/.docker/cli-plugins
macOS docker-buildx $HOME/.docker/cli-plugins
Windows docker-buildx.exe %USERPROFILE%\.docker\cli-plugins

Alternatively, copy it into one of these folders to install it system-wide.

On Unix environments:

  • /usr/local/lib/docker/cli-plugins OR /usr/local/libexec/docker/cli-plugins
  • /usr/lib/docker/cli-plugins OR /usr/libexec/docker/cli-plugins

On Windows:

  • C:\ProgramData\Docker\cli-plugins
  • C:\Program Files\Docker\cli-plugins


On Unix environments, you may also need to make the binary executable with chmod +x:

$ chmod +x ~/.docker/cli-plugins/docker-buildx
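The manual steps above can be sketched as a short script for Linux or macOS. The downloaded binary is represented by a temporary stand-in file so the sketch is self-contained; in practice it is the file you fetched from the releases page:

```shell
# Sketch of the manual install steps, assuming a per-user installation.
DOWNLOADED=$(mktemp)                         # stand-in for the downloaded release binary
PLUGIN_DIR="$HOME/.docker/cli-plugins"

mkdir -p "$PLUGIN_DIR"                       # create the plugin directory if missing
cp "$DOWNLOADED" "$PLUGIN_DIR/docker-buildx" # rename to the plugin name Docker expects
chmod +x "$PLUGIN_DIR/docker-buildx"         # CLI plugins must be executable
```

Run docker buildx version afterwards to confirm the plugin is picked up.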

Set Buildx as the default builder

Running the command docker buildx install sets up the docker build command as an alias to docker buildx build. As a result, docker build uses the current Buildx builder.

To remove this alias, run docker buildx uninstall .

Build release notes

This page contains information about the new features, improvements, and bug fixes in Docker Buildx.



Bug fixes and enhancements

  • The inspect command now displays the BuildKit version in use docker/buildx#1279
  • Fixed a regression when building Compose files that contain services without a build block docker/buildx#1277

For more details, see the complete release notes in the Buildx GitHub repository.




  • Support for the new remote driver that you can use to connect to any already running BuildKit instance docker/buildx#1078 docker/buildx#1093 docker/buildx#1094 docker/buildx#1103 docker/buildx#1134 docker/buildx#1204
  • You can now load a Dockerfile from standard input even when the build context comes from an external Git or HTTP URL docker/buildx#994
  • Build commands now support the new build context type oci-layout:// for loading build context from local OCI layout directories. Note that this feature depends on an unreleased BuildKit feature, and a builder instance from moby/buildkit:master needs to be used until BuildKit v0.11 is released docker/buildx#1173
  • You can now use the new --print flag to run helper functions supported by the BuildKit frontend performing the build and print their results. You can use this feature in a Dockerfile to show the build arguments and secrets that the current build supports with --print=outline and to list all available Dockerfile stages with --print=targets. This feature is experimental for gathering early feedback and requires setting the BUILDX_EXPERIMENTAL=1 environment variable. We plan to update and extend this feature in the future without maintaining backward compatibility docker/buildx#1100 docker/buildx#1272
  • You can now use the new --invoke flag to launch interactive containers from build results for an interactive debugging cycle. You can reload these containers with code changes or restore them to their initial state from the special monitor mode. This feature is experimental for gathering early feedback and requires setting the BUILDX_EXPERIMENTAL=1 environment variable. We plan to update and extend this feature in the future without maintaining backward compatibility docker/buildx#1168 docker/buildx#1257 docker/buildx#1259
  • Buildx now understands the BUILDKIT_COLORS and NO_COLOR environment variables to customize or disable the colors of the interactive build progress bar docker/buildx#1230 docker/buildx#1226
  • buildx ls command now shows the current BuildKit version of each builder instance docker/buildx#998
  • The bake command now loads the .env file automatically when building Compose files, for compatibility docker/buildx#1261
  • Bake now supports Compose files with cache_to definitions docker/buildx#1155
  • Bake now supports the new builtin function timestamp() to access the current time docker/buildx#1214
  • Bake now supports Compose build secrets definition docker/buildx#1069
  • Additional build context configuration is now supported in Compose files via x-bake docker/buildx#1256
  • Inspecting builder now shows current driver options configuration docker/buildx#1003 docker/buildx#1066


  • Updated the Compose Specification to 1.4.0 docker/buildx#1246 docker/buildx#1251

Bug fixes and enhancements

  • The buildx ls command output has been updated with better access to errors from different builders docker/buildx#1109
  • The buildx create command now performs additional validation of builder parameters to avoid creating a builder instance with invalid configuration docker/buildx#1206
  • The buildx imagetools create command can now create new multi-platform images even if the source subimages are located on different repositories or registries docker/buildx#1137
  • You can now set the default builder config that is used when creating builder instances without passing custom --config value docker/buildx#1111
  • The Docker driver can now detect if the dockerd instance supports initially disabled BuildKit features like multi-platform images docker/buildx#1260 docker/buildx#1262
  • Compose files using targets with . in the name are now converted to use _ so the selector keys can still be used in such targets docker/buildx#1011
  • Included an additional validation for checking valid driver configurations docker/buildx#1188 docker/buildx#1273
  • The remove command now displays the removed builder and forbids removing context builders docker/buildx#1128
  • Enable Azure authentication when using Kubernetes driver docker/buildx#974
  • Add tolerations handling for kubernetes driver docker/buildx#1045 docker/buildx#1053
  • Replace deprecated seccomp annotations with securityContext in kubernetes driver docker/buildx#1052
  • Fix panic on handling manifests with nil platform docker/buildx#1144
  • Fix using duration filter with prune command docker/buildx#1252
  • Fix merging multiple JSON files on Bake definition docker/buildx#1025
  • Fix issues where the implicit builder created from a Docker context had an invalid configuration or dropped connections docker/buildx#1129
  • Fix conditions for showing no-output warning when using named contexts docker/buildx#968
  • Fix duplicating builders when builder instance and docker context have the same name docker/buildx#1131
  • Fix printing unnecessary SSH warning logs docker/buildx#1085
  • Fix possible panic when using an empty variable block with Bake JSON definition docker/buildx#1080
  • Fix image tools commands not handling --builder flag correctly docker/buildx#1067
  • Fix using custom image together with rootless option docker/buildx#1063

For more details, see the complete release notes in the Buildx GitHub repository.




  • Update Compose spec used by buildx bake to v1.2.1 to fix parsing ports definition docker/buildx#1033

Bug fixes and enhancements

  • Fix possible crash on handling progress streams from BuildKit v0.10 docker/buildx#1042
  • Fix parsing groups in buildx bake when already loaded by a parent group docker/buildx#1021

For more details, see the complete release notes in the Buildx GitHub repository.



Bug fixes and enhancements

  • Fix possible panic on handling build context scanning errors docker/buildx#1005
  • Allow . on Compose target names in buildx bake for backward compatibility docker/buildx#1018

For more details, see the complete release notes in the Buildx GitHub repository.




  • Build command now accepts --build-context flag to define additional named build contexts for your builds docker/buildx#904
  • Bake definitions now support defining dependencies between targets and using the result of one target in another build docker/buildx#928 docker/buildx#965 docker/buildx#963 docker/buildx#962 docker/buildx#981
  • imagetools inspect now accepts --format flag allowing access to config and buildinfo for specific images docker/buildx#854 docker/buildx#972
  • The new flag --no-cache-filter allows configuring the build so that it ignores cache only for specified Dockerfile stages docker/buildx#860
  • Builds can now show a summary of warnings set by the build frontend docker/buildx#892
  • The new build argument BUILDKIT_INLINE_BUILDINFO_ATTRS allows opting in to embedding build attributes in the resulting image docker/buildx#908
  • The new flag --keep-buildkitd allows keeping the BuildKit daemon running when removing a builder docker/buildx#852

Bug fixes and enhancements

  • --metadata-file output now supports embedded structure types docker/buildx#946
  • buildx rm now accepts new flag --all-inactive for removing all builders that are not currently running docker/buildx#885
  • Proxy config is now read from Docker configuration file and sent with build requests for backward compatibility docker/buildx#959
  • Support host networking in Compose docker/buildx#905 docker/buildx#880
  • Bake files can now be read from stdin with -f - docker/buildx#864
  • --iidfile now always writes the image config digest independently of the driver being used (use --metadata-file for digest) docker/buildx#980
  • Target names in Bake are now restricted to not use special characters docker/buildx#929
  • Image manifest digest can be read from metadata when pushed with docker driver docker/buildx#989
  • Fix environment file handling in Compose files docker/buildx#905
  • Show last access time in du command docker/buildx#867
  • Fix possible double output logs when multiple Bake targets run same build steps docker/buildx#977
  • Fix possible errors on multi-node builder building multiple targets with mixed platform docker/buildx#985
  • Fix some nested inheritance cases in Bake docker/buildx#914
  • Fix printing default group on Bake files docker/buildx#884
  • Fix UsernsMode when using rootless container docker/buildx#887

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix issue with matching exclude rules in .dockerignore docker/buildx#858
  • Fix bake --print JSON output for current group docker/buildx#857

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • TLS certificates from BuildKit configuration are now transferred to build container with docker-container and kubernetes drivers docker/buildx#787
  • Builds support --ulimit flag for feature parity docker/buildx#800
  • Builds support --shm-size flag for feature parity docker/buildx#790
  • Builds support --quiet for feature parity docker/buildx#740
  • Builds support --cgroup-parent flag for feature parity docker/buildx#814
  • Bake supports builtin variable BAKE_LOCAL_PLATFORM docker/buildx#748
  • Bake supports x-bake extension field in Compose files docker/buildx#721
  • kubernetes driver now supports colon-separated KUBECONFIG docker/buildx#761
  • kubernetes driver now supports setting Buildkit config file with --config docker/buildx#682
  • kubernetes driver now supports installing QEMU emulators with driver-opt docker/buildx#682


  • Allow using custom registry configuration for multi-node pushes from the client docker/buildx#825
  • Allow using custom registry configuration for buildx imagetools command docker/buildx#825
  • Allow booting builder after creating with buildx create --bootstrap docker/buildx#692
  • Allow registry:insecure output option for multi-node pushes docker/buildx#825
  • BuildKit config and TLS files are now kept in Buildx state directory and reused if BuildKit instance needs to be recreated docker/buildx#824
  • Ensure different projects use separate destination directories for incremental context transfer for better performance docker/buildx#817
  • Build containers are now placed on separate cgroup by default docker/buildx#782
  • Bake now prints the default group with --print docker/buildx#720
  • docker driver now dials build session over HTTP for better performance docker/buildx#804


  • Fix using --iidfile together with a multi-node push docker/buildx#826
  • Using --push in Bake does not clear other image export options in the file docker/buildx#773
  • Fix Git URL detection for buildx bake when https protocol was used docker/buildx#822
  • Fix pushing image with multiple names on multi-node builds docker/buildx#815
  • Avoid showing --builder flags for commands that don’t use it docker/buildx#818
  • Unsupported build flags now show a warning docker/buildx#810
  • Fix reporting error details in some OpenTelemetry traces docker/buildx#812

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix BuildKit state volume location for Windows clients docker/buildx#751

For more details, see the complete release notes in the Buildx GitHub repository.



For more details, see the complete release notes in the Buildx GitHub repository.


  • Fix connection error showing up in some SSH configurations docker/buildx#741




  • Set ConfigFile to parse compose files with Bake docker/buildx#704


  • Duplicate progress env var docker/buildx#693
  • Should ignore nil client docker/buildx#686

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Support for OpenTelemetry traces and forwarding Buildx client traces to BuildKit docker/buildx#635
  • Experimental GitHub Actions remote cache backend with --cache-to type=gha and --cache-from type=gha docker/buildx#535
  • New --metadata-file flag has been added to build and Bake command that allows saving build result metadata in JSON format docker/buildx#605
  • This is the first release supporting Windows ARM64 docker/buildx#654
  • This is the first release supporting Linux Risc-V docker/buildx#652
  • Bake now supports building from remote definition with local files or another remote source as context docker/buildx#671
  • Bake now allows variables to reference each other and using user functions in variables and vice-versa docker/buildx#575 docker/buildx#539 docker/buildx#532
  • Bake allows defining attributes in the global scope docker/buildx#541
  • Bake allows variables across multiple files docker/buildx#538
  • New quiet mode has been added to progress printer docker/buildx#558
  • kubernetes driver now supports defining resources/limits docker/buildx#618
  • Buildx binaries can now be accessed through buildx-bin Docker image docker/buildx#656


  • The docker-container driver now keeps BuildKit state in a volume, enabling updates while keeping state docker/buildx#672
  • Compose parser is now based on new compose-go parser fixing support for some newer syntax docker/buildx#669
  • SSH socket is now automatically forwarded when building an ssh-based git URL docker/buildx#581
  • Bake HCL parser has been rewritten docker/buildx#645
  • Extend HCL support with more functions docker/buildx#491 docker/buildx#503
  • Allow secrets from environment variables docker/buildx#488
  • Builds with an unsupported multi-platform and load configuration now fail fast docker/buildx#582
  • Store Kubernetes config file to make buildx builder switchable docker/buildx#497
  • Kubernetes now lists all pods as nodes on inspection docker/buildx#477
  • Default Rootless image has been set to moby/buildkit:buildx-stable-1-rootless docker/buildx#480


  • imagetools create command now correctly merges JSON descriptor with old one docker/buildx#592
  • Fix building with --network=none not requiring extra security entitlements docker/buildx#531

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix regression on setting --platform on buildx create outside kubernetes driver docker/buildx#475

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • The docker driver now supports the --push flag docker/buildx#442
  • Bake supports inline Dockerfiles docker/buildx#398
  • Bake supports building from remote URLs and Git repositories docker/buildx#398
  • The BUILDX_CONFIG env var allows users to have separate Buildx state from the Docker config docker/buildx#385
  • The BUILDKIT_MULTI_PLATFORM build arg allows forcing multi-platform return objects even if only one --platform is specified docker/buildx#467


  • Allow --append to be used with kubernetes driver docker/buildx#370
  • Build errors show error location in source files and system stacktraces with --debug docker/buildx#389
  • Bake formats HCL errors with source definition docker/buildx#391
  • Bake allows empty string values in arrays that will be discarded docker/buildx#428
  • You can now use the Kubernetes cluster config with the kubernetes driver docker/buildx#368 docker/buildx#460
  • Creates a temporary token for pulling images instead of sharing credentials when possible docker/buildx#469
  • Ensure credentials are passed when pulling BuildKit container image docker/buildx#441 docker/buildx#433
  • Disable user namespace remapping in docker-container driver docker/buildx#462
  • Allow --builder flag to switch to default instance docker/buildx#425
  • Avoid warn on empty BUILDX_NO_DEFAULT_LOAD config value docker/buildx#390
  • Replace error generated by quiet option by a warning docker/buildx#403
  • CI has been switched to GitHub Actions docker/buildx#451 docker/buildx#463 docker/buildx#466 docker/buildx#468 docker/buildx#471


  • Handle lowercase Dockerfile name as a fallback for backward compatibility docker/buildx#444

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Support cacheonly exporter docker/buildx#337


  • Update go-cty to pull in more stdlib functions docker/buildx#277
  • Improve error checking on load docker/buildx#281


  • Fix parsing json config with HCL docker/buildx#280
  • Ensure --builder is wired from root options docker/buildx#321
  • Remove warning for multi-platform iidfile docker/buildx#351

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix regression on flag parsing docker/buildx#268
  • Fix using pull and no-cache keys in HCL targets docker/buildx#268

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Add kubernetes driver docker/buildx#167
  • New global --builder flag to override builder instance for a single command docker/buildx#246
  • New prune and du commands for managing local builder cache docker/buildx#249
  • You can now set the new pull and no-cache options for HCL targets docker/buildx#165


  • Upgrade Bake to HCL2 with support for variables and functions docker/buildx#192
  • Bake now supports --load and --push docker/buildx#164
  • Bake now supports wildcard overrides for multiple targets docker/buildx#164
  • Container driver allows setting environment variables via driver-opt docker/buildx#170

For more details, see the complete release notes in the Buildx GitHub repository.




  • Handle copying unix sockets instead of erroring docker/buildx#155 moby/buildkit#1144


  • Running Bake with multiple Compose files now merges targets correctly docker/buildx#134
  • Fix bug when building a Dockerfile from stdin ( build -f - ) docker/buildx#153

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Custom buildkitd daemon flags docker/buildx#102
  • Driver-specific options on create docker/buildx#122


  • Environment variables are used in Compose files docker/buildx#117
  • Bake now honors --no-cache and --pull docker/buildx#118
  • Custom BuildKit config file docker/buildx#121
  • Entitlements support with build --allow docker/buildx#104


  • Fix bug where --build-arg foo would not read foo from environment docker/buildx#116

For more details, see the complete release notes in the Buildx GitHub repository.




  • Change Compose file handling to require valid service specifications docker/buildx#87

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Add BUILDKIT_PROGRESS env var docker/buildx#69
  • Add local platform docker/buildx#70


  • Keep arm variant if one is defined in the config docker/buildx#68
  • Make dockerfile relative to context docker/buildx#83


  • Fix parsing target from compose files docker/buildx#53

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • First release

For more details, see the complete release notes in the Buildx GitHub repository.

ACI integration Compose features

Compose - Azure Container Instances mapping

This document outlines the conversion of an application defined in a Compose file to ACI objects. At a high level, each Compose deployment is mapped to a single ACI container group, and each service is mapped to a container in the container group. The Docker ACI integration does not allow scaling of services.

Compose fields mapping

The table below lists supported Compose file fields and their ACI counterparts.


  • ✓: Implemented
  • n: Not yet implemented
  • x: Not applicable / no available conversion
Keys Map Notes
Service ✓
service.build x Ignored. No image build support on ACI.
service.cap_add, cap_drop x
service.command ✓ Overrides the container command. On ACI, specifying command overrides both the image command and entrypoint, if the image has a command or entrypoint defined.
service.configs x
service.cgroup_parent x
service.container_name x Service name is used as container name on ACI.
service.credential_spec x
service.deploy ✓
service.deploy.endpoint_mode x
service.deploy.mode x
service.deploy.replicas x Only one replica is started for each service.
service.deploy.placement x
service.deploy.update_config x
service.deploy.resources ✓ Restriction: ACI resource limits cannot be greater than the sum of resource reservations for all containers in the container group. Using container limits that are greater than container reservations will cause containers in the same container group to compete for resources.
service.deploy.restart_policy ✓ One of: any, none, on-failure. Restriction: all services must have the same restart policy. The entire ACI container group will be restarted if needed.
service.deploy.labels x ACI does not have container-level labels.
service.devices x
service.depends_on x
service.dns x
service.dns_search x
service.domainname ✓ Mapped to the ACI DNSLabelName. Restriction: all services must specify the same domainname, if specified. domainname must be globally unique in .azurecontainer.io.
service.tmpfs x
service.entrypoint x ACI only supports overriding the container command.
service.env_file ✓
service.environment ✓
service.expose x
service.extends x
service.external_links x
service.extra_hosts x
service.group_add x
service.healthcheck ✓
service.hostname x
service.image ✓ Private images will be accessible if the user is logged into the corresponding registry at deploy time. Users will be automatically logged in to Azure Container Registry using their Azure login if possible.
service.isolation x
service.labels x ACI does not have container-level labels.
service.links x
service.logging x
service.network_mode x
service.networks x Communication between services is implemented by defining a mapping for each service in the shared /etc/hosts file of the container group. Each service can resolve names for other services, and the resulting network calls will be redirected to localhost.
service.pid x
service.ports ✓ Only symmetrical port mapping is supported in ACI. See Exposing ports.
service.secrets ✓ See Secrets.
service.security_opt x
service.stop_grace_period x
service.stop_signal x
service.sysctls x
service.ulimits x
service.userns_mode x
service.volumes ✓ Mapped to Azure file shares. See Persistent volumes.
service.restart x Replaced by service.deploy.restart_policy
Volume x
driver ✓ See Persistent volumes.
driver_opts ✓
external x
labels x
Secret x
Config x


Logs

Container logs can be obtained for each container with docker logs <CONTAINER>. The Docker ACI integration does not currently support aggregated logs for containers in a Compose application; see https://github.com/docker/compose-cli/issues/803.

Exposing ports

When one or more services expose ports, the entire ACI container group will be exposed and will get a public IP allocated. As all services are mapped to containers in the same container group, only one service can expose a given port number. ACI does not support port mapping, so the source and target ports defined in the Compose file must be the same.

When exposing ports, a service can also specify the domainname field to set a DNS hostname. domainname will be used to specify the ACI DNS Label Name, and the ACI container group will be reachable at <DOMAINNAME>.<REGION>.azurecontainer.io. All services specifying a domainname must set the same value, as it is applied to the entire container group. domainname must be globally unique in .azurecontainer.io.
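A minimal Compose sketch of these rules (the service name, image, and domain value are illustrative):

```yaml
services:
  web:
    image: nginx
    domainname: myapp   # ACI DNS Label Name; must be globally unique in .azurecontainer.io
    ports:
      - "80:80"         # symmetrical only: source and target port must be the same
```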

Persistent volumes

Docker volumes are mapped to Azure file shares. Only the long Compose volume format is supported, meaning that volumes must be defined in the volumes section. Volumes are defined with a name, the driver field must be set to azure_file, and driver_opts must define the storage account and file share to use for the volume. A service can then reference the volume by its name and specify the target path to be mounted in the container.

services:
    myservice:
        image: nginx
        volumes:
        - mydata:/mount/testvolumes

volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount

The short volume syntax is not allowed for ACI volumes, as it was designed for bind mounting local paths when running local containers. A Compose file can define several volumes, with different Azure file shares or storage accounts.

Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.


Secrets

Secrets can be defined in Compose files, and the secret files must be available at deploy time next to the Compose file. The content of each secret file will be made available inside selected containers, by default under /run/secrets/<SECRET_NAME>. External secrets are not supported with the ACI integration.

services:
    nginx:
        image: nginx
        secrets:
          - mysecret1
    db:
        image: mysql
        secrets:
          - mysecret2

secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt

The nginx container will have mysecret1 mounted as /run/secrets/mysecret1, and the db container will have mysecret2 mounted as /run/secrets/mysecret2.

A target can also be specified to rename the mounted file, or to provide an absolute path at which to mount the secret file:

services:
    nginx:
        image: nginx
        secrets:
          - source: mysecret1
            target: renamedsecret1.txt
    db:
        image: mysql
        secrets:
          - source: mysecret1
            target: /mnt/dbmount/mysecretonmount1.txt
          - source: mysecret2
            target: /mnt/dbmount/mysecretonmount2.txt

secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt

In this example, the nginx service will have its secret mounted as /run/secrets/renamedsecret1.txt, and db will have two files (mysecretonmount1.txt and mysecretonmount2.txt). Both will be mounted in the same folder (/mnt/dbmount/).

Note: Relative file paths are not allowed in the target

Note: Secret files cannot be mounted in a folder next to other existing files

Container Resources

CPU and memory reservations and limits can be set in Compose. Resource limits must be greater than reservations. In ACI, setting resource limits different from resource reservations will cause containers in the same container group to compete for resources. Resource limits cannot be greater than the total resource reservation for the container group. (Therefore, single containers cannot have resource limits different from their resource reservations.)

services:
  db:
    image: mysql
    deploy:
      resources:
        reservations:
          cpus: '2'
          memory: 2G
        limits:
          cpus: '3'
          memory: 3G
  web:
    image: nginx
    deploy:
      resources:
        reservations:
          cpus: '1.5'
          memory: 1.5G

In this example, the db container will be allocated 2 CPUs and 2G of memory. It will be allowed to use up to 3 CPUs and 3G of memory, using some of the resources allocated to the web container. The web container will have its limits set to the same values as reservations, by default.


Healthchecks

A health check can be described in the healthcheck section of each service. This is translated to a LivenessProbe in ACI. If the health check fails, the container is considered unhealthy and terminated. For the container to be restarted automatically, the service needs a restart policy other than none. Note that the default restart policy, if one isn't set, is any.

services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

Note: the test command can be a string or an array, optionally starting with NONE, CMD, or CMD-SHELL. In the ACI implementation, these prefixes are ignored.

ACI integration container features

Azure Container Instances: running single containers

Single containers can be executed on ACI with the docker run command. A single container is executed in its own ACI container group, containing only that container.

Containers can be listed with the docker ps command, and stopped and removed with docker stop <CONTAINER> and docker rm <CONTAINER> .

Docker run options for ACI containers

The table below lists supported docker run flags and their ACI counterparts.


  • ✓: Implemented
  • n: Not yet implemented
  • x: Not applicable / no available conversion
Flag Map Notes
--cpus ✓ See Container Resources.
-d, --detach ✓ Detach from container logs when the container starts. By default, the command line stays attached and follows container logs.
--domainname ✓ See Exposing ports.
-e, --env ✓ Sets environment variables.
--env-file ✓ Sets environment variables from an external file.
--health-cmd ✓ Specify the healthcheck command. See Healthchecks.
--health-interval ✓ Specify the healthcheck interval.
--health-retries ✓ Specify the healthcheck number of retries.
--health-start-period ✓ Specify the healthcheck initial delay.
--health-timeout ✓ Specify the healthcheck timeout.
-l, --label x Unsupported in the Docker ACI integration, due to limitations of ACI Tags.
-m, --memory ✓ See Container Resources.
--name ✓ Provide a name for the container. The name must be unique within the ACI resource group; a name is generated by default.
-p, --publish ✓ See Exposing ports. Only symmetrical port mapping is supported in ACI.
--restart ✓ Restart policy. Must be one of: always , no , on-failure .
--rm x Not supported, as ACI does not support auto-delete containers.
-v, --volume ✓ See Persistent Volumes.
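Several of the supported flags can be combined in a single run command; for example (a sketch — the context name, container name, and resource values below are illustrative):

```shell
# Run nginx on ACI with a name, resource reservations, a restart policy,
# and symmetrical port publishing (all values are placeholders).
docker --context myacicontext run \
  --name web \
  --cpus 1 \
  --memory 1G \
  --restart on-failure \
  -p 80:80 \
  nginx
```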

Exposing ports

You can expose one or more ports of a container with docker run -p <PORT>:<PORT> . If ports are exposed when running a container, the corresponding ACI container group will be exposed with a public IP allocated and the required port(s) accessible.

Note: ACI does not support port mapping, so the same port number must be specified when using -p <PORT>:<PORT> .

When exposing ports, a container can also use the --domainname flag to set a DNS hostname. The domainname is used as the ACI DNS Label Name, and the ACI container group will be reachable at <DOMAINNAME>.<REGION>.azurecontainer.io . The domainname must be globally unique within .azurecontainer.io .
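For example, assuming the label myapp is not already taken in the target region:

```shell
# Expose port 80 and request the DNS label "myapp"; the container
# becomes reachable at myapp.<REGION>.azurecontainer.io.
docker run -p 80:80 --domainname myapp nginx
```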

Persistent volumes

Docker volumes are mapped to Azure File shares, each file share is part of an Azure Storage Account. One or more volumes can be specified with docker run -v <STORAGE-ACCOUNT>/<FILESHARE>:<TARGET-PATH> .

A run command can use the --volume or -v flag several times for different volumes. The volumes can use the same or different storage accounts. The target paths for different volume mounts must be different and not overlap. There is no support for mounting a single file, or mounting a subfolder from an Azure File Share.

Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.

Container Resources

CPU and memory reservations can be set when running containers with docker run --cpus 1.5 --memory 2G .

It is not possible to set resource limits that differ from resource reservation on single containers. ACI allows setting resource limits for containers in a container group but these limits must stay within the reserved resources for the entire group. In the case of a single container deployed in a container group, the resource limits must be equal to the resource reservation.


Container logs

You can view container logs with the command docker logs <CONTAINER-ID> .

You can follow logs with the --follow ( -f ) option. When running a container with docker run , by default the command line stays attached to container logs when the container starts. Use docker run --detach to not follow logs once the container starts.

Note: Following ACI logs may have display issues especially when resizing a terminal that is following container logs. This is due to ACI providing raw log pulling but no streaming of logs. Logs are effectively pulled every 2 seconds when following logs.


Healthchecks

A health check can be described using the flags prefixed by --health- . This is translated into a LivenessProbe for ACI. If the health check fails, the container is considered unhealthy and terminated. In order for the container to be restarted automatically, the container needs to be run with a restart policy (set by the --restart flag) other than no . Note that the default restart policy, if one isn’t set, is no .


Deploying Docker containers on Azure


The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The experience provides a tight integration between Docker Desktop and Microsoft Azure, allowing developers to quickly run applications using the Docker CLI or VS Code extension, and to switch seamlessly from local development to cloud deployment.

In addition, the integration between Docker and Microsoft developer technologies allows developers to use the Docker CLI to:

  • Easily log into Azure
  • Set up an ACI context in one Docker command allowing you to switch from a local context to a cloud context and run applications quickly and easily
  • Simplify single container and multi-container application development using the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service

Also see the full list of container features supported by ACI and full list of compose features supported by ACI.


Prerequisites

To deploy Docker containers on Azure, you must meet the following requirements:

  1. Download and install the latest version of Docker Desktop.

    • Download for Mac
    • Download for Windows

    Alternatively, install the Docker Compose CLI for Linux.

  2. Ensure you have an Azure subscription. You can get started with an Azure free account.

Run Docker containers on ACI

Docker not only runs containers locally, but also enables developers to seamlessly deploy Docker containers on ACI using docker run or deploy multi-container applications defined in a Compose file using the docker compose up command.

The following sections contain instructions on how to deploy your Docker containers on ACI. Also see the full list of container features supported by ACI.

Log into Azure

Run the following command to log into Azure:

$ docker login azure

This opens your web browser and prompts you to enter your Azure login credentials. If the Docker CLI cannot open a browser, it falls back to the Azure device code flow and lets you connect manually. Note that the Azure command line login is separate from the Docker CLI Azure login.

Alternatively, you can log in without interaction (typically in scripts or continuous integration scenarios), using an Azure Service Principal, with docker login azure --client-id xx --client-secret yy --tenant-id zz


Logging in through an Azure Service Principal obtains an access token valid for a short period (typically 1 hour), and there is no way to refresh this token automatically and transparently. You must manually log in again when the access token has expired if you logged in with a Service Principal.

You can also use the --tenant-id option alone to specify a tenant, if you have access to several of them in Azure.

Create an ACI context

After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI. Creating an ACI context requires an Azure subscription, a resource group, and a region. For example, let us create a new context called myacicontext :

$ docker context create aci myacicontext

This command automatically uses your Azure login credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: --subscription-id , --resource-group , and --location .

If you don’t have any existing resource groups in your Azure account, the docker context create aci myacicontext command creates one for you. You don’t have to specify any additional options to do this.
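Putting these flags together, a non-interactive creation might look like this (the subscription ID, resource group, and region below are placeholders):

```shell
# Create an ACI context without interactive prompts (placeholder values).
docker context create aci myacicontext \
  --subscription-id "00000000-0000-0000-0000-000000000000" \
  --resource-group myResourceGroup \
  --location eastus
```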

After you have created an ACI context, you can list your Docker contexts by running the docker context ls command:

NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT                KUBERNETES ENDPOINT   ORCHESTRATOR
myacicontext        aci                 myResourceGroupGTA@eastus
default *           moby              Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                          swarm

Run a container

Now that you’ve logged in and created an ACI context, you can start using Docker commands to deploy containers on ACI.

There are two ways to use your new ACI context. You can use the --context flag with the Docker command to specify that you would like to run the command using your newly created ACI context.

$ docker --context myacicontext run -p 80:80 nginx

Or, you can change context using docker context use to select the ACI context to be your focus for running Docker commands. For example, we can use the docker context use command to deploy an Nginx container:

$ docker context use myacicontext
$ docker run -p 80:80 nginx

After you’ve switched to the myacicontext context, you can use docker ps to list your containers running on ACI.

In the case of the demonstration Nginx container started above, the result of the ps command will display, in the “PORTS” column, the IP address and port on which the container is running, for example <IP_ADDRESS>:80->80/tcp . You can view the Nginx welcome page by browsing to that IP address.

To view logs from your container, run:

$ docker logs <CONTAINER_ID>

To execute a command in a running container, run:

$ docker exec -t <CONTAINER_ID> COMMAND

To stop and remove a container from ACI, run:

$ docker stop <CONTAINER_ID>
$ docker rm <CONTAINER_ID>

You can remove containers using docker rm . To remove a running container, you must use the --force flag, or stop the container using docker stop before removing it.


The semantics of restarting a container on ACI are different to those when using a local Docker context for local development. On ACI, the container will be reset to its initial state and started on a new node. This includes the container’s filesystem so all state that is not stored in a volume will be lost on restart.

Running Compose applications

You can also deploy and manage multi-container applications defined in Compose files to ACI using the docker compose command. All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file. Name resolution between containers is achieved by writing service names in the /etc/hosts file that is shared automatically by all containers in the container group.

Also see the full list of compose features supported by ACI.

  1. Ensure you are using your ACI context. You can do this either by specifying the --context myacicontext flag or by setting the default context using the command docker context use myacicontext .

  2. Run docker compose up and docker compose down to start and then stop a full Compose application.

By default, docker compose up uses the docker-compose.yaml file in the current folder. You can specify the working directory using the --workdir flag or specify the Compose file directly using docker compose --file mycomposefile.yaml up .

You can also specify a name for the Compose application using the --project-name flag during deployment. If no name is specified, a name will be derived from the working directory.
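For example, the --file and --project-name flags can be combined (the path and project name below are illustrative):

```shell
# Deploy a Compose file from another location under an explicit project name.
docker compose --file ./deploy/docker-compose.yaml --project-name myapp up

# Later, tear down the same application by project name.
docker compose --project-name myapp down
```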

Containers started as part of Compose applications will be displayed along with single containers when using docker ps . Their container ID will be of the format: <COMPOSE-PROJECT>_<SERVICE> . These containers cannot be stopped, started, or removed independently since they are all part of the same ACI container group. You can view each container’s logs with docker logs . You can list deployed Compose applications with docker compose ls . This will list only compose applications, not single containers started with docker run . You can remove a Compose application with docker compose down .


The current Docker Azure integration does not allow fetching a combined log stream from all the containers that make up the Compose application.

Updating applications

From a deployed Compose application, you can update the application by re-deploying it with the same project name: docker compose --project-name PROJECT up .

Updating an application means the ACI node will be reused, and the application will keep the same IP address that was previously allocated to expose ports, if any. ACI has some limitations on what can be updated in an existing application (for example, you cannot change the CPU/memory reservation); in these cases, you need to deploy a new application from scratch.

Updating is the default behavior if you invoke docker compose up on an already deployed Compose file, as the Compose project name is derived from the directory where the Compose file is located by default. You need to explicitly execute docker compose down before running docker compose up again in order to totally reset a Compose application.

Releasing resources

Single containers and Compose applications can be removed from ACI with the docker prune command. The docker prune command removes deployments that are not currently running. To remove running deployments, you can specify --force . The --dry-run option lists deployments that are planned for removal, but it doesn’t actually remove them.

$ ./bin/docker --context acicontext prune --dry-run --force
Resources that would be deleted:
Total CPUs reclaimed: 2.01, total memory reclaimed: 2.30 GB

Exposing ports

Single containers and Compose applications can optionally expose ports. For single containers, this is done using the --publish ( -p ) flag of the docker run command: docker run -p 80:80 nginx .

For Compose applications, you must specify exposed ports in the Compose file service definition:

services:
  nginx:
    image: nginx
    ports:
      - "80:80"


ACI does not allow port mapping (that is, changing port number while exposing port). Therefore, the source and target ports must be the same when deploying to ACI.

All containers in the same Compose application are deployed in the same ACI container group. Different containers in the same Compose application cannot expose the same port when deployed to ACI.

By default, when exposing ports for your application, a random public IP address is associated with the container group supporting the deployed application (single container or Compose application). This IP address can be obtained when listing containers with docker ps or using docker inspect .

DNS label name

In addition to exposing ports on a random IP address, you can specify a DNS label name to expose your application on an FQDN of the form: <NAME>.region.azurecontainer.io .

You can set this name with the --domainname flag when performing a docker run , or by using the domainname field in the Compose file when performing a docker compose up :

services:
  nginx:
    image: nginx
    domainname: "myapp"
    ports:
      - "80:80"


The domain of a Compose application can only be set once; if you specify domainname for several services, the value must be identical.

The FQDN <DOMAINNAME>.region.azurecontainer.io must be available.

Using Azure file share as volumes in ACI containers

You can deploy containers or Compose applications that use persistent data stored in volumes. Azure File Share can be used to support volumes for ACI containers.

Using an existing Azure File Share with storage account name mystorageaccount and file share name myfileshare , you can specify a volume in your deployment run command as follows:

$ docker run -v mystorageaccount/myfileshare:/target/path myimage

The runtime container will see the file share content in /target/path .

In a Compose application, the volume specification must use the following syntax in the Compose file:

services:
  nginx:
    image: nginx
    volumes:
      - mydata:/mount/testvolumes

volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount


The volume short syntax in Compose files cannot be used, as it is aimed at volume definitions for local bind mounts. Using the volume driver and driver option syntax in Compose files makes the volume definition much clearer.

In single or multi-container deployments, the Docker CLI will use your Azure login to fetch the key to the storage account, and provide this key with the container deployment information, so that the container can access the volume. Volumes can be used from any file share in any storage account you have access to with your Azure login. You can specify rw (read/write) or ro (read only) when mounting the volume ( rw is the default).
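For example, a read-only mount might be written as follows (a sketch, assuming the :ro suffix and the storage account and file share names used above):

```shell
# Mount the Azure file share read-only inside the container.
docker run -v mystorageaccount/myfileshare:/target/path:ro myimage
```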

Managing Azure volumes

To create a volume that you can use in containers or Compose applications when using your ACI Docker context, you can use the docker volume create command, and specify an Azure storage account name and the file share name:

$ docker --context aci volume create test-volume --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount  Created                         26.2s
 ⠿ test-volume       Created                          0.9s

By default, if the storage account does not already exist, this command creates a new storage account with Standard LRS as the default SKU, in the resource group and location associated with your Docker ACI context.

If you specify an existing storage account, the command creates a new file share in the existing account:

$ docker --context aci volume create test-volume2 --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount   Use existing                    0.7s
 ⠿ test-volume2       Created                         0.7s

Alternatively, you can create an Azure storage account or a file share using the Azure portal, or the az command line.

You can also list volumes that are available for use in containers or Compose applications:

$ docker --context aci volume ls
ID                                 DESCRIPTION
mystorageaccount/test-volume       Fileshare test-volume in mystorageaccount storage account
mystorageaccount/test-volume2      Fileshare test-volume2 in mystorageaccount storage account

To delete a volume and the corresponding Azure file share, use the volume rm command:

$ docker --context aci volume rm mystorageaccount/test-volume

This permanently deletes the Azure file share and all its data.

When deleting a volume in Azure, the command checks whether the specified file share is the only file share available in the storage account. If the storage account is created with the docker volume create command, docker volume rm also deletes the storage account when it does not have any file shares. If you are using a storage account created without the docker volume create command (through Azure portal or with the az command line for example), docker volume rm does not delete the storage account, even when it has zero remaining file shares.

Environment variables

When using docker run , you can pass environment variables to ACI containers using the --env flag. For Compose applications, you can specify environment variables in the Compose file with the environment or env_file service fields, or with the --environment command line flag.

Health checks

You can specify a container health check using either the --health- prefixed flags with docker run , or in a Compose file with the healthcheck section of the service.

Health checks are converted to ACI LivenessProbes. ACI runs the health check command periodically, and if it fails, the container is terminated.

Health checks must be used in addition to restart policies to ensure the container is then restarted on termination. The default restart policy for docker run is no which will not restart the container. The default restart policy for Compose is any which will always try restarting the service containers.

Example using docker run :

$ docker --context acicontext run -p 80:80 --restart always --health-cmd "curl http://localhost:80" --health-interval 3s  nginx

Example using Compose files:

services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 10s

Private Docker Hub images and using the Azure Container Registry

You can deploy private images to ACI that are hosted by any container registry. You need to log into the relevant registry using docker login before running docker run or docker compose up . The Docker CLI will fetch your registry login for the deployed images and send the credentials along with the image deployment information to ACI. In the case of the Azure Container Registry, the command line will try to automatically log you into ACR from your Azure login. You don’t need to manually log in to the ACR registry first if your Azure login has access to the ACR.

Using ACI resource groups as namespaces

You can create several Docker contexts associated with ACI. Each context must be associated with a unique Azure resource group. This allows you to use Docker contexts as namespaces. You can switch between namespaces using docker context use <CONTEXT> .
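For example, two contexts backed by different resource groups behave as isolated namespaces (the context and resource group names below are illustrative):

```shell
# Each ACI context is tied to its own Azure resource group (placeholder names).
docker context create aci team-a --resource-group rg-team-a
docker context create aci team-b --resource-group rg-team-b

# docker ps now lists only the containers deployed in team-a's resource group.
docker context use team-a
docker ps
```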

When you run the docker ps command, it only lists containers in your current Docker context. There won’t be any contention in container names or Compose application names between two Docker contexts.

Install the Docker Compose CLI on Linux

The Docker Compose CLI adds support for running and managing containers on Azure Container Instances (ACI).

Install Prerequisites

  • Docker 19.03 or later

Install script

You can install the new CLI using the install script:

$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh

Manual install

You can download the Docker ACI Integration CLI from the latest release page.

You will then need to make it executable:

$ chmod +x docker-aci

To enable using the local Docker Engine and to use existing Docker contexts, you must have the existing Docker CLI as com.docker.cli somewhere in your PATH . You can do this by creating a symbolic link from the existing Docker CLI:

$ ln -s /path/to/existing/docker /directory/in/PATH/com.docker.cli


The PATH environment variable is a colon-separated list of directories with priority from left to right. You can view it using echo $PATH . You can find the path to the existing Docker CLI using which docker . You may need root permissions to make this link.

On a fresh install of Ubuntu 20.04 with Docker Engine already installed:

$ echo $PATH
$ which docker
$ sudo ln -s /usr/bin/docker /usr/local/bin/com.docker.cli

You can verify that this is working by checking that the new CLI works with the default context:

$ ./docker-aci --context default ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
$ echo $?

To make this CLI with ACI integration your default Docker CLI, you must move it to a directory in your PATH with higher priority than the existing Docker CLI.

Again, on a fresh Ubuntu 20.04:

$ which docker
$ echo $PATH
$ sudo mv docker-aci /usr/local/bin/docker
$ which docker
$ docker version
 Azure integration  0.1.4

Supported commands

After you have installed the Docker ACI Integration CLI, run docker --help to see the current list of commands.


To remove the Docker Azure Integration CLI, you need to remove the binary you downloaded and com.docker.cli from your PATH . If you installed using the script, this can be done as follows:

$ sudo rm /usr/local/bin/docker /usr/local/bin/com.docker.cli


Thank you for trying out Docker Azure Integration. Your feedback is very important to us. Let us know your feedback by creating an issue in the compose-cli GitHub repository.
