
OCI and Docker exporters

The oci exporter outputs the build result into an OCI image layout tarball. The docker exporter behaves the same way, except it exports a Docker image layout instead.

The docker driver doesn’t support these exporters. You must use docker-container or some other driver if you want to generate these outputs.
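Before using these exporters, you can create and select a builder that uses a compatible driver. A minimal sketch, where the builder name mybuilder is a placeholder:

$ docker buildx create --name mybuilder --driver docker-container --use
$ docker buildx inspect --bootstrap

The --use flag makes the new builder the current one, so subsequent buildx build commands run against it.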


Build a container image using the oci and docker exporters:

$ docker buildx build --output type=oci[,parameters] .
$ docker buildx build --output type=docker[,parameters] .

The following table describes the available parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| name | String | | Specify image name(s) |
| dest | String | | Destination path |
| tar | true , false | true | Bundle the output into a tarball layout |
| compression | uncompressed , gzip , estargz , zstd | gzip | Compression type, see compression |
| compression-level | 0..22 | | Compression level, see compression |
| force-compression | true , false | false | Forcefully apply compression, see compression |
| oci-mediatypes | true , false | | Use OCI media types in exporter manifests. Defaults to true for type=oci , and false for type=docker . See OCI Media types |
| buildinfo | true , false | true | Attach inline build info |
| buildinfo-attrs | true , false | false | Attach inline build info attributes |
| annotation.<key> | String | | Attach an annotation with the respective key and value to the built image, see annotations |
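Parameters can be combined in a single --output flag. For example, the following sketch writes a zstd-compressed OCI layout tarball to a file (the destination path and compression settings here are illustrative):

$ docker buildx build --output type=oci,dest=./image.tar,compression=zstd,force-compression=true .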


These exporters support adding OCI annotations via annotation parameters using dot notation. The following example sets the org.opencontainers.image.title annotation for a build:

$ docker buildx build \
    --output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .

For more information about annotations, see BuildKit documentation.

Further reading

For more information on the oci or docker exporters, see the BuildKit README.

Overview of Docker Build

Docker Build is one of Docker Engine’s most used features. Whenever you create an image, you are using Docker Build. Build is a key part of your software development life cycle, allowing you to package and bundle your code and ship it anywhere.

The Docker Engine uses a client-server architecture and is composed of multiple components and tools. The most common method of executing a build is by issuing a docker build command. The CLI sends the request to Docker Engine which, in turn, executes your build.

There are now two components in Engine that can be used to build an image. Starting with the 18.09 release, Engine ships with Moby BuildKit, the new component for executing your builds by default.

The new client, Docker Buildx, is a CLI plugin that extends the docker command with full support for the features provided by the BuildKit builder toolkit. The docker buildx build command provides the same user experience as docker build, with many new features such as creating scoped builder instances, building against multiple nodes concurrently, output configuration, inline build caching, and specifying a target platform. In addition, Buildx supports features that aren’t yet available for regular docker build, such as building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
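The multi-platform support mentioned above can be sketched with a single command; the registry and image names below are placeholders:

$ docker buildx build --platform linux/amd64,linux/arm64 -t <registry>/<image> --push .

This builds the image for both architectures concurrently and pushes the resulting manifest list to the registry.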

Docker Build is more than a simple build command, and it’s not only about packaging your code. It’s a whole ecosystem of tools and features that support not only common workflow tasks but also provides support for more complex and advanced scenarios.

Packaging your software

Build and package your application to run it anywhere: locally or in the cloud.


Multi-stage builds

Keep your images small and secure with minimal dependencies.

Multi-platform images

Build, push, pull, and run images seamlessly on different computer architectures.

Build drivers

Configure where and how you run your builds.

Build caching

Avoid unnecessary repetitions of costly operations, such as package installs.

Continuous integration

Learn how to use Docker in your continuous integration pipelines.

Exporters

Export any artifact you like, not just Docker images.

Bake

Orchestrate your builds with Bake.

Dockerfile frontend

Learn about the Dockerfile frontend for BuildKit.

Configure BuildKit

Take a deep dive into the internals of BuildKit to get the most out of your builds.

Install Docker Buildx

Docker Desktop

Docker Buildx is included by default in Docker Desktop.

Docker Engine via package manager

Docker Linux packages also include Docker Buildx when installed using the .deb or .rpm packages.

Install using a Dockerfile

Here is how to install and use Buildx inside a Dockerfile through the docker/buildx-bin image:

# syntax=docker/dockerfile:1
FROM docker
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version

Download manually


This section is for unattended installation of the Buildx component. These instructions are mostly suitable for testing purposes. We do not recommend installing Buildx via manual download in production environments, as manually installed binaries are not updated automatically with security updates.

On Windows, macOS, and Linux workstations we recommend that you install Docker Desktop instead. For Linux servers, we recommend that you follow the instructions specific for your distribution.

You can also download the latest binary from the releases page on GitHub.

Rename the relevant binary and copy it to the destination matching your OS:

| OS | Binary name | Destination folder |
|---|---|---|
| Linux | docker-buildx | $HOME/.docker/cli-plugins |
| macOS | docker-buildx | $HOME/.docker/cli-plugins |
| Windows | docker-buildx.exe | %USERPROFILE%\.docker\cli-plugins |

Alternatively, copy it into one of the following folders to install it system-wide.

On Unix environments:

  • /usr/local/lib/docker/cli-plugins OR /usr/local/libexec/docker/cli-plugins
  • /usr/lib/docker/cli-plugins OR /usr/libexec/docker/cli-plugins

On Windows:

  • C:\ProgramData\Docker\cli-plugins
  • C:\Program Files\Docker\cli-plugins


On Unix environments, you may also need to make the binary executable with chmod +x:

$ chmod +x ~/.docker/cli-plugins/docker-buildx
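Putting the manual steps together on Linux, a download-and-install sketch might look like the following; the version and architecture in the URL are placeholders you should substitute with values from the releases page:

$ mkdir -p ~/.docker/cli-plugins
$ curl -fsSL -o ~/.docker/cli-plugins/docker-buildx \
    "https://github.com/docker/buildx/releases/download/<version>/buildx-<version>.linux-amd64"
$ chmod +x ~/.docker/cli-plugins/docker-buildx
$ docker buildx version

The final command verifies that Docker finds and loads the plugin.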

Set Buildx as the default builder

Running the command docker buildx install sets up the docker build command as an alias to docker buildx build. As a result, docker build uses the current Buildx builder.

To remove this alias, run docker buildx uninstall .
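In practice, the alias can be toggled as follows:

$ docker buildx install
$ docker build .          # now routed through the current Buildx builder
$ docker buildx uninstall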

Build release notes

This page contains information about the new features, improvements, and bug fixes in Docker Buildx.



Bug fixes and enhancements

  • The inspect command now displays the BuildKit version in use docker/buildx#1279
  • Fixed a regression when building Compose files that contain services without a build block docker/buildx#1277

For more details, see the complete release notes in the Buildx GitHub repository.




  • Support for the new remote driver, which you can use to connect to any already running BuildKit instance docker/buildx#1078 docker/buildx#1093 docker/buildx#1094 docker/buildx#1103 docker/buildx#1134 docker/buildx#1204
  • You can now load a Dockerfile from standard input even when the build context comes from an external Git or HTTP URL docker/buildx#994
  • Build commands now support the new build context type oci-layout:// for loading the build context from local OCI layout directories. Note that this feature depends on an unreleased BuildKit feature, and a builder instance from moby/buildkit:master must be used until BuildKit v0.11 is released docker/buildx#1173
  • You can now use the new --print flag to run helper functions supported by the BuildKit frontend performing the build, and print their results. You can use this feature with a Dockerfile to show the build arguments and secrets that the current build supports with --print=outline , and to list all available Dockerfile stages with --print=targets . This feature is experimental, for gathering early feedback, and requires setting the BUILDX_EXPERIMENTAL=1 environment variable. We plan to update/extend this feature in the future without keeping backward compatibility docker/buildx#1100 docker/buildx#1272
  • You can now use the new --invoke flag to launch interactive containers from build results for an interactive debugging cycle. You can reload these containers with code changes or restore them to an initial state from the special monitor mode. This feature is experimental, for gathering early feedback, and requires setting the BUILDX_EXPERIMENTAL=1 environment variable. We plan to update/extend this feature in the future without keeping backward compatibility docker/buildx#1168 docker/buildx#1257 docker/buildx#1259
  • Buildx now understands the BUILDKIT_COLORS and NO_COLOR environment variables to customize or disable the colors of the interactive build progress bar docker/buildx#1230 docker/buildx#1226
  • buildx ls command now shows the current BuildKit version of each builder instance docker/buildx#998
  • The bake command now loads .env file automatically when building Compose files for compatibility docker/buildx#1261
  • Bake now supports Compose files with cache_to definition docker/buildx#1155
  • Bake now supports new builtin function timestamp() to access current time docker/buildx#1214
  • Bake now supports Compose build secrets definition docker/buildx#1069
  • Additional build context configuration is now supported in Compose files via x-bake docker/buildx#1256
  • Inspecting builder now shows current driver options configuration docker/buildx#1003 docker/buildx#1066


  • Updated the Compose Specification to 1.4.0 docker/buildx#1246 docker/buildx#1251

Bug fixes and enhancements

  • The buildx ls command output has been updated with better access to errors from different builders docker/buildx#1109
  • The buildx create command now performs additional validation of builder parameters to avoid creating a builder instance with invalid configuration docker/buildx#1206
  • The buildx imagetools create command can now create new multi-platform images even if the source subimages are located on different repositories or registries docker/buildx#1137
  • You can now set the default builder config that is used when creating builder instances without passing custom --config value docker/buildx#1111
  • Docker driver can now detect if dockerd instance supports initially disabled Buildkit features like multi-platform images docker/buildx#1260 docker/buildx#1262
  • Compose files using targets with . in the name are now converted to use _ so the selector keys can still be used in such targets docker/buildx#1011
  • Included an additional validation for checking valid driver configurations docker/buildx#1188 docker/buildx#1273
  • The remove command now displays the removed builder and forbids removing context builders docker/buildx#1128
  • Enable Azure authentication when using Kubernetes driver docker/buildx#974
  • Add tolerations handling for kubernetes driver docker/buildx#1045 docker/buildx#1053
  • Replace deprecated seccomp annotations with securityContext in kubernetes driver docker/buildx#1052
  • Fix panic on handling manifests with nil platform docker/buildx#1144
  • Fix using duration filter with prune command docker/buildx#1252
  • Fix merging multiple JSON files on Bake definition docker/buildx#1025
  • Fix issues where the implicit builder created from a Docker context had an invalid configuration or dropped the connection docker/buildx#1129
  • Fix conditions for showing no-output warning when using named contexts docker/buildx#968
  • Fix duplicating builders when builder instance and docker context have the same name docker/buildx#1131
  • Fix printing unnecessary SSH warning logs docker/buildx#1085
  • Fix possible panic when using an empty variable block with Bake JSON definition docker/buildx#1080
  • Fix image tools commands not handling --builder flag correctly docker/buildx#1067
  • Fix using custom image together with rootless option docker/buildx#1063

For more details, see the complete release notes in the Buildx GitHub repository.




  • Update Compose spec used by buildx bake to v1.2.1 to fix parsing ports definition docker/buildx#1033

Bug fixes and enhancements

  • Fix possible crash on handling progress streams from BuildKit v0.10 docker/buildx#1042
  • Fix parsing groups in buildx bake when already loaded by a parent group docker/buildx#1021

For more details, see the complete release notes in the Buildx GitHub repository.



Bug fixes and enhancements

  • Fix possible panic on handling build context scanning errors docker/buildx#1005
  • Allow . on Compose target names in buildx bake for backward compatibility docker/buildx#1018

For more details, see the complete release notes in the Buildx GitHub repository.




  • Build command now accepts --build-context flag to define additional named build contexts for your builds docker/buildx#904
  • Bake definitions now support defining dependencies between targets and using the result of one target in another build docker/buildx#928 docker/buildx#965 docker/buildx#963 docker/buildx#962 docker/buildx#981
  • imagetools inspect now accepts --format flag allowing access to config and buildinfo for specific images docker/buildx#854 docker/buildx#972
  • New flag --no-cache-filter allows configuring build, so it ignores cache only for specified Dockerfile stages docker/buildx#860
  • Builds can now show a summary of warnings set by the build frontend docker/buildx#892
  • The new build argument BUILDKIT_INLINE_BUILDINFO_ATTRS allows opting-in to embed building attributes to resulting image docker/buildx#908
  • The new flag --keep-buildkitd allows keeping the BuildKit daemon running when removing a builder docker/buildx#852

Bug fixes and enhancements

  • --metadata-file output now supports embedded structure types docker/buildx#946
  • buildx rm now accepts new flag --all-inactive for removing all builders that are not currently running docker/buildx#885
  • Proxy config is now read from Docker configuration file and sent with build requests for backward compatibility docker/buildx#959
  • Support host networking in Compose docker/buildx#905 docker/buildx#880
  • Bake files can now be read from stdin with -f - docker/buildx#864
  • --iidfile now always writes the image config digest independently of the driver being used (use --metadata-file for digest) docker/buildx#980
  • Target names in Bake are now restricted to not use special characters docker/buildx#929
  • Image manifest digest can be read from metadata when pushed with docker driver docker/buildx#989
  • Fix environment file handling in Compose files docker/buildx#905
  • Show last access time in du command docker/buildx#867
  • Fix possible double output logs when multiple Bake targets run same build steps docker/buildx#977
  • Fix possible errors on multi-node builder building multiple targets with mixed platform docker/buildx#985
  • Fix some nested inheritance cases in Bake docker/buildx#914
  • Fix printing default group on Bake files docker/buildx#884
  • Fix UsernsMode when using rootless container docker/buildx#887

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix issue with matching exclude rules in .dockerignore docker/buildx#858
  • Fix bake --print JSON output for current group docker/buildx#857

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • TLS certificates from BuildKit configuration are now transferred to build container with docker-container and kubernetes drivers docker/buildx#787
  • Builds support --ulimit flag for feature parity docker/buildx#800
  • Builds support --shm-size flag for feature parity docker/buildx#790
  • Builds support --quiet for feature parity docker/buildx#740
  • Builds support --cgroup-parent flag for feature parity docker/buildx#814
  • Bake supports builtin variable BAKE_LOCAL_PLATFORM docker/buildx#748
  • Bake supports x-bake extension field in Compose files docker/buildx#721
  • kubernetes driver now supports colon-separated KUBECONFIG docker/buildx#761
  • kubernetes driver now supports setting Buildkit config file with --config docker/buildx#682
  • kubernetes driver now supports installing QEMU emulators with driver-opt docker/buildx#682


  • Allow using custom registry configuration for multi-node pushes from the client docker/buildx#825
  • Allow using custom registry configuration for buildx imagetools command docker/buildx#825
  • Allow booting builder after creating with buildx create --bootstrap docker/buildx#692
  • Allow registry:insecure output option for multi-node pushes docker/buildx#825
  • BuildKit config and TLS files are now kept in Buildx state directory and reused if BuildKit instance needs to be recreated docker/buildx#824
  • Ensure different projects use separate destination directories for incremental context transfer for better performance docker/buildx#817
  • Build containers are now placed on separate cgroup by default docker/buildx#782
  • Bake now prints the default group with --print docker/buildx#720
  • docker driver now dials build session over HTTP for better performance docker/buildx#804


  • Fix using --iidfile together with a multi-node push docker/buildx#826
  • Using --push in Bake does not clear other image export options in the file docker/buildx#773
  • Fix Git URL detection for buildx bake when https protocol was used docker/buildx#822
  • Fix pushing image with multiple names on multi-node builds docker/buildx#815
  • Avoid showing --builder flags for commands that don’t use it docker/buildx#818
  • Unsupported build flags now show a warning docker/buildx#810
  • Fix reporting error details in some OpenTelemetry traces docker/buildx#812

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix BuildKit state volume location for Windows clients docker/buildx#751

For more details, see the complete release notes in the Buildx GitHub repository.



  • Fix connection error showing up in some SSH configurations docker/buildx#741

For more details, see the complete release notes in the Buildx GitHub repository.




  • Set ConfigFile to parse compose files with Bake docker/buildx#704


  • Duplicate progress env var docker/buildx#693
  • Should ignore nil client docker/buildx#686

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Support for OpenTelemetry traces and forwarding Buildx client traces to BuildKit docker/buildx#635
  • Experimental GitHub Actions remote cache backend with --cache-to type=gha and --cache-from type=gha docker/buildx#535
  • New --metadata-file flag has been added to build and Bake command that allows saving build result metadata in JSON format docker/buildx#605
  • This is the first release supporting Windows ARM64 docker/buildx#654
  • This is the first release supporting Linux Risc-V docker/buildx#652
  • Bake now supports building from remote definition with local files or another remote source as context docker/buildx#671
  • Bake now allows variables to reference each other, and user functions can be used in variables and vice versa docker/buildx#575 docker/buildx#539 docker/buildx#532
  • Bake allows defining attributes in the global scope docker/buildx#541
  • Bake allows variables across multiple files docker/buildx#538
  • New quiet mode has been added to progress printer docker/buildx#558
  • kubernetes driver now supports defining resources/limits docker/buildx#618
  • Buildx binaries can now be accessed through buildx-bin Docker image docker/buildx#656


  • docker-container driver now keeps BuildKit state in a volume, enabling updates while keeping state docker/buildx#672
  • Compose parser is now based on new compose-go parser fixing support for some newer syntax docker/buildx#669
  • SSH socket is now automatically forwarded when building an ssh-based git URL docker/buildx#581
  • Bake HCL parser has been rewritten docker/buildx#645
  • Extend HCL support with more functions docker/buildx#491 docker/buildx#503
  • Allow secrets from environment variables docker/buildx#488
  • Builds with an unsupported multi-platform and load configuration now fail fast docker/buildx#582
  • Store Kubernetes config file to make buildx builder switchable docker/buildx#497
  • Kubernetes now lists all pods as nodes on inspection docker/buildx#477
  • Default Rootless image has been set to moby/buildkit:buildx-stable-1-rootless docker/buildx#480


  • imagetools create command now correctly merges JSON descriptor with old one docker/buildx#592
  • Fix building with --network=none not requiring extra security entitlements docker/buildx#531

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix regression on setting --platform on buildx create outside kubernetes driver docker/buildx#475

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • The docker driver now supports the --push flag docker/buildx#442
  • Bake supports inline Dockerfiles docker/buildx#398
  • Bake supports building from remote URLs and Git repositories docker/buildx#398
  • BUILDX_CONFIG env var allow users to have separate buildx state from Docker config docker/buildx#385
  • BUILDKIT_MULTI_PLATFORM build arg allows forcing multi-platform return objects even if only one --platform is specified docker/buildx#467


  • Allow --append to be used with kubernetes driver docker/buildx#370
  • Build errors show error location in source files and system stacktraces with --debug docker/buildx#389
  • Bake formats HCL errors with source definition docker/buildx#391
  • Bake allows empty string values in arrays that will be discarded docker/buildx#428
  • You can now use the Kubernetes cluster config with the kubernetes driver docker/buildx#368 docker/buildx#460
  • Creates a temporary token for pulling images instead of sharing credentials when possible docker/buildx#469
  • Ensure credentials are passed when pulling BuildKit container image docker/buildx#441 docker/buildx#433
  • Disable user namespace remapping in docker-container driver docker/buildx#462
  • Allow --builder flag to switch to default instance docker/buildx#425
  • Avoid warn on empty BUILDX_NO_DEFAULT_LOAD config value docker/buildx#390
  • Replace error generated by quiet option by a warning docker/buildx#403
  • CI has been switched to GitHub Actions docker/buildx#451 docker/buildx#463 docker/buildx#466 docker/buildx#468 docker/buildx#471


  • Handle lowercase Dockerfile name as a fallback for backward compatibility docker/buildx#444

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Support cacheonly exporter docker/buildx#337


  • Update go-cty to pull in more stdlib functions docker/buildx#277
  • Improve error checking on load docker/buildx#281


  • Fix parsing json config with HCL docker/buildx#280
  • Ensure --builder is wired from root options docker/buildx#321
  • Remove warning for multi-platform iidfile docker/buildx#351

For more details, see the complete release notes in the Buildx GitHub repository.




  • Fix regression on flag parsing docker/buildx#268
  • Fix using pull and no-cache keys in HCL targets docker/buildx#268

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Add kubernetes driver docker/buildx#167
  • New global --builder flag to override builder instance for a single command docker/buildx#246
  • New prune and du commands for managing local builder cache docker/buildx#249
  • You can now set the new pull and no-cache options for HCL targets docker/buildx#165


  • Upgrade Bake to HCL2 with support for variables and functions docker/buildx#192
  • Bake now supports --load and --push docker/buildx#164
  • Bake now supports wildcard overrides for multiple targets docker/buildx#164
  • Container driver allows setting environment variables via driver-opt docker/buildx#170

For more details, see the complete release notes in the Buildx GitHub repository.




  • Handle copying unix sockets instead of erroring docker/buildx#155 moby/buildkit#1144


  • Running Bake with multiple Compose files now merges targets correctly docker/buildx#134
  • Fix bug when building a Dockerfile from stdin ( build -f - ) docker/buildx#153

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Custom buildkitd daemon flags docker/buildx#102
  • Driver-specific options on create docker/buildx#122


  • Environment variables are used in Compose files docker/buildx#117
  • Bake now honors --no-cache and --pull docker/buildx#118
  • Custom BuildKit config file docker/buildx#121
  • Entitlements support with build --allow docker/buildx#104


  • Fix bug where --build-arg foo would not read foo from environment docker/buildx#116

For more details, see the complete release notes in the Buildx GitHub repository.




  • Change Compose file handling to require valid service specifications docker/buildx#87

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • Add BUILDKIT_PROGRESS env var docker/buildx#69
  • Add local platform docker/buildx#70


  • Keep arm variant if one is defined in the config docker/buildx#68
  • Make dockerfile relative to context docker/buildx#83


  • Fix parsing target from compose files docker/buildx#53

For more details, see the complete release notes in the Buildx GitHub repository.



New features

  • First release

For more details, see the complete release notes in the Buildx GitHub repository.

ACI integration Compose features

Compose - Azure Container Instances mapping

This document outlines the conversion of an application defined in a Compose file to ACI objects. At a high level, each Compose deployment is mapped to a single ACI container group, and each service is mapped to a container in the container group. The Docker ACI integration does not allow scaling of services.

Compose fields mapping

The table below lists supported Compose file fields and their ACI counterparts.


  • ✓: Implemented
  • n: Not yet implemented
  • x: Not applicable / no available conversion
| Keys | Map | Notes |
|---|---|---|
| Service | ✓ | |
| service.build | x | Ignored. No image build support on ACI. |
| service.cap_add, cap_drop | x | |
| service.command | ✓ | Overrides the container command. On ACI, specifying command overrides the image command and entrypoint, if the image has a command or entrypoint defined. |
| service.configs | x | |
| service.cgroup_parent | x | |
| service.container_name | x | Service name is used as container name on ACI. |
| service.credential_spec | x | |
| service.deploy | ✓ | |
| service.deploy.endpoint_mode | x | |
| service.deploy.mode | x | |
| service.deploy.replicas | x | Only one replica is started for each service. |
| service.deploy.placement | x | |
| service.deploy.update_config | x | |
| service.deploy.resources | ✓ | Restriction: ACI resource limits cannot be greater than the sum of resource reservations for all containers in the container group. Using container limits that are greater than container reservations causes containers in the same container group to compete for resources. |
| service.deploy.restart_policy | ✓ | One of: any , none , on-failure . Restriction: all services must have the same restart policy. The entire ACI container group will be restarted if needed. |
| service.deploy.labels | x | ACI does not have container-level labels. |
| service.devices | x | |
| service.depends_on | x | |
| service.dns | x | |
| service.dns_search | x | |
| service.domainname | ✓ | Mapped to ACI DNSLabelName. Restriction: all services must specify the same domainname , if specified. domainname must be unique globally in .azurecontainer.io |
| service.tmpfs | x | |
| service.entrypoint | x | ACI only supports overriding the container command. |
| service.env_file | ✓ | |
| service.environment | ✓ | |
| service.expose | x | |
| service.extends | x | |
| service.external_links | x | |
| service.extra_hosts | x | |
| service.group_add | x | |
| service.healthcheck | ✓ | |
| service.hostname | x | |
| service.image | ✓ | Private images are accessible if the user is logged into the corresponding registry at deploy time. Users are automatically logged in to Azure Container Registry using their Azure login if possible. |
| service.isolation | x | |
| service.labels | x | ACI does not have container-level labels. |
| service.links | x | |
| service.logging | x | |
| service.network_mode | x | |
| service.networks | x | Communication between services is implemented by defining a mapping for each service in the shared /etc/hosts file of the container group. Each service can resolve names for other services, and the resulting network calls are redirected to localhost . |
| service.pid | x | |
| service.ports | ✓ | Only symmetrical port mapping is supported in ACI. See Exposing ports. |
| service.secrets | ✓ | See Secrets. |
| service.security_opt | x | |
| service.stop_grace_period | x | |
| service.stop_signal | x | |
| service.sysctls | x | |
| service.ulimits | x | |
| service.userns_mode | x | |
| service.volumes | ✓ | Mapped to Azure file shares. See Persistent volumes. |
| service.restart | x | Replaced by service.deploy.restart_policy |
| Volume | x | |
| driver | ✓ | See Persistent volumes. |
| driver_opts | ✓ | |
| external | x | |
| labels | x | |
| Secret | x | |
| Config | x | |


Logs

Container logs can be obtained for each container with docker logs <CONTAINER> . The Docker ACI integration does not currently support aggregated logs for containers in a Compose application; see https://github.com/docker/compose-cli/issues/803.

Exposing ports

When one or more services expose ports, the entire ACI container group is exposed and gets a public IP allocated. As all services are mapped to containers in the same container group, only one service can expose a given port number. ACI does not support port mapping, so the source and target ports defined in the Compose file must be the same.

When exposing ports, a service can also specify the domainname field to set a DNS hostname. domainname is used to set the ACI DNS Label Name, and the ACI container group will be reachable at <domainname>.<region>.azurecontainer.io. All services specifying a domainname must set the same value, as it is applied to the entire container group. domainname must be unique globally in .azurecontainer.io

Persistent volumes

Docker volumes are mapped to Azure file shares. Only the long Compose volume format is supported, meaning that volumes must be defined in the volumes section. Volumes are defined with a name, the driver field must be set to azure_file , and driver_opts must define the storage account and file share to use for the volume. A service can then reference the volume by its name and specify the target path to be mounted in the container.

services:
    myservice:
        image: nginx
        volumes:
            - mydata:/mount/testvolumes

volumes:
    mydata:
        driver: azure_file
        driver_opts:
            share_name: myfileshare
            storage_account_name: mystorageaccount

The short volume syntax is not allowed for ACI volumes, as it was designed for local path bind mounting when running local containers. A Compose file can define several volumes, with different Azure file shares or storage accounts.

Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.


Secrets

Secrets can be defined in Compose files, and need secret files available at deploy time next to the Compose file. The content of each secret file is made available inside selected containers, by default under /run/secrets/<SECRET_NAME> . External secrets are not supported with the ACI integration.

services:
    nginx:
        image: nginx
        secrets:
          - mysecret1
    db:
        image: mysql
        secrets:
          - mysecret2

secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt

The nginx container will have mysecret1 mounted as /run/secrets/mysecret1 , and the db container will have mysecret2 mounted as /run/secrets/mysecret2 .

A target can also be specified, either to rename the mounted file or to set an absolute path where the secret file should be mounted:

services:
    nginx:
        image: nginx
        secrets:
          - source: mysecret1
            target: renamedsecret1.txt
    db:
        image: mysql
        secrets:
          - source: mysecret1
            target: /mnt/dbmount/mysecretonmount1.txt
          - source: mysecret2
            target: /mnt/dbmount/mysecretonmount2.txt

secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt

In this example, the nginx service will have its secret mounted as /run/secrets/renamedsecret1.txt , and db will have two files ( mysecretonmount1.txt and mysecretonmount2.txt ), both mounted in the same folder ( /mnt/dbmount/ ).

Note: Relative file paths are not allowed in the target

Note: Secret files cannot be mounted in a folder next to other existing files

Container Resources

CPU and memory reservations and limits can be set in Compose. Resource limits must be greater than reservations. In ACI, setting resource limits different from resource reservations causes containers in the same container group to compete for resources. Resource limits cannot be greater than the total resource reservation for the container group; therefore, a single container cannot have resource limits different from its resource reservations.

services:
  db:
    image: mysql
    deploy:
      resources:
        reservations:
          cpus: '2'
          memory: 2G
        limits:
          cpus: '3'
          memory: 3G
  web:
    image: nginx
    deploy:
      resources:
        reservations:
          cpus: '1.5'
          memory: 1.5G

In this example, the db container is allocated 2 CPUs and 2G of memory, and is allowed to use up to 3 CPUs and 3G of memory, borrowing some of the resources reserved for the web container. The web container's limits default to the same values as its reservations.


Healthchecks

A health check can be described in the healthcheck section of each service. This is translated to a LivenessProbe in ACI. If the health check fails, the container is considered unhealthy and is terminated. For the container to be restarted automatically, the service needs a restart policy other than none . Note that if no restart policy is set, the default is any .

services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

Note: The test command can be a string or an array, optionally starting with NONE , CMD , or CMD-SHELL . In the ACI implementation, these prefixes are ignored.
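For illustration, the same check could be written in string form; this snippet is a sketch, not from the original example:

```yaml
healthcheck:
  # String form; any NONE / CMD / CMD-SHELL prefix is ignored on ACI
  test: curl -f http://localhost:80
```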

ACI integration container features


Azure Container Instances: running single containers

Single containers can be executed on ACI with the docker run command. Each container runs in its own ACI container group, which contains only that container.

Containers can be listed with the docker ps command, and stopped and removed with docker stop <CONTAINER> and docker rm <CONTAINER> .
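A minimal lifecycle might look like this (the container name is an example):

```shell
docker run -d --name myaciapp -p 80:80 nginx   # runs in its own ACI container group
docker ps                                      # list containers running on ACI
docker stop myaciapp
docker rm myaciapp
```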

Docker run options for ACI containers

The table below lists supported docker run flags and their ACI counterparts.


  • ✓: Implemented
  • n: Not yet implemented
  • x: Not applicable / no available conversion
Flag Map Notes
--cpus ✓ See Container Resources.
-d, --detach ✓ Detach from container logs when the container starts. By default, the command line stays attached and follows container logs.
--domainname ✓ See Exposing ports.
-e, --env ✓ Sets environment variables.
--env-file ✓ Sets environment variables from an external file.
--health-cmd ✓ Specify healthcheck command. See Healthchecks.
--health-interval ✓ Specify healthcheck interval
--health-retries ✓ Specify healthcheck number of retries
--health-start-period ✓ Specify healthcheck initial delay
--health-timeout ✓ Specify healthcheck timeout
-l, --label x Unsupported in Docker ACI integration, due to limitations of ACI Tags.
-m, --memory ✓ See Container Resources.
--name ✓ Provide a name for the container. The name must be unique within the ACI resource group. A name is generated by default.
-p, --publish ✓ See Exposing ports. Only symmetrical port mapping is supported in ACI.
--restart ✓ Restart policy, must be one of: always , no , on-failure .
--rm x Not supported as ACI does not support auto-delete containers.
-v, --volume ✓ See Persistent Volumes.

Exposing ports

You can expose one or more ports of a container with docker run -p <PORT>:<PORT> . If ports are exposed when running a container, the corresponding ACI container group is exposed with a public IP allocated and the required port(s) accessible.

Note: ACI does not support port mapping, so the same port number must be specified when using -p <PORT>:<PORT> .

When exposing ports, a container can also use the --domainname flag to set a DNS hostname. domainname is used as the ACI DNS Label Name, and the ACI container group will be reachable at <DOMAINNAME>.<REGION>.azurecontainer.io . domainname must be globally unique in .azurecontainer.io .
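For example (the label is hypothetical and must be globally unique):

```shell
docker run -d --domainname myuniquelabel -p 80:80 nginx
# the container becomes reachable at http://myuniquelabel.<REGION>.azurecontainer.io
```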

Persistent volumes

Docker volumes are mapped to Azure file shares; each file share is part of an Azure storage account. One or more volumes can be specified with docker run -v <STORAGE-ACCOUNT>/<FILESHARE>:<TARGET-PATH> .

A run command can use the --volume or -v flag several times for different volumes. The volumes can use the same or different storage accounts. The target paths for different volume mounts must be different and not overlap. There is no support for mounting a single file, or mounting a subfolder from an Azure File Share.
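As a sketch, mounting two file shares from the same storage account (all names are examples):

```shell
docker run -v mystorageaccount/myfileshare:/app/data \
           -v mystorageaccount/otherfileshare:/app/static \
           nginx
```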

Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.

Container Resources

CPU and memory reservations can be set when running containers with docker run --cpus 1.5 --memory 2G .

It is not possible to set resource limits that differ from resource reservation on single containers. ACI allows setting resource limits for containers in a container group but these limits must stay within the reserved resources for the entire group. In the case of a single container deployed in a container group, the resource limits must be equal to the resource reservation.


Logs

You can view container logs with the command docker logs <CONTAINER-ID> .

You can follow logs with the --follow ( -f ) option. When running a container with docker run , the command line stays attached to container logs by default once the container starts. Use docker run --detach to avoid following logs after the container starts.
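For example:

```shell
docker logs mycontainer            # dump the logs collected so far
docker logs --follow mycontainer   # follow logs (pulled roughly every 2 seconds on ACI)
```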

Note: Following ACI logs may have display issues, especially when resizing a terminal that is following container logs. This is because ACI provides raw log pulling but no streaming of logs. When following logs, they are effectively pulled every 2 seconds.


Healthchecks

A health check can be described using the flags prefixed by --health- . This is translated into a LivenessProbe for ACI. If the health check fails, the container is considered unhealthy and is terminated. For the container to be restarted automatically, it needs to be run with a restart policy (set by the --restart flag) other than no . Note that if no restart policy is set, the default is no .
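Putting the health flags together, a run command might look like this (the command and values are illustrative):

```shell
docker run --health-cmd "curl -f http://localhost:80" \
           --health-interval 30s \
           --health-retries 3 \
           --health-timeout 10s \
           --restart on-failure \
           nginx
```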

