Introduction to Atomist

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

Atomist is a data and automation platform for managing the software supply chain. It extracts metadata from container images, evaluates the data, and helps you understand the state of the image.

Integrating Atomist into your systems and repositories grants you essential information about the images you build, and the containers running in production. Beyond collecting and visualizing information, Atomist can help you further by giving you recommendations, notifications, validation, and more.

Example capabilities made possible with Atomist are:

  • Stay up to date with advisory databases without having to re-analyze your images.
  • Automatically open pull requests to update base images for improved product security.
  • Check that your applications don’t contain secrets, such as a password or API token, before they get deployed.
  • Dissect Dockerfiles and see where vulnerabilities come from, line by line.

How it works

Atomist monitors your container registry for new images. When it finds a new image, it analyzes and extracts metadata about the image contents and any base images used. The metadata is uploaded to an isolated partition in the Atomist data plane where it’s securely stored.

The Atomist data plane is a combination of metadata and a large knowledge graph of public software and vulnerability data. Atomist determines the state of your container by overlaying the image metadata with the knowledge graph.

What’s next?

Head over to the Try Atomist page for instructions on how to run Atomist locally, with no strings attached.

Track deployments

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

By integrating Atomist with a runtime environment, you can track vulnerabilities for deployed containers. This gives you context on whether your security debt is increasing or decreasing.

There are several options for how you could implement deployment tracking:

  • Invoking the API directly
  • Adding it as a step in your continuous deployment pipeline
  • Creating Kubernetes admission controllers

API

Each Atomist workspace exposes an API endpoint. Submitting a POST request to the endpoint updates Atomist about what image you are running in your environments. This lets you compare data for images you build against images of containers running in staging or production.

You can find the API endpoint URL on the Integrations page. Using this API requires an API key.

The most straightforward use is to post to this endpoint using a webhook. When deploying a new image, submit an automated POST request (using curl, for example) as part of your deployment pipeline.

$ curl <api-endpoint-url> \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-token>" \
  -d '{"image": {"url": "<image-url>@<sha256-digest>"}}'

Parameters

The API supports the following parameters in the request body:

{
  "image": {
    "url": "string",
    "name": "string"
  },
  "environment": {
    "name": "string"
  },
  "platform": {
    "os": "string",
    "architecture": "string",
    "variant": "string"
  }
}
  • image.url (required): Fully qualified reference name of the image, plus version (digest). You must specify the image version by digest.
  • image.name (optional): Optional identifier. If you deploy many containers from the same image in any one environment, each instance must have a unique name.
  • environment.name (optional, default: deployed): Use custom environment names to track different image versions in environments, such as staging and production.
  • platform.os (optional, default: linux): Image operating system.
  • platform.architecture (optional, default: amd64): Instruction set architecture.
  • platform.variant (optional): Optional variant label.
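
As an illustration, the following sketch reports a hypothetical image as deployed to a production environment, filling in the optional fields. The endpoint URL, token, image reference, and digest are placeholders, and docker inspect is shown only as one way to obtain a digest-qualified reference for an image you already have locally:

$ docker inspect --format '{{index .RepoDigests 0}}' docker.io/example/webapp:1.2.3
# prints something like: docker.io/example/webapp@sha256:<digest>

$ curl <api-endpoint-url> \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-token>" \
  -d '{
    "image": {
      "url": "docker.io/example/webapp@sha256:<digest>",
      "name": "webapp-1"
    },
    "environment": {
      "name": "production"
    },
    "platform": {
      "os": "linux",
      "architecture": "amd64"
    }
  }'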
Integrate with GitHub

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

When installed for a GitHub organization, the Atomist GitHub app links repository activity to images. This enables Atomist to relate image tags and digests directly to specific commits in the source repository. It also opens up the possibility to incorporate image analysis into your Git workflow, for example by adding analysis checks to pull requests, or by automatically raising pull requests that update and pin base image versions.

Install the GitHub app in the organization that contains the source code repositories for your Docker images.

Connect to GitHub

  1. Go to https://dso.docker.com/ and sign in using your Docker ID.
  2. Open the Repositories tab.
  3. Select Connect to GitHub and follow the authorization flow. This installs the Atomist GitHub App.

    (Screenshot: install the GitHub app)

  4. Install the app.

    Note

    If your GitHub account is a member of one or more organizations, GitHub prompts you to choose which account to install the app into. Select the account that contains the source repositories for your images.

    After installing the app, GitHub redirects you back to Atomist.

  5. In the repository selection menu, select which repositories you want Atomist to start watching.

    (Screenshot: activate repositories)

    If you are just looking to evaluate Atomist, start by selecting a few repositories. Once you are comfortable using Atomist, you can switch on the integration for all repositories. Selecting All repositories also includes any repositories created in the future.

    Important

    If Atomist detects FROM commands in Dockerfiles in the selected repositories, it begins raising automated pull requests. The pull requests update the Dockerfile FROM line to specify the image version as a digest (see the example after these steps).

  6. Select Save selection.
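
To illustrate what such a pull request changes, here is a rough before-and-after sketch of a pinned FROM line; the tag and digest are hypothetical placeholders, not values Atomist will necessarily produce:

# Before: floating tag, which can change as the upstream image is rebuilt
FROM alpine:3.16

# After: the same tag pinned to an immutable digest (placeholder value)
FROM alpine:3.16@sha256:<digest>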

Atomist is now connected with your GitHub repositories and is able to link image analyses with Git commits.

Manage repository access

If you wish to add or remove repository access for Atomist, go to the Repositories page.

  • Select All repositories if you want to enable Atomist for all connected organizations and repositories.
  • Select Only select repositories if you want to provision access to only a subset of repositories.

Disconnect from GitHub

You might want to disconnect from GitHub when:

  • You want to change which GitHub organization or account is connected to your Atomist workspace.

    To do so, disconnect the old GitHub organization or account first. Then, follow the instructions for connecting to GitHub for the new GitHub organization or account.

  • You want to remove Atomist access to a GitHub organization or account when you no longer use Atomist.

To disconnect a GitHub account:

  1. Go to Repositories and select the Disconnect link. This removes the connection to your GitHub organization or account.
  2. Go to the GitHub Applications settings page, then:
  3. Find atomist on the Installed GitHub Apps tab.
  4. Select Configure.
  5. Select Uninstall. This removes the installation of the Atomist GitHub App from your GitHub organization or account.
  6. Find atomist on the Authorized GitHub Apps tab.
  7. Select Revoke. This removes the authorization of the Atomist GitHub App from your GitHub organization or account.

Try Atomist

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

The quickest way to try Atomist is to run it on your local images, as a CLI tool. Trying it locally eliminates the need to integrate with and connect to a remote container registry. The CLI uses your local Docker daemon directly to upload the Software Bill of Materials (SBOM) to the Atomist control plane for analysis.

Prerequisites

Before you can begin the setup, you need a Docker ID. If you don’t already have one, you can register here.

Steps

Note

Only use this CLI-based method of indexing images for testing or trial purposes. For further evaluation or production use, integrate Atomist with your container registry. See get started.

  1. Go to the Atomist website and sign in using your Docker ID.
  2. Open the Integrations tab.
  3. Under API Keys, create a new API key.
  4. In your terminal of choice, invoke the Atomist CLI tool using docker run. Update the following values:

    • --workspace: the workspace ID found on the Integrations page on the Atomist website.
    • --api-key: the API key you just created.
    • --image: the Docker image that you want to index.
    docker run \
       -v /var/run/docker.sock:/var/run/docker.sock \
       -ti atomist/docker-registry-broker:latest \
       index-image local \
       --workspace AQ1K5FIKA \
       --api-key team::6016307E4DF885EAE0579AACC71D3507BB38E1855903850CF5D0D91C5C8C6DC0 \
       --image docker.io/david/myimage:latest
    

    Note

    The image must have a tag (for example, myimage:latest) so that you can identify the image later.

    The output should be similar to the following:

    [info] Starting session with correlation-id c12e08d3-3bcc-4475-ab21-7114da599eaf
    [info] Starting atomist/docker-vulnerability-scanner-skill 'index_image' (1f99caa) atomist/skill:0.12.0-main.44 (fe90e3c) nodejs:16.15.0
    [info] Indexing image python:latest
    [info] Downloading image
    [info] Download completed
    [info] Indexing completed
    [info] Mapped packages to layers
    [info] Indexing completed successfully
    [info] Transacting image manifest for docker.io/david/myimage:latest with digest sha256:a8077d2b2ff4feb1588d941f00dd26560fe3a919c16a96305ce05f7b90f388f6
    [info] Successfully transacted entities in team AQ1K5FIKA
    [info] Image URL is https://dso.atomist.com/AQ1K5FIKA/overview/images/myimage/digests/sha256:a8077d2b2ff4feb1588d941f00dd26560fe3a919c16a96305ce05f7b90f388f6
    [info] Transacting SBOM...
    [info] Successfully transacted entities in team AQ1K5FIKA
    [info] Transacting SBOM...
    
  5. When the command exits, open the Atomist web UI, where you should see the image in the list.

    (Screenshot: indexed image in the image overview list)

  6. Select the image name. This takes you to the list of image tags.

    (Screenshot: list of image tags)

    Since this is your first time indexing this image, the list only has one tag for now. When you integrate Atomist with your container registry, images and tags show up in this list automatically.

  7. Select the tag name. This shows you the insights for this tag.

    (Screenshot: vulnerability breakdown view)

    In this view, you can see how many vulnerabilities this image has, their severity levels, whether there is a fix version available, and more.

Where to go next

The tutorial ends here. Take some time to explore the different data views that Atomist presents about your image. When you’re ready, head to the get started guide to learn how to start integrating Atomist in your software supply chain.

Defining additional build contexts and linking targets

In addition to the main context key that defines the build context, each target can also define additional named contexts using a map under the contexts key. These values map to the --build-context flag in the build command.

Inside the Dockerfile, these contexts can be used with the FROM instruction or the --from flag.

The value can be a local source directory, a container image (with the docker-image:// prefix), a Git URL, an HTTP URL, or the name of another target in the Bake file (with the target: prefix).

Pinning alpine image

# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello world"
# docker-bake.hcl
target "app" {
  contexts = {
    alpine = "docker-image://alpine:3.13"
  }
}
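
For comparison, the same named context could be supplied directly on the command line; a minimal sketch of the equivalent buildx invocation for the Dockerfile above:

$ docker buildx build --build-context alpine=docker-image://alpine:3.13 .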

Using a secondary source directory

# syntax=docker/dockerfile:1
FROM scratch AS src

FROM golang
COPY --from=src . .
# docker-bake.hcl
target "app" {
  contexts = {
    src = "../path/to/source"
  }
}

Using a result of one target as a base image in another target

To use the result of one target as a build context for another, specify the target name with the target: prefix.

# syntax=docker/dockerfile:1
FROM baseapp
RUN echo "Hello world"
# docker-bake.hcl
target "base" {
  dockerfile = "baseapp.Dockerfile"
}

target "app" {
  contexts = {
    baseapp = "target:base"
  }
}

Note that in most cases you should just use a single multi-stage Dockerfile with multiple targets for similar behavior. This approach is recommended only when you have multiple Dockerfiles that can't easily be merged into one.
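
For comparison, a minimal sketch of the single multi-stage Dockerfile alternative mentioned above; the stage names base and app are illustrative:

# syntax=docker/dockerfile:1
FROM alpine AS base
# ...shared setup that would otherwise live in baseapp.Dockerfile...

FROM base AS app
RUN echo "Hello world"

Individual stages can then be selected per target with the target attribute in the Bake file, without maintaining a separate Dockerfile for the base image.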

Building from Compose file

Specification

Bake uses the compose-spec to parse a compose file and translate each service to a target.

# docker-compose.yml
services:
  webapp-dev: 
    build: &build-dev
      dockerfile: Dockerfile.webapp
      tags:
        - docker.io/username/webapp:latest
      cache_from:
        - docker.io/username/webapp:cache
      cache_to:
        - docker.io/username/webapp:cache

  webapp-release:
    build:
      <<: *build-dev
      x-bake:
        platforms:
          - linux/amd64
          - linux/arm64

  db:
    image: docker.io/username/db
    build:
      dockerfile: Dockerfile.db
$ docker buildx bake --print
{
  "group": {
    "default": {
      "targets": [
        "db",
        "webapp-dev",
        "webapp-release"
      ]
    }
  },
  "target": {
    "db": {
      "context": ".",
      "dockerfile": "Dockerfile.db",
      "tags": [
        "docker.io/username/db"
      ]
    },
    "webapp-dev": {
      "context": ".",
      "dockerfile": "Dockerfile.webapp",
      "tags": [
        "docker.io/username/webapp:latest"
      ],
      "cache-from": [
        "docker.io/username/webapp:cache"
      ],
      "cache-to": [
        "docker.io/username/webapp:cache"
      ]
    },
    "webapp-release": {
      "context": ".",
      "dockerfile": "Dockerfile.webapp",
      "tags": [
        "docker.io/username/webapp:latest"
      ],
      "cache-from": [
        "docker.io/username/webapp:cache"
      ],
      "cache-to": [
        "docker.io/username/webapp:cache"
      ],
      "platforms": [
        "linux/amd64",
        "linux/arm64"
      ]
    }
  }
}

Unlike the HCL format, there are some limitations with the compose format:

  • Specifying variables or global scope attributes is not yet supported
  • The inherits service field is not supported, but you can use YAML anchors to reference other services, as in the example above

.env file

You can declare default environment variables in an environment file named .env. This file is loaded from the current working directory where the command is executed, and applied to Compose definitions passed with -f.

# docker-compose.yml
services:
  webapp:
    image: docker.io/username/webapp:${TAG:-v1.0.0}
    build:
      dockerfile: Dockerfile
# .env
TAG=v1.1.0
$ docker buildx bake --print
{
  "group": {
    "default": {
      "targets": [
        "webapp"
      ]
    }
  },
  "target": {
    "webapp": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "tags": [
        "docker.io/username/webapp:v1.1.0"
      ]
    }
  }
}

Note

System environment variables take precedence over environment variables in the .env file.
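
For example, setting the variable in the shell when invoking Bake overrides the value from .env; a minimal sketch using the Compose file above:

$ TAG=v1.2.0 docker buildx bake --print
# The printed "webapp" target now resolves the tag from the shell variable:
#   "tags": ["docker.io/username/webapp:v1.2.0"]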

Extension field with x-bake

Even if some fields are not (yet) available in the compose specification, you can use the special extension field x-bake in your compose file to evaluate extra fields:

# docker-compose.yml
services:
  addon:
    image: ct-addon:bar
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        tags:
          - ct-addon:foo
          - ct-addon:alp
        platforms:
          - linux/amd64
          - linux/arm64
        cache-from:
          - user/app:cache
          - type=local,src=path/to/cache
        cache-to:
          - type=local,dest=path/to/cache
        pull: true

  aws:
    image: ct-fake-aws:bar
    build:
      dockerfile: ./aws.Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        secret:
          - id=mysecret,src=./secret
          - id=mysecret2,src=./secret2
        platforms: linux/arm64
        output: type=docker
        no-cache: true
$ docker buildx bake --print
{
  "group": {
    "default": {
      "targets": [
        "aws",
        "addon"
      ]
    }
  },
  "target": {
    "addon": {
      "context": ".",
      "dockerfile": "./Dockerfile",
      "args": {
        "CT_ECR": "foo",
        "CT_TAG": "bar"
      },
      "tags": [
        "ct-addon:foo",
        "ct-addon:alp"
      ],
      "cache-from": [
        "user/app:cache",
        "type=local,src=path/to/cache"
      ],
      "cache-to": [
        "type=local,dest=path/to/cache"
      ],
      "platforms": [
        "linux/amd64",
        "linux/arm64"
      ],
      "pull": true
    },
    "aws": {
      "context": ".",
      "dockerfile": "./aws.Dockerfile",
      "args": {
        "CT_ECR": "foo",
        "CT_TAG": "bar"
      },
      "tags": [
        "ct-fake-aws:bar"
      ],
      "secret": [
        "id=mysecret,src=./secret",
        "id=mysecret2,src=./secret2"
      ],
      "platforms": [
        "linux/arm64"
      ],
      "output": [
        "type=docker"
      ],
      "no-cache": true
    }
  }
}

Complete list of valid fields for x-bake:

  • cache-from
  • cache-to
  • contexts
  • no-cache
  • no-cache-filter
  • output
  • platforms
  • pull
  • secret
  • ssh
  • tags