
Foundation Platforms 2.12.x

Product Release Date: 2022-09-28

Last updated: 2022-09-28

Open Source Software For Foundation Platforms Submodule 2.12.1

For more information about Foundation Platforms Submodule 2.12.1 open source licensing, see Open Source Licenses for Foundation Platforms 2.12.1.

Open Source Software For Foundation Platforms Submodule 2.12

For more information about Foundation Platforms Submodule 2.12 open source licensing, see Open Source Licenses for Foundation Platforms 2.12.

Frame Documentation

Frame Hosted

Last updated: 2022-02-21

Frame Documentation

For Frame documentation, see https://docs.frame.nutanix.com/

Karbon Platform Services Administration Guide

Karbon Platform Services Hosted

Last updated: 2022-11-24

Karbon Platform Services Overview

Karbon Platform Services is a Kubernetes-based multicloud platform as a service that enables rapid development and deployment of microservice-based applications. These applications can range from simple stateful containerized applications to complex web-scale applications across any cloud.

In its simplest form, Karbon Platform Services consists of a Service Domain that encapsulates project, application, and services infrastructure, along with other supporting resources. It also incorporates project and administrator user access, and cloud and data pipelines to help converge edge and cloud data.

With Karbon Platform Services, you can:

  • Quickly build and deploy intelligent applications across a public or private cloud infrastructure.
  • Connect various mobile, highly distributed data sensors (like video cameras, temperature or pressure sensors, streaming devices, and so on) to help collect data.
  • Create intelligent applications using data connectors and machine learning modules (for example, implementing object recognition) to transform the data. An application can be as simple as text processing code or it could be advanced code implementing AI by using popular machine learning frameworks like TensorFlow.
  • Push this data to your Service Domain or the public cloud to be stored or otherwise made available.

This data can be stored at the Service Domain or published to the cloud. You can then create intelligent applications using data connectors and machine learning modules to consume the collected data. These applications can run on the Service Domain or the cloud where you have pushed collected data.

Nutanix provides the Service Domain initially as a VM appliance hosted in an AOS cluster running AHV or in an ESXi cluster. You manage infrastructure, resources, and Karbon Platform Services capabilities in a console accessible through a web browser.

Cloud Management Console and User Experience

As part of initial and ongoing configuration, you can define two user types: an infrastructure administrator and a project user. The cloud management console and user experience help create a more intuitive experience for infrastructure administrators and project users.

  • Admin Console drop-down menu for infra admins and Home Console drop-down menu for project users. Both menus provide a list of all current projects and 1-click access to an individual project. In the navigation sidebar, Projects also provides a navigation path to the overall list of projects.
  • Vertical tabs for most pages and dashboards. For example, clicking Administration > Logging shows the Logging dashboard with related Task and Alert tabs. The left navigation menu remains persistent to help eliminate excessive browser refreshes and aid overall navigation.
  • Simplified Service Domain Management. In addition to the standard Service Domain list showing all Service Domains, a consolidated dashboard for each Service Domain is available from Infrastructure > Service Domain. Karbon Platform Services also simplifies upgrade operations, available from Administration > Upgrades. For convenience, infra admins can choose when to download available versions to all or selected Service Domains, choose when to upgrade them, and check status for all or individual Service Domains.
  • Updated Workflows for Infrastructure Admins and Project Users. Apart from administration operations such as Service Domain, user management, and resource management, the infra admin and project user workflow is project-centric. A project encapsulates everything required to successfully implement and monitor the platform.
  • Updated GPU/vGPU Configuration Support. If your Service Domain includes access to a GPU/vGPU, you can choose its use case. When you create or update a Service Domain, you can specify exclusive access by any Kubernetes app or data pipeline, or by the AI Inferencing API (for example, if you are using ML Models).

Built-In App Services, Logging, and Alerting

Karbon Platform Services includes these ready-to-use built-in services, which provide an advantage over self-managed services:

  • Secure by design.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain helps monitor service health and raises service-specific alerts.

App Runtime Service

These services are enabled by default on each Service Domain. All services now have monitoring and status capabilities.

  • Kubernetes Apps. Container-as-a-Service to manage and run Kubernetes apps without worrying about Kubernetes complexity and having to manage the underlying infrastructure.
  • Functions and Data Pipelines. Function-as-a-Service to run serverless functions. Invoke functions based on data triggers and publish to service end points.
  • AI Inferencing - updated in this release. ML model management and AI Inferencing runtime, pooled through an abstraction of GPUs and hardware accelerators. Enable this service to use your machine learning (ML) models in your project.

Ingress Controller

Ingress controller configuration and management is now available from the cloud management console (as well as from the Karbon Platform Services kps command line). Options to enable and disable the Ingress controller are available in the user interface.

Traefik or Nginx-Ingress. Content-based routing, load balancing, SSL/TLS termination. If your project requires Ingress controller routing, you can choose the open source Traefik router or the NGINX Ingress controller to enable on your Service Domain. You can only enable one Ingress controller per Service Domain.
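
If you enable an Ingress controller, Kubernetes apps in your projects can declare routing rules with a standard Kubernetes Ingress resource. The following is an illustrative sketch only; the file, host, service, and port names are hypothetical placeholders, not values defined by Karbon Platform Services.

  # demo-ingress.yaml - route all HTTP paths on a placeholder host to a project service
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: demo-ingress
  spec:
    rules:
    - host: app.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: demo-svc
              port:
                number: 80

Apply it with kubectl (for example, from a session opened through the Service Domain SSH access described later in this guide): $ kubectl apply -f demo-ingress.yaml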

Service Mesh

Istio. Provides traffic management, secure connection, and telemetry collection for your applications.
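
For example, once Istio is enabled, an app can split traffic between two versions of a service with a standard Istio VirtualService. This is a generic sketch with hypothetical names; the subsets referenced here would be defined in a corresponding DestinationRule.

  # demo-virtualservice.yaml - send 10% of traffic to a canary version
  apiVersion: networking.istio.io/v1alpha3
  kind: VirtualService
  metadata:
    name: demo-vs
  spec:
    hosts:
    - demo-svc
    http:
    - route:
      - destination:
          host: demo-svc
          subset: v1
        weight: 90
      - destination:
          host: demo-svc
          subset: v2
        weight: 10

Apply with: $ kubectl apply -f demo-virtualservice.yaml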

Data Streaming | Messaging

  • Kafka. Options to enable and disable Kafka are available in the user interface. Persistent, high-performance data streaming with publish/subscribe and queue-based messaging. Available for use within project applications and data pipelines running on a Service Domain hosted in your environment.
  • NATS. In-memory, high-performance data streaming with publish/subscribe and queue-based messaging. Available for use within project applications and data pipelines.
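
As an illustration, an application container in a project could publish to and consume from a Kafka topic with a generic client such as kcat. The broker address and topic below are placeholders for whatever endpoint the Kafka service exposes to your project:

  # Publish a JSON message to a topic (placeholder broker address)
  $ echo '{"temp": 21.5}' | kcat -b kafka-broker:9092 -t sensor-readings -P
  # Consume messages from the same topic
  $ kcat -b kafka-broker:9092 -t sensor-readings -C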

Logging | Monitoring | Alerting | Audit Trail

  • Prometheus. Provides Kubernetes app monitoring and logging, alerting, metrics collection, and a time series data model (suitable for graphing).
  • Logging. Centralized real-time log monitoring, log bundle collection, and external log forwarding.
  • Log forwarding policies for all users. Create, edit, and delete log forwarding policies to help make collection more granular, and then forward those Service Domain logs to the cloud - for example, AWS CloudWatch.
  • Audit trail. Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Log dashboard in the cloud management console.
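
Because Prometheus exposes its standard HTTP query API, a monitoring script can pull metrics with a PromQL expression. This is a hedged sketch: the endpoint address is a placeholder, and the metric shown is the standard cAdvisor container CPU metric, which may be named differently in your deployment.

  # Query the per-container CPU usage rate over the last 5 minutes
  $ curl -s 'http://prometheus-endpoint:9090/api/v1/query' \
      --data-urlencode 'query=rate(container_cpu_usage_seconds_total[5m])'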

Infrastructure Administrator Role

Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.

The infrastructure administrator creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, services, data sources, and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them. This user has create/read/update/delete (CRUD) permissions for:

  • Categories
  • Cloud profiles
  • Container registry profiles
  • Data sources
  • Service Domains
  • Projects
  • Users
  • Logs

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.

When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

Figure. Infrastructure Administrator Role Capabilities

Project User Role

A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.


The project user has project-specific create/read/update/delete (CRUD) permissions: the project user can create, read, update, and delete the following resources and associate them with an existing project:

  • Kubernetes Apps
  • Functions (scripts)
  • Data pipelines
  • Runtime environments


Figure. Project User

Naming Guidelines

Data Pipelines and Functions
  • When you create, clone, or edit a function, you can define one or more parameters. When you create a data pipeline, you define values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
Service Domain, Data Source, and Data Pipeline Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid Service Domain name.
  • Maximum length of 63 characters
Data Source Topic and Field Naming
Topic and field names must be unique within the same data source type. You can duplicate names across different data source types. For example, an MQTT topic name and an RTSP protocol stream name can share /temperature/frontroom.
  • An MQTT topic name must be unique when creating an MQTT data source. You cannot duplicate or reuse an MQTT topic name for multiple MQTT data sources.
  • An RTSP protocol stream name must be unique when creating an RTSP data source. You cannot duplicate or reuse an RTSP protocol stream name for multiple RTSP data sources.
  • A GigE Vision data source name must be unique when creating a GigE Vision data source. You cannot duplicate or reuse a GigE Vision data source name for multiple GigE Vision data sources.
Container Registry Profile Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed.
  • Maximum length of 200 characters
All Other Resource Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid resource name.
  • Maximum length of 200 characters
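
Before creating a resource, you can sanity-check a name against these rules with a quick shell pattern test. This sketch encodes the 63-character Service Domain rule and is a convenience only, not part of the product (change the 61 to 198 for resources with the 200-character limit):

  $ echo "my.service-domain" | grep -Eq '^[a-z0-9]([a-z0-9.-]{0,61}[a-z0-9])?$' && echo valid || echo invalid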

Karbon Platform Services Cloud Management Console

The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).

You can log on with your My Nutanix or local user credentials.

Logging On to The Cloud Management Console

Before you begin

  • The supported web browsers are the current and two previous versions of Google Chrome.
  • If you are logging on for the first time, you might experience a guided onboarding workflow.
  • After three failed login attempts in one minute, you are locked out of your account for 30 minutes.
  • Users without My Nutanix credentials log on as a local user.

Procedure

  1. Open https://karbon.nutanix.com/ in a web browser.
  2. Choose one of the following to log on:
    • Click Login with My Nutanix and log on with your My Nutanix credentials.
    • Click Log In with Local User Account and log on with your project user or infrastructure administrator user name and password in the Username / Password fields.
    The web browser displays a role-specific dashboard.

Infrastructure Admin View

The default view for an infrastructure administrator is the Dashboard . Click the menu button in the view to expand and display all available pages in this view.

Figure. Default Infrastructure Administrator View

Admin Console / Projects
Drop-down menu that lets you quickly navigate to the Admin Console (home Dashboard) or any project.
Dashboard
Customized widget panel view of your infrastructure, projects, Service Domains, and alerts. Includes the Onboarding Dashboard panel, with links to create projects and project components such as Service Domains, cloud profiles, categories, and so on. Click Projects or Infrastructure to change the user dashboard view.
Services
Managed services dashboard. Add a service such as Istio to a project.
Alerts
Displays the Alerts page, which displays a sortable table of alerts associated with your Karbon Platform Services deployment.
Audit Trail
Access the Audit Log dashboard to view the most recent events or tasks performed in the console.
Projects
  • Displays a list of all projects.
  • Create projects from this page.
Infrastructure Submenu
Service Domains
  • Displays a list of available Service Domains.
  • Add an internet-connected Service Domain device or appliance.
  • You can create one or more Service Domains depending on your requirements and manage them from the Karbon Platform Services management console.
Data Sources and IoT Sensors
  • Displays a list of available or created data sources and sensors.
  • Create and define a data source (sensor or gateway) and associate it with a Service Domain.
Administration Submenu
Categories
  • Displays a list of available categories.
  • Create and define a category to logically group Service Domains, data sources, and other items.
Cloud Profiles
  • Displays a list of cloud profiles.
  • Add and define a cloud profile (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is to be stored.
Container Registry Profiles
  • Displays a list of Docker container registry profiles.
  • Add and define a Docker container registry profile. This profile can optionally be associated with an existing cloud profile.
Users
  • Displays a list of users.
  • Create infrastructure administrators and project users.
System Logs
Run Log Collector on one or more connected Service Domains and then download the collected log bundle.
Upgrades
Apply software updates for your Service Domain when updates are available.
Help
Provides a link to the latest online documentation.

Project User View

The default view for a project user is the Dashboard .

  • Click the menu button in the view to expand and display all available pages in this view.
Figure. Default Project User View

Dashboard
Customized widget panel view of your infrastructure, projects, Service Domains, and alerts. Includes the Onboarding Dashboard panel, with links to create projects and project components such as Service Domains, cloud profiles, categories, and so on. For project users, this view displays Projects.
Projects
  • Displays a list of all projects created by the infrastructure administrator.
  • Edit an existing project.
Alerts
Displays the Alerts page, which displays a sortable table of alerts associated with your Karbon Platform Services deployment.

Kubernetes Apps, Logging, and Services

Kubernetes Apps
  • Displays a list of available Kubernetes applications.
  • Create an application and associate it with a project.
Functions and Data Pipelines
  • Displays a list of available data pipelines.
  • Create a data pipeline and associate it with a project.
  • Create a visualization, a graphical representation of a data pipeline including data sources and Service Domain or cloud data destinations.
  • Displays a list of scripts (code used to perform one or more tasks) available for use in a project.
  • Create a function by uploading a script, and then associate it with a project.
  • Displays a list of available runtime environments.
  • Create a runtime environment and associate it with a project and a container registry.
AI Inferencing
  • Create and display machine learning models (ML Models) available for use in a project.
Services and Related Alerts
  • Kafka. Configured and deployed data streaming and messaging topics.
  • Istio. Defined secure traffic management service.
  • Prometheus. Defined app monitoring and metrics collection.
  • Nginx-Ingress. Configured and deployed ingress controllers.

Viewing Dashboards

After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.

Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page with information about that element. It might also include information about associated elements.

The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.

Example: Managing a Service Domain From Its Dashboard

About this task

To complete this task, log on to the cloud management console.

To delete a Service Domain that does not have any associated data sources, click Infrastructure > Service Domains , select a Service Domain from the list, then click Remove . Deleting a multinode Service Domain deletes all nodes in that Service Domain.

Procedure

  1. Click Infrastructure > Service Domains , then click a Service Domain in the list.
    The Service Domain dashboard is displayed along with an Edit button.
    Figure. Service Domain Dashboard

  2. Click Edit to update Service Domain properties, such as its name, serial number, network settings, and so on. See Adding a Single Node Service Domain for information about these fields.
  3. Click Nodes to view nodes associated with the service domain.
  4. Click Data Sources .
    1. View any associated data sources.
    2. Delete a data source: Select it, then click Remove .
    3. Click Add Data Source to add another data source to the Service Domain. See Adding a Data Source and IoT Sensor.
  5. Click Alerts to show available or active alerts.

Quick Start Menu

The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.

Figure. Quick Start Menu

Getting Started - Infrastructure Administrator

These tasks assume you have already done the following. Ensure that any network-connected devices are assigned static IP addresses.

  1. Deployed data sources (sensors, gateways, other input devices) reachable by the Service Domain.
  2. Deployed an internet-connected Service Domain as a VM hosted in an AHV cluster.
  3. [Optional] Created an account with a supported cloud provider, where acquired data is transmitted for further processing.
  4. [Optional] Created a container registry profile (public or private/on-premise registry).
  5. Obtained infrastructure administrator credentials for the Karbon Platform Services management console.

The Quick Start Menu lists the common onboarding tasks for the infrastructure administrator. It includes links to infrastructure-related resource pages. You can also go directly to any infrastructure resource from the Infrastructure menu item. As the infrastructure administrator, you need to create the following minimum infrastructure.

  1. Add a Service Domain and add categories.
  2. Add a data source to associate with a Service Domain.

Adding a Single Node Service Domain

Create and deploy a Service Domain cluster that consists of a single node.

About this task

To add (that is, create and deploy) a multinode Service Domain consisting of three or more nodes, see Manage a Multinode Service Domain.

If you deploy the Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.

  • After adding a Service Domain, the status and dot color show Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Any other On-prem or Cloud Environment and click Next .
  4. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my-servicedomain.contoso.com is a valid Service Domain name.
  5. Select Single Node to create a single-node Service Domain.
    Note: You cannot expand this Service Domain later by adding nodes.
  6. Click Add Node and enter the following node details:
    1. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the Service Domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser, using your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn (see the example after this procedure).
    2. Name the node.
    3. IP Address of your Service Domain node VM.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
  7. Click Add Category .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  8. Click Next .
  9. Enter environment variables as one or more key-value pairs for the service domain. Click Add Key-Value Pair to add more pairs.
    You can set environment variables and associated values for each Service Domain as a key-value pair, which are available for use in Kubernetes apps.

    For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234 . For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.

  10. If your Service Domain includes a GPU/vGPU, choose its use case.
    1. To allow access by any Kubernetes app or data pipeline, choose Use GPU for Kubernetes Apps and Data Pipelines .
    2. To allow access by AI Inferencing API (for example, if you are using ML Models), select Use GPU for AI Inferencing .
  11. To provide limited secure shell (SSH) administrator access to your service domain to manage Kubernetes pods, select Enable SSH Access .
    SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting. See Secure Shell (SSH) Access to Service Domains.
  12. Click Add .
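
For step 6, you can also retrieve the node serial number from a terminal instead of a browser by querying the same endpoint with curl, substituting your Service Domain node VM IP address:

  $ curl http://service-domain-node-ip-address:8080/v1/sn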

What to do next

See Adding a Data Source and IoT Sensor.
See [Optional] Creating Your Cloud Profile.

Creating Your Cloud Profile

Procedure

  1. Click Administration > Cloud Profiles > Create .
  2. Select a Cloud Type (your cloud service provider).
  3. If you selected Amazon Web Services , complete the following fields:
    1. Cloud Profile Name . Name your profile.
    2. Cloud Profile Description . Describe the profile - for example, profile for car recognition apps. Up to 200 alphanumeric characters are allowed.
    3. Access Key . Enter the AWS access key ID.
    4. Secret . Enter the AWS secret access key.
  4. If you selected Azure , complete the following fields:
    1. Cloud Profile Name . Name your profile.
    2. Cloud Profile Description . Describe the profile - for example, profile for car recognition apps. Up to 200 alphanumeric characters are allowed.
    3. Storage Account Name .
    4. Storage Key . Copy your key into this field.
  5. If you selected Google Cloud Platform , complete the following fields:
    1. Cloud Profile Name . Name your profile.
    2. Cloud Profile Description . Describe the profile - for example, profile for car recognition apps. Up to 200 alphanumeric characters are allowed.
    3. GCP Service Account Info JSON . Download your Google Cloud Platform service account key as a JSON file, open the file in a text editor, and copy its contents into this field.
  6. Click Create .
  7. On the Your Cloud Profile page that is now displayed, you can add another profile by clicking Add Cloud Profile .
  8. You can also Edit profile properties or Remove a profile from Your Cloud Profile .

Creating a Category

About this task

Create categories of grouped attributes you can specify when you create a data source or pipeline.

Procedure

  1. Click Administration > Categories > Create .
  2. Name your category. Up to 200 alphanumeric characters are allowed.
  3. Purpose . Describe the category. Up to 200 alphanumeric characters are allowed.
  4. Click Add Value and type a category component name in the text field.
  5. Click the check icon to add the value.
  6. Click Add Value to add more category components in the text field, clicking the check icon to add each one.
  7. Click Create when you are finished.
  8. Repeat to add more categories.

What to do next

  • Use these categories as attributes when adding a data source or data pipeline.
  • See also Adding a Single Node Service Domain or Manage a Multinode Service Domain, where you can add these categories to a Service Domain.

Adding a Data Source and IoT Sensor

You can add one or more data sources (a collection of sensors, gateways, or other input devices providing data) to associate with a Service Domain.

Each defined data source consists of the following:

  • Data source type (sensor, input device like a camera, or gateway) - the origin of the data
  • Communication protocol typically associated with the data source
  • Authentication type to secure access to the data source and data
  • One or more fields specifying the data extraction method - the data pipeline specification
  • Categories - attributes (metadata) that you define to associate with the captured data

Add a Data Source - MQTT

About this task

Add one or more data sources using the Message Queuing Telemetry Transport lightweight messaging protocol (MQTT) to associate with a Service Domain.

Certificates downloaded from the cloud management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use a unique set of these security certificates and keys for each MQTT data source you add.

When naming entities, up to 200 alphanumeric characters are allowed.

Procedure

  1. Click Infrastructure > Data Sources and IoT Sensors > Add Data Source .
  2. Select a data source type: Sensor or Gateway .
    1. Name your data source.
    2. Associated Service Domain . Select a Service Domain for your data source.
    3. Select Protocol > MQTT .
    4. Authentication type . When you select the MQTT protocol, the Certificate authentication type is selected automatically to authenticate the connection between your data source and the Service Domain.
    5. Click Generate Certificates , then click Download to download a ZIP file that contains the X.509 sensor certificate (public key), its private key, and the root CA certificates.
      See Certificates Used with MQTT Data Sources.
  3. Click Next to go to Data Extraction to specify the data fields to extract from the data source.

Data Extraction - MQTT

Procedure

  1. Click Add New Field .
    1. Name . Enter a relevant name for the data source field.
    2. Add MQTT Topic . Enter a topic (a case-sensitive UTF-8 string). For example, /temperature/frontroom for a temperature sensor located in a specific room.
      An MQTT topic name must be unique when creating an MQTT data source. You cannot duplicate or reuse an MQTT topic name for multiple MQTT data sources.
    3. Click the check icon to add the data source field.
  2. Click Add New Field to add another topic or click Next to go to the Category Assignment panel.

Category Assignment - MQTT

About this task

Categories enable you to define metadata associated with captured files from the data source.

Procedure

  1. Select All Fields from Select Fields .
  2. Select Category > Data Type .
  3. Select one of the Categories (as defined by Categories you have created) in the third selection menu.
  4. Click Add .

Add a Data Source - RTSP

About this task

Add one or more data sources using the Real Time Streaming protocol (RTSP) to associate with a Service Domain.

Procedure

  1. Click Infrastructure > Data Sources > Add Data Source .
  2. Select a data source type: Sensor or Gateway .
    1. Name your data source. Up to 200 alphanumeric characters are allowed.
    2. Associated Service Domain . Select a service domain for your data source.
    3. Select Protocol > RTSP .
    4. Authentication type . When you select RTSP , the Username and Password authentication type is selected automatically to authenticate the connection between your data source and the service domain.
    5. IP address . Enter the sensor / gateway IP address.
    6. Port . Specify a port number for the streaming device. Default port is 554.
    7. User Name . Enter the user name credential for the data source.
    8. Password . If your data source requires it, enter the password credential for the data source. Click Show to display the password as you type.
  3. Click Next to go to Data Extraction to specify the data fields to extract from the data source.
    These steps create an RTSP URL in the format rtsp://username:password@ip-address/ . For example: rtsp://userproject2:

    In the next step, you will specify one or more streams.
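
Before completing the data source, you can optionally confirm that the camera stream is reachable with any RTSP-capable client. For example, with ffprobe (part of FFmpeg), using placeholder credentials, address, and stream name:

  $ ffprobe rtsp://username:password@ip-address:554/BackRoomCam1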

Data Extraction - RTSP

Procedure

  1. Click Add New Field on the Data Extraction page depending on your data source type.
    1. Name . Enter a relevant name for the data source.
    2. Complete the RTSP URL field by specifying a named protocol stream. For example, BackRoomCam1 .
      An RTSP protocol stream name must be unique when creating an RTSP data source. You cannot duplicate or reuse an RTSP protocol stream name for multiple RTSP data sources.
    3. Click the check icon to add the data source.
  2. Click Add New Field to add another stream or click Next to go to the Category Assignment panel.

Category Assignment - RTSP

About this task

Categories enable you to define metadata associated with captured files from the data source.

Procedure

  1. Select All Fields from Select Fields .
  2. Select Category > Data Type .
  3. Select one of the Categories (as defined by Categories you have created) in the third selection menu.
  4. Click Add .

Add a Data Source - GigE Vision

About this task

Add one or more data sources using the GigE Vision camera protocol to associate with a Service Domain.

Before you begin

GigE Vision data sources must be on the same subnet as the Service Domain.

Procedure

  1. Click Infrastructure > Data Sources and IoT Sensors > Add Data Source .
  2. Select a data source type: Sensor or Gateway .
    1. Name your data source. Up to 200 alphanumeric characters are allowed.
    2. Associated Service Domain . Select a Service Domain for your data source.
    3. Select Protocol > GigE Vision .
    4. Authentication type . When you select GigE Vision , the None authentication type is selected automatically.
    5. IP address . Enter the sensor / gateway IP address.
  3. Click Next to go to Data Extraction to specify the data fields to extract from the data source.

Data Extraction

Procedure

Click Add New Field on the Data Extraction page depending on your data source type.
  1. Name . Enter a relevant name for the data source.
  2. Click the check icon to add the data source.
    A GigE Vision data source name must be unique when creating a GigE Vision data source. You cannot duplicate or reuse a GigE Vision data source name for multiple GigE Vision data sources.

Category Assignment

About this task

Categories enable you to define metadata associated with captured files from the data source.

Procedure

  1. Select All Fields from Select Fields .
  2. Select Category > Data Type .
  3. Select one of the Categories (as defined by Categories you have created) in the third selection menu.
  4. Click Add .

Adding a Container Registry Profile

About this task

Create a new container registry profile or associate an existing cloud profile. This profile stores container images which can be public (for example, Docker Hub) or private (an on-premise registry).

Procedure

  1. Click Administration > Container Registry Profiles > Add Profile .
  2. To add a new profile:
    1. Select Add New .
    2. Name your profile.
      • Up to 200 alphanumeric characters are allowed
      • Starts and ends with a lowercase alphanumeric character
      • Dash (-) and dot (.) characters are allowed
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Container Registry Host Address . Provide a public or private registry host address. The host address can be in the format host.domain:port_number/repository_name .
    For example, https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1 .
    5. Username . Enter the user name credential for the profile.
    6. Password . Enter the password credential for the profile. Click Show to display the password as you type.
    7. Email Address . Add an email address in this field.
  3. To use an existing cloud profile that you have already added (see Creating Your Cloud Profile):
    1. Select Use Existing cloud profile :
    2. Select an existing profile from Cloud Profile .
    3. Enter the Name of your container registry profile.
      • Up to 200 alphanumeric characters are allowed
      • Starts and ends with a lowercase alphanumeric character
      • Dash (-) and dot (.) characters are allowed
    4. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    5. Server . Provide a server URL to the container registry in the format used by your cloud provider.
      For example, an Amazon AWS Elastic Container Registry (ECR) URL might be: https:// aws_account_id .dkr.ecr. region .amazonaws.com
  4. Click Create .
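
To verify registry credentials before creating the profile, you can log on to the registry host directly with the Docker client. This is a hedged sketch with a placeholder host address; use the same address and credentials you enter in the profile:

  $ docker login host.domain:port_number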

Creating Users

As an infrastructure administrator, you can create infrastructure administrators or project users. Users without My Nutanix credentials log on as a local user.

Before you begin

You can also do this step from the Dashboard panel Getting Started.

Procedure

  1. Click Administration > Users > Create .
  2. Name . Provide a name for the user.
  3. Email . Enter a user email address which will be the user name to log on to the cloud management console.
  4. Password . Enter a password to be used when logging on to the Karbon Platform Services management console.
  5. Select a user role.
    1. Infrastructure Admin
    2. User
  6. Click Create .
  7. Repeat to add more users.

Certificates Used with MQTT Data Sources

Each Service Domain image is preconfigured with security certificates and public/private keys.

When you create an MQTT data source, you generate and download a ZIP file that contains the X.509 sensor certificate (public key), its private key, and the root CA certificates. Install these components on the MQTT-enabled sensor device to securely authenticate the connection between the device and the Service Domain. See the vendor documentation for your MQTT-enabled sensor device for certificate installation details.

Certificates downloaded from the Karbon Platform Services management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use a unique set of these security certificates and keys for each MQTT data source you add.
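
As an illustration of using the downloaded credentials, a Linux MQTT client such as mosquitto_pub can publish a test message over TLS. The file names, broker address, port, and topic below are placeholders; use the actual file names from your downloaded ZIP and a topic defined for your data source:

  $ mosquitto_pub -h service-domain-ip -p 8883 \
      --cafile ca.crt --cert sensor.crt --key sensor.key \
      -t /temperature/frontroom -m '{"temp": 21.5}'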

Figure. Certificate Download When Creating a Data Source

Onboard and Manage Your Service Domain

The Karbon Platform Services cloud management console provides a rich administrative control plane to manage your Service Domain and its infrastructure. The topics in this section describe how to create, add, and upgrade a Service Domain.

Checking Service Domain Details

In the cloud management console, go to Infrastructure > Service Domains to add a VM-based Service Domain. You can also view health status, CPU/memory/storage usage, version details, and more information for every Service Domain.

Upgrading a Service Domain

In the cloud management console, go to Administration > Upgrades to upgrade your existing Service Domains. This page provides you with various levels of control and granularity over your maintenance process. At your convenience, download new versions for all or specific Service Domains and upgrade them with "1-click".

Onboarding a Multinode Service Domain By Using Nutanix Karbon

You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster. To do this, use Karbon on Prism Central with the kps command line and cloud management console Create a Service Domain workflow. See Onboarding a Multinode Service Domain By Using Nutanix Karbon. (You can also continue to use other methods to onboard and create a Service Domain, as described in Onboarding and Managing Your Service Domain.)

For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the command line and the required YAML configuration file for the cluster.

This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

About the Service Domain Image Files

The Karbon Platform Services Release Notes include information about any new and updated features for the Service Domain. You can create one or more Service Domains depending on your requirements and manage them from the Karbon Platform Services management console.

The Service Domain is available as a qcow disk image provided by Nutanix for hosting the VM in an AOS cluster running AHV.

The Service Domain is also available as an OVA disk image provided by Nutanix for hosting the VM in a non-Nutanix VMware vSphere ESXi cluster. To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVA file for your ESXi version.

Each Service Domain you create by using these images is configured with X.509 security certificates.

If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.

Download the Service Domain VM image file from the Nutanix Support portal Downloads page. This table describes the available image file types.

Table 1. Service Domain Image Types
Service Domain Image Type Use
QCOW2 Image file for hosting the Service Domain VM on an AHV cluster
OVA Image file for hosting the Service Domain VM on vSphere.
EFI RAW compressed file RAW file in GZipped TAR file format for bare metal installation, where the hosting machine is using an Extensible Firmware Interface (EFI) BIOS
RAW compressed file RAW file in GZipped TAR file format for bare metal installation, where the hosting machine is using a legacy or non-EFI BIOS
AWS RAW uncompressed file Uncompressed RAW file for hosting the Service Domain on Amazon Web Services (AWS)
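
If you want to verify a downloaded image before use, a tool such as qemu-img can inspect it. The file name below is a placeholder for the image file you downloaded from the portal:

  $ qemu-img info service-domain-image.qcow2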

Service Domain VM Resource Requirements

By default, in a single-VM deployment, the Service Domain requires these resources to support Karbon Platform Services features. You can download the Service Domain VM image file from the Nutanix Support portal Downloads page.

References
  • AHV Administration Guide, Host Network Management
  • Prism Web Console Guide, Network Management topic
  • Prism Web Console Guide, Virtual Machine Customization topic (cloud-init)
Table 1. Service Domain Infrastructure VM Cluster Requirements
VM Resource Requirements
Environment AOS cluster running AHV (AOS-version-compatible version), where the Service Domain Infrastructure VM runs as a guest VM.

VMware vSphere ESXi 6.0 or later cluster, where the Service Domain Infrastructure VM runs as a guest VM (created from an OVA image file provided by Nutanix). The OVA image as provided by Nutanix runs virtual hardware version 11.

vCPUs 8 single-core vCPUs
Memory 16 GiB memory. You might require more memory as determined by your applications.
Disk storage Minimum 200 GB storage. The Service Domain Infrastructure VM image file provides an initial disk size of 100 GiB (gibibytes). You might require more storage as determined by your applications. Before first power on of the VM, you can increase (but not decrease) the VM disk size.
GPUs [optional] GPUs as required by any application using them

Karbon Platform Services Port and Firewall Requirements

Service Domain Infrastructure VM Network Requirements and Recommendations

Note: For more information about ports and protocols, see the Ports and Protocols Reference.
Table 1. Network Requirements and Recommendations
Item Requirement/Recommendation
Outbound port Allow connections for applications requiring outbound connectivity.

Starting with Service Domain 2.2.0, Karbon Platform Services retrieves Service Domain package images from these locations. Ensure that your firewall or proxy allows outbound Internet access to the following:

  • *.dkr.ecr.us-west-2.amazonaws.com - Amazon Elastic Container Registry
  • https://gcr.io and *.gcr.io - Google Container Registry

Allow outbound port 443 for websocket connections to the management console and cloud providers.
NTP Allow outbound NTP connections to a network time protocol server.
HTTPS proxy The Service Domain Infrastructure VM supports a network configuration that includes an HTTPS proxy. You can configure such a proxy as part of a cloud-init based method when deploying Service Domain Infrastructure VMs.
Service Domain Infrastructure VM static IP address The Service Domain Infrastructure VM requires a static IP address, provided through one of the following when hosted on an AOS / AHV cluster:

  • A managed network with one or more configured domain name servers (DNS) and optionally a DHCP server
  • Integrated IP address management (IPAM), which you can enable when creating virtual networks for VMs in the Prism web console
  • (Optional) A cloud-init script that specifies network details, including a DNS server

Miscellaneous The cloud-init package is included in the Service Domain VM image to enable support for Nutanix Calm and its associated deployment automation features.
Real Time Streaming Protocol (RTSP) Port 554 (default)
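
A quick way to confirm that outbound access to the package registries is open from the Service Domain network is to request one of the endpoints listed above (shown here for gcr.io; substitute the ECR host your deployment actually pulls from):

  $ curl -sI https://gcr.io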

Create and Onboard the Service Domain VM

Onboarding the Service Domain VM is a three-step process:

  1. For an AOS cluster running Nutanix AHV, log on to the Prism Element cluster web console and upload the disk image through the image service. See Uploading the Service Domain Image.
  2. Create and power on the Service Domain VM. See Creating and Starting the Service Domain VM.

    If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.

  3. Add this Service Domain VM to the Karbon Platform Services cloud management console. See:
    • Adding a Single Node Service Domain. This VM acts as a single node that stores installation, configuration, and service/microservice data, metadata, and images, as well as providing Service Domain computing and cloud infrastructure management.
    • Adding a Multinode Service Domain. Create an initial three-node Service Domain to create a multinode Service Domain cluster.

See also:

  • Deploying the Service Domain Image on a Bare Metal Server
  • Deploying the Service Domain on Amazon Web Services

Uploading the Service Domain Image

How to upload the Service Domain VM disk image file on AHV running in an AOS cluster.

About this task

This topic describes how to initially install the Service Domain VM on an AOS cluster by uploading the image file. For details about your cluster AOS version and the procedures, see the Prism Web Console Guide.

To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVF or OVA file for your ESXi version.

Procedure

  1. Log on as an administrator to the Prism Element web console through the Chrome web browser.
  2. Click the gear icon in the main menu and select Image Configuration .
    The Image Configuration window appears.
    Figure. Image Configuration Window

  3. Click Upload Image to launch the Create Image window.
    Do the following in the indicated fields:
    Figure. Create Image Window

    1. Name : Enter a name for the image.
    2. Annotation (optional): Enter a description for the image.
    3. Image Type (optional): Select the image type Disk .
    4. Storage Container : Select the default SelfServiceContainer .
    5. Image Source : Click Upload a file , then Choose File to select the Service Domain VM disk image file (that you downloaded from the Nutanix Support portal) from the file search window.
    6. When all the fields are correct, click the Save button.
      The Create Image window closes after uploading the file, and the Image Configuration window reappears with the new disk image in the list. This window also displays upload progress and shows the image State as INACTIVE until uploading completes, at which point the State changes to ACTIVE .
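
As an alternative to the web console, you can create the image from the cluster command line with acli, which can clone a disk image from a URL. This is a hedged sketch; the image name, source URL, and storage container are placeholders:

  $ acli image.create kps-service-domain source_url=http://fileserver/service-domain-image.qcow2 container=SelfServiceContainer image_type=kDiskImage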

What to do next

See the topic Creating and Starting the Service Domain VM.

Creating and Starting the Service Domain VM

After uploading the Service Domain VM disk image file, create the Service Domain VM and power it on. After creating the Service Domain VM, note the VM IP address and ID in the VM Details panel. You will need this information to add your Service Domain in the Karbon Platform Services management console Service Domains page.

About this task

This topic describes how to create the Service Domain VM on an AOS cluster and power it on. For details about your cluster's AOS version and VM management, see the Prism Web Console Guide.

To deploy a VM from an OVA file on vSphere, see the VMware documentation for your ESXi version.

The most recent requirements for the Service Domain VM are listed in the Karbon Platform Services Release Notes.

If your network requires that traffic flow through an HTTP/HTTPS proxy, you can use a cloud-init script. See HTTP/HTTPS Proxy Support for a Service Domain VM.

Procedure

  1. Log on as an administrator to your cluster's web console through the Chrome web browser.
  2. Click Home > VM , then click Create VM .
    The Create VM dialog box appears. You might need to scroll down to see everything that is displayed here.
    Figure. Create VM Dialog Box 1

    Figure. Create VM Dialog Box 2

    Figure. Create VM Dialog Box 3

  3. Do the following in the indicated fields.
    You might require more memory and storage as determined by your applications.
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Use this VM as an agent VM : Do not select. Not used in this case.
    4. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM. For Karbon Platform Services, enter 8 .
    5. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU. For Karbon Platform Services, enter 1 .
    6. Memory : Enter the amount of memory (in GiBs) to allocate to this VM. For Karbon Platform Services, enter 16 .
  4. Click Add Disk to add the uploaded Karbon Platform Services image file.
    Figure. Add Disk Dialog

    1. Select Type > DISK .
    2. Select Operation > Clone from Image Service .
    3. Select Bus Type > SCSI .
    4. Select the Image you uploaded in Uploading the Service Domain Image.
    5. Keep the default Index selection.
    6. Click Add .
  5. Click Add New NIC to assign the VM network interface to a VLAN, then click Add .
  6. Select Legacy BIOS to start the VM using legacy BIOS firmware. This choice is selected by default on AHV clusters supporting legacy or UEFI boot firmware.
  7. (For GPU-enabled AHV clusters only) To configure GPU access, click Add GPU in the Graphics section, and then do the following in the Add GPU dialog box:
    Figure. Add GPU Dialog Box

    1. Configure GPU pass-through by clicking Passthrough , selecting the GPU that you want to allocate, and then clicking Add .
      If you want to allocate additional GPUs to the VM, repeat the procedure as many times as you need to. Make sure that all the allocated pass-through GPUs are on the same host. If all specified GPUs of the type that you want to allocate are in use, you can proceed to allocate the GPU to the VM, but you cannot power on the VM until a VM that is using the specified GPU type is powered off.
  8. Click Save .
    The VM creation task progress appears in Tasks at the top of the web console.
    Note: You might require more storage as determined by your applications. Before first power on of the Service Domain VM, you can increase (but not decrease) the VM disk size.
    • When the VM creation task is completed (the VM is created successfully), select the new Service Domain VM in the Table view, scroll to the bottom of the VM page, and click Update .
    • Scroll to the disk, click the pencil icon to edit the disk, and increase the disk Size , then click Update and Save .
  9. When the VM creation task is completed (the VM is created successfully), select the new Service Domain VM in the Table view, scroll to the bottom of the VM page, and Power On the VM.
    Note the VM IP address and ID in the VM Details panel. You will need this information to add your Service Domain in the Karbon Platform Services management console Service Domains page.
  10. If you are creating a multinode Service Domain , repeat these steps to create at least two more VMs for a minimum of three VMs. The additional VMs you create here can become nodes in a multinode Service Domain cluster, or remain unclustered individual/single node Service Domains.
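
If you prefer the cluster command line, you can create an equivalent VM with acli using the resource values from this procedure. This is a hedged sketch; the VM, image, and network names are placeholders:

  $ acli vm.create kps-sd-vm num_vcpus=8 num_cores_per_vcpu=1 memory=16G
  $ acli vm.disk_create kps-sd-vm clone_from_image=kps-service-domain
  $ acli vm.nic_create kps-sd-vm network=vlan0
  $ acli vm.on kps-sd-vm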

What to do next

Add the newly-created Service Domain VM in the Karbon Platform Services management console. See:
  • Getting Started - Infrastructure Administrator
  • Adding a Single Node Service Domain for single-node Service Domains
  • Manage a Multinode Service Domain for Service Domains of three or more nodes

Deploying the Service Domain Image on a Bare Metal Server

Before you begin

  • For installing the Service Domain on bare metal, see About the Service Domain Image Files for available image types. In this procedure, you can use a RAW or EFI RAW compressed Service Domain image depending on the bare metal server BIOS type.
  • For a list of approved hardware for bare metal installation, contact Nutanix Support or send email to kps@nutanix.com.
  • Before imaging the bare metal server, prepare it by updating any required firmware and performing hardware RAID virtual disk configuration.
  • Minimum destination disk size is 200 GB.
  • Single node Service Domain support only. Multinode Service Domains are not supported.

Procedure

  1. Download and boot a live image.
    You can image the bare metal server by live-booting any Linux operating system that includes destination driver support and these utilities or packages: lshw (list hardware), tar (tape archive), gzip (GNU compression utility), dd (file convert and copy). You can live-boot an image through a USB or BMC connection. These steps use an Ubuntu distro and USB drive as an example.
    1. Using macOS or Microsoft Windows, create an Ubuntu bootable USB drive.
    2. Download the Service Domain image and copy to the USB drive.
    3. Boot from the Ubuntu bootable USB drive with the Try Ubuntu option.
  2. In Ubuntu, open Terminal, find the destination disk, and image it.
    1. Find the destination disk.
      $ sudo lshw -c disk
    2. Go to the directory where you saved the Service Domain image. For example, where drive_label is the USB drive:
      $ cd /media/ubuntu/drive_label
    3. Image the destination disk. For example:
      $ sudo tar -xOzvf service-domain-image.raw.tgz | sudo dd of=destination_disk bs=1M status=progress
      • service-domain-image.raw.tgz . The RAW or EFI RAW compressed Service Domain image that you downloaded and copied to the USB drive.
      • destination_disk . Destination disk. For example, /dev/sda , /dev/nvme0n1 , and so on.
  3. When imaging completes successfully, restart the bare metal server.
    1. Connect the primary network interface to a network with DHCP.
    2. Remove the USB drive.
    3. Restart the bare metal server and note the IP address assigned by the DHCP server during startup (boot).

What to do next

See Adding a Single Node Service Domain.

Deploying the Service Domain on Amazon Web Services

Before you begin

  • For installing the Service Domain on Amazon Web Services (AWS), see About the Service Domain Image Files for available image types. In this procedure, you can use a RAW uncompressed Service Domain image.
  • You need an AWS account to deploy the Service Domain on AWS.
  • Nutanix recommends that you create a new dedicated Amazon S3 bucket for the Service Domain. This procedure applies a new trust policy to the S3 bucket storing the Service Domain. Creating a dedicated S3 bucket helps avoid other policies or configurations being inadvertently applied to the Service Domain.
  • Install and configure the AWS command line interface ( aws ) on your local machine so that you can communicate with AWS as required in these steps. For convenience, download the Service Domain image to this machine.
  • Single node Service Domain support only. Multinode Service Domains are not supported.

Procedure

  1. Create an AWS S3 bucket and copy the Service Domain RAW image to it.
    1. Create the bucket.
      $ aws s3 mb s3://raw-image-bkt --region aws_region
      • raw-image-bkt . Name of your S3 bucket.
      • aws_region . Your AWS Region.
    2. Copy the Service Domain RAW image to the bucket you just created.
      $ aws s3 cp service-domain-image.raw s3://raw-image-bkt
      • service-domain-image.raw . RAW uncompressed Service Domain image you downloaded.
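    To confirm the upload before importing, you can list the bucket contents; the output should include service-domain-image.raw and its size. A quick check, using the example names above:
      $ aws s3 ls s3://raw-image-bkt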
  2. In a text editor, create a new trust policy configuration file named trust-policy.json .
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals":{
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }
  3. Create a role named vmimport and grant VM Import/Export access to it.
    $ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
  4. Create a new role policy configuration file named role-policy.json .
    Make sure you replace raw-image-bkt with the actual name of your S3 bucket.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket" 
             ],
             "Resource": [
                "arn:aws:s3:::raw-image-bkt",
                "arn:aws:s3:::raw-image-bkt/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetBucketAcl"
             ],
             "Resource": [
                "arn:aws:s3:::raw-image-bkt",
                "arn:aws:s3:::raw-image-bkt/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource": "*"
          }
       ]
    }
  5. Attach the role policy in role-policy.json to the vmimport role.
    $ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
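    If you want to verify the role and its policy before running the import, the IAM CLI can display both. A quick check using the names from these steps:
    $ aws iam get-role --role-name vmimport
    $ aws iam get-role-policy --role-name vmimport --policy-name vmimport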
  6. Create a file named container.json .
    Make sure you replace raw-image-bkt with the actual name of your S3 bucket and service-domain-image.raw with the name of the RAW uncompressed Service Domain image.
    {
         "Description": "Karbon Platform Services Raw Image",
         "Format": "RAW",
         "UserBucket": {
            "S3Bucket": "raw-image-bkt",
            "S3Key": "service-domain-image.raw"
        }
     }
    
  7. Import the snapshot as an Amazon Machine Image (AMI) by specifying container.json .
    $ aws ec2 import-snapshot --description "exampletext" --disk-container "file://container.json"
    The command output displays a task_id . With the task_id , you can see the snapshot task progress as follows:
    $ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id
  8. After the task completes successfully, get the snapshot ID ( SnapshotId , needed for the next steps).
    $ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id
    Example completed task status output:
    {
        "ImportSnapshotTasks": [
            {
                "Description": "Karbon Platform Services Raw Image",
                "ImportTaskId": "import-task_id",
                "SnapshotTaskDetail": {
                    "Description": "Karbon Platform Services Raw Image",
                    "DiskImageSize": "disk_size",
                    "Format": "RAW",
                    "SnapshotId": "snapshot_ID"
                    "Status": "completed",
                    "UserBucket": {
                       "S3Bucket": "raw-image-bkt",
                       "S3Key": "service-domain-image.raw"
                    }
                }
            }
        ]
    }
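    Rather than rerunning the describe command manually, you can poll the task until it reports completed and capture the snapshot ID for the next section. A minimal sketch; the task ID shown is hypothetical:
    $ TASK_ID=import-snap-0123456789abcdef0
    $ while [ "$(aws ec2 describe-import-snapshot-tasks --import-task-ids $TASK_ID \
        --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' --output text)" != "completed" ]; do sleep 30; done
    $ SNAPSHOT_ID=$(aws ec2 describe-import-snapshot-tasks --import-task-ids $TASK_ID \
        --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' --output text)
    $ echo $SNAPSHOT_ID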

Registering and Launching the Image in the EC2 Console

Procedure

  1. To register the AMI from the snapshot, run this command.
    $ aws ec2 register-image --virtualization-type hvm \
    --name "Karbon Platform Services Service Domain Image" --architecture x86_64 \
    --root-device-name "/dev/sda1"  --block-device-mappings \
    "[{\"DeviceName\": \"/dev/sda1\", \"Ebs\": {\"SnapshotId\": \"snapshot_ID\"}}]"
    
    • snapshot_ID . Snapshot ID from the completed task status output.
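    The register-image command prints the ID of the new AMI. As an alternative to reading it in the EC2 console, you can capture it directly in the CLI; a sketch using the same parameters, with snapshot_ID as a placeholder:
    $ AMI_ID=$(aws ec2 register-image --virtualization-type hvm \
      --name "Karbon Platform Services Service Domain Image" --architecture x86_64 \
      --root-device-name "/dev/sda1" --block-device-mappings \
      "[{\"DeviceName\": \"/dev/sda1\", \"Ebs\": {\"SnapshotId\": \"snapshot_ID\"}}]" \
      --query ImageId --output text)
    $ echo $AMI_ID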
  2. In the EC2 console, go to Images > AMIs , and select the AMI you created.
  3. Click Launch .
    1. For Choose Instance Type , select an instance with a minimum of 4 vCPUs and 16 GiB memory.
    2. Click through to Add Storage and create an EBS volume with minimum disk size of 100 GiB and a device name of /dev/xvdf .
    3. Proceed to Review .
    4. In the Select an existing key pair or create a new key pair dialog box, use an existing key pair or create a new pair.
    5. If you create a new key pair, download the corresponding .PEM file.
  4. Launch the AMI.
  5. Return to the AWS CLI and get the Public and Private IP address of your instance by using the instance ID instance_id .
    $ aws ec2 describe-instances --instance-id instance_id --query 'Reservations[].Instances[].[PublicIpAddress,PrivateIpAddress]' --output text | sed '$!N;s/\n/ /'
  6. Back at the EC2 console, select Connect and follow the instructions to open an SSH session into your instance.
    Next, you need the serial number and gateway/subnet address to add your Service Domain to Karbon Platform Services.
  7. Log on to the instance and run these commands, noting the serial number and addresses:
    $ cat /config/serial_number.txt
    $ route -n
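    The gateway appears in the Gateway column of the 0.0.0.0 (default) route in the route -n output. A sketch that extracts just the default gateway:
    $ route -n | awk '$1 == "0.0.0.0" { print $2 }'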

What to do next

See Adding a Single Node Service Domain.

HTTP/HTTPS Proxy Support for a Service Domain VM

Attach a cloud-init script to configure HTTP/HTTPS proxy server support.

If your network policies require that all HTTP network traffic flow through a proxy server, you can configure a Service Domain to use an HTTP proxy. When you create the Service Domain VM, attach a cloud-init script with the proxy server details. When you then power on the VM and it fully starts, it includes your proxy configuration.

If you require a secure proxy (HTTPS), use the cloud-init script to upload SSL certificates to the Service Domain VM.

You can attach or copy the script in the Custom Script section of the Nutanix AOS cluster Create VM dialog box.
Figure. Create VM Cloud-Init for HTTP/HTTPS Proxy: the cloud-init section of the Nutanix AOS cluster Create VM dialog box.

Sample cloud-init Script for HTTPS Proxy Server Configuration

This script creates an HTTP/HTTPS proxy server configuration on the Service Domain VM after you create and start the VM. Note that CACERT_PATH= in the first file entry is optional in this case, because the certificate path is already defined by the third write_files entry ( /etc/pki/ca-trust/source/anchors/proxy.crt ).

#cloud-config
#vim: syntax=yaml
write_files:
- path: /etc/http-proxy-environment
  content: |
    HTTPS_PROXY="http://ip_address:port"
    HTTP_PROXY="http://ip_address:port"
    NO_PROXY="127.0.0.1,localhost"
    CACERT_PATH="/etc/pki/ca-trust/source/anchors/proxy.crt"
- path: /etc/systemd/system/docker.service.d/http-proxy.conf
  content: |
    [Service]
    Environment="HTTP_PROXY=http://ip_address:port" 
    Environment="HTTPS_PROXY=http://ip_address:port" 
    Environment="NO_PROXY=127.0.0.1,localhost"
- path: /etc/pki/ca-trust/source/anchors/proxy.crt
  content: |
    -----BEGIN CERTIFICATE-----
    PASTE CERTIFICATE DATA HERE
    -----END CERTIFICATE-----
runcmd:
- update-ca-trust force-enable
- update-ca-trust extract
- yum-config-manager --setopt=proxy=http://ip_address:port --save
- systemctl daemon-reload
- systemctl restart docker
- systemctl restart sherlock_configserver
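Before attaching the script, you can sanity-check its YAML syntax locally. A minimal sketch, assuming the script is saved as proxy-cloud-init.yaml (a hypothetical file name) and a recent cloud-init release is installed on your workstation:

  $ cloud-init schema --config-file proxy-cloud-init.yaml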

Adding a Single Node Service Domain

Create and deploy a Service Domain cluster that consists of a single node.

About this task

To add (that is, create and deploy) a multinode Service Domain consisting of three or more nodes, see Manage a Multinode Service Domain.

If you deploy the Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.

  • After adding a Service Domain, the status dot color shows Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Any other On-prem or Cloud Environment and click Next .
  4. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my-servicedomain.contoso.com is a valid Service Domain name.
  5. Select Single Node to create a single-node Service Domain.
    Note: You cannot expand this Service Domain later by adding nodes.
  6. Click Add Node and enter the following node details:
    1. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the Service Domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser (or from the command line, as shown in the sketch after these steps). Use your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn
    2. Name the node.
    3. IP Address of your Service Domain node VM.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
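    If the node VM is already reachable on the network, you can also fetch the serial number from the command line instead of a browser. A sketch, using a hypothetical node IP address:
      $ curl http://10.10.10.21:8080/v1/sn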
  7. Click Add Category .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  8. Click Next .
  9. Enter environment variables as one or more key-value pairs for the Service Domain. Click Add Key-Value Pair to add additional pairs.
    You can set environment variables and associated values for each Service Domain as a key-value pair, which are available for use in Kubernetes apps.

    For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234 . For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.

  10. If your Service Domain includes a GPU/vGPU, choose its usage case.
    1. To allow access by any Kubernetes app or data pipeline, choose Use GPU for Kubernetes Apps and Data Pipelines .
    2. To allow access by AI Inferencing API (for example, if you are using ML Models), select Use GPU for AI Inferencing .
  11. To provide limited secure shell (SSH) administrator access to your Service Domain to manage Kubernetes pods, select Enable SSH Access .
    SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting. See Secure Shell (SSH) Access to Service Domains.
  12. Click Add .

What to do next

See Adding a Data Source and IoT Sensor.
See [Optional] Creating Your Cloud Profile.

Manage a Multinode Service Domain

Create and deploy a Service Domain cluster that consists of three or more nodes.

Caution: Nutanix does not support upgrading Tech Preview multinode Service Domains to generally available Service Domain versions, then deploying those Service Domains in a production environment. For example, upgrading a multinode Service Domain v1.18 to v2.0.0, then using that Service Domain in production.

A multinode Service Domain is a cluster initially consisting of a minimum of three leader nodes. Each node is a single Service Domain VM hosted in an AHV cluster.

Creating and deploying a multinode Service Domain is a three-step process:

  1. Log on to the AHV cluster web console and upload the Service Domain disk image to a Nutanix AHV cluster through the image service as described in Uploading the Service Domain Image.
  2. Create three or more Service Domain VMs from this disk image, as described in Creating and Starting the Service Domain VM.
  3. Add a multinode Service Domain in the Karbon Platform Services management console as described in Adding a Multinode Service Domain.

Multinode Service Domain Requirements and Limitations

The Service Domain image version where Karbon Platform Services introduces this feature is described in the Karbon Platform Services Release Notes.

Service Domain VM hosting
  • Create and then start Service Domain VMs in a Nutanix AHV cluster only.
Single node Service Domain
  • You can create a single node Service Domain from a multinode compatible image version. It must remain a single node Service Domain, as you cannot expand this domain type.
  • You can upgrade an existing single node Service Domain to a multinode compatible image version, but it is limited for use as a single node Service Domain only.
Supported multinode Service Domain configuration
  • You can add a multinode Service Domain consisting of three Service Domain nodes initially created from a multinode compatible image version only.
  • Each node in a multinode Service Domain must reside on the same subnet.
  • You cannot mix image versions. Create each node in a multinode Service Domain from the same image version. You can also upgrade them to the same multinode compatible version later.
Service Domain High Availability Kubernetes API Support

Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments. When you create a multinode Service Domain to be hosted in a Nutanix AHV cluster, you must specify a Virtual IP Address (VIP), which is typically the IP address of the first node you add.

  • To enable the HA kube-apiserver support, ensure that the VIP address is part of the same subnet as all Service Domain VMs and that the VIP address has not already been allocated to any VM. Otherwise, the Service Domain will not enable this feature.
  • Also ensure that the VIP address in this case is not part of any cluster IP address pool range that you have specified when you created a virtual network for guest VMs in the AHV cluster. That is, the VIP address must be outside this IP pool address range. Otherwise, creation of the Service Domain in this case will fail.
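One quick way to confirm that a candidate VIP is unallocated is to probe it from a machine on the same subnet before you create the Service Domain. A sketch, with 10.10.10.50 as a hypothetical VIP:
  $ ping -c 3 10.10.10.50
If the address is free, the ping receives no replies.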
Adding or removing nodes from a multinode Service Domain
  • You can add nodes to or remove nodes from an existing multinode Service Domain. However, you cannot remove the first three nodes you add to the Service Domain. These nodes are tagged as leader nodes. Each subsequent node you add is a worker node and is removable.
Configuring Storage and Storage Infrastructure

Each node requires access to shared storage from an AOS cluster. Ensure that you meet the following requirements to create a storage profile. Adding a Multinode Service Domain requires these details.

On your AOS cluster:

  • Create a Cluster Administrator user dedicated for use with this feature. Do not use this admin user for day-to-day AOS cluster tasks. For information about creating a user role, see Creating a User Account in the Security Guide .
  • You must configure the AOS cluster with a cluster virtual IP address and an iSCSI Data Services IP address. See Modifying Cluster Details in the Prism Web Console Guide .
  • Ensure that the AOS cluster has at least one storage container available for shared storage use by the nodes. If you want to optionally maintain a storage quota for use by the nodes, create a new storage container. See Creating a Storage Container in the Prism Web Console Guide .
  • Get the storage container name from the Prism Element web console Storage dashboard. Do not use the NutanixManagementShare or SelfServiceContainer storage containers. AOS reserves these storage containers for special use.
Unsupported multinode Service Domain configurations
Nutanix does not support the following configurations.
  • You cannot create (add) a multinode Service Domain where you have upgraded each node to a multinode compatible version from a previous incompatible single-node-only version.

    For example, you have upgraded three older single-node Service Domains to a multinode image version. You cannot create a multinode Service Domain from these nodes.

  • In a multinode Service Domain, you cannot mix Service Domain nodes that you have upgraded to a multinode compatible version with newly created multinode compatible nodes.

    For example, you have upgraded two older single-node Service Domains to a multinode image version. You have a newly created multinode compatible single node. You cannot add these together to form a new multinode Service Domain.

  • Service domain version 1.14 and previous versions allow you to create single node Service Domains only. These Service Domain nodes are not "multinode aware" but are still useful as single node Service Domains.

Adding a Multinode Service Domain

Create and deploy a Service Domain cluster that consists of three or more nodes.

Before you begin

  • See Multinode Service Domain Requirements and Limitations.
  • If you deploy each Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.
  • After adding a Service Domain, the status dot color shows Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Adding Each Node

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Any other On-prem or Cloud Environment and click Next .
  4. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid Service Domain name.
  5. Select Multi-Node to create a multinode Service Domain type.
  6. Click Add Node and enter the following node details:
    1. Name the node.
    2. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the Service Domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser. Use your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn
    3. IP Address of your Service Domain node VM.
      The IP address of the first node you add becomes the Service Domain Virtual IP Address . By default, Karbon Platform Services uses this node IP address to represent the Service Domain cluster in the Service Domains List page.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
    6. Click Add Node and repeat these steps to add two more leader nodes and any worker nodes.
      The first three nodes are tagged as leader nodes and cannot be removed. Nodes four and later are considered as worker nodes and can be removed.
    7. Virtual IP Address . The IP address of the first node you add becomes the Service Domain Virtual IP Address . By default, this node IP address represents the Service Domain cluster in the Service Domains List page.

      Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments.

      To enable the HA kube-apiserver support, ensure that the VIP address is part of the same subnet as the Service Domain VMs and the VIP address is unique (that is, has not already been allocated to any VM). Otherwise, the Service Domain will not enable this feature.

      Also ensure that the VIP address in this case is not part of any cluster IP address pool range that you have specified when you created a virtual network for guest VMs in the AHV cluster. That is, the VIP address must be outside this IP pool address range. Otherwise, creation of the Service Domain in this case will fail.

Adding Categories

Procedure

  1. Click Add Category .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  2. Click Add .
    You can expand this Service Domain later by adding nodes. You can also remove nodes from this Service Domain.
  3. On the Service Domains page, you can add another Service Domain by clicking + Service Domain .
  4. You can also Edit Service Domain properties or Remove a Service Domain from Service Domains by selecting it.
  5. Click Next to configure storage and advanced settings.

Configuring Storage (Nutanix Volumes)

Before you begin

Each node requires access to shared storage from an AOS cluster. Ensure that you meet the following requirements to create a storage profile for the nodes. On your AOS cluster:
  • Create a Cluster Administrator user as described in the Nutanix Security Guide: Creating a User Account .
  • Ensure that the AOS cluster is configured with a cluster virtual IP address and an iSCSI Data Services IP address. See the Prism Web Console Guide: Modifying Cluster Details.
  • Ensure that the AOS cluster has at least one storage container available for shared storage use by the nodes. If you want to optionally maintain a storage quota for use by the nodes, create a new storage container. See the Prism Web Console Guide: Creating a Storage Container.
  • Get the storage container name from the Prism Element web console Storage dashboard. Do not use the NutanixManagementShare or SelfServiceContainer storage containers. AOS reserves these storage containers for special use.

Procedure

  1. From the Storage Infrastructure drop-down menu, select Nutanix Volumes .
  2. In the Nutanix Cluster Virtual IP Address field, enter the AOS cluster virtual IP address.
  3. In the Username and Password fields, enter the AOS cluster administrator user credentials.
  4. In the Data Services IP Address field, enter the iSCSI Data Services IP address.
  5. In the Storage Container Name field, enter the cluster storage container name.
  6. To configure advanced settings, continue to Advanced Settings . Otherwise, click Done .

What to do next

  • See Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile.
  • See also Expanding a Multinode Service Domain or Removing a Worker Node from a Multinode Service Domain

Advanced Settings

Procedure

  1. Enter environment variables as one or more key-value pairs for the Service Domain. Click Add Key-Value Pair to add additional pairs.
    You can set environment variables and associated values for each Service Domain as a key-value pair, which are available for use in Kubernetes apps.

    For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234 . For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.

  2. If your Service Domain includes a GPU/vGPU, choose its usage case.
    1. To allow access by any Kubernetes app or data pipeline, choose Use GPU for Kubernetes Apps and Data Pipelines .
    2. To allow access by AI Inferencing API (for example, if you are using ML Models), select Use GPU for AI Inferencing .
  3. To provide limited secure shell (SSH) administrator access to your Service Domain to manage Kubernetes pods, select Enable SSH Access .
    SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting. See Secure Shell (SSH) Access to Service Domains.
  4. Click Done .

What to do next

  • See Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile.
  • See also Expanding a Multinode Service Domain or Removing a Worker Node from a Multinode Service Domain

Expanding a Multinode Service Domain

Add nodes to an existing multinode Service Domain.

Before you begin

  • See Multinode Service Domain Requirements and Limitations.
  • You can add nodes to or remove nodes from an existing multinode Service Domain. However, you cannot remove the first three nodes you add to the Service Domain. These nodes are tagged as leader nodes. Each subsequent node you add is a worker node and is removable.
  • If you deploy each Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.
  • After adding a Service Domain, the status dot color shows Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > List .
  3. Select a multinode Service Domain, then click Edit .
    You can also rename the Service Domain as part of this procedure.
  4. To expand the Service Domain, click Add Node and enter the following node details:
    1. Name the node.
    2. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the service domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser. Use your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn
    3. IP Address of your Service Domain node VM.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
    6. Click Add Node and repeat these steps to add more nodes.

Adding Categories and Updating the Service Domain

Procedure

  1. Click Add... .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  2. To edit an existing category, select a category and value from the drop-down menu.
  3. To delete a category, click the trash can icon next to it.
  4. When you finish adding or deleting categories, click Next .
  5. Click Update .
  6. On the Service Domains page, you can add another Service Domain by clicking + Service Domain .
  7. You can also Edit Service Domain properties or Remove a Service Domain from Service Domains by selecting it.

What to do next

Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile. See also Removing a Worker Node from a Multinode Service Domain.

Removing a Worker Node from a Multinode Service Domain

Remove worker nodes from a multinode Service Domain. Any node added to an existing three-node Service Domain is considered a worker node.

Before you begin

  • See Multinode Service Domain Requirements and Limitations.
  • You can add nodes to or remove nodes from an existing multinode Service Domain. However, you cannot remove the first three nodes you add to the Service Domain. These nodes are tagged as leader nodes. Each subsequent node you add is a worker node and is removable.
  • Before you remove a node, ensure that the remaining nodes in the Service Domain have enough CPU, storage, and memory capacity to run applications running on the removed node. The Service Domains dashboard shows health status, CPU/Memory/Storage usage, version details, and more information for every Service Domain.
  • After adding a Service Domain, the status dot color shows Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > List .
  3. Select a multinode Service Domain, then click Edit .
    You can also rename the Service Domain as part of this procedure.
  4. In the Service Domain table, click the ellipsis menu next to a node and select Remove .
    This step removes the node.

Adding, Editing, or Deleting Categories and Updating the Service Domain

Procedure

  1. Click Add... .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  2. To edit an existing category, select a category and value from the drop-down menu.
  3. To delete a category, click the trash can icon next to it.
  4. When you finish adding or deleting categories, click Next , then click Update .
  5. On the Service Domains page, you can add another Service Domain by clicking + Service Domain .
  6. You can also Edit Service Domain properties or Remove a Service Domain from Service Domains by selecting it.

What to do next

Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile. See also Expanding a Multinode Service Domain.

Onboarding a Multinode Service Domain By Using Nutanix Karbon

You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster.

About this task

For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the kps command line and the required YAML configuration file for the cluster. This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the kps command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Before you begin

This task requires the following.
  • Prism Central deployment managing at least one Prism Element Cluster running AHV.
  • Nutanix Karbon installed on Prism Central. Ensure that you have downloaded the ntnx-1.0 image. See Downloading Images in the Nutanix Kubernetes Engine Guide .
  • kps command line installed on a machine running Linux or MacOS, as described at the Nutanix public Github repository.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Nutanix AOS and click Next .
  4. Download and configure the kps command line if you have not previously done so.
    See the README at https://github.com/nutanix/karbon-platform-services/tree/master/cli for details.
  5. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my-servicedomain.contoso.com is a valid Service Domain name.
  6. Enter these details about your Prism Central, Prism Element, and Nutanix Karbon deployments.
    1. Prism Central IP Address. IP address of the Prism Central cluster.
    2. Prism Central Username. User name of a Prism Central admin user.
    3. Prism Central Password. Password for the Prism Central admin user.
    4. Nutanix Cluster Name. Cluster name of the AHV cluster being managed by Prism Central.
    5. Storage Container Name. Name of the storage container located on the Prism Element AHV cluster.
    6. Subnet Name. Name of the subnet configured on the Prism Element AHV cluster.
    7. Karbon Cluster Master VIP Address. Master VIP IP address for the Kubernetes API server (kube-apiserver). AHV IP address management (IPAM) provides an IP address for each node. Ensure that the VIP address has not been already allocated and is not part of any cluster IP address pool range that you have specified when you created a virtual network in the AHV cluster. The VIP address must be outside this IP pool address range but part of the same VLAN.
      For more information about network, see Cluster Setup in the Nutanix Kubernetes Engine Guide .

      This information populates the kps command line options and parameters in the next step.

    Figure. Example Create Service Domain with Karbon Page: the fields needed to create the Service Domain through Nutanix Karbon.

  7. Click Copy next to the kps command line and run it on the machine where you installed the kps command line.
    For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the kps command line and the required YAML configuration file for the cluster.
  8. Click Close .

Upgrade Service Domains

Your network bandwidth might affect how long it takes to completely download the latest Service Domain version. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.

Note: Each Kubernetes release potentially deprecates or removes support for API versions or resources. To help ensure that your applications run correctly after upgrading your Service Domain version, Nutanix recommends that you check the Deprecated API Migration Guide available at the Kubernetes Documentation web site. Depending on the deprecated or removed APIs, you might have to modify your Kubernetes App YAML files.

Upgrade your existing Service Domain VM by using the Upgrades page in the cloud management console. From Upgrades , you can see available updates that you can download and install on one or more Service Domains of your choosing.

Upgrading the Service Domain is a two-step process where you:

  1. Download an available upgrade to one or more existing Service Domains.
  2. Upgrade selected Service Domains to the downloaded version.
Upgrades lets you choose how to upgrade your Service Domains.
Note: During a Service Domain upgrade operation, you might see service or app related alerts such as Kafka is degraded, partitions are under-replicated, and others. These messages are expected and can be safely ignored when you are upgrading your Service Domain. Nutanix recommends that you perform installation and upgrades during your scheduled maintenance window or outside your normal business hours.
Table 1. Upgrades Page Links

Service Domains
  "1-click" download or upgrade for all upgrade-eligible Service Domains.
  • Select the most recent available version (N) or the next most recent available versions (N-1 and N-2). For example, if your current Service Domain version is 1.16, you can download, then upgrade your Service Domain to available versions 1.16.1, 1.17, and 1.18.
  • Upgrade all Service Domains where you have already downloaded the selected available Service Domain version(s).
Download and upgrade on all eligible
  Use this workflow to download an available version to all Service Domains eligible to be upgraded. You can then decide when you want to upgrade the Service Domain to the downloaded version. See Upgrading All Service Domains.
Download and upgrade on selected
  Use this workflow to download an available version to one or more Service Domains that you select and are eligible to be upgraded. This option appears after you select one or more Service Domains. After downloading an available Service Domain version, upgrade one or more Service Domains when convenient. See Upgrading Selected Service Domains.
Task History | View Recent History
  See Checking Upgrade Task History. View Recent History appears in the Service Domains page list for each Service Domain and shows a status summary.

Upgrading All Service Domains

Before you begin

Note:
  • During a Service Domain upgrade operation, you might see service or app related alerts such as Kafka is degraded, partitions are under-replicated, and others. These messages are expected and can be safely ignored when you are upgrading your Service Domain. Nutanix recommends that you perform installation and upgrades during your scheduled maintenance window or outside your normal business hours.
  • Each Kubernetes release potentially deprecates or removes support for API versions or resources. To help ensure that your applications run correctly after upgrading your Service Domain version, Nutanix recommends that you check the Deprecated API Migration Guide available at the Kubernetes Documentation web site. Depending on the deprecated or removed APIs, you might have to modify your Kubernetes App YAML files.
Log on to the cloud management console. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.

About this task

  • Download an available Service Domain version to all eligible Service Domains
  • Upgrade all Service Domains where you have already downloaded an available service domain version
  • You can select the most recent available version (N) or the next most recent available versions (N-1, and N-2). For example, if your current Service Domain version is 1.16, you can download, then upgrade your Service Domain to available versions 1.16.1, 1.17, and 1.18.

Procedure

  1. Go to Administration > Upgrades > Service Domains .
    A Service Domain table lists each Service Domain, current status (Healthy, Not Healthy, Disconnected), current installed version, and upgrade actions and history.
  2. To download a version to all eligible Service Domains, click Download on all eligible and select a version (for example 1.18).
    The Download on all eligible drop-down menu shows the latest version as vX.xx.x (Latest) .
    The Download popup window shows the eligible service domains as selected. Clear (unselect) any Service Domain to prevent the bundle from downloading onto it.
  3. Click Download .
    The Service Domain version downloads to all Healthy Service Domains. The Actions column shows a downloading message. Hover over the task circle to see current progress. When the Actions column displays Upgrade to vX.xx.x , downloads are completed and you can upgrade the Service Domains.
  4. To upgrade the Service Domains when downloads are completed, click Upgrade all eligible to and select a version.
  5. In the Upgrade Service Domains popup window, click Upgrade . Clear (unselect) any Service Domain to prevent it from being upgraded.
    The Actions column shows upgrade progress similar to download. When the Actions column displays View Recent History and the Version column is updated, upgrade operations are complete. See also Checking Upgrade Task History.

Upgrading Selected Service Domains

Before you begin

Note:
  • During a Service Domain upgrade operation, you might see service or app related alerts such as Kafka is degraded, partitions are under-replicated, and others. These messages are expected and can be safely ignored when you are upgrading your Service Domain. Nutanix recommends that you perform installation and upgrades during your scheduled maintenance window or outside your normal business hours.
  • Each Kubernetes release potentially deprecates or removes support for API versions or resources. To help ensure that your applications run correctly after upgrading your Service Domain version, Nutanix recommends that you check the Deprecated API Migration Guide available at the Kubernetes Documentation web site. Depending on the deprecated or removed APIs, you might have to modify your Kubernetes App YAML files.
Log on to the cloud management console.

About this task

Use this workflow to download an available version to one or more Service Domains. You can then decide when you want to upgrade the Service Domain to the downloaded version. Your network bandwidth might affect how long it takes to completely download the latest Service Domain version. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.

Procedure

  1. Go to Administration > Upgrades > Service Domains .
    A Service Domain table lists each Service Domain, current status (Healthy, Not Healthy, Disconnected), current installed version, and upgrade actions and history.
  2. To download the upgrade to selected eligible Service Domains, do these steps.
    1. Select one or more Service Domains in the table.
    2. Click Download on selected num and select a version (for example 1.18). num is the number of eligible Service Domains you selected. The drop-down menu shows the latest version as vX.xx.x (Latest) .
    The Download popup window shows the eligible service domains as selected. Clear (unselect) any Service Domain to prevent the bundle from downloading onto it.
  3. Click Download .
    The Service Domain version downloads to the selected Service Domains. The Actions column shows a downloading message. Hover over the task circle to see current progress. When the Actions column displays Upgrade to vX.xx.x , downloads are completed and you can upgrade the Service Domains.
  4. To upgrade selected eligible Service Domains, do these steps.
    1. Select one or more Service Domains where the Actions status is Upgrade to vX.xx.x .
    2. Click Upgrade selected num . num is the number of eligible Service Domains you selected.
    The Upgrade popup window shows the eligible Service Domains as selected. Clear (unselect) any Service Domain to prevent it from being upgraded.
  5. Click Upgrade .
    The Actions column shows upgrade progress similar to download. When the Actions column displays View Recent History and the Version column is updated, upgrade operations are complete. See also Checking Upgrade Task History.

Checking Upgrade Task History

Before you begin

Log on to the cloud management console.

About this task

After downloading and upgrading an available Service Domain version, upgrade task history is available in the console.

View Recent History appears in the Service Domains page list for each Service Domain and shows a status summary.

Procedure

  1. To see Service Domain upgrade task history, go to Administration > Upgrades > Task History .
  2. You can see status for single node and multinode Service Domains.
    1. Version . The available Service Domain version to which you attempted to download or upgrade.
    2. Status . Downloaded (version download is complete), Upgraded (Service Domain is upgraded), Upgrade Failed (Service Domain was not upgraded).
    3. Completed on . Date and time task occurred (Upgrade Failed) or completed (Upgraded or Downloaded).
    4. Service Domains . Link to the Service Domain(s) where the action occurred. This link is especially helpful with multinode Service Domains. After you click the Service Domain link, a popup status window shows status for each Service Domain. The window includes a filter field so you can see individual status.

Projects

A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

A project can consist of:

  • Existing administrator or project users
  • A Service Domain
  • Cloud profile
  • Container registry profile

When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.

Projects Page

The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.

For project users, the Projects page lists projects created and assigned by the infrastructure administrator. Project users can view any assigned project and update it with applications, data pipelines, and so on. Project users cannot remove a project.

When you click a project name, the project Summary dashboard is displayed and shows resources in the project.

You can click any of the project resource menu links to edit or update existing resources, or create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. You can also click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).

Figure. Project Summary Dashboard: project summary dashboard with component links.

Creating a Project

As an infrastructure administrator, create a project. To complete this task, log on to the cloud management console.

About this task

  • When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.
  • When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

Procedure

  1. Click the menu button, then click Projects > Create .
    1. Name your project. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for your project.
    3. Click Add Users to add users to a project.
    4. Click Next .
  2. Add a Service Domain, which associates any resources available to the Service Domain to your project.
    1. Select Select By Category to choose and apply one or more categories associated with a Service Domain.
      Click the plus sign ( + ) to add more categories.
    2. Select Select Individually to choose a Service Domain, then click Add Service Domain to select one or more Service Domains.
  3. Select a Cloud Profile and Container Registry .
    Click the plus sign ( + ) to select additional cloud profiles or container registries.
  4. Click Next to enable one or more project services.
    1. Enabled indicates that this service is available to all projects.
    2. Click Enable to enable a service for all project users.
      See Enabling or Disabling One or More Services for a Project.
  5. Click Create .

Editing a Project

Update an existing project. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons. See Enabling or Disabling One or More Services for a Project.
  3. Click Edit .
    1. Update the project's general information by updating the name and description.
    2. Click the trashcan to remove a user from the list. Click Edit to open the user dialog box where you add or remove users associated with the project.
    3. Click Next to update project resources.
  4. Update the associated Service Domains.
    1. Select Select By Category to add or remove one or more categories associated with a Service Domain.
      • Click the trashcan to remove a category.
      • Click the plus sign ( + ) to add more categories.
    2. Select Select Individually to remove or add a Service Domain. Do one of the following.
      • Click the trashcan to remove a Service Domain in the list.
      • Click Edit to select one or more Service Domains.
  5. For Cloud Profile and Container Registry :
    1. Click the trashcan to remove cloud profiles or container registries.
    2. Click the plus sign ( + ) to add cloud profiles or container registries.
  6. Click Next to enable one or more project services.
    1. Enabled indicates that this service is available to all projects.
    2. Click Enable to enable a service for all project users.
      See Enabling or Disabling One or More Services for a Project.
  7. Click Update .

Removing a Project

As an infrastructure administrator, delete a project. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Select a project in the list.
    The project dashboard is displayed along with Edit , Manage Services , and Remove buttons. See Enabling or Disabling One or More Services for a Project.
  3. If Remove does not appear as an option, you might need to delete any apps associated with the project.
  4. After removing associated apps, return to the Projects page and select the project.
  5. Click Remove , then click Delete to confirm.
    The Projects page lists any remaining projects.

Managing Project Services

The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.

The platform includes these ready-to-use services, which provide an advantage over self-managed services:

  • Secure by design.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain helps monitor service health and raises service-specific alerts.
App Runtime Services
These services are enabled by default on each Service Domain.
  • Kubernetes Apps. Containers as a service. You can create and run Kubernetes apps without having to manage the underlying infrastructure.
  • Functions and Data Pipelines. Run serverless functions based on data triggers, then publish data to specific cloud or Service Domain endpoints.
  • AI Inferencing. Enable this service to use your machine learning (ML) models in your project. The ML Model feature provides a common interface for functions (that is, scripts) or applications.
Ingress Controller
Traefik or Nginx-Ingress. If your project requires Ingress controller routing, you can choose the open source Traefik router or the NGINX Ingress controller to enable on your Service Domain. See Enable an Ingress Controller.
Service Mesh
Istio. Provides secure connection, traffic management, and telemetry. See Istio Service Mesh.
Data Streaming | Messaging
  • Kafka. Available for use within project applications and data pipelines, running on a Service Domain hosted in your environment. See Kafka as a Service.
  • NATS. Available for use within project applications and data pipelines. In-memory high performance data streaming including pub/sub (publish/subscribe) and queue-based messaging.
Logging | Monitoring | Alerting
  • Prometheus. Provides Kubernetes app monitoring and logging, alerting, metrics collection, and a time series data model (suitable for graphing). See Prometheus Application Monitoring and Metrics as a Service.
  • Logging. Provides log monitoring and log bundling. See Audit Trail and Log Management.


Enabling or Disabling One or More Services for a Project

Enable or disable services associated with your project.

Before you begin

  • You cannot disable the default Kubernetes Apps, Functions and Data Pipelines, and AI Inferencing services.
  • Disabling Kafka. When you disable Kafka, data stored in PersistentVolumeClaim (PVC) is not deleted and persists. If you later enable Kafka on the same Service Domain, the data is reused.
  • Disabling Prometheus. When you disable Prometheus and then later enable it, the service creates a new PVC and clears any metrics data previously stored in the PVC.

Procedure

  1. From the home menu, click Projects , then click a project.
    Depending on your role, you can also select a project from the Admin Console drop-down menu.
  2. Click Manage Services .
  3. Enable a service.
    1. Click Enable in a service tile.
    2. Click Enable service_name .
      For Istio, apps in this project restart.
    3. Click Confirm .
    If the minimum Service Domain version is detected, the service is available for use in your project in a few minutes, after the service initializes. If the Service Domain does not support the service, an alert is triggered.
  4. Disable a service.
    1. Click Disable in a service tile.
    2. Click Disable service_name .
      For Istio, apps in this project restart.
    3. Click Confirm .
    The service is no longer available for use in your project.

Kafka as a Service

Kafka is available as a data service through your Service Domain.

The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:

  • Secure by design.
  • No need to explicitly declare Kafka topics. With Karbon Platform Services Kafka as a service, topics are automatically created.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain monitors service health and raises service-specific alerts.


Using Kafka in an Application

Information about application requirements and a sample YAML application file.

Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.

Sample Kafka Application YAML Template File


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: some.container.registry.com/myapp:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: KAFKA_ENDPOINT
          value: {{.Services.Kafka.Endpoint}}
Table 1. Kafka Application YAML Specification
  • kind: Deployment . Specify the resource type. Here, use Deployment .
  • metadata > name . Provide a name for your deployment.
  • metadata > labels . Provide at least one label. Here, specify the application name as app: my-app .
  • spec > replicas . Here, 1 to indicate a single instance (a single Service Domain instance or VM) to keep data synchronized.
  • spec > selector . Use matchLabels and specify the app name as in labels above.
  • template . Specify the application name here ( my-app ), the same as in the metadata specification above.
  • template > spec . Define the specification for the application using Kafka.
  • containers :
    • name: my-app . Name the application container.
    • image: some.container.registry.com/myapp:1.7.9 . Specify the container image, including the container registry host address where the image is stored.
    • ports: containerPort: 80 . Define the port exposed by the container. Here, 80.
  • env . Set name: KAFKA_ENDPOINT and value: {{.Services.Kafka.Endpoint}} . Leave these values as shown.
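
To show how an application consumes this endpoint at run time, here is a minimal Python sketch; it assumes the kafka-python client library is installed in the container image, and the topic name is illustrative:

import os
from kafka import KafkaProducer  # assumes the kafka-python client library

# KAFKA_ENDPOINT is populated from {{.Services.Kafka.Endpoint}} at deployment.
producer = KafkaProducer(bootstrap_servers=os.environ["KAFKA_ENDPOINT"])

# Topics are created automatically, so no declaration step is needed.
producer.send("sensor-readings", b'{"temp_c": 21.5}')
producer.flush()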

Using Kafka in a Function for a Data Pipeline

Information about data pipeline function requirements.

See Functions and Data Pipelines.

You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:

  • Input. An existing data source or real-time data stream (output from another data pipeline).
  • Transformation. A function (code block or script) to transform data from a data source (or no function at all to pass data directly to the endpoint).
  • Output. An endpoint destination for the transformed or raw data.

Kafka Endpoint Function Requirements

For a data pipeline with a Kafka topic endpoint:

  • Language . Select the golang scripting language for the function.
  • Runtime Environment . Select the golang runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.

Viewing Kafka Status

In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Kafka .
    This dashboard shows a consolidated, high-level view of all Kafka topics, deployments, and related alerts. The default view is Topics.
  4. To show all topics for this project, click Topics .
    1. Topics. All Kafka topics used in this project.
    2. Service Domain. Service Domains in the project employing Kafka messaging.
    3. Bytes/sec produced and Bytes/sec consumed. Bandwidth use.
  5. To view Service Domain status where Kafka is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Brokers. A No Brokers Available message might mean the Service Domain is not connected.
    3. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with Kafka for this project, click Alerts .

Enable an Ingress Controller

The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.

An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.

When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.

If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.

In your application YAML, specify two snippets:

  • Service snippet. Use annotations to define the HTTP or HTTPS host protocol, path, service domain, and secret for each Service Domain to use as an ingress controller. You can specify one or more Service Domains.
  • Secret snippet. Use this snippet to specify the certificates used to secure app traffic.


Sample Ingress Controller Service Domain Configuration

To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.

You can only enable and use one Ingress controller per Service Domain.

Create an application for your project as described in Creating an Application to specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets (the Service Domain ingress controller annotations and the certificate information) in the app deployment YAML file.

Ingress Controller Service Domain Host Annotations Specification

apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    sherlock.nutanix.com/http-ingress-path: /notls
    sherlock.nutanix.com/https-ingress-path: /tls
    sherlock.nutanix.com/https-ingress-host: DNS_name
    sherlock.nutanix.com/http-ingress-host: DNS_name
    sherlock.nutanix.com/https-ingress-secret: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
Table 1. Ingress Controller Annotations Specification
  • apiVersion: v1 . Here, the Kubernetes API version.
  • kind: Service . Specify the resource type. Here, use Service to indicate that this snippet defines the ingress controller details.
  • metadata > name . Provide an app name to which this controller applies.
  • annotations . These annotations define the ingress controller encryption type and paths for Karbon Platform Services.
    • sherlock.nutanix.com/http-ingress-path: /notls . /notls specifies no Transport Layer Security (TLS) encryption.
    • sherlock.nutanix.com/https-ingress-path: /tls . /tls specifies TLS encryption.
    • sherlock.nutanix.com/http-ingress-host: DNS_name . Ingress service host, where the service is bound to port 80. DNS_name is a DNS name you can give to your application (for example, my.contoso.net ). Ensure that the DNS name resolves to the Service Domain IP address.
    • sherlock.nutanix.com/https-ingress-host: DNS_name . Ingress service host, where the service is bound to port 443. DNS_name is a DNS name you can give to your application (for example, my.contoso.net ). Ensure that the DNS name resolves to the Service Domain IP address.
    • sherlock.nutanix.com/https-ingress-secret: whoami . Links the authentication Secret (defined in the Secret snippet that follows) to this controller.
  • spec . Define the transfer protocol, port name, and port for the application.
    • protocol: TCP . Transfer protocol of TCP.
    • ports . name: web defines the port name; port: 80 defines the TCP port.
  • selector . A selector to specify the application: app: whoami .

Securing The Application Traffic

Use a Secret snippet to specify the certificates used to secure app traffic.

apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  ca.crt: cert_auth_cert
  tls.crt: tls_cert
  tls.key: tls_key
Table 2. TLS Certificate Specification
  • apiVersion: v1 . Here, the Kubernetes API version.
  • kind: Secret . Specify the resource type. Here, use Secret to indicate that this snippet defines the authentication details.
  • metadata > name . Provide an app name to which this certificate applies.
  • type . Define the authentication type used to secure the app. Here, kubernetes.io/tls .
  • data . Add the keys for each certificate type: certificate authority certificate ( ca.crt ), TLS certificate ( tls.crt ), and TLS key ( tls.key ).
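
Kubernetes expects the values under data to be base64-encoded. The following small Python sketch produces the encoded values from local certificate files; the file names are assumed to match the Secret keys:

import base64
import pathlib

# Print a base64-encoded value for each key in the Secret's data section.
for name in ("ca.crt", "tls.crt", "tls.key"):
    encoded = base64.b64encode(pathlib.Path(name).read_bytes()).decode()
    print(f"{name}: {encoded}")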

Viewing Ingress Controller Details

In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Depending on the deployed Ingress controller, click Nginx-Ingress or Traefik .
    This dashboard shows a consolidated, high-level view of all rules, deployments, and related alerts. The default view is Rules.
  4. To show all rules for this project, click Rules .
    1. Application. App where traffic is being routed.
    2. Rules. Rules you have configured for this app. Here, Rules shows the host and paths.
    3. Destination. Application and port number.
    4. Service Domain. Service Domains in the project employing routing.
    5. TLS. Transport Layer Security (TLS) protocol status. On indicates encrypted communication is enabled. Off indicates it is not used.
  5. To view Service Domain status where the controller is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with the controller for this project, click Alerts .

Istio Service Mesh

Istio provides secure connection, traffic management, and telemetry.

Add the Istio Virtual Service and DestinationRules to an Application - Example

In the application YAML snippet or file, define the VirtualService and DestinationRules objects.

These objects specify traffic routing rules for the recommendation-service app host. If the traffic rules match, traffic flows to the named destination (or subset/version of it) as defined here.

In this example, traffic is routed to the recommendation-service app host if it is sent from the Firefox browser. The specific policy version ( subset ) for each host helps you identify and manage routed data.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - match:
    - headers:
        user-agent:
          regex: .*Firefox.*
    route:
    - destination:
        host: recommendation-service
        subset: v2
  - route:
    - destination:
        host: recommendation-service
        subset: v1

This DestinationRule YAML snippet defines a load-balancing traffic policy for the policy versions ( subsets ), where any healthy host can service the request.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recomm-svc
spec:
  host: recommendation-service
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Manage Traffic by Weight - Example

In this YAML snippet, you split traffic between the subsets by specifying a weight of 30 for one and 70 for the other. To weight them evenly, give each a weight value of 50.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - route:
    - destination:
        host: recommendation-service
        subset: v2
      weight: 30
    - destination:
        host: recommendation-service
        subset: v1
      weight: 70


Viewing Istio Details

In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.

About this task

The Istio tabs show service resource and routing configuration information derived from project-related Kubernetes Apps YAML files and service status.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Istio in the navigation sidebar to show the default Application Metrics page.
    Application Metrics shows initial details about the applications and related traffic metrics.
    1. Name and Service Domain. Application name and Service Domain where the application is deployed and the Istio service is enabled.
    2. Workloads. One or more workloads associated with the application. For example, Kubernetes Deployments, StatefulSets, DaemonSets, and so on.
    3. Inbound Request Volume / Outbound Request Volume. Inbound and outbound HTTP requests per second for the last 5 minutes.
  3. To see details about any traffic routing configurations associated with the application, click Virtual Services .
    An application can specify one or more virtual services, which you can deploy across one or more Service Domains.
    1. Application and Virtual Services. Application and service name.
    2. Service Domains. Number and name of the Service Domains where the service is deployed.
    3. Matches. Lists matching traffic routes.
    4. Destinations. Number of service destination host connection or request routes. Expand the number to show any destinations served by the virtual service (v1, v2, and so on) where traffic is routed. If specified in the Virtual Service YAML file, Weight indicates the proportion of traffic routed to each host. For example, Weight: 50 indicates that 50 percent of the traffic is routed to that host.
  4. To see rules associated with Destinations, click Destination Rules .
    An application can specify one or more destination rules, which you can deploy across one or more Service Domains. A destination rule is a policy that is applied after traffic is routed (for example, a load balancing configuration).
    1. Application. Application where the rule applies.
    2. Destination Rules. Rules by name.
    3. Service Domains. Number and name of the Service Domains where the rule is deployed.
    4. Subsets. Name and number of the specific policies (subsets). Expand the number to show any pod labels (v1, v2, and so on), service versions (v1, v2, and so on), and traffic weighting, where rules apply and traffic is routed.
  5. To view Service Domain status where Istio is used, click Deployments .
    1. List of Service Domains where the service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with the controller for this project, click Alerts .

Prometheus Application Monitoring and Metrics as a Service

Note: When you disable Prometheus and then later enable it, the service creates a new PersistentVolumeClaim (PVC) and clears any metrics data previously stored in the PVC.

The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.

Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.

You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.

Default Service Settings

Table 1. Prometheus Default Settings
  • Frequency interval to collect and store metrics (also known as scrape and store): Every 60 seconds (see note 1)
  • Collection endpoint: /metrics (see note 1)
  • Default collection app: collect-metrics
  • Data storage retention time: 10 days
  1. You can create a customized ServiceMonitor YAML snippet to change these default settings. See Enable Prometheus App Monitoring and Metric Collection - Examples.


Enable Prometheus App Monitoring and Metric Collection - Examples

Monitor an Application with Prometheus - Default Endpoint

This sample app YAML specifies an app named metricsmatter-sample-app and creates one instance of this containerized app ( replicas: 1 ) from the managed Amazon Elastic Container Registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricsmatter-sample-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metricsmatter-sample-app 
  template:
    metadata:
      name: metricsmatter-sample-app
      labels:
        app: metricsmatter-sample-app
    spec:
      containers:
        - name: metricsmatter-sample-app
          imagePullPolicy: Always
          image: 1234567890.dkr.ecr.us-west-2.amazonaws.com/app-folder/metricmatter_sample_app:latest

Next, in the same application YAML file, create a Service snippet. Add the default collect-metrics app label to the Service object. When you add app: collect-metrics , Prometheus scrapes the default /metrics endpoint every 60 seconds, with metrics exposed on port 8010.

---
apiVersion: v1
kind: Service
metadata:
  name: metricsmatter-sample-service
  labels:
    app: collect-metrics
spec:
  selector:
    app: metricsmatter-sample-app
  ports:
    - name: web
      protocol: TCP
      port: 8010
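
On the application side, the container must actually serve metrics on the scraped port. The following minimal Python sketch, assuming the prometheus_client library is installed in the image, exposes the default /metrics path on port 8010:

import time
from prometheus_client import Counter, start_http_server  # assumes prometheus_client

# Serve /metrics on port 8010, matching the Service port named 'web' above.
start_http_server(8010)

# A sample counter; Prometheus scrapes it every 60 seconds by default.
requests_total = Counter("myapp_requests_total", "Total requests handled")

while True:
    requests_total.inc()  # stand-in for real application work
    time.sleep(1)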

Monitor an Application with Prometheus - Custom Endpoint and Interval Example

Add a ServiceMonitor snippet to the app YAML above to customize the endpoint to scrape and change the interval to collect and store metrics. Make sure you include the Deployment and Service snippets.

Here, change the endpoint to /othermetrics and the collection interval to 15 seconds ( 15s ).

Prometheus discovers all ServiceMonitors in a given namespace (that is, each project app) where it is installed.

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metricsmatter-sample-app
  labels:
    app: collect-metrics
spec:
  selector:
    matchLabels:
      app: collect-metrics
  endpoints:
    - path: /othermetrics
      interval: 15s
      port: web

Use Environment Variables as the Endpoint

You can also use endpoint environment variables in an application template for the service and AlertManager.

  • {{.Services.Prometheus.Endpoint}} defines the service endpoint.
  • {{.Services.AlertManager.Endpoint}} defines a custom Alert Manager endpoint.

Configure Service Domain Environment Variables describes how to use these environment variables.
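
For example, an application could read the resolved endpoint from its environment and query the standard Prometheus HTTP API. A minimal Python sketch, assuming the requests library is available and that the template variable is mapped to a PROMETHEUS_ENDPOINT environment variable (the variable name is illustrative):

import os
import requests  # assumes the requests library is installed in the image

# PROMETHEUS_ENDPOINT is assumed to be mapped from {{.Services.Prometheus.Endpoint}}.
prometheus = os.environ["PROMETHEUS_ENDPOINT"]

# Query the standard Prometheus HTTP API for the 'up' metric.
resp = requests.get(f"{prometheus}/api/v1/query", params={"query": "up"})
print(resp.json()["data"]["result"])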

Viewing Prometheus Service Status

The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Prometheus to view the default Deployments page.
    1. Service Domain and Status. Service Domains where the Prometheus service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Prometheus Endpoints. Endpoints that the service is scraping, by ID.
    3. Alert Manager. Alert Manager shows alerts associated with the Prometheus endpoints, by ID.
    4. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  4. To see all alerts associated with Prometheus for this project, click Alerts .

Create Prometheus Graphs with Grafana - Example

This example shows how you can set up a Prometheus metrics dashboard with Grafana.

This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.

Define Prometheus as the Data Source for Grafana

The first ConfigMap YAML snippet example uses the environment variable {{.Services.Prometheus.Endpoint}} to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application.

The second snippet provides configuration information for the Grafana server web page. The host name in this example is woodkraft2.ntnxdomain.com .


apiVersion: v1
kind: ConfigMap
metadata:
 name: grafana-datasources
data:
 prometheus.yaml: |-
   {
       "apiVersion": 1,
       "datasources": [
           {
              "access":"proxy",
               "editable": true,
               "name": "prometheus",
               "orgId": 1,
               "type": "prometheus",
               "url": "{{.Services.Prometheus.Endpoint}}",
               "version": 1
           }
       ]
   }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = woodkraft2.ntnxdomain.com
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
    serve_from_sub_path = true
---

Specify Deployment Information on the Service Domain

This YAML snippet provides a standard deployment specification for Grafana.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - name: grafana
              containerPort: 3000
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "1Gi"
              cpu: "500m"
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-storage
            - mountPath: /etc/grafana/provisioning/datasources
              name: grafana-datasources
              readOnly: false
            - name: grafana-ini
              mountPath: "/etc/grafana/grafana.ini"
              subPath: grafana.ini
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-datasources
          configMap:
            defaultMode: 420
            name: grafana-datasources
        - name: grafana-ini
          configMap:
            defaultMode: 420
            name: grafana-ini
---

Define the Grafana Service and Use an Ingress Controller

Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).


apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  rules:
    - host: woodkraft2.ntnxdomain.com
      http:
        paths:
        - path: /grafana
          backend:
            serviceName: grafana
            servicePort: 3000


Kubernetes Apps

You can create intelligent applications to run on the Service Domain infrastructure or the cloud where you have pushed collected data. An application YAML file can also serve as a template, which you customize by passing it existing Categories associated with a Service Domain.

To create an app, you must first create a project with at least one user.

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.

Privileged Kubernetes Apps

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the {{ .Namespace }} variable, which you can define in the app YAML template file.

In this example, the ClusterRoleBinding resource specifies the {{ .Namespace }} variable as the namespace where the subject ServiceAccount is deployed. Because all app resources are deployed in the project namespace, specify the name as well (here, name: my-sa ).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: {{ .Namespace }}

Creating an Application

Create a Kubernetes application that you can associate with a project.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • If your app requires a service, make sure you enable it in the associated project.
  • Application YAML files can also be a template where you can customize the template by passing existing Categories associated with a Service Domain to it.
  • See also Configure Service Domain Environment Variables to set and use environment variables in your app.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps > Create Kubernetes App .
  3. Name your application. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description for your application.
  5. Select one or more Service Domains, individually or by category.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. For Select Individually , click Add Service Domains to select one or more Service Domains.
    2. For Select By Category , select to choose and apply one or more categories associated with a Service Domain.
      Click the plus sign ( + ) to add more categories.
  6. Click Next .
  7. To use an application YAML file or template, select YAML based configuration , then click Choose File to navigate to a YAML file or template.
    After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
  8. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . You also can optionally browse to a values YAML file to override the values you specified in the Helm Chart package.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  9. Click Create .
    If your app requires a service, make sure you enable it in the associated project.
    The Kubernetes Apps page lists the application you just created as well as any existing applications. Apps start automatically after creation.

Editing an Application

Update an existing Kubernetes application.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • Application YAML files can also be a template where you can customize the template by passing existing Categories associated with a Service Domain to it.
  • To set and use environment variables in your app, see Configure Service Domain Environment Variables.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then click an application in the list.
    The application dashboard is displayed along with an Edit button.
  3. Click Edit .
  4. Name . Update the application name. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your application.
  6. You cannot change the application's Project .
  7. Update the associated Service Domains.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. If Select Individually is selected:
      • Click X to remove a Service Domain in the list.
      • Click Edit to select or deselect one or more Service Domains.
    2. If Select By Category is selected, add or remove one or more categories associated with a Service Domain.
      • Select a category and value.
      • Click X to remove a category.
      • Click the plus sign ( + ) to add more categories and values.
  8. Click Next .
  9. To use an application YAML file or template, select YAML based configuration , then click Choose File to navigate to a YAML file or template.
    1. You can edit the file directly on the Yaml Configuration page.
    2. Click Choose File to navigate to a YAML file or template.
    After uploading the file, you can also edit the file contents. You can choose the contrast levels Normal , Dark , or High Contrast Dark .
  10. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . You also can optionally browse to a YAML file to override the values you specified in the Helm Chart package. Use this option to update your application without having to re-upload a new chart.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  11. Click Update .
    The Kubernetes Apps page lists the application you just updated as well as any existing applications.

Removing an Application

Delete an existing Kubernetes application.

About this task

  • To complete this task, log on to the cloud management console.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. Click Actions > Remove , then click Delete to confirm.
    The Kubernetes Apps page lists any remaining applications.

Deploying and Undeploying a Kubernetes Application

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.

Before you begin

Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.

  • If your application specifies an explicit PersistentVolumeClaim object, when the app is undeployed, data stored in PersistentVolumeClaim is deleted. This data is not available if you then deploy the app.
  • If your application specifies a VolumeClaimTemplates object, when the app is undeployed, data stored in PersistentVolumeClaim persists. This data is available for reuse if you later deploy the app. If you plan to redeploy apps, Nutanix recommends using VolumeClaimTemplates to implement StatefulSets with stable storage.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. To undeploy a running application, select an application, then click Actions > Undeploy .
    1. To undeploy every instance of the app running on all Service Domains, select Undeploy All , then click Undeploy .
    2. To undeploy the app running on specific Service Domains, select Undeploy Selected , select one or more Service Domains, then click Undeploy .
    3. Click Undeploy App to confirm.
  4. To deploy an undeployed application, select an application, then click Actions > Deploy , then click Deploy to confirm.
    1. To deploy every instance of the app running on all Service Domains, select Deploy All , then click Deploy .
    2. To deploy the app running on specific Service Domains, select Deploy Selected , select one or more Service Domains, then click Deploy .

Helm Chart Support and Requirements

Helm Chart Format
When you create a Kubernetes app in the cloud management console, you can upload a Helm chart package in a gzipped TAR file (.TGZ format) that describes your app. For example, it can include:
  • Helm chart definition YAML file
  • App YAML template files that define your application (deployment, service, and so on). See also Privileged Kubernetes Apps
  • Values YAML file where you declare variables to pass to your app templates. For example, you can specify Ingress controller annotations, cloud repository details, and other required settings
Supported Helm Version
Karbon Platform Services supports Helm version 3.
Kubernetes App Support
Karbon Platform Services supports the same Kubernetes resources in Helm charts as in application YAML files. For example, daemonsets, deployment, secrets, services, statefulsets, and so on are supported when defined in a Helm chart.

Data Pipelines

The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.

A data pipeline is a path for data that includes:

  • Input . An existing data source or real-time data stream.
  • Transformation . Code block such as a script defined in a Function to process or transform input data.
  • Output . A destination for your data. Publish data to the Service Domain, cloud, or cloud data service (such as AWS Simple Queue Service).

A data pipeline also enables you to process and transform captured data for further consumption.

To create a data pipeline, you must have already created or defined at least one of the following:

  • Project
  • Category
  • Data source
  • Function
  • Cloud profile. Required for cloud data destinations or Service Domain endpoints.

Also note the following:

  • When you create, clone, or edit a function, you can define one or more parameters. When you create a data pipeline, you define the values for those parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
  • You can also stop and start a data pipeline. See Stopping and Starting a Data Pipeline.

Creating a Data Pipeline

Create a data pipeline, with a data source or real-time data stream as input and a Service Domain or external cloud as the output.

Before you begin

You must have already created at least one of each: project, data source, function, and category. Also, a cloud profile is required for cloud data destinations or Service Domain endpoints. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Click Create .
  5. Data Pipeline Name . Name your data pipeline. Up to 63 lowercase alphanumeric characters and the dash character (-) are allowed.

Input - Add a Data Source

Procedure

Click Add Data Source , then select Data Source .
  1. Select a data source Category and one related Value .
  2. [Optional] Click Add new to add another Category and Value . Continue adding as many as you have defined.
  3. Click the trashcan to delete a data source category and value.

Input - Add a Real-Time Data Stream

Procedure

Click Add Data Source , then select Real-Time Data Stream .
  1. Select an existing pipeline as a Real-Time Data Stream .

Transformation - Add a Function

Procedure

  1. Click Add Function and select an existing Function.
  2. Define a data Sampling Interval by selecting Enable .
    1. Enter an interval value.
    2. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as a pair of parentheses: ( )
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Create Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .

Output - Add a Destination

Procedure

  1. Click Add Destination to specify where the transformed data is to be output: Publish to Service Domain or Publish to External Cloud .
  2. If you select Destination > Service Domain :
    1. Endpoint Type . Select Kafka , MQTT , Realtime Data Stream , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  5. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  6. Click Create .

Editing a Data Pipeline

Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline in the list, then click Actions > Edit
    You cannot update the data pipeline name.

Input - Edit a Data Source

Procedure

You can do any of the following:
  1. Select a different Data Source to change the data pipeline Input source (or keep the existing one).
  2. Select a different value for an existing Category.
  3. Click the trashcan to delete a data source category and value.
  4. Click Add new to add another Category and Value . Continue adding as many as you have defined.
  5. Select a different category.

Input - Edit a Real-Time Data Stream

Procedure

Select a different Realtime Data Stream to change the data pipeline Input source.

Transformation - Edit a Function

About this task

You can do any of the following tasks.

Procedure

  1. Select a different Function .
  2. Add or update the Sampling Interval for any new or existing function.
    1. If not selected, create a Sampling Interval by selecting Enable .
    2. Enter an interval value.
    3. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as a pair of parentheses: ( )
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Add Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .
  6. Click Update .

Output - Edit a Destination

About this task

You can do any of the following tasks.

Procedure

  1. Select an Infrastructure or External Cloud Destination to specify where the transformed data is to be output.
  2. If you select Destination > Infrastructure :
    1. Endpoint Type . Select MQTT , Realtime Data Stream , Kafka , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  5. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  6. Click Update .

Removing a Data Pipeline

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline, click Actions > Remove , then click Delete again to confirm.
    The Data Pipelines page lists any remaining data pipelines.

Stopping and Starting a Data Pipeline

About this task

  • You can stop and start a data pipeline.
  • You can select the table or tile view on this page by clicking one of the view icons.
    Figure: View Icons. The view icons on the page let you switch between table and tile view.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Functions and Data Pipelines .
  3. To stop an active data pipeline, select a data pipeline, then click Actions > Stop .
    You can also click a data pipeline, then click Stop or Start , depending on the data pipeline state.
    Stop stops any data from being transformed or processed, and terminates any data transfer to your data destination.
  4. To start the data pipeline, select Start (after stopping a data pipeline) from the Actions drop-down menu.

Functions

A function is code used to perform one or more tasks. Supported script languages include Python, Golang, and Node.js. A script can be as simple as text processing code, or it can be advanced code implementing artificial intelligence using popular machine learning frameworks like TensorFlow.

An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.

  • You can use ML models in your function code. Use the ML Model API to call the model and version.
  • When you create, clone, or edit a function, you can define one or more parameters.
  • When you create a data pipeline, you define the values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
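
To make the shape of a function concrete, the following minimal Python sketch transforms a temperature reading. The main(ctx, msg) entry point and the ctx.send() call are illustrative assumptions about the runtime contract, not a documented platform signature:

import json

def main(ctx, msg):
    # Hypothetical entry point: 'ctx' is assumed to be a platform-provided
    # context object and 'msg' the raw payload from the data source.
    reading = json.loads(msg)

    # Example transformation: derive Fahrenheit from Celsius.
    reading["temp_f"] = reading["temp_c"] * 9 / 5 + 32

    # Assumed call to forward the transformed payload to the next stage
    # (another function or the pipeline's output endpoint).
    ctx.send(json.dumps(reading).encode())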

Creating a Function

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions > Create .
  4. Name . Name the function. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your function.
  6. Language . Select the scripting language for the function: golang, python, node.
  7. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  8. Click Next .
  9. Add function code.
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  10. Click Create .

Editing a Function

Edit an existing function. To complete this task, log on to the cloud management console.

About this task

Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Edit .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Update .

Cloning a Function

Clone an existing function. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Clone .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Next to update a data pipeline with this function, if desired.
  13. Click Create , then click Confirm in response to the data pipeline warning.
    An updated function can cause data pipelines to break (that is, stop collecting data correctly).

Removing a Function

About this task

To complete this task, log on to the cloud management console. You cannot remove a function that is associated with a data pipeline or realtime data stream.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function, click Remove , then click Delete again to confirm.
    The Functions page lists any remaining functions.

AI Inferencing and ML Model Management

You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model TensorFlow runtime environment on the Service Domain.

The Karbon Platform Services Release Notes list currently supported ML model types.

An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.

Before You Begin
  • To allow access by the AI Inferencing API, ensure that the infrastructure admin selects Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • Ensure that the infrastructure admin has enabled the AI Inferencing service for your project.
ML Models Guidelines and Limitations
  • The maximum ML model zip file size you can upload is 1 GB.
  • Each ML model zip file must contain a binary file and metadata file.
  • You can use ML models in your function code. Use the ML Model API to call the model and version.
ML Model Validation
  • The Karbon Platform Services platform validates any ML model that you upload. It checks the following:
    • The uploaded model is a ZIP file, with the correct folder and file structure in the ZIP file.
    • The ML model binaries in the ZIP file, such as TensorFlow binaries, are valid and in the required format.

Adding an ML Model

About this task

To complete this task, log on to the cloud management console.

Before you begin

  • An infrastructure admin can allow access to the AI Inferencing API by selecting Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing > Add Model .
  3. Name . Name the ML model. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description.
  5. Select a specific project to make your ML model available to that project.
  6. Select a Framework Type from the drop-down menu. For example, Tensorflow 2.1.0 .
  7. Click Add Version in the List of Model Versions panel.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  8. [Optional] Click Add new to add another model version to the model.
    A single ML model can consist of multiple model versions.
  9. Click Done .
    The ML Model page displays the added ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Editing an ML Model

About this task

  • To complete this task, log on to the cloud management console.
  • You cannot change the name or framework type associated with an ML model.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing , select a model in the list, and click Edit .
  3. Description . Update an existing description.
  4. Edit or Remove an existing model version from the List of Model Versions.
  5. [Optional] Click Add new to add another model version.
    A single ML model can consist of multiple model versions.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  6. Click Done .
    The ML Model page displays the updated ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Removing an ML Model

How to delete an ML model.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing and select a model in the list.
  3. Click Remove , then click Delete again to confirm.
    The ML Models page lists any remaining ML models.

What to do next

Check ML model status in the ML Model page.

Viewing ML Model Status

The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing to show model information like Version, File Size, and Framework Type.
  3. To see where a model is deployed, click the model name.
    This page shows a list of associated Service Domains.

Runtime Environments

A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Karbon Platform Services includes standard runtime environments including but not limited to the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.

  • Golang
  • NodeJS
  • Python 2
  • Python 3
  • Tensorflow Python
You can add your own custom runtime environment for use by all or specific projects, functions, and container registries.
Note: Custom Golang runtime environments are not supported. Use the provided standard Golang runtime environment in this case.

Creating a Runtime Environment

How to create a user-added runtime environment for use with your project.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Click Create .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Click Add Profile to create a new container registry profile or use an existing profile. Do one of the following sets of steps.
    1. Select Add New .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Container Registry Host Address . Provide a public or private registry host address. The host address can be in the format host.domain:port_number/repository_name
      For example, https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
    5. Username . Enter the user name credential for the profile.
    6. Password . Enter the password credential for the profile. Click Show to display the password as you type.
    7. Email Address . Add an email address in this field.
    1. Select Use Existing cloud profile and select an existing profile from Cloud Profile .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Server . Provide a server URL to the container registry in the format used by your cloud provider.
      For example, an Amazon AWS Elastic Container Registry (ECR) URL might be: https://aws_account_id.dkr.ecr.region.amazonaws.com
  8. Click Done .
  9. Container Image Path . Provide a container image URL. For example, https://aws_account_id.dkr.ecr.region.amazonaws.com/image:imagetag
  10. Languages . Select the scripting language for the runtime: golang, python, node.
  11. [Optional] Click Add Dockerfile to choose and upload a Dockerfile for the container image, then click Done .
    A Dockerfile typically includes instructions used to build images automatically or run commands.
  12. Click Create .

Editing a Custom Runtime Environment

How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment in the list, then click Edit .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Select an existing container registry profile.
  8. Container Image Path . Provide a container image URL. For example, https://aws_account_id.dkr.ecr.region.amazonaws.com/image:imagetag
  9. Languages . Select the scripting language for the runtime: golang, python, node.
  10. [Optional] Remove or Edit any existing Dockerfile.
    After editing a file, click Done .
  11. Click Add Dockerfile to choose a Dockerfile for the container image.
  12. Click Update .

Removing a Runtime Environment

How to remove a user-added runtime environment from the cloud management console. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment, click Remove , then click Delete again to confirm.
    The Runtime Environments page lists any remaining runtime environments.

Audit Trail and Log Management

Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.

From Logging or System Logs > Logging or the summary page for a specific project, you can:

  • Run Log Collector on every or selected Service Domains (admins only)
  • See created log bundles, which you can then download or delete
  • Create, edit, and delete log forwarding policies to help make collection more granular and then forward those logs to the cloud
  • View alerts
  • View an audit trail of any operations performed by users in the last 30 days

You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public Github channel https://github.com/nutanix/karbon-platform-services/tree/master/cli. Readme documentation available there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Audit Trail Dashboard

Access the Audit Trail dashboard to view the most recent operations performed by users.

Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail .

Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.

Running Log Collector - Service Domains

Log Collector examines the selected Service Domains and collects logs and configuration information useful for troubleshooting issues and finding out details about any Service Domain.

Procedure

  1. Go to System Logs > Logging from the Dashboard in the navigation sidebar menu.
  2. Click Run Log Collector .
    1. To collect log bundles for every Service Domain, choose Select all Service Domains , then click Select Logs .
    2. To collect log bundles for specific Service Domains, choose them, then click Collect Logs .
      The table displays the collection status.
    • Collecting . Depending on how many Service Domains you have chosen, it can take a few minutes to collect the logs.
    • Success . All logs are collected. You can now Download them or Delete them.
    • Failed . Logs could not be collected. Click Details to show error messages relating to this collection attempt.
  3. After downloading the log bundle, you can delete it from the console.

What to do next

  • The downloaded log bundle is a gzipped TAR file. Extract it to see the collected logs, as shown in the example after this list.
  • You can also forward logs to the cloud based on your cloud profile. See Creating and Updating a Log Forwarding Policy.
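For example, from a shell (logbundle.tgz is a placeholder; your downloaded bundle name will differ):

# List the contents of the downloaded bundle.
user@host$ tar -tzf logbundle.tgz
# Extract the bundle into a logs directory.
user@host$ mkdir -p logs && tar -xzf logbundle.tgz -C logs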

Running Log Collector - Kubernetes Apps

Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard and navigation sidebar is displayed.
  3. Click Kubernetes Apps , then click an app in the list.
  4. Select the Log Bundles tab, then click Run Log Collector .
    1. To collect log bundles for every Service Domain, choose Select all Service Domains , then click Select Logs .
    2. To collect log bundles for specific Service Domains, choose them, then click Collect Logs .
      The table displays the collection status.
    • Collecting . Depending on how many Service Domains you have chosen, it can take a few minutes to collect the logs.
    • Success . All logs are collected. You can now Download them or Delete them.
    • Failed . Logs could not be collected. Click Details to show error messages relating to this collection attempt.
  5. After downloading the log bundle, you can delete it from the console.

What to do next

  • The downloaded log bundle is a gzipped TAR file. Extract it to see the collected logs.
  • You can also forward logs to the cloud based on your cloud profile. See Creating and Updating a Log Forwarding Policy.

Creating and Updating a Log Forwarding Policy

Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud.

Before you begin

Make sure your infrastructure admin has created a cloud profile first.

Procedure

  1. Go to System Logs > Logging from Dashboard or the summary page for a specific project.
  2. Click Log Forwarding Policy > Create New Policy .
  3. Name . Name your policy.
  4. Select a Service Domain.
    1. Select All . Logs for all Service Domains are forwarded to the cloud destination.
    2. Select Select By Category to choose and apply one or more categories associated with a Service Domain. Click the plus sign ( + ) to add more categories.
    3. Select Select Individually to choose a Service Domain, then click Add Service Domain to select one or more Service Domains.
  5. Select a destination.
    1. Select Profile . Choose an existing cloud profile.
    2. Select Service . Choose a cloud service. For example, choose Cloudwatch to stream logs to Amazon CloudWatch. Other services include Kinesis and Firehose .
    3. Select Region . Enter a valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com
    4. Select Stream . Enter a log stream name.
    5. Select Groups . (CloudWatch only) Enter a Log group name.
  6. Click Done .
    The dashboard or summary page shows the policy tile.
  7. From the Logging dashboard, you can edit or delete a policy.
    1. Click Edit and change any of the fields, then click Done .
    2. To delete a policy, click Delete , then click Delete again to confirm.

Creating a Log Collector for Log Forwarding - Command Line

Create a log collector for log forwarding by using the kps command line.

Nutanix has released the kps command line on its public Github channel. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Each sample YAML file defines a log collector. Log collectors can be:

  • Infrastructure-based: collects infrastructure-related (Service Domain) information. For use by infra admins.
  • Project-based: project-related information (applications, data pipelines, and so on). For use by project users and infra admins (with assigned projects).

See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.

Example Usage

Create a log collector defined in a YAML file:

user@host$ kps create -f infra-logcollector-cloudwatch.yaml

infra-logcollector-cloudwatch.yaml

This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).

To enable AWS CloudWatch log streaming, you must specify awsRegion, cloudwatchStream, and cloudwatchGroup.

kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
  • kind : logcollector . The resource type.
  • name : infra-log-name . A unique log collector name.
  • type : infrastructure . Log collector scope (infrastructure).
  • destination : cloudwatch . The cloud destination type.
  • cloudProfile : cloud-profile-name . An existing Karbon Platform Services cloud profile.
  • awsRegion : A valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com.
  • cloudwatchGroup : cloudwatch-group-name . The log group name.
  • cloudwatchStream : cloudwatch-stream-name . The log stream name.
  • filterSourceCode : The log conversion code.

project-logcollector.yaml

This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).

kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination-type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
  • kind : logcollector . The resource type.
  • name : project-log-name . A unique log collector name.
  • type : project . Log collector scope (a specific project).
  • project : project-name . The project name.
  • destination : cloud-destination-type . The cloud destination type, such as cloudwatch.
  • cloudProfile : cloud-profile-name . An existing Karbon Platform Services cloud profile.
  • awsRegion : A valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com.
  • cloudwatchGroup : cloudwatch-group-name . The log group name.
  • cloudwatchStream : cloudwatch-stream-name . The log stream name.
  • filterSourceCode : The log conversion code.

Stream Real-Time Application and Pipeline Logs

Real-Time Log Monitoring, built into Karbon Platform Services, lets you view application and data pipeline log messages securely in real time.

Note: Infrastructure administrators can stream logs if they have been added to a project.

Viewing the most recent log messages as they occur helps you see and troubleshoot application or data pipeline operations. Messages stream securely over an encrypted channel and are viewable only by authenticated clients (such as an existing user logged on to the Karbon Platform Services cloud platform).

The cloud management console shows the most recent log messages, up to 2 MB. To get the full logs, collect and then download the log bundles by Running Log Collector - Service Domains.

Displaying Real-Time Logs

View the most recent real-time logs for applications and data pipelines.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Go to the Deployments page for an application or data pipeline.
    1. From the home menu, click Projects , then click a project.
    2. Click Kubernetes Apps or Functions and Data Pipelines .
    3. Click an app name or data pipeline name in the table, then click Deployments .
      A table shows each Service Domain deploying the application or data pipeline.
  2. Select a Service Domain, then click View Real-time Logs .
    The window displays streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application, from the Service Domain.

What to do next

Real-Time Log Monitoring Console describes what you see in the terminal-style display.

Real-Time Log Monitoring Console

The Karbon Platform Services real-time log monitoring console is a terminal-style display that streams log entries as they occur.

After you do the steps in Displaying Real-Time Logs, a terminal-style window is displayed in one or more tabs. Each tab is streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application.

Your application or data pipeline function generates the log messages. That is, the console shows log messages that you have written into your application or function.

Figure. Real-Time Log Monitoring Console: one or more tabs with a terminal-style display show messages generated by your application or function.

No Logs Message

If your Karbon Platform Services Service Domain is connected and your application or function is not logging anything, the console might show a No Logs message. In this case, the message means that the application or function is idle and not generating any log messages.

Errors

You might see one or more error messages in the following cases. As a result, Real-Time Log Monitoring cannot retrieve any logs.

  • Your application, function, or other resource fails
  • Network connectivity fails or is highly intermittent between the Karbon Platform Services Service Domain or your browser and karbon.nutanix.com

Tips for Real-Time Log Monitoring for Applications

  • The first tab shows the first container associated with the application running on the Karbon Platform Services Service Domain. The console shows one tab for each container associated with the application, including a tab for each container replicated in the same application.
  • Each tab displays the application and container name (as app_name : container_id ) and might be truncated. To see the full name on the tab, hover over it.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs.

Tips for Real-Time Log Monitoring for Functions in Data Pipelines

  • The first tab shows the first function deployed in the data pipeline running on the Service Domain. The console shows one tab for each function deployed in the data pipeline.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs. For data pipelines, the displayed tab order is the order of the functions (that is, scripts of transformations) in the data pipeline.
  • Duplicate function names. If you use the same function more than once in the data pipeline, you will see duplicate tabs. That is, one tab for each function instance.
  • Each tab displays the data pipeline and function name (as data_pipeline : function ) and might be truncated. To see the full name on the tab, hover over it.

Managing API Keys

Managing API keys simplifies authentication when you use the Karbon Platform Services API: you manage your keys from the Karbon Platform Services management console. This topic also describes API key guidelines.

As a user (infrastructure or project), you can manage up to two API keys through the Karbon Platform Services management console. After logging on to the management console, click your user name in the management console, then click Manage API Keys to create, disable, or delete these keys.

Number of API Keys Per User and Expiration
  • Each user can create and use two API keys.
  • Keys do not have an expiration date.
Deleting and Reusing API Keys
  • You can delete an API key at any time.
  • You cannot use or reuse a key after deleting it.
Creating and Securing the API Keys
  • When you create a key, the Manage API Keys dialog box displays the key.
  • When you create a key, make a copy of the key and secure it. You cannot see the key later. It is not recoverable at any time.
  • Do not share the key with anyone, including other users.

Read more about the Karbon Platform Services API at nutanix.dev. See For Karbon Platform Services Developers for related information and links to resources for Karbon Platform Services developers.

Using API Keys With HTTPS API Requests

Example API request using an API key.

After you create an API key, use it with your Karbon Platform Services API HTTPS Authorization requests. In the request, specify an Authorization header including Bearer and the key you generated and copied from the Karbon Platform Services management console.

For example, the following Node.js snippet sends a GET request for the applications list, passing the API key in the Authorization header (replace API_key with the key you copied from the console):

var https = require("https");

var options = {
  "method": "GET",
  "hostname": "karbon.nutanix.com",
  "port": null,
  "path": "/v1.0/applications",
  "headers": {
    // Replace API_key with your generated API key.
    "authorization": "Bearer API_key"
  }
};

// Send the request and print the JSON response body.
var req = https.request(options, function (res) {
  var chunks = [];
  res.on("data", function (chunk) { chunks.push(chunk); });
  res.on("end", function () { console.log(Buffer.concat(chunks).toString()); });
});
req.end();
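To try the same request from a shell, you can use curl (API_key is a placeholder for the key you copied from the console):

user@host$ curl -H "Authorization: Bearer API_key" https://karbon.nutanix.com/v1.0/applications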

Creating API Keys

Create one or more API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys: click your profile name, then click Manage API Keys.

  2. Click Create API Key .
    Manage API Keys shows that the key is created. Keys do not have an expiration date.
  3. Click Copy to Clipboard .
    • Make a copy of the API key and secure it. It is not recoverable at any time.
    • Click View to see the key value. You cannot see the key later.
    • Copy to Clipboard is a mandatory step. The Close button is inactive until you click Copy to Clipboard .
  4. To create another key, click Create API Key and Copy to Clipboard .
    Make a copy of the key and secure it. It is not recoverable at any time. Click View to see the key value. You cannot see the key later.
    Note that Status for each key is Active .
    Figure. Key Creation: the API keys show as Active.

  5. Click Close .

Disabling, Enabling, or Deleting API Keys

Disable, enable, or delete API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys: click your profile name, then click Manage API Keys.

    Manage API Keys shows information about each API key: Create Time, Last Accessed, Status (Active or Disabled).
    Depending on the current state of the API key, you can disable, enable, or delete it.
    Figure. API Keys: the dialog shows API key status as Active or Disabled.

  2. Disable an enabled API key.
    1. Click Disable .
      Note that Status for the API key is now Disabled .
    2. Click Close .
  3. Enable a disabled API key.
    1. Click Enable .
      Note that Status for the API key is now Active .
    2. Click Close .
  4. Delete an API Key.
    1. Click Delete , then click Delete again to confirm.
    2. Click Close .
      You cannot use or reuse a key after deleting it. Any client authenticating with this now-deleted API key will need a new key to authenticate again.

Secure Shell (SSH) Access to Service Domains

Karbon Platform Services provides limited secure shell (SSH) access to your cloud-connected service domain to manage Kubernetes pods.

Note: To enable this feature, contact Nutanix Support so they can set your profile accordingly. Setting Privileged Mode shows how you can check your Service Domain effectiveProfile setting.
Caution: Nutanix recommends limiting your use of this feature in production deployments. Do not enable SSH access in production deployments unless you have exhausted all other troubleshooting or debugging alternatives.

The Karbon Platform Services cloud management console provides limited secure shell (SSH) access to your cloud-connected Service Domain to manage Kubernetes pods. SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting.

As Karbon Platform Services is secure by design, dynamically generated public/private key pairs with a default expiration of 30 minutes secure your SSH connection. When you start an SSH session from the cloud management console, you automatically log on as user kubeuser .

Infrastructure administrators have SSH access to Service Domains. Project users do not have access.

kubeuser Restrictions

  • Limited to running kubectl CLI commands
  • No superuser or sudo access: No /root file access, unable to issue privileged commands

Opening a Secure Session (SSH) Console to a Service Domain

Access a Service Domain through SSH to manage Kubernetes pods with kubectl CLI commands. This feature is disabled by default. To enable this feature, contact Nutanix Support.

Before you begin

To complete this task, log on to the cloud management console as an infrastructure administrator user.
Caution: Nutanix recommends limiting your use of this feature in production deployments. Do not enable SSH access in production deployments unless you have exhausted all other troubleshooting or debugging alternatives.

Procedure

  1. Click Infrastructure > Service Domains .
  2. In the table, click a Service Domain Name , then click Console to connect to the Service Domain.
    A terminal window connects as the kubeuser user for 30 minutes.
  3. To activate the cursor, click the terminal window.
  4. After 30 minutes, you can choose to stay connected or disconnect.
    • After 30 minutes, you can remain connected by clicking Reconnect to reestablish the SSH connection.
    • To disconnect an active connection, click Back to Service Domains or another link on this page ( Summary , Nodes , or any others except SSH ).

What to do next

For example commands, see Using kubectl to Manage Service Domain Kubernetes Pods.

Using kubectl to Manage Service Domain Kubernetes Pods

Use kubectl commands to manage Kubernetes pods on the Service Domain.

Example kubectl Commands

  • List all running pods.
    kubeuser@host$ kubectl get pods
  • List all pod services.
    kubeuser@host$ kubectl get services
  • Get logs for a specific pod named pod_name .
    kubeuser@host$ kubectl logs pod_name
  • Run a command for a specific pod named pod_name .
    kubeuser@host$ kubectl exec pod_name -- command_name
  • Attach a shell to a container container_name in a pod named pod_name .
    kubeuser@host$ kubectl exec -it pod_name --container container_name -- /bin/sh

Alerts

The Alerts page and the Alerts Dashboard panel show any alerts triggered by Karbon Platform Services depending on your role.

  • Infrastructure Administrator . All alerts associated with Projects, Apps & Data, Infrastructure
  • Project User Alerts . All alerts associated with Projects and Apps & Data

To see alert details:

  • On the Alerts page, click the alert Description to see details about that alert.
  • From the Alerts Dashboard panel, click View Details , then click the alert Description to see details.

Click Filters to sort the alerts by:

  • Severity . Critical or Warning.
  • Time Range . Select All, Last 1 Hour, Last 1 Day, or Last 1 Week.

An Alert link is available on each Apps & Data and Infrastructure page.

Figure. Sorting and Filtering Alerts: the Filters flyout menu sorts alerts by severity.

For Karbon Platform Services Developers

Information and links to resources for Karbon Platform Services developers.

This section contains information about Karbon Platform Services development.

API Reference
Go to https://www.nutanix.dev/api-reference/ for API references, code samples, blogs , and more for Karbon Platform Services and other Nutanix products.
Karbon Platform Services Public Github Repository

The Karbon Platform Services public Github repository https://github.com/nutanix/karbon-platform-services includes sample application YAML files, instructions describing external client access to services, Karbon Platform Services kps CLI samples, and so on.

Karbon Platform Services Command Line

Nutanix has released the kps command line on its public Github repository. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Ingress Controller Support - Command Line

Karbon Platform Services supports the Traefik open source router as the default Kubernetes Ingress controller and NGINX (ingress-nginx) as a Kubernetes Ingress controller. For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/ingress.

Kafka Data Service Support - Command Line

Kafka is available as a data service through your Service Domain. Clients can manage, publish, and subscribe to topics by using the native Kafka protocol. Data pipelines can use Kafka as a destination. Applications can also use a Kafka client of their choice to access the Kafka data service.

For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/kafka. See also Kafka as a Service.
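As an illustration only, a client could use a standard Kafka tool such as kcat (formerly kafkacat) to publish and consume messages over the native Kafka protocol; the broker address and topic below are hypothetical placeholders, not values defined by Karbon Platform Services:

# Publish one message to a topic (broker address and topic are placeholders).
user@host$ echo '{"sensor":"temp-01","value":21.5}' | kcat -b kafka-broker:9092 -t sensor-readings -P
# Consume the topic from the beginning.
user@host$ kcat -b kafka-broker:9092 -t sensor-readings -C -o beginning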

Free Karbon Platform Services Trial
Sign up for a free Karbon Platform Services Trial at https://www.nutanix.com/products/iot/try.

Set Privileged Mode For Applications

Enable a container application to run with elevated privileges.

Important: Privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
Note: To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at user level.

For information about installing the kps command line, see For Karbon Platform Services Developers.

Karbon Platform Services enables you to develop an application which requires elevated privileges to run successfully. By using the kps command line, you can set your Service Domain to enable an application running in a container to run in privileged mode.

Setting Privileged Mode

Configure your Service Domain to enable a container application to run with elevated privileges.

Before you begin

Important: Privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
  • Ensure that you have created an infrastructure administrator or project user with associated resources (Service Domain, project, applications, and so on).
  • To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at user level.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing Karbon Platform Services user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) where the container application needs to run with elevated privileges.
    user@host$ kps get svcdomain -o yaml
  5. Set privilege mode for a specific Service Domain svc_domain_name .
    user@host$ kps update svcdomain svc_domain_name --set-privileged
    Successfully updated Service Domain: svc_domain_name
  6. Verify privileged mode is set to true .
    user@host$ kps get svcdomain svc_domain_name -o yaml
    kind: edge
    name: svc_domain_name
    
    connected: true
    .
    .
    .
    profile: 
       privileged: true 
       enableSSH: true 
    effectiveProfile: 
       privileged: true
       enableSSH: true
    effectiveProfile privileged set to true indicates that Nutanix Support has enabled this feature. If the setting is false , contact Nutanix Support to enable this feature. In this example, Nutanix has also enabled SSH access to this Service Domain (see Secure Shell (SSH) Access to Service Domains in Karbon Platform Services Administration Guide ).

What to do next

See Using Privileged Mode with an Application (Example).

Using Privileged Mode with an Application (Example)

Important: Privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

After setting privileged mode as described in Setting Privileged Mode, elevate the privilege of the application itself. This sample enables USB device access for an application running in a container on an elevated Service Domain.

YAML Snippet, Sample Privileged Mode USB Device Access Application

Add an annotation similar to the following in the Deployment section of your application YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"

Full YAML File, Sample Privileged Mode USB Device Access Application


apiVersion: v1
kind: ConfigMap
metadata:
  name: usb-scripts
data:
  entrypoint.sh: |-
    apk add python3
    apk add libusb
    pip3 install pyusb
    echo Read from USB keyboard
    python3 read-usb-keyboard.py
  read-usb-keyboard.py: |-
    import usb.core
    import usb.util
    import time

    USB_IF      = 0     # Interface
    USB_TIMEOUT = 10000 # Timeout in MS

    USB_VENDOR  = 0x627
    USB_PRODUCT = 0x1

    # Find keyboard
    dev = usb.core.find(idVendor=USB_VENDOR, idProduct=USB_PRODUCT)
    endpoint = dev[0][(0,0)][0]
    try:
      dev.detach_kernel_driver(USB_IF)
    except Exception as err:
      print(err)
    usb.util.claim_interface(dev, USB_IF)
    while True:
      try:
        control = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, USB_TIMEOUT)
        print(control)
      except Exception as err:
        print(err)
      time.sleep(0.01)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usb
  template:
    metadata:
      labels:
        app: usb
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: alpine
        image: alpine
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        command:
        - sh
        - -c 
        - cd /scripts && ./entrypoint.sh
      volumes:
      - name: scripts
        configMap:
          name: usb-scripts
          defaultMode: 0766

Configure Service Domain Environment Variables

Define environment variables for an individual Service Domain. After defining them, any Kubernetes app that specifies that Service Domain can access them as part of a container spec in the app YAML.

As an infrastructure administrator, you can set environment variables and associated values for each Service Domain, which are available for use in Kubernetes apps. For example:

  • You can provide environment variables and values by using a Helm chart you upload when you create an app. See Creating an Application and Helm Chart Support and Requirements.
  • You can also set specific environment variables per Service Domain by updating a Service Domain in the cloud management console or by using the kps update svcdomain command line.
  • See also Privileged Kubernetes Apps.

As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not include the variable name in your app YAML file but you pass it as a variable to run in your app, Karbon Platform Services can inject this variable value.

Setting and Clearing Service Domain Environment Variables - Command Line

How to set environment variables for a Service Domain.

Before you begin

You must be an infrastructure admin to set variables and values.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing infra admin user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) to find the service domain where you set the environment variables.
    user@host$ kps get svcdomain -o yaml
  5. For a Service Domain named my-svc-domain , for example, set the Service Domain environment variable. In this example, set a secret variable named SD_PASSWORD with a value of passwd1234 .
    user@host$ kps update svcdomain my-svc-domain --set-env '{"SD_PASSWORD":"passwd1234"}'
  6. Verify the changes.
    user@host$ kps get svcdomain my-svc-domain -o yaml
    kind: edge
    name: my-svc-domain
    connected: true
    .
    .
    .
    env: '{"SD_PASSWORD": "passwd1234"}'
    The Service Domain is updated. Any infra admin or project user can now deploy an app to this Service Domain and refer to the secret by using the variable $(SD_PASSWORD) .
  7. You can continue to add environment variables by using the kps update svcdomain my-svc-domain --set-env '{"variable_name": "variable_value"}' command.
  8. You can also clear one or all variables for Service Domain svc_domain_name .
    1. To clear (unset) all environment variables:
      user@host$ kps update svcdomain svc_domain_name --unset-env
    2. To clear (unset) a specific environment variable:
      user@host$ kps update svcdomain svc_domain_name --unset-env '{"variable_name":"variable_value"}'
  9. To update an app, restart it. See also Deploying and Undeploying a Kubernetes Application.

What to do next

Using Service Domain Environment Variables - Example

Using Service Domain Environment Variables - Example

Example: how to use existing environment variables for a Service Domain in application YAML.

About this task

  • You must be an infrastructure admin to set variables and values. See Setting and Clearing Service Domain Environment Variables - Command Line.
  • In this example, an infra admin has created environment variables for a Kafka endpoint that requires a secret (authentication) to authorize with the Kafka broker.
  • As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not include the variable name in your app YAML file but you pass it as a variable to run in your app, Karbon Platform Services can inject this variable value.

Procedure

  1. In your app YAML, add a container snippet similar to the following.
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
      labels:
        app: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: some.container.registry.com/myapp:1.7.9
            ports:
            - containerPort: 80
            env:
            - name: KAFKA_ENDPOINT
              value: some.kafka.endpoint
            - name: KAFKA_KEY
              value: placeholder
            command:
            - sh
            - -c
            - "exec node index.js $(KAFKA_KEY)"
    
  2. Use this app YAML snippet when you create a Kubernetes app in Karbon Platform Services as described in Kubernetes Apps.

Karbon Platform Services Terminology

Category

Logically grouped Service Domain, data sources, and other items. Applying a category to an entity applies any values and attributes associated with the category to the entity.

Cloud Connector

Built-in Karbon Platform Services platform feature to publish data to a public cloud like Amazon Web Services or Google Cloud Platform. Requires a customer-owned secured public cloud account and configuration in the Karbon Platform Services management console.

Cloud Profile

Cloud provider service account (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is transmitted for further processing.

Container Registry Profile

Credentials and location of the Docker container registry hosted on a cloud provider service account. Can also be an existing cloud profile.

Data Pipeline

Path for data that includes input, processing, and output blocks. Enables you to process and transform captured data for further consumption or processing.

  • Input. An existing data stream or data source, identified according to a Category.
  • Transformation. A code block, such as a script defined in a Function, that processes or transforms input data.
  • Output. A destination for data. Publish data to the cloud, or to a cloud service (such as AWS Simple Queue Service) at the edge.
Data Service

Data service such as Kafka as a Service or Real-Time Stream Processing as a Service.

Data Source

A collection of sensors, gateways, or other input devices to associate with a node or Service Domain (previously known as an edge). Enables you to manage and monitor sensor integration and connectivity.

A minimal setup consists of a node (also known as an edge device) and a data source. Data can be collected at any location (hospital, parking lot, retail store, oil rig, factory floor, and so on) where sensors or other input devices are installed. Typical sensors measure conditions (temperature, pressure, audio, and so on) or stream data (for example, an IP-connected video camera).

Functions

Code used to perform one or more tasks. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.

Infrastructure Administrator

User who creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, data sources and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them.

Project

A collection of infrastructure (Service Domain, data source, project users) plus code and data (Kubernetes apps, data pipelines, functions, run-time environments), created by the infrastructure administrator for use by project users.

Project User

User who views and uses projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and applications. This user has project-specific CRUD permissions: the project user can create, read, update, and delete assigned applications, scripts, data pipelines, and other project users.

Real-time Data Stream
  1. Data Pipeline output endpoint type with a Service Domain as an existing destination.
  2. An existing data pipeline real-time data stream that is used as the input to another data pipeline.
Run-time Environment

A run-time environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Service Domain

Intelligent Platform as a Service (PaaS) providing the Karbon Platform Services Service Domain infrastructure (consisting of a full software stack combined with a hardware device). It enables customers to deploy intelligent applications (powered by AI/artificial intelligence) to process and transform data ingested by sensors. This data can be published selectively to public clouds.

Karbon Platform Services Management Console

Browser-based console where you can manage the Karbon Platform Services platform and related infrastructure, depending on your role (infrastructure administrator or project user).

Karbon Platform Services

Software as a Service (SaaS)/Platform as a Service (PaaS) based management platform and cloud IoT services. Includes the Karbon Platform Services management console.

Karbon Platform Services Project User Guide

Karbon Platform Services Hosted

Last updated: 2022-11-24

Karbon Platform Services Overview

Karbon Platform Services is a Kubernetes-based multicloud platform as a service that enables rapid development and deployment of microservice-based applications. These applications can range from simple stateful containerized applications to complex web-scale applications across any cloud.

In its simplest form, Karbon Platform Services consists of a Service Domain encapsulating a project, application, and services infrastructure, and other supporting resources. It also incorporates project and administrator user access and cloud and data pipelines to help converge edge and cloud data.

With Karbon Platform Services, you can:

  • Quickly build and deploy intelligent applications across a public or private cloud infrastructure.
  • Connect various mobile, highly distributed data sensors (like video cameras, temperature or pressure sensors, streaming devices, and so on) to help collect data.
  • Create intelligent applications using data connectors and machine learning modules (for example, implementing object recognition) to transform the data. An application can be as simple as text processing code or it could be advanced code implementing AI by using popular machine learning frameworks like TensorFlow.
  • Push this data to your Service Domain or the public cloud to be stored or otherwise made available.

This data can be stored at the Service Domain or published to the cloud. You can then create intelligent applications using data connectors and machine learning modules to consume the collected data. These applications can run on the Service Domain or the cloud where you have pushed collected data.

Nutanix provides the Service Domain initially as a VM appliance hosted in an AOS AHV or ESXi cluster. You manage infrastructure, resources, and Karbon Platform Services capabilities in a console accessible through a web browser.

Cloud Management Console and User Experience

As part of initial and ongoing configuration, you can define two user types: an infrastructure administrator and a project user . The cloud management console and user experience help create a more intuitive experience for infrastructure administrators and project users.

  • Admin Console drop-down menu for infra admins and Home Console drop-down menu for project users. Both menus provide a list of all current projects and 1-click access to an individual project. In the navigation sidebar, Projects also provides a navigation path to the overall list of projects.
  • Vertical tabs for most pages and dashboards. For example, clicking Administration > Logging shows the Logging dashboard with related task and Alert tabs. The left navigation menu remains persistent to help eliminate excessive browser refreshes and help overall navigation.
  • Simplified Service Domain Management . In addition to the standard Service Domain list showing all Service Domains, a consolidated dashboard for each Service Domain is available from Infrastructure > Service Domain . Karbon Platform Services also simplifies upgrade operations available from Administration > Upgrades . For convenience, infra admins can choose when to download available versions to all or selected Service Domains, when to upgrade them, and check status for all or individual Service Domains.
  • Updated Workflows for Infrastructure Admins and Project Users . Apart from administration operations such as Service Domain, user, and resource management, the infra admin and project user workflow is project-centric. A project encapsulates everything required to successfully implement and monitor the platform.
  • Updated GPU/vGPU Configuration Support . If your Service Domain includes access to a GPU/vGPU, you can choose its use case. When you create or update a Service Domain, you can specify exclusive access by any Kubernetes app or data pipeline or by the AI Inferencing API (for example, if you are using ML Models).

Project User Role

A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.

Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.

The project user has project-specific create/read/update/delete (CRUD) permissions: the project user can create, read, update, and delete the following and associate it with an existing project:

  • Kubernetes Apps
  • Functions (scripts)
  • Data pipelines
  • Runtime environments

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.

When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

Figure. Project User

Karbon Platform Services Cloud Management Console

The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).

You can log on with your My Nutanix or local user credentials.

Logging On to The Cloud Management Console

Before you begin

  • The supported web browser is the current and two previous versions of Google Chrome.
  • If you are logging on for the first time, you might experience a guided onboarding workflow.
  • After three failed login attempts in one minute, you are locked out of your account for 30 minutes.
  • Users without My Nutanix credentials log on as a local user.

Procedure

  1. Open https://karbon.nutanix.com/ in a web browser.
  2. Choose one of the following to log on:
    • Click Login with My Nutanix and log on with your My Nutanix credentials.
    • Click Log In with Local User Account and log on with your project user or infrastructure administrator user name and password in the Username / Password fields.
    The web browser displays a role-specific dashboard.

Project User View

The default view for a project user is the Dashboard .

  • Click the menu button in the view to expand and display all available pages in this view.
Figure. Default Project User View: the page shows the project user dashboard, including an onboarding widget.

Dashboard
Customized widget panel view of your infrastructure, projects, Service Domains, and alerts. Includes the Onboarding Dashboard panel, with links to create projects and project components such as Service Domains, cloud profiles, categories, and so on. The user view displays Project for project users.
Projects
  • Displays a list of all projects created by the infrastructure administrator.
  • Edit an existing project.
Alerts
Displays the Alerts page, which displays a sortable table of alerts associated with your Karbon Platform Services deployment.

Kubernetes Apps, Logging, and Services

Kubernetes Apps
  • Displays a list of available Kubernetes applications.
  • Create an application and associate it with a project.
Functions and Data Pipelines
  • Displays a list of available data pipelines.
  • Create a data pipeline and associate it with a project.
  • Create a visualization, a graphical representation of a data pipeline including data sources and Service Domain or cloud data destinations.
  • Displays a list of scripts (code used to perform one or more tasks) available for use in a project.
  • Create a function by uploading a script, and then associate it with a project.
  • Displays a list of available runtime environments.
  • Create a runtime environment and associate it with a project and a container registry.
AI Inferencing
  • Create and display machine learning models (ML Models) available for use in a project.
Services and Related Alerts
  • Kafka. Configured and deployed data streaming and messaging topics.
  • Istio. Defined secure traffic management service.
  • Prometheus. Defined app monitoring and metrics collection.
  • Nginx-Ingress. Configured and deployed ingress controllers.

Viewing Dashboards

After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.

Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page that includes information about that element. It might also include information about elements associated with that element.

The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.

Quick Start Menu

The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.

Figure. Quick Start Menu. The quick start menu is at the top right of the console.

Getting Started - Project User

The Quick Start Menu lists the common onboarding tasks for the project user. It includes links to project resource pages. You can also go directly to any project resource from the Apps & Data menu item.

As the project user, you can update a project by creating the following items.

  • Applications
  • Data pipelines
  • Runtime environments
  • Functions

If any Getting Started item shows Pending , the infrastructure administrator has not added you to that entity (like a project or application) or you need to create an entity (like an application).

Start At The Project Page

To get started after logging on to the cloud management console, see Projects.

Projects

A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

A project can consist of:

  • Existing administrator or project users
  • A Service Domain
  • Cloud profile
  • Container registry profile

When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.

Projects Page

The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.

For project users, the Projects page lists the projects created and assigned by the infrastructure administrator. Project users can view these projects and update them with applications, data pipelines, and so on, but cannot remove a project.

When you click a project name, the project Summary dashboard is displayed and shows resources in the project.

You can click any of the project resource menu links to edit or update existing resources, or create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. Similarly, click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).

Figure. Project Summary Dashboard. Project summary dashboard with component links.

Managing Project Services

The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.

The platform includes these ready-to-use services, which provide an advantage over self-managed services:

  • Secure by design.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain helps monitor service health and raises service-specific alerts.
App Runtime Services
These services are enabled by default on each Service Domain.
  • Kubernetes Apps. Containers as a service. You can create and run Kubernetes apps without having to manage the underlying infrastructure.
  • Functions and Data Pipelines. Run serverless functions based on data triggers, then publish data to specific cloud or Service Domain endpoints.
  • AI Inferencing. Enable this service to use your machine learning (ML) models in your project. The ML Model feature provides a common interface for functions (that is, scripts) or applications.
Ingress Controller
Traefik or Nginx-Ingress. If your project requires Ingress controller routing, you can choose the open source Traefik router or the NGINX Ingress controller to enable on your Service Domain. See Enable an Ingress Controller.
Service Mesh
Istio. Provides secure connection, traffic management, and telemetry. See Istio Service Mesh.
Data Streaming | Messaging
  • Kafka. Available for use within project applications and data pipelines, running on a Service Domain hosted in your environment. See Kafka as a Service.
  • NATS. Available for use within project applications and data pipelines. In-memory high performance data streaming including pub/sub (publish/subscribe) and queue-based messaging.
Logging | Monitoring | Alerting
  • Prometheus. Provides Kubernetes app monitoring and logging, alerting, metrics collection, and a time series data model (suitable for graphing). See Prometheus Application Monitoring and Metrics as a Service.
  • Logging. Provides log monitoring and log bundling. See Audit Trail and Log Management.


Kafka as a Service

Kafka is available as a data service through your Service Domain.

The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:

  • Secure by design.
  • No need to explicitly declare Kafka topics. With Karbon Platform Services Kafka as a service, topics are automatically created.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain monitors service health and raises service-specific alerts.


Using Kafka in an Application

Information about application requirements and a sample YAML application file.

Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.

Sample Kafka Application YAML Template File


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: some.container.registry.com/myapp:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: KAFKA_ENDPOINT
          value: {{.Services.Kafka.Endpoint}}
Field Name | Value | Description
kind Deployment Specify the resource type. Here, use Deployment .
metadata name Provide a name for your deployment.
labels Provide at least one label. Here, specify the application name as app: my-app
spec Define the Kafka service specification.
replicas Here, 1 indicates a single Kafka cluster (a single Service Domain instance or VM) to keep data synchronized.
selector Use matchLabels and specify the app name as in labels above.
template
Specify the application name here ( my-app ), same as metadata specifications above.
spec Here, define the specifications for the application using Kafka.
containers
  • name: my-app . Specify the container application name.
  • image: some.container.registry.com/myapp:1.7.9 . Define the container registry host address where the container image is stored.
  • ports: containerPort: 80 . Define the container port. Here, 80.
env

name: KAFKA_ENDPOINT

value: {{.Services.Kafka.Endpoint}}

Leave these values as shown.
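
At deployment time, the platform substitutes the {{.Services.Kafka.Endpoint}} template variable with the Kafka endpoint for that Service Domain. The following is a minimal sketch of how the rendered env block might look; the broker address and port are hypothetical:

env:
- name: KAFKA_ENDPOINT
  # Value rendered by the platform at deploy time (hypothetical address):
  value: kafka-svc.my-project.svc.cluster.local:9092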

Using Kafka in a Function for a Data Pipeline

Information about data pipeline function requirements.

See Functions and Data Pipelines.

You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:

  • Input. An existing data source or real-time data stream (output from another data pipeline).
  • Transformation. A function (code block or script) to transform data from a data source (or no function at all to pass data directly to the endpoint).
  • Output. An endpoint destination for the transformed or raw data.

Kafka Endpoint Function Requirements

For a data pipeline with a Kafka topic endpoint:

  • Language . Select the golang scripting language for the function.
  • Runtime Environment . Select the golang runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.

Viewing Kafka Status

In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Kafka .
    This dashboard shows a consolidated, high-level view of all Kafka topics, deployments, and related alerts. The default view is Topics.
  4. To show all topics for this project, click Topics .
    1. Topics. All Kafka topics used in this project.
    2. Service Domain. Service Domains in the project employing Kafka messaging.
    3. Bytes/sec produced and Bytes/sec consumed. Bandwidth use.
  5. To view Service Domain status where Kafka is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Brokers. A No Brokers Available message might mean the Service Domain is not connected.
    3. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with Kafka for this project, click Alerts .

Enable an Ingress Controller

The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.

An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.

When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.

If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.

In your application YAML, specify two snippets:

  • Service snippet. Use annotations to define the HTTP or HTTPS host protocol, path, service domain, and secret for each Service Domain to use as an ingress controller. You can specify one or more Service Domains.
  • Secret snippet. Use this snippet to specify the certificates used to secure app traffic.


Sample Ingress Controller Service Domain Configuration

To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.

You can only enable and use one Ingress controller per Service Domain.

Create an application for your project as described in Creating an Application to specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets (Service Domain ingress controller annotations and certificate information) in this app deployment YAML file.

Ingress Controller Service Domain Host Annotations Specification

apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    sherlock.nutanix.com/http-ingress-path: /notls
    sherlock.nutanix.com/https-ingress-path: /tls
    sherlock.nutanix.com/https-ingress-host: DNS_name
    sherlock.nutanix.com/http-ingress-host: DNS_name
    sherlock.nutanix.com/https-ingress-secret: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
Table 1. Ingress Controller Annotations Specification
Field Name | Value | Description
kind Service Specify the Kubernetes service. Here, use Service to indicate that this snippet defines the ingress controller details.
apiVersion v1 Here, the Kubernetes API version.
metadata name Provide an app name to which this controller applies.
annotations These annotations define the ingress controller encryption type and paths for Karbon Platform Services.
sherlock.nutanix.com/http-ingress-path: /notls /notls specifies no Transport Layer Security encryption.
sherlock.nutanix.com/https-ingress-path: /tls

/tls specifies Transport Layer Security encryption.

sherlock.nutanix.com/http-ingress-host: DNS_name Ingress service host path, where the service is bound to port 80.

DNS_name . DNS name you can give to your application. For example, my.contoso.net . Ensure that the DNS name resolves to the Service Domain IP address.

sherlock.nutanix.com/https-ingress-host: DNS_name Ingress service host path, where the service is bound to port 443.

DNS_name . DNS name you can give to your application. For example, my.contoso.net . Ensure that the DNS name resolves to the Service Domain IP address.

sherlock.nutanix.com/https-ingress-secret: whoami The sherlock.nutanix.com/https-ingress-secret: whoami snippet links the authentication Secret information (defined in the Secret snippet) to this controller.
spec Define the transfer protocol, port type, and port for the application.
  • protocol: TCP . Transfer protocol of TCP.
  • ports: Define the port and type.

    name: web Port type.

    port: 80 TCP port.

selector app: whoami A selector to specify the application.

Securing The Application Traffic

Use a Secret snippet to specify the certificates used to secure app traffic.

apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  ca.crt: cert_auth_cert
  tls.crt: tls_cert
  tls.key: tls_key
Table 2. TLS Certificate Specification
Field Name | Value | Description
apiVersion v1 Here, the Kubernetes API version.
kind Secret Specify the resource type. Here, use Secret to indicate that this snippet defines the authentication details.
metadata name Provide an app name to which this certification applies.
type Define the authentication type used to secure the app. Here, kubernetes.io/tls
data ca.crt / tls.crt / tls.key Add the keys for each certificate type: certificate authority certificate (ca.crt), TLS certificate (tls.crt), and TLS key (tls.key).
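
Note that the values under data in a Kubernetes Secret must be base64-encoded. The following is a minimal sketch of a populated Secret, assuming PEM files named ca.crt, tls.crt, and tls.key; the encoded strings are truncated placeholders (generated, for example, with base64 -w0 tls.crt):

apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  # Each value is the base64 encoding of the corresponding PEM file.
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0t...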

Viewing Ingress Controller Details

In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Depending on the deployed Ingress controller, click Nginx-Ingress or Traefik .
    This dashboard shows a consolidated, high-level view of all rules, deployments, and related alerts. The default view is Rules.
  4. To show all rules for this project, click Rules .
    1. Application. App where traffic is being routed.
    2. Rules. Rules you have configured for this app. Here, Rules shows the host and paths.
    3. Destination. Application and port number.
    4. Service Domain. Service Domains in the project employing routing.
    5. TLS. Transport Layer Security (TLS) protocol status. On indicates encrypted communication is enabled. Off indicates it is not used.
  5. To view Service Domain status where the controller is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with the controller for this project, click Alerts .

Istio Service Mesh

Istio provides secure connection, traffic management, and telemetry.

Add the Istio Virtual Service and DestinationRules to an Application - Example

In the application YAML snippet or file, define the VirtualService and DestinationRules objects.

These objects specify traffic routing rules for the recommendation-service app host. If the traffic rules match, traffic flows to the named destination (or subset/version of it) as defined here.

In this example, traffic is routed to the recommendation-service app host if it is sent from the Firefox browser. The specific policy version ( subset ) for each host helps you identify and manage routed data.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - match:
    - headers:
        user-agent:
          regex: .*Firefox.*
    route:
    - destination:
        host: recommendation-service
        subset: v2
  - route:
    - destination:
        host: recommendation-service
        subset: v1

This DestinationRule YAML snippet defines a load-balancing traffic policy for the policy versions ( subsets ), where any healthy host can service the request.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recomm-svc
spec:
  host: recommendation-service
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Manage Traffic by Weight - Example

In this YAML snippet, you split traffic between the subsets by specifying a weight of 30 in one case and 70 in the other. You can also weight them evenly by giving each a weight value of 50.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - route:
    - destination:
       host: recommendation-service
       subset: v2
      weight: 30   
    - destination:
       host: recommendation-service
       subset: v1
      weight: 70


Viewing Istio Details

In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.

About this task

The Istio tabs show service resource and routing configuration information derived from project-related Kubernetes Apps YAML files and service status.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Istio in the navigation sidebar to show the default Application Metrics page.
    Application Metrics shows initial details about the applications and related traffic metrics.
    1. Name and Service Domain. Application name and Service Domain where the application is deployed and the Istio service is enabled.
    2. Workloads. One or more workloads associated with the application. For example, Kubernetes Deployments, StatefulSets, DaemonSets, and so on.
    3. Inbound Request Volume / Outbound Request Volume. Inbound/Outbound HTTP requests per second for the last 5 minutes.
  3. To see details about any traffic routing configurations associated with the application, click Virtual Services .
    An application can specify one or more virtual services, which you can deploy across one or more Service Domains.
    1. Application and Virtual Services. Application and service name.
    2. Service Domains. Number and name of the Service Domains where the service is deployed.
    3. Matches. Lists matching traffic routes.
    4. Destinations. Number of service destination host connection or request routes. Expand the number to show any destinations served by the virtual service (v1, v2, and so on) where traffic is routed. If specified in the Virtual Service YAML file, Weight indicates the proportion of traffic routed to each host. For example, Weight: 50 indicates that 50 percent of the traffic is routed to that host.
  4. To see rules associated with Destinations, click Destination Rules .
    An application can specify one or more destination rules, which you can deploy across one or more Service Domains. A destination rule is a policy that is applied after traffic is routed (for example, a load balancing configuration).
    1. Application. Application where the rule applies.
    2. Destination Rules. Rules by name.
    3. Service Domains. Number and name of the Service Domains where the rule is deployed.
    4. Subsets. Name and number of the specific policies (subsets). Expand the number to show any pod labels (v1, v2, and so on), service versions (v1, v2, and so on), and traffic weighting, where rules apply and traffic is routed.
  5. To view Service Domain status where Istio is used, click Deployments .
    1. List of Service Domains where the service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with the controller for this project, click Alerts .

Prometheus Application Monitoring and Metrics as a Service

Note: When you disable Prometheus and then later enable it, the service creates a new PersistentVolumeClaim (PVC) and clears any metrics data previously stored in the PVC.

The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.

Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.

You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.

Default Service Settings

Table 1. Prometheus Default Settings
Setting Default Value or Description
Frequency interval to collect and store metrics (also known as scrape and store) Every 60 seconds 1
Collection endpoint /metrics 1
Default collection app collect-metrics
Data storage retention time 10 days
  1. You can create a customized ServiceMonitor YAML snippet to change this default setting. See Enable Prometheus App Monitoring and Metric Collection - Examples.
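
For reference, a customized ServiceMonitor snippet that overrides the default scrape interval and collection endpoint might look like the following. This is a minimal sketch, assuming an app Service labeled app: my-app that exposes metrics on a named port web; the names, path, and interval shown are illustrative:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  # Scrape /custom-metrics every 30 seconds instead of the defaults.
  - port: web
    path: /custom-metrics
    interval: 30s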


Viewing Prometheus Service Status

The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Prometheus to view the default Deployments page.
    1. Service Domain and Status. Service Domains where the Prometheus service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Prometheus Endpoints. Endpoints that the service is scraping, by ID.
    3. Alert Manager. Alert Manager shows alerts associated with the Prometheus endpoints, by ID.
    4. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  4. To see all alerts associated with Prometheus for this project, click Alerts .

Create Prometheus Graphs with Grafana - Example

This example shows how you can set up a Prometheus metrics dashboard with Grafana.

This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.

Define Prometheus as the Data Source for Grafana

The first ConfigMap YAML snippet example uses the environment variable {{.Services.Prometheus.Endpoint}} to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application.

The second snippet provides configuration information for the Grafana server web page. The host name in this example is woodkraft2.ntnxdomain.com .


apiVersion: v1
kind: ConfigMap
metadata:
 name: grafana-datasources
data:
 prometheus.yaml: |-
   {
       "apiVersion": 1,
       "datasources": [
           {
              "access":"proxy",
               "editable": true,
               "name": "prometheus",
               "orgId": 1,
               "type": "prometheus",
               "url": "{{.Services.Prometheus.Endpoint}}",
               "version": 1
           }
       ]
   }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = woodkraft2.ntnxdomain.com
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
    serve_from_sub_path = true
---

Specify Deployment Information on the Service Domain

This YAML snippet provides a standard deployment specification for Grafana.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - name: grafana
              containerPort: 3000
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "1Gi"
              cpu: "500m"
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-storage
            - mountPath: /etc/grafana/provisioning/datasources
              name: grafana-datasources
              readOnly: false
            - name: grafana-ini
              mountPath: "/etc/grafana/grafana.ini"
              subPath: grafana.ini
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-datasources
          configMap:
            defaultMode: 420
            name: grafana-datasources
        - name: grafana-ini
          configMap:
            defaultMode: 420
            name: grafana-ini
---

Define the Grafana Service and Use an Ingress Controller

Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).


apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  rules:
    - host: woodkraft2.ntnxdomain.com
      http:
        paths:
        - path: /grafana
          backend:
            serviceName: grafana
            servicePort: 3000


Kubernetes Apps

You can create intelligent applications to run on the Service Domain infrastructure or the cloud where you have pushed collected data. An application YAML file can also serve as a template, which you customize by passing it existing Categories associated with a Service Domain.

You must create a project with at least one user before you can create an app.

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.

Privileged Kubernetes Apps

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the {{ .Namespace }} variable, which you can define in the app YAML template file.

In this example, the resource kind of ClusterRoleBinding specifies the {{ .Namespace }} variable as the namespace where the subject ServiceAccount is deployed. Because all app resources are deployed in the project namespace, also specify the ServiceAccount name (here, name: my-sa).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: {{ .Namespace }}
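
The binding above references a ClusterRole named my-cluster-role, which must also be defined in the YAML (or already exist). The following is a minimal sketch of such a role; the rules shown, read-only access to pods, are purely illustrative:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-cluster-role
rules:
  # Illustrative rule: read-only access to pods.
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]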

Creating an Application

Create a Kubernetes application that you can associate with a project.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • If your app requires a service, make sure you enable it in the associated project.
  • An application YAML file can also serve as a template, which you customize by passing it existing Categories associated with a Service Domain.
  • See also Configure Service Domain Environment Variables to set and use environment variables in your app.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps > Create Kubernetes App .
  3. Name your application. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description for your application.
  5. Select one or more Service Domains, individually or by category.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. For Select Individually , click Add Service Domains to select one or more Service Domains.
    2. For Select By Category , choose and apply one or more categories associated with a Service Domain.
      Click the plus sign ( + ) to add more categories.
  6. Click Next .
  7. To use an application YAML file or template, select YAML based configuration , then click Choose File to navigate to a YAML file or template.
    After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
  8. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . Optionally, browse to a values YAML file to override the values you specified in the Helm Chart package.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  9. Click Create .
    If your app requires a service, make sure you enable it in the associated project.
    The Kubernetes Apps page lists the application you just created as well as any existing applications. Apps start automatically after creation.

Editing an Application

Update an existing Kubernetes application.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • An application YAML file can also serve as a template, which you customize by passing it existing Categories associated with a Service Domain.
  • To set and use environment variables in your app, see Configure Service Domain Environment Variables.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then click an application in the list.
    The application dashboard is displayed along with an Edit button.
  3. Click Edit .
  4. Name . Update the application name. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your application.
  6. You cannot change the application's Project .
  7. Update the associated Service Domains.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. If Select Individually is selected:
      • Click X to remove a Service Domain in the list.
      • Click Edit to select or deselect one or more Service Domains.
    2. If Select By Category is selected, add or remove one or more categories associated with a Service Domain.
      • Select a category and value.
      • Click X to remove a category.
      • Click the plus sign ( + ) to add more categories and values.
  8. Click Next .
  9. To use an application YAML file or template, select YAML based configuration , then click Choose File to navigate to a YAML file or template.
    1. You can edit the file directly on the Yaml Configuration page.
    2. Click Choose File to navigate to a YAML file or template.
    After uploading the file, you can also edit the file contents. You can choose the contrast levels Normal , Dark , or High Contrast Dark .
  10. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . Optionally, browse to a YAML file to override the values you specified in the Helm Chart package. Use this option to update your application without having to re-upload a new chart.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  11. Click Update .
    The Kubernetes Apps page lists the application you just updated as well as any existing applications.

Removing an Application

Delete an existing Kubernetes application.

About this task

  • To complete this task, log on to the cloud management console.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. Click Actions > Remove , then click Delete to confirm.
    The Kubernetes Apps page lists any remaining applications.

Deploying and Undeploying a Kubernetes Application

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.

Before you begin

Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.

  • If your application specifies an explicit PersistentVolumeClaim object, when the app is undeployed, data stored in PersistentVolumeClaim is deleted. This data is not available if you then deploy the app.
  • If your application specifies a VolumeClaimTemplates object, when the app is undeployed, data stored in PersistentVolumeClaim persists. This data is available for reuse if you later deploy the app. If you plan to redeploy apps, Nutanix recommends using VolumeClaimTemplates to implement StatefulSets with stable storage (see the sketch after this list).
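
The following is a minimal sketch of a StatefulSet that requests stable storage through volumeClaimTemplates; the names, image, storage size, and mount path are illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db
spec:
  serviceName: my-db
  replicas: 1
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: my-db
        image: some.container.registry.com/mydb:1.0
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:
  # The PVC created from this template persists when the app is undeployed.
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

Because the claim comes from volumeClaimTemplates rather than an explicit PersistentVolumeClaim object, the stored data survives an undeploy and is reattached on the next deploy.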

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. To undeploy a running application, select an application, then click Actions > Undeploy .
    1. To undeploy every instance of the app running on all Service Domains, select Undeploy All , then click Undeploy .
    2. To undeploy the app running on specific Service Domains, select Undeploy Selected , select one or more Service Domains, then click Undeploy .
    3. Click Undeploy App to confirm.
  4. To deploy an undeployed application, select an application, then click Actions > Deploy , then click Deploy to confirm.
    1. To deploy the app to all Service Domains, select Deploy All , then click Deploy .
    2. To deploy the app to specific Service Domains, select Deploy Selected , select one or more Service Domains, then click Deploy .

Helm Chart Support and Requirements

Helm Chart Format
When you create a Kubernetes app in the cloud management console, you can upload a Helm chart package in a gzipped TAR file (.TGZ format) that describes your app. For example, it can include:
  • Helm chart definition YAML file
  • App YAML template files that define your application (deployment, service, and so on). See also Privileged Kubernetes Apps
  • Values YAML file where you declare variables to pass to your app templates. For example, you can specify Ingress controller annotations, cloud repository details, and other required settings
Supported Helm Version
Karbon Platform Services supports Helm version 3.
Kubernetes App Support
Karbon Platform Services supports the same Kubernetes resources in Helm charts as in application YAML files. For example, DaemonSets, Deployments, Secrets, Services, StatefulSets, and so on are supported when defined in a Helm chart.
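
For example, a Values Override File might look like the following. This is a minimal sketch with hypothetical keys; the keys your chart accepts depend entirely on how its templates are written:

# values-override.yaml (hypothetical keys for an example chart)
image:
  repository: some.container.registry.com/myapp
  tag: "1.7.9"
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  annotations:
    sherlock.nutanix.com/https-ingress-path: /tls

Uploading a new override file is often enough to update the app without re-uploading the chart itself.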

Data Pipelines

The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.

A data pipeline is a path for data that includes:

  • Input . An existing data source or real-time data stream.
  • Transformation . Code block such as a script defined in a Function to process or transform input data.
  • Output . A destination for your data. Publish data to the Service Domain, cloud, or cloud data service (such as AWS Simple Queue Service).

It also enables you to process and transform captured data for further consumption.

To create a data pipeline, you must have already created or defined at least one of each of the following:

  • Project
  • Category
  • Data source
  • Function
  • Cloud profile. Required for cloud data destinations or Service Domain endpoints

Note the following:

  • When you create, clone, or edit a function, you can define one or more parameters. When you create a data pipeline, you define the values for those parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
  • You can also stop and start a data pipeline. See Stopping and Starting a Data Pipeline.

Data Pipeline Visualization

After you create one or more data pipelines, the Data Pipelines > Visualization page shows data pipelines and the relationship among data pipeline components.

You can view data pipelines associated with a Service Domain by clicking the filter icon under each title (Data Sources, Data Pipelines to Service Domain, Data Pipelines on Cloud) and selecting one or more Service Domains in the drop-down list.

Figure. Data Pipeline Visualization. Shows the existing data pipelines and the relationship among data pipeline components.

Creating a Data Pipeline

Create a data pipeline, with a data source or real-time data source as input and infrastructure or external cloud as the output.

Before you begin

You must have already created at least one of each: project, data source, function, and category. Also, a cloud profile is required for cloud data destinations or Service Domain endpoints. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Click Create .
  5. Data Pipeline Name . Name your data pipeline. Up to 63 lowercase alphanumeric characters and the dash character (-) are allowed.

Input - Add a Data Source

Procedure

Click Add Data Source , then select Data Source .
  1. Select a data source Category and one related Value .
  2. [Optional] Click Add new to add another Category and Value . Continue adding as many as you have defined.
  3. Click the trashcan to delete a data source category and value.

Input - Add a Real-Time Data Stream

Procedure

Click Add Data Source , then select Real-Time Data Stream .
  1. Select an existing pipeline as a Real-Time Data Stream .

Transformation - Add a Function

Procedure

  1. Click Add Function and select an existing Function.
  2. Define a data Sampling Interval by selecting Enable .
    1. Enter an interval value.
    2. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as a pair of parentheses: ( )
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Create Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .

Output - Add a Destination

Procedure

  1. Click Add Destination to specify where the transformed data is to be output: Publish to Service Domain or Publish to External Cloud .
  2. If you select Destination > Service Domain :
    1. Endpoint Type . Select Kafka , MQTT , Realtime Data Stream , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  5. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  6. Click Create .

Editing a Data Pipeline

Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline in the list, then click Actions > Edit .
    You cannot update the data pipeline name.

Input - Edit a Data Source

Procedure

You can do any of the following:
  1. Select a different Data Source to change the data pipeline Input source (or keep the existing one).
  2. Select a different value for an existing Category.
  3. Click the trashcan to delete a data source category and value.
  4. Click Add new to add another Category and Value . Continue adding as many as you have defined.
  5. Select a different category.

Input - Edit a Real-Time Data Stream

Procedure

Select a different Realtime Data Stream to change the data pipeline Input source.

Transformation - Edit a Function

About this task

You can do any of the following tasks.

Procedure

  1. Select a different Function .
  2. Add or update the Sampling Interval for any new or existing function.
    1. If not selected, create a Sampling Interval by selecting Enable .
    2. Enter an interval value.
    3. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as a pair of parentheses: ( )
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Add Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .
  6. Click Create .

Output - Edit a Destination

About this task

You can do any of the following tasks.

Procedure

  1. Select an Infrastructure or External Cloud Destination to specify where the transformed data is to be output.
  2. If you select Destination > Infrastructure :
    1. Endpoint Type . Select MQTT , Realtime Data Stream , Kafka , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  5. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  6. Click Update .

Removing a Data Pipeline

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline, click Actions > Remove , then click Delete again to confirm.
    The Data Pipelines page lists any remaining data pipelines.

Stopping and Starting a Data Pipeline

About this task

  • You can stop and start a data pipeline.
  • You can select the table or tile view on this page by clicking one of the view icons.
    Figure. View Icons. The view icons on the page let you switch between table and tile view.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Functions and Data Pipelines .
  3. To stop an active data pipeline, select a data pipeline, then click Actions > Stop .
    You can also click a data pipeline, then click Stop or Start , depending on the data pipeline state.
    Stop stops any data from being transformed or processed, and terminates any data transfer to your data destination.
  4. To start the data pipeline, select Start (after stopping a data pipeline) from the Actions drop-down menu.

Functions

A function is code used to perform one or more tasks. Script languages include Python, Golang, and Node.js. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like TensorFlow.

An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.

  • You can use ML models in your function code. Use the ML Model API to call the model and version.
  • When you create, clone, or edit a function, you can define one or more parameters.
  • When you create a data pipeline, you define the values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.

Creating a Function

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions > Create .
  4. Name . Name the function. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your function.
  6. Language . Select the scripting language for the function: golang, python, node.
  7. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  8. Click Next .
  9. Add function code.
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  10. Click Create .

Editing a Function

Edit an existing function. To complete this task, log on to the cloud management console.

About this task

Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Edit .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Update .

Cloning a Function

Clone an existing function. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Clone .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Next to update a data pipeline with this function, if desired.
  13. Click Create , then click Confirm in response to the data pipeline warning.
    An updated function can cause data pipelines to break (that is, stop collecting data correctly).

AI Inferencing and ML Model Management

You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model Tensorflow runtime environment on the Service Domain.

The Karbon Platform Services Release Notes list currently supported ML model types.

An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.

Before You Begin
  • To allow access by the AI Inferencing API, ensure that the infrastructure admin selects Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • Ensure that the infrastructure admin has enabled the AI Inferencing service for your project.
ML Models Guidelines and Limitations
  • The maximum ML model zip file size you can upload is 1 GB.
  • Each ML model zip file must contain a binary file and metadata file.
  • You can use ML models in your function code. Use the ML Model API to call the model and version.
ML Model Validation
  • The Karbon Platform Services platform validates any ML model that you upload. It checks the following:
    • The uploaded model is a ZIP file with the correct folder and file structure.
    • The ML model binaries in the ZIP file (for example, TensorFlow binaries) are valid and in the required format.
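For example, here is a minimal packaging sketch for a TensorFlow SavedModel. The folder layout and file names are illustrative assumptions; the Release Notes and the validation checks above define the exact requirements.

user@host$ zip -r my-model-v1.zip saved_model/
user@host$ ls -lh my-model-v1.zip

In this sketch, the saved_model/ folder is assumed to contain the model binary (saved_model.pb) and its metadata (the variables/ folder), and the resulting ZIP file must be 1 GB or smaller.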

Adding an ML Model

About this task

To complete this task, log on to the cloud management console.

Before you begin

  • An infrastructure admin can allow access to the AI Inferencing API by selecting Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing > Add Model .
  3. Name . Name the ML model. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description.
  5. Select a specific project to make your ML model available to that project.
  6. Select a Framework Type from the drop-down menu. For example, Tensorflow 2.1.0 .
  7. Click Add Version in the List of Model Versions panel.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  8. [Optional] Click Add new to add another model version to the model.
    A single ML model can consist of multiple model versions.
  9. Click Done .
    The ML Model page displays the added ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Editing an ML Model

About this task

  • To complete this task, log on to the cloud management console.
  • You cannot change the name or framework type associated with an ML model.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing , select a model in the list, and click Edit .
  3. Description . Update an existing description.
  4. Edit or Remove an existing model version from the List of Model Versions.
  5. [Optional] Click Add new to add another model version.
    A single ML model can consist of multiple model versions.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  6. Click Done .
    The ML Model page displays the updated ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Removing an ML Model

How to delete an ML model.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing and select a model in the list.
  3. Click Remove , then click Delete again to confirm.
    The ML Models page lists any remaining ML models.

What to do next

Check ML model status in the ML Model page.

Viewing ML Model Status

The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing to show model information like Version, File Size, and Framework Type.
  3. To see where a model is deployed, click the model name.
    This page shows a list of associated Service Domains.

Runtime Environments

A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Karbon Platform Services includes standard runtime environments including but not limited to the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.

  • Golang
  • NodeJS
  • Python 2
  • Python 3
  • Tensorflow Python
You can add your own custom runtime environment for use by all or specific projects, functions, and container registries.
Note: Custom Golang runtime environments are not supported. Use the provided standard Golang runtime environment in this case.
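For illustration, a custom runtime environment is backed by a container image that you build and push to a registry before referencing it in the console. The following is a minimal sketch; the base image, installed package, and registry path are assumptions, not a prescribed layout.

user@host$ cat > Dockerfile <<'EOF'
FROM python:3.8-slim
RUN pip install numpy
EOF
user@host$ docker build -t registry-1.docker.io/myorg/custom-python-runtime:1.0 .
user@host$ docker push registry-1.docker.io/myorg/custom-python-runtime:1.0

The pushed image path is what you would then supply as the Container Image Path when creating the runtime environment.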

Creating a Runtime Environment

How to create a user-added runtime environment for use with your project.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Click Create .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Click Add Profile to create a new container registry profile or use an existing profile. Do one of the following sets of steps.
    1. Select Add New .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Container Registry Host Address . Provide a public or private registry host address. The host address can be in the format host.domain:port_number/repository_name
      For example, https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
    5. Username . Enter the user name credential for the profile.
    6. Password . Enter the password credential for the profile. Click Show to display the password as you type.
    7. Email Address . Add an email address in this field.
    1. Select Use Existing cloud profile and select an existing profile from Cloud Profile .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Server . Provide a server URL to the container registry in the format used by your cloud provider.
      For example, an Amazon AWS Elastic Container Registry (ECR) URL might be: https:// aws_account_id .dkr.ecr. region .amazonaws.com
  8. Click Done .
  9. Container Image Path . Provide a container image URL. For example, https:// aws_account_id .dkr.ecr. region .amazonaws.com/ image : imagetag
  10. Languages . Select the scripting language for the runtime: golang, python, node.
  11. [Optional] Click Add Dockerfile to choose and upload a Dockerfile for the container image, then click Done .
    A Dockerfile typically includes the instructions used to build the container image automatically, such as commands to run.
  12. Click Create .

Editing a Custom Runtime Environment

How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment in the list, then click Edit .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Select an existing container registry profile.
  8. Container Image Path . Provide a container image URL. For example, https:// aws_account_id .dkr.ecr. region .amazonaws.com/ image : imagetag
  9. Languages . Select the scripting language for the runtime: golang, python, node.
  10. [Optional] Remove or Edit any existing Docker file.
    After editing a file, click Done .
  11. Click Add Dockerfile to choose a Dockerfile for the container image.
  12. Click Update .

Removing a Runtime Environment

How to remove a user-added runtime environment. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment, click Remove , then click Delete again to confirm.
    The Runtime Environments page lists any remaining runtime environments.

Audit Trail and Log Management

Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.

From Logging or System Logs > Logging or the summary page for a specific project, you can:

  • Run Log Collector on every or selected Service Domains (admins only)
  • See created log bundles, which you can then download or delete
  • Create, edit, and delete log forwarding policies to help make collection more granular and then forward those logs to the cloud
  • View alerts
  • View an audit trail of any operations performed by users in the last 30 days

You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public Github channel https://github.com/nutanix/karbon-platform-services/tree/master/cli. Readme documentation available there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Audit Trail Dashboard

Access the Audit Trail dashboard to view the most recent operations performed by users.

Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail .

Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.

Running Log Collector - Kubernetes Apps

Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard and navigation sidebar are displayed.
  3. Click Kubernetes Apps , then click an app in the list.
  4. Select the Log Bundles tab, then click Run Log Collector .
    1. To collect log bundles for every Service Domain, choose Select all Service Domains , then click Select Logs .
    2. To collect log bundles for specific Service Domains, choose them, then click Collect Logs .
      The table displays the collection status.
    • Collecting . Depending on how many Service Domains you have chosen, it can take a few minutes to collect the logs.
    • Success . All logs are collected. You can now Download them or Delete them.
    • Failed . Logs could not be collected. Click Details to show error messages relating to this collection attempt.
  5. After downloading the log bundle, you can delete it from the console.

What to do next

  • The downloaded log bundle is a gzipped TAR file. Extract (decompress and untar) it to see the collected logs, as shown in the example after this list.
  • You can also forward logs to the cloud based on your cloud profile. See Creating and Updating a Log Forwarding Policy.
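For example, assuming a downloaded bundle named logbundle.tgz (the actual file name differs for each collection):

user@host$ tar -xzf logbundle.tgz

This extracts the collected log files into the current directory.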

Creating and Updating a Log Forwarding Policy

Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud.

Before you begin

Make sure your infrastructure admin has created a cloud profile first.

Procedure

  1. Go to System Logs > Logging from Dashboard or the summary page for a specific project.
  2. Click Log Forwarding Policy > Create New Policy .
  3. Name . Name your policy.
  4. Select a Service Domain.
    1. Select All . Logs for all Service Domains are forwarded to the cloud destination.
    2. Select Select By Category to choose and apply one or more categories associated with a Service Domain. Click the plus sign ( + ) to add more categories.
    3. Select Select Individually to choose a Service Domain, then click Add Service Domain to select one or more Service Domains.
  5. Select a destination.
    1. Select Profile . Choose an existing cloud profile.
    2. Select Service . Choose a cloud service. For example, choose Cloudwatch to stream logs to Amazon CloudWatch. Other services include Kinesis and Firehose .
    3. Select Region . Enter a valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com
    4. Select Stream . Enter a log stream name.
    5. Select Groups . (CloudWatch only) Enter a Log group name.
  6. Click Done .
    The dashboard or summary page shows the policy tile.
  7. From the Logging dashboard, you can edit or delete a policy.
    1. Click Edit and change any of the fields, then click Done .
    2. To delete a policy, click Delete , then click Delete again to confirm.

Creating a Log Collector for Log Forwarding - Command Line

Create a log collector for log forwarding by using the kps command line.

Nutanix has released the kps command line on its public Github channel. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Each sample YAML file defines a log collector. Log collectors can be:

  • Infrastructure-based: collects infrastructure-related (Service Domain) information. For use by infra admins.
  • Project-based: project-related information (applications, data pipelines, and so on). For use by project users and infra admins (with assigned projects).

See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.

Example Usage

Create a log collector defined in a YAML file:

user@host$ kps create -f infra-logcollector-cloudwatch.yaml

infra-logcollector-cloudwatch.yaml

This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).

To enable AWS CloudWatch log streaming, you must specify awsRegion, cloudwatchStream, and cloudwatchGroup.

kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
Field descriptions:

  • kind . Set to logcollector , the resource type.
  • name . A unique log collector name (here, infra-log-name ).
  • type . Set to infrastructure for an infrastructure log collector.
  • destination . The cloud destination type (here, cloudwatch ).
  • cloudProfile . The name of an existing Karbon Platform Services cloud profile.
  • awsRegion . A valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com
  • cloudwatchGroup . The log group name.
  • cloudwatchStream . The log stream name.
  • filterSourceCode . The log conversion code, if any.

project-logcollector.yaml

This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).

kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
Field descriptions:

  • kind . Set to logcollector , the resource type.
  • name . A unique log collector name (here, project-log-name ).
  • type . Set to project for a project log collector.
  • project . The name of the project whose logs are collected.
  • destination . The cloud destination type, such as cloudwatch .
  • cloudProfile . The name of an existing Karbon Platform Services cloud profile.
  • awsRegion . A valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com
  • cloudwatchGroup . The log group name.
  • cloudwatchStream . The log stream name.
  • filterSourceCode . The log conversion code, if any.
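As with the infrastructure log collector, you create this project log collector by passing the YAML file to the kps command line:

user@host$ kps create -f project-logcollector.yaml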

Stream Real-Time Application and Pipeline Logs

Real-Time Log Monitoring, built into Karbon Platform Services, lets you view application and data pipeline log messages securely in real time.

Note: Infrastructure administrators can stream logs if they have been added to a project.

Viewing the most recent log messages as they occur helps you see and troubleshoot application or data pipeline operations. Messages stream securely over an encrypted channel and are viewable only by authenticated clients (such as an existing user logged on to the Karbon Services cloud platform).

The cloud management console shows the most recent log messages, up to 2 MB.

Displaying Real-Time Logs

View the most recent real-time logs for applications and data pipelines.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Go to the Deployments page for an application or data pipeline.
    1. From the home menu, click Projects , then click a project.
    2. Click Kubernetes Apps or Functions and Data Pipelines .
    3. Click an application or data pipeline in the table, then click Deployments .
      A table shows each Service Domain deploying the application or data pipeline.
  2. Select a Service Domain, then click View Real-time Logs .
    The window displays streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application, from the Service Domain.

What to do next

Real-Time Log Monitoring Console describes what you see in the terminal-style display.

Real-Time Log Monitoring Console

The Karbon Platform Services real-time log monitoring console is a terminal-style display that streams log entries as they occur.

After you do the steps in Displaying Real-Time Logs, a terminal-style window is displayed in one or more tabs. Each tab is streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application.

Your application or data pipeline function generates the log messages. That is, the console shows log messages that you have written into your application or function.

Figure. Real-Time Log Monitoring Console: one or more tabs with a terminal-style display show messages generated by your application or function.

No Logs Message

If your Karbon Platform Services Service Domain is connected and your application or function is not logging anything, the console might show a No Logs message. In this case, the message means that the application or function is idle and not generating any log messages.

Errors

You might see one or more error messages in the following cases. As a result, Real-Time Log Monitoring cannot retrieve any logs.

  • Your application, function, or other resource fails.
  • Network connectivity fails or is highly intermittent between karbon.nutanix.com and the Service Domain or your browser.

Tips for Real-Time Log Monitoring for Applications

  • The first tab shows the first container associated with the application running on the Karbon Platform Services Service Domain. The console shows one tab for each container associated with the application, including a tab for each container replicated in the same application.
  • Each tab displays the application and container name as app_name : container_id and might be truncated. To see the full name on the tab, hover over it.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs.

Tips for Real-Time Log Monitoring for Functions in Data Pipelines

  • The first tab shows the first function deployed in the data pipeline running on the Service Domain. The console shows one tab for each function deployed in the data pipeline.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs. For data pipelines, the displayed tab order is the order of the functions (that is, scripts of transformations) in the data pipeline.
  • Duplicate function names. If you use the same function more than once in the data pipeline, you will see duplicate tabs. That is, one tab for each function instance.
  • Each tab displays the data pipeline and function name as data_pipeline : function and might be truncated. To see the full name on the tab, hover over it.

Managing API Keys

API key management simplifies authentication when you use the Karbon Platform Services API by enabling you to manage your keys from the Karbon Platform Services management console. This topic also describes API key guidelines.

As a user (infrastructure or project), you can manage up to two API keys through the Karbon Platform Services management console. After logging on to the management console, click your user name in the management console, then click Manage API Keys to create, disable, or delete these keys.

Number of API Keys Per User and Expiration
  • Each user can create and use two API keys.
  • Keys do not have an expiration date.
Deleting and Reusing API Keys
  • You can delete an API key at any time.
  • You cannot use or reuse a key after deleting it.
Creating and Securing the API Keys
  • When you create a key, the Manage API Keys dialog box displays the key.
  • When you create a key, make a copy of the key and secure it. You cannot see the key later. It is not recoverable at any time.
  • Do not share the key with anyone, including other users.

Read more about the Karbon Platform Services API at nutanix.dev. For Karbon Platform Services Developers describes related information and links to resources for Karbon Platform Services developers.

Using API Keys With HTTPS API Requests

Example API request using an API key.

After you create an API key, use it with your Karbon Platform Services API HTTPS Authorization requests. In the request, specify an Authorization header including Bearer and the key you generated and copied from the Karbon Platform Services management console.

Here is an example Node.js code snippet:

var https = require("https");

var options = {
  "method": "GET",
  "hostname": "karbon.nutanix.com",
  "port": null,
  "path": "/v1.0/applications",
  "headers": {
    "authorization": "Bearer API_key"
  }
};

// Send the request and print the response body.
var req = https.request(options, function (res) {
  var chunks = [];
  res.on("data", function (chunk) { chunks.push(chunk); });
  res.on("end", function () { console.log(Buffer.concat(chunks).toString()); });
});
req.end();
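The same request can be issued from a shell. This curl sketch assumes the same endpoint; substitute your generated key for API_key:

user@host$ curl -H "Authorization: Bearer API_key" https://karbon.nutanix.com/v1.0/applications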

Creating API Keys

Create one or more API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys: click your profile name, then click Manage API Keys .

  2. Click Create API Key .
    Manage API Keys shows that the key is created. Keys do not have an expiration date.
  3. Click Copy to Clipboard .
    • Make a copy of the API key and secure it. It is not recoverable at any time.
    • Click View to see the key value. You cannot see the key later.
    • Copy to Clipboard is a mandatory step. The Close button is inactive until you click Copy to Clipboard .
  4. To create another key, click Create API Key and Copy to Clipboard .
    Make a copy of the key and secure it. It is not recoverable at any time. Click View to see the key value. You cannot see the key later.
    Note that Status for each key is Active .
    Figure. Key Creation: the API keys show as Active .

  5. Click Close .

Disabling, Enabling, or Deleting API Keys

Disable, enable, or delete API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys: click your profile name, then click Manage API Keys .

    Manage API Keys shows information about each API key: Create Time, Last Accessed, Status (Active or Disabled).
    Depending on the current state of the API key, you can disable, enable, or delete it.
    Figure. API Keys: the dialog shows API key status as Active or Disabled .

  2. Disable an enabled API key.
    1. Click Disable .
      Note that Status for the API key is now Disabled .
    2. Click Close .
  3. Enable a disabled API key.
    1. Click Enable .
      Note that Status for the API key is now Active .
    2. Click Close .
  4. Delete an API Key.
    1. Click Delete , then click Delete again to confirm.
    2. Click Close .
      You cannot use or reuse a key after deleting it. Any client authenticating with this now-deleted API key will need a new key to authenticate again.

Alerts

The Alerts page and the Alerts Dashboard panel show any alerts triggered by Karbon Platform Services depending on your role.

  • Infrastructure Administrator . All alerts associated with Projects, Apps & Data, and Infrastructure.
  • Project User . All alerts associated with Projects and Apps & Data.

To see alert details:

  • On the Alerts page, click the alert Description to see details about that alert.
  • From the Alerts Dashboard panel, click View Details , then click the alert Description to see details.

Click Filters to sort the alerts by:

  • Severity . Critical or Warning.
  • Time Range . Select All, Last 1 Hour, Last 1 Day, or Last 1 Week.

An Alert link is available on each Apps & Data and Infrastructure page.

Figure. Sorting and Filtering Alerts: the Filters flyout menu sorts alerts by severity.

Naming Guidelines

Service Domain, Data Source, and Data Pipeline Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid Service Domain name.
  • Maximum length of 63 characters
Data Source Topic and Field Naming
Topic and field naming must be unique across the same data source types. You are allowed to duplicate names across different data source types. For example, an MQTT topic name and RTSP protocol stream name can share /temperature/frontroom.
  • An MQTT topic name must be unique when creating an MQTT data source. You cannot duplicate or reuse an MQTT topic name for multiple MQTT data sources.
  • An RTSP protocol stream name must be unique when creating an RTSP data source. You cannot duplicate or reuse an RTSP protocol stream name for multiple RTSP data sources.
  • A GigE Vision data source name must be unique when creating a GigE Vision data source. You cannot duplicate or reuse a GigE Vision data source name for multiple GigE Vision data sources.
Container Registry Profile Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed.
  • Maximum length of 200 characters
All Other Resource Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.resource-name is a valid resource name.
  • Maximum length of 200 characters
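As a rough self-check, the Service Domain, data source, and data pipeline naming rules above can be approximated with a regular expression. This is a sketch of the documented rules (lowercase alphanumeric at the start and end, dash and dot allowed, 63-character maximum), not the exact validation the console performs:

user@host$ echo "my.service-domain" | grep -E '^[a-z0-9]([a-z0-9.-]{0,61}[a-z0-9])?$'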

For Karbon Platform Services Developers

Information and links to resources for Karbon Platform Services developers.

API Reference
Go to https://www.nutanix.dev/api-reference/ for API references, code samples, blogs , and more for Karbon Platform Services and other Nutanix products.
Karbon Platform Services Public Github Repository

The Karbon Platform Services public Github repository https://github.com/nutanix/karbon-platform-services includes sample application YAML files, instructions describing external client access to services, Karbon Platform Services kps CLI samples, and so on.

Karbon Platform Services Command Line

Nutanix has released the kps command line on its public Github repository. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Ingress Controller Support - Command Line

Karbon Platform Services supports the Traefik open source router as the default Kubernetes Ingress controller and NGINX (ingress-nginx) as a Kubernetes Ingress controller. For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/ingress.

Kafka Data Service Support - Command Line

Kafka is available as a data service through your Service Domain. Clients can manage, publish, and subscribe to topics using the native Kafka protocol. Data pipelines can use Kafka as a destination. Applications can also use a Kafka client of their choice to access the Kafka data service.

For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/kafka. See also Kafka as a Service.
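For instance, once Kafka is enabled as a data service, a client could publish to a topic with the standard Kafka console tools. This sketch assumes a reachable broker address and a topic named sensor-data, both of which are illustrative:

user@host$ kafka-console-producer.sh --broker-list broker-address:9092 --topic sensor-data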

Free Karbon Platform Services Trial
Sign up for a free Karbon Platform Services Trial at https://www.nutanix.com/products/iot/try.

Set Privileged Mode For Applications

Enable a container application to run with elevated privileges.

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
Note: To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at the user level.

For information about installing the kps command line, see For Karbon Platform Services Developers.

Karbon Platform Services enables you to develop an application which requires elevated privileges to run successfully. By using the kps command line, you can set your Service Domain to enable an application running in a container to run in privileged mode.

Setting Privileged Mode

Configure your Service Domain to enable a container application to run with elevated privileges.

Before you begin

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
  • Ensure that you have created an infrastructure administrator or project user with associated resources (Service Domain, project, applications, and so on).
  • To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at the user level.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing Karbon Platform Services user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) where the container application needs to run with elevated privileges.
    user@host$ kps get svcdomain -o yaml
  5. Set privilege mode for a specific Service Domain svc_domain_name .
    user@host$ kps update svcdomain svc_domain_name --set-privileged
    Successfully updated Service Domain: svc_domain_name
  6. Verify privileged mode is set to true .
    user@host$ kps get svcdomain svc_domain_name -o yaml
    kind: edge
    name: svc_domain_name
    
    connected: true
    .
    .
    .
    profile: 
       privileged: true 
       enableSSH: true 
    effectiveProfile: 
       privileged: true
       enableSSH: true
    effectiveProfile privileged set to true indicates that Nutanix Support has enabled this feature. If the setting is false , contact Nutanix Support to enable this feature. In this example, Nutanix has also enabled SSH access to this Service Domain (see Secure Shell (SSH) Access to Service Domains in Karbon Platform Services Administration Guide ).

What to do next

See Using Privileged Mode with an Application (Example).

Using Privileged Mode with an Application (Example)

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

After elevating privilege as described in Setting Privileged Mode, elevate the application privilege. This sample enables USB device access for an application running in a container on an elevated Service Domain.

YAML Snippet, Sample Privileged Mode USB Device Access Application

Add an annotation similar to the following to the Deployment section of your application YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"

Full YAML File, Sample Privileged Mode USB Device Access Application


apiVersion: v1
kind: ConfigMap
metadata:
  name: usb-scripts
data:
  entrypoint.sh: |-
    apk add python3
    apk add libusb
    pip3 install pyusb
    echo Read from USB keyboard
    python3 read-usb-keyboard.py
  read-usb-keyboard.py: |-
    import usb.core
    import usb.util
    import time

    USB_IF      = 0     # Interface
    USB_TIMEOUT = 10000 # Timeout in MS

    USB_VENDOR  = 0x627
    USB_PRODUCT = 0x1

    # Find keyboard
    dev = usb.core.find(idVendor=USB_VENDOR, idProduct=USB_PRODUCT)
    endpoint = dev[0][(0,0)][0]
    try:
      dev.detach_kernel_driver(USB_IF)
    except Exception as err:
      print(err)
    usb.util.claim_interface(dev, USB_IF)
    while True:
      try:
        control = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, USB_TIMEOUT)
        print(control)
      except Exception as err:
        print(err)
      time.sleep(0.01)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
        sherlock.nutanix.com/privileged: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usb
  template:
    metadata:
      labels:
        app: usb
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: alpine
        image: alpine
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        command:
        - sh
        - -c 
        - cd /scripts && ./entrypoint.sh
      volumes:
      - name: scripts
        configMap:
          name: usb-scripts
          defaultMode: 0766

Configure Service Domain Environment Variables

Define environment variables for an individual Service Domain. After defining them, any Kubernetes app that specifies that Service Domain can access them as part of a container spec in the app YAML.

As an infrastructure administrator, you can set environment variables and associated values for each Service Domain, which are available for use in Kubernetes apps. For example:

  • You can provide environment variables and values by using a Helm chart you upload when you create an app. See Creating an Application and Helm Chart Support and Requirements.
  • You can also set specific environment variables per Service Domain by updating a Service Domain in the cloud management console or by using the kps update svcdomain command line.
  • See also Privileged Kubernetes Apps.

As a project user, you can then reference these per-Service Domain variables, set by the infra admin, in your app. If your app YAML file passes a variable to your app's run command without defining a value for it, Karbon Platform Services injects the value set for the Service Domain.

Setting and Clearing Service Domain Environment Variables - Command Line

How to set environment variables for a Service Domain.

Before you begin

You must be an infrastructure admin to set variables and values.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing infra admin user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) to find the service domain where you set the environment variables.
    user@host$ kps get svcdomain -o yaml
  5. For a Service Domain named my-svc-domain , for example, set the Service Domain environment variable. In this example, set a secret variable named SD_PASSWORD with a value of passwd1234 .
    user@host$ kps update svcdomain my-svc-domain --set-env '{"SD_PASSWORD":"passwd1234"}'
  6. Verify the changes.
    user@host$ kps get svcdomain my-svc-domain -o yaml
    kind: edge
    name: my-svc-domain
    connected: true
    .
    .
    .
    env: '{"SD_PASSWORD": "passwd1234"}'
    The Service Domain is updated. Any infra admin or project user can deploy an app to a Service Domain where you can refer to the secret by using the variable $(SD_PASSWORD) .
  7. You can continue to add environment variables by using the kps update svcdomain my-svc-domain --set-env '{" variable_name ": " variable_value "}' command.
  8. You can also clear one or all variables for Service Domain svc_domain_name .
    1. To clear (unset) all environment variables:
      user@host$ kps update svcdomain svc_domain_name --unset-env
    2. To clear (unset) a specific environment variable:
      user@host$ kps update svcdomain svc_domain_name --unset-env '{"variable_name":"variable_value"}'
  9. To update an app, restart it. See also Deploying and Undeploying a Kubernetes Application.

What to do next

Using Service Domain Environment Variables - Example

Using Service Domain Environment Variables - Example

Example: how to use existing environment variables for a Service Domain in application YAML.

About this task

  • You must be an infrastructure admin to set variables and values. See Setting and Clearing Service Domain Environment Variables - Command Line.
  • In this example, an infra admin has created these environment variables: a Kafka endpoint that requires a secret (authentication) to authorize the Kafka broker.
  • As a project user, you can then reference these per-Service Domain variables, set by the infra admin, in your app. If your app YAML file passes a variable to your app's run command without defining a value for it, Karbon Platform Services injects the value set for the Service Domain.

Procedure

  1. In your app YAML, add a container snippet similar to the following.
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
      labels:
        app: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: some.container.registry.com/myapp:1.7.9
            ports:
            - containerPort: 80
            env:
            - name: KAFKA_ENDPOINT
              value: some.kafka.endpoint
            - name: KAFKA_KEY
              value: placeholder
            command:
            - sh
            - -c
            - "exec node index.js $(KAFKA_KEY)"
    
  2. Use this app YAML snippet when you create a Kubernetes app in Karbon Platform Services as described in Kubernetes Apps.

Karbon Platform Services Terminology

Category

Logically grouped Service Domain, data sources, and other items. Applying a category to an entity applies any values and attributes associated with the category to the entity.

Cloud Connector

Built-in Karbon Platform Services platform feature to publish data to a public cloud like Amazon Web Services or Google Cloud Platform. Requires a customer-owned secured public cloud account and configuration in the Karbon Platform Services management console.

Cloud Profile

Cloud provider service account (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is transmitted for further processing.

Container Registry Profile

Credentials and location of the Docker container registry hosted on a cloud provider service account. Can also be an existing cloud profile.

Data Pipeline

Path for data that includes input, processing, and output blocks. Enables you to process and transform captured data for further consumption or processing.

  • Input. An existing data stream or data source, identified according to a Category
  • Transformation. Code block such as a script defined in a Function to process or transform input data.
  • Output. A destination for data. Publish data to the cloud or cloud service (such as AWS Simple Queue Service) at the edge.
Data Service

Data service such as Kafka as a Service or Real-Time Stream Processing as a Service.

Data Source

A collection of sensors, gateways, or other input devices to associate with a node or Service Domain (previously known as an edge). Enables you to manage and monitor sensor integration and connectivity.

A minimal deployment consists of a node (also known as an edge device) and a data source. Data sources can be installed at any location (hospital, parking lot, retail store, oil rig, factory floor, and so on) where sensors or other input devices collect data. Typical sensors measure physical properties (temperature, pressure, audio, and so on) or stream data (for example, an IP-connected video camera).

Functions

Code used to perform one or more tasks. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.

Infrastructure Administrator

User who creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, data sources and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them.

Project

A collection of infrastructure (Service Domain, data source, project users) plus code and data (Kubernetes apps, data pipelines, functions, run-time environments), created by the infrastructure administrator for use by project users.

Project User

User who views and uses projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and applications. This user has project-specific CRUD permissions: the project user can create, read, update, and delete assigned applications, scripts, data pipelines, and other project users.

Real-time Data Stream
  1. Data Pipeline output endpoint type with a Service Domain as an existing destination.
  2. An existing data pipeline real-time data stream that is used as the input to another data pipeline.
Run-time Environment

A run-time environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Service Domain

Intelligent Platform as a Service (PaaS) providing the Karbon Platform Services Service Domain infrastructure (consisting of a full software stack combined with a hardware device). It enables customers to deploy intelligent applications (powered by AI/artificial intelligence) to process and transform data ingested by sensors. This data can be published selectively to public clouds.

Karbon Platform Services Management Console

Browser-based console where you can manage the Karbon Platform Services platform and related infrastructure, depending on your role (infrastructure administrator or project user).

Karbon Platform Services

Software as a Service (SaaS)/Platform as a Service (PaaS) based management platform and cloud IoT services. Includes the Karbon Platform Services management console.

License Manager Guide

Last updated: 2022-11-29

Welcome to License Manager

The Nutanix corporate web site includes up-to-date information about AOS software editions .

License Manager provides Licensing as a Service (LaaS) by integrating the Nutanix Support portal Licensing page with licensing management and agent software residing on Prism Element and Prism Central clusters. Unlike previous license schemes and work flows that were dependent on specific Nutanix software releases, License Manager is an independent software service residing in your cluster software. You can update it independently from Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on.

License Manager provides these features and benefits.

  • Simplified upgrade path. Upgradeable through Life Cycle Manager (LCM), which enables Nutanix to regularly introduce licensing features, functions, and fixes. Upgrades through LCM help ensure that your cluster is running the latest licensing agent logic.
  • Streamlined web console interface for Prism Element and Prism Central. After licensing unlicensed clusters and adding new software products through the Nutanix Support Portal, the web console provides a single 1-click control plane for most work flows. The web console only displays your licensed products and their current status.
  • Simplified 1-click work flow. Invisible integration with the Nutanix Support Portal helps simplify the most common licensing work flows and use cases.
  • Consistent experience. License Manager works for all licensable products that are available for Prism Element and Prism Central.

Licensing Management Options

License Manager provides more than one way to manage your licenses, depending on your preference and cluster deployment. Except for dark-site clusters, where AOS and Prism Central clusters are not connected to the Internet, these options require that your cluster is connected to the Internet.

Manage Licenses with License Manager and 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet.

See Enable 1-Click Licensing and Manage Licenses with 1-Click Licensing.

Nutanix recommends that you configure and enable 1-click licensing. 1-click licensing simplifies license and add-on management by integrating the licensing work flow into a single interface in the web console. Once you enable this feature, you can perform most licensing tasks from the web console. It is disabled by default.

Depending on the product license you purchase, you apply it through the Prism Element or Prism Central web console. See Prism Element Cluster Licensing or Prism Central License Categories.

Manage Licenses with License Manager Without Enabling 1-Click Licensing (3-Step Licensing)

This feature is not available to dark site clusters, which are not connected to the Internet.

See Manage Licenses with Update License (3-Step Licensing).

After you license your cluster, the web console Licensing page allows you to manage your license tier by upgrading, downgrading, or otherwise updating a license. If you have not enabled 1-click licensing and want to use 3-step licensing, the Licensing page includes an Update License button.

Manage Licenses For Dark-Site Clusters
For legacy licenses (that is, licenses other than Nutanix cloud platform package licenses), see Manage Licenses for Dark Site Clusters (Legacy License Key).

For cloud platform package licenses, see Manage Licenses for Dark Site Clusters (Cloud Platform License Key).

Use these procedures if your dark site cluster is not connected to the Internet (that is, your cluster is deployed at a dark site). To enter dark site cluster information at the Nutanix Support Portal, these procedures require you to use a web browser from a machine connected to the Internet. If you do not have Internet access, you cannot use these procedures.

Nutanix Support Portal Licenses Page

This topic assumes that you already have user name and password credentials for the Nutanix Support portal.

After you log on to the Nutanix Support portal at https://portal.nutanix.com, click the Licenses link on the portal home page or the hamburger menu available from any page. Licenses provides access to these licensing landing pages:

  • Summary . Displays a license summary page with panes (widgets) for each type of license that you have purchased. Each pane specifies the tier type, usage percentage (how many you have used out of your total availability), and the metric for each type of license. Click the Manage Licenses button (top of page) to get started with new licenses or administer existing licenses.
  • License Inventory . Includes the following two license category pages.
    • Active Licenses . Displays a table of active licenses for your account (default view for License Inventory ). Active licenses are purchased licenses that are usable currently. Entry information in the table varies by tab and includes license ID, tier, class, expiration date, and other information. If you are using tags, the portal displays tag names on the page, which you can use to Filter licenses. To download a comma-separated values file containing the details from the table, click Download all as CSV . You can also select one or more specific entries to download only the selected cluster license details. The table includes the following tabs that filter the list by status:
      • All : Lists all active licenses in your inventory.
      • Available : Lists licenses that are currently available to be provisioned to a cluster.
      • Applied : Lists licenses that are provisioned to an active cluster.
      • Upgrade : Lists purchased upgrade licenses. An upgrade license enables you to activate one or more features from a higher licensing tier without having to purchase completely new licenses. That is, you upgrade your existing unexpired lower tier license (for example, AOS Pro) with an upgrade license (AOS Ultimate). For each upgrade license, you must have an existing lower tier license to activate the upgrade license. As with future licenses, you cannot apply an upgrade license to a cluster until the Start Date has passed. See Upgrade Licenses.
      • Reserved (Nutanix Cloud Clusters only): Lists the licenses that you have partially or fully reserved for Nutanix Cloud Clusters. You can download the licensing details as a CSV file and manage the license reservations. You can also specify the capacity allocation for Nutanix Cloud Clusters (see Reserve Licenses for Nutanix Cloud Clusters).
      • Expiring : Lists licenses that will be expiring soon.
      • Future : Lists licenses that start at a future date and cannot be applied to clusters until the start date is in the past.
    • Inactive Licenses . Displays a table of inactive licenses for your account. Inactive licenses are purchased licenses that have either expired or were decommissioned. Entry information includes license ID, expiration date, days since expiry, quantity, tier, class, PO number, and tag.
  • Clusters . Includes one or more of the following category pages depending on your account type. To download a comma-separated values file containing the details from any of the pages, click Download all as CSV . Click the Cluster UUID link to see cluster license details.
    • Licensed Clusters . Displays a table of licensed clusters including the cluster name, cluster UUID, license tier, and license metric.
    • Cloud Clusters . Displays a table of licensed Nutanix Cloud Clusters including the cluster name, cluster UUID, billing mode, and status.
    • License with file . Displays a table of clusters licensed with a license summary file (LSF) including the cluster name, cluster UUID, license tier, and license metric.
    • License with key . Displays a table of (usually dark site) clusters licensed with a license key including the cluster name, cluster UUID, license key, and license tags.
    • Cluster Expirations . Displays a table that lists when a license will expire. Entries include the expiry date, days to expiration, cluster name, and cluster UUID. To filter the table, select the duration from the pull-down list in the View all clusters expiring in the next XX Days field.

Displaying License Features, Details, and Status

About this task

The most current information about your licenses is available from the Prism Element (PE) or Prism Central (PC) web console. It is also available at the Nutanix Support Portal License page. You can view information about license levels, expiration dates, and any free license inventory (that is, unassigned available licenses).

From a PE Controller VM (CVM) or PC VM, you can also display license details associated with your dark site license key.

Procedure

  1. In the web console, click the gear icon, and select Licensing .
  2. Click the license tile to show the feature list for that license type. For example:
    Figure. License Features

    Scroll down to see other features, then click Back to return to the licensing page.
  3. To show a list of all licenses and their details, click license details .
    Figure. License Details

    Scroll down to see all licenses, then click Back to return to the licensing page.

Showing License Details Associated with Your Dark Site License Key

About this task

Use the license_key_util command-line script to display license key information.

Procedure

  1. Using SSH, log on to the dark-site PE CVM or PC VM as the nutanix user.
  2. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key
    Licenses Allowed:
    AOS
    Add-Ons:
    Software_Encryption
    File
    For command help, run ~/ncc/bin/license_key_util help .

Upgrading License Manager

License Manager is a stand-alone component and is therefore updated independently of other Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on.

When upgrades are available, you can upgrade License Manager through Life Cycle Manager (LCM). LCM enables Nutanix to regularly introduce licensing features, functions, and fixes. Upgrades through LCM help ensure that your cluster is running the latest licensing agent logic.

Nutanix has also designed License Manager so that cluster node restarts are not required.

The Life Cycle Manager Guide describes how to perform an inventory and, if a new version is available, how to update a component like License Manager.

What Version Of License Manager Do I Have?

Log on to the Prism Element or Prism Central web console and do one of the following.

  1. Click the gear icon, then select Licensing in the Settings panel. The License Manager version is displayed below the gear icon. For example, LM.2021.7.2.
    Figure. License Manager Version

  2. From the Home menu, click LCM > Inventory . Select View By > Component . The list shows the version. For example, LM.2021.7.2.

About Licenses

Products That Do Not Require a License

Not all Nutanix software products require a license. Nutanix provides the following products and their features without requiring any licensing action:

  • Nutanix AHV
  • Karbon (enabled through Prism Central)
  • Prism Starter
  • Framework and utility software such as Life Cycle Manager (LCM), X-Ray, and Move
  • Foundation

Prism Element License Categories

See these Nutanix corporate websites for the latest information about software licensing and the latest available platforms.

  • Nutanix Software Editions and AOS Software Licensing Models
  • Nutanix Hardware Platforms and Dynamic Specsheet

Nutanix generally categorizes licenses as follows.

Software-only
  • Platforms are qualified by Nutanix (NX Series) or by approved third-party vendors, such as HPE, Cisco, Dell, and others.
  • No license is embedded or delivered as a default, as you purchase hardware and licenses separately.
  • You can purchase AOS Starter, Pro, or Ultimate, and Add-On licenses.
  • Your purchased license metric is based on capacity-based licensing (CBL), remote office/back office (ROBO), or virtual desktop infrastructure (VDI).
  • Licenses are transferable.
Third-party OEM
  • Includes OEM platforms qualified by Dell (Nutanix on Dell XC Series appliances and XC Core nodes), Lenovo (Lenovo HX Series Certified Nodes), and other vendors.
  • No license is embedded or delivered as a default, as you purchase hardware and licenses separately.
  • You can purchase AOS Starter, Pro, or Ultimate, and Add-On licenses.
  • Your purchased license metric is based on capacity-based licensing (CBL), remote office/back office (ROBO), or virtual desktop infrastructure (VDI).
  • Licenses are transferable.

Prism Element Cluster Licensing

Licenses you can apply through the Prism Element web console include:

AOS Starter, Pro, and Ultimate Licenses

See Prism Element License Categories.

Software-only and third-party OEM platforms require you to download and install a Starter license file that you have purchased.

For all platforms, the AOS Pro and Ultimate license levels require you to install the license on your cluster. You must also install the license when you upgrade a license or add nodes or clusters to your environment.

If you enable 1-click licensing as described in Enable 1-Click Licensing, you can apply the license through the Prism Element web console without needing to log on to the support portal.

If you do not want to enable this feature, you can manage licenses as described in Manage Licenses with Update License (3-Step Licensing). This licensing workflow is also appropriate for dark-site (non-Internet connected or restricted connection) deployments.

Note: Legacy-licensed life of device Nutanix NX and OEM AOS Appliance platforms include an embedded Starter license as part of your purchase. It does not have to be downloaded and installed or otherwise applied to your cluster.
AOS Remote and Branch Office (ROBO) and Virtual Desktop Infrastructure per User (VDI) Licenses

The Nutanix corporate web site includes the latest information about these license models.

AOS Capacity-Based Licenses

Capacity-based licensing is the Nutanix licensing model where you purchase and apply licenses based on cluster attributes. Cluster attributes include the number of raw CPU cores and total raw Flash drive capacity in tebibytes (TiBs). See AOS Capacity-Based Licensing.

Add-Ons

You can add individual features known as add-ons to your existing license feature set. When Nutanix makes add-ons available, you can add them to your existing license, depending on the license level and add-ons available for that license.

See Add-On Licenses.

Prism Central License Categories

As the control console for managing multiple clusters, Prism (also known as Prism Central) consists of three license tiers. If you enable 1-click licensing as described in Enable 1-Click Licensing, you can apply licenses through the Prism web console without needing to log on to the support portal.

If you do not want to enable this feature, you can manage licenses as described in Manage Licenses with Update License (3-Step Licensing). This licensing workflow is also appropriate for dark-site (non-Internet connected or restricted connection) deployments.

Licenses that are available or that you can apply through the Prism web console include:

Prism Starter

Default free Prism license, which enables you to register and manage multiple Prism Element clusters, upgrade Prism with 1-click through Life Cycle Manager (LCM), and monitor and troubleshoot managed clusters. You do not have to explicitly apply the Prism Starter license tier, which also never expires.

Prism Pro

Includes all Prism Starter features plus customizable dashboards, capacity planning and analysis tools, advanced search capabilities, low-code/no-code automation, and reporting.

Prism Ultimate

Prism Ultimate adds application discovery and monitoring, budgeting/chargeback and cost metering for resources, and a SQL Server monitoring content pack. Every Prism Central deployment includes a 90-day trial version of this license tier.

Add-Ons

You can add individual features known as add-ons to your existing license feature set. When Nutanix makes add-ons available, you can add them to your existing license, depending on the license level and add-ons available for that license.

See Add-On Licenses.

Cluster-based Licensing

Prism Central cluster-based licensing allows you to choose the level of data to collect from an individual Prism Element cluster managed by your Prism Central deployment. It also lets you choose the related features you can implement for a cluster depending on the applied Prism Central license tier. You can collect data even if metering types (capacity [cores] and nodes) are different for each node in a Prism Element cluster. See Prism Central Cluster-Based Licensing for more details.

AOS Capacity-Based Licensing

AOS capacity-based licensing is the Nutanix licensing model where you purchase and apply licenses based on cluster attributes. Cluster attributes include the number of raw CPU cores and raw total Flash drive capacity in tebibytes (TiBs). This licensing model helps ensure a consistent licensing experience across different platforms running Nutanix software.

Each license stores the currently licensed capacity (CPU cores/Flash TiBs). If the capacity of the cluster increases, the web console informs you that additional licensing is required.
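
For intuition only, the capacity check behaves like the following Python sketch. The function name and the sample numbers are assumptions for illustration; the actual metrics, quantities, and rounding rules come from your purchased licenses and license terms.

  # Hypothetical illustration of capacity-based license accounting.
  def within_licensed_capacity(licensed_cores, licensed_flash_tib,
                               cluster_cores, cluster_flash_tib):
      # The cluster is compliant only if both metrics fit within the licensed capacity.
      return (cluster_cores <= licensed_cores and
              cluster_flash_tib <= licensed_flash_tib)

  # Assumed 4-node cluster: 32 raw CPU cores and 7.68 TiB of raw flash per node.
  cores = 4 * 32        # 128 cores
  flash_tib = 4 * 7.68  # 30.72 TiB

  if not within_licensed_capacity(96, 30.72, cores, flash_tib):
      print("Additional licensing required")  # mirrors the web console warning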

Upgrade Licenses

An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses. For example, you can apply an Ultimate upgrade license to your lower tier AOS Pro license, which then lets you activate and use the available Ultimate features.

For each upgrade license, you must have an existing unexpired lower tier license to activate the upgrade license. As with future licenses, you cannot apply an upgrade license to a cluster until the Start Date has passed.

View Upgrade Licenses at the Nutanix Support Portal

When you log on to the Nutanix Support portal and go to Licenses > License Inventory > Upgrade Licenses , the table shows all purchased Upgrade Licenses. See also Nutanix Support Portal Licenses Page.
Figure. Nutanix Support Portal Upgrade Licenses Page
Click a License Id link to see the properties of the upgrade license.
Figure. Example Upgrade License Properties
  • License ID . Upgrade license ID.
  • Upgraded From License . License ID of the existing unexpired lower tier license.
  • Applied qty / Available qty . Number of licenses that you have applied to clusters and the available number of licenses.
  • Product . Software product type such as Acropolis (AOS), Prism Central, and so on.
  • Tier . The upgraded license tier (Pro or Ultimate, depending on your base lower tier).
  • Meter . How the license usage is metered. In this example, a capacity-based license (CBL) using vCPU cores as the metric.
  • Expires . Upgrade license expiration date. Your base license most likely has a different expiration date, expiring sooner than the upgrade license. The upgrade license typically extends your license term.

Nutanix Calm Licensing

For more information about how to enable Calm in Prism Central, see Enabling Calm in the Prism Central Guide and Calm Administration and Operations Guide.

The Nutanix Calm license for Prism Central enables you to manage the number of VMs that are provisioned or managed by Nutanix Calm. Calm licenses are required only for VMs managed by Calm, running in either the Nutanix Enterprise Cloud or public clouds.

The most current status information about your Calm licenses is available from the Prism Central web console. It is also available at the Nutanix Support Portal.

Once Calm is enabled, Nutanix provides a free 60-day trial period. It might take up to 30 minutes for the console to show that Calm is enabled and that your trial period has started.

Approximately 30 minutes after you enable Nutanix Calm, the Calm licensing card and licensing details show the trial expiration date, and the In Use status is displayed as Yes . See also License Warnings in the Web Console.

How The Nutanix Calm License VM Count is Calculated

The Nutanix Calm license VM count is a concurrent VM management limit and is linked to the application life cycle, from blueprint launch to application deletion. Consider the following:

  1. You launch a Nutanix Marketplace blueprint as an application named Example that includes three VMs. These three VMs are counted against your Calm license (that is, three VMs under Calm management).
  2. You later scale Example with two additional VMs. Now, five VMs are under Calm management.
  3. You launch another blueprint as an application named Example2 with four new VMs. Total current number of VMs now under Calm management is nine (total from Example and Example2 ).
  4. You delete Example . Total current number of VMs now under Calm management is four (from the existing active Example2 deployment).

Any VM that you created and manage independently of Nutanix Calm is not part of the Calm license count. For example, a Windows guest OS VM that you created through the Prism Element web console or with other tools (like a cloud-init script) is not part of a Calm blueprint and is not counted.

However, if you import an existing VM into an existing Calm blueprint, that VM counts toward the Calm license until you delete the application deployment. Even if you stop the VM while the application deployment is active, the VM is still considered under Calm management and remains part of the license count.
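
The walkthrough above can be restated as a short bookkeeping sketch. This is illustrative only and not a Calm API; the application names and VM counts are taken from the example.

  # Illustrative bookkeeping of the concurrent VM count; not a Calm API.
  apps = {}  # application name -> number of VMs under Calm management

  def launch(app, vm_count):
      apps[app] = vm_count

  def scale(app, extra_vms):
      apps[app] += extra_vms

  def delete(app):
      del apps[app]  # VMs leave the count only when the application is deleted

  def licensed_vms():
      # Stopped VMs in an active deployment still count toward the license.
      return sum(apps.values())

  launch("Example", 3)   # 3 VMs under Calm management
  scale("Example", 2)    # 5 VMs
  launch("Example2", 4)  # 9 VMs
  delete("Example")      # 4 VMs remain, from Example2
  print(licensed_vms())  # prints 4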

Endpoints Associated with a Runbook

For license usage, Calm also counts each endpoint that you associate with a runbook as one VM under Calm management. This license usage count type is effective as of the Nutanix Calm 3.1 release.

Add-On Licenses

Individual products known as add-ons can be added to your existing license feature set. For more information, see Nutanix Platform Software Options.

Before You License Your Cluster

Consider the following requirements and considerations before you attempt to manage your licenses.

Create a Cluster

Before attempting to install a license, ensure that you have created a cluster and logged on to the Prism Element or Prism Central web console at least once. You must install a license after creating a cluster for which you purchased AOS Pro or AOS Ultimate licenses. If you are using Nutanix Cloud Clusters, you can reserve licenses for those clusters (see Reserve Licenses for Nutanix Cloud Clusters).

Before You Destroy a Cluster, Reclaim Licenses By Unlicensing Your Cluster

In general, before destroying a cluster with AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms, you must reclaim your licenses by unlicensing your cluster. Unlicensing a cluster returns your purchased licenses to your inventory.

You do not need to reclaim licenses in the following cases.

  • When you use the dark site license key method to apply a key to your cluster, there is no requirement to reclaim licenses. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim.
  • You do not need to reclaim legacy-licensed life of device AOS Starter licenses for Nutanix and OEM AOS Appliance platforms. These licenses are embedded and are automatically applied whenever you create a cluster. You do need to reclaim AOS Pro and Ultimate licenses for these platforms.

Can I Mix License Tiers in a Cluster?

If a cluster includes nodes with different license tiers (for example, AOS Pro and AOS Ultimate), the cluster and each node in the cluster defaults to the feature set enabled by the lowest license tier. For example, if two nodes in the cluster have AOS Ultimate licenses and two nodes in the same cluster have AOS Pro licenses, all nodes effectively have AOS Pro licenses and access to that feature set only.

Attempts to access AOS Ultimate features in this case result in a license noncompliance warning in the web console.
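
A minimal sketch of this lowest-common-tier behavior follows; the tier names are the ones used above, and the numeric ranking is an assumption for illustration.

  # The cluster feature set is governed by the lowest license tier present.
  TIER_RANK = {"Starter": 0, "Pro": 1, "Ultimate": 2}

  def effective_tier(node_tiers):
      return min(node_tiers, key=TIER_RANK.get)

  # Two Ultimate nodes mixed with two Pro nodes: the cluster runs as Pro.
  print(effective_tier(["Ultimate", "Ultimate", "Pro", "Pro"]))  # prints Pro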

License a Cluster and Add-On

As 1-click licensing is disabled by default, use these Update License procedures to license an Internet-connected cluster. These procedures also apply if a cluster is unlicensed (newly deployed or previously unlicensed by you).

Figure. Update License

Licensing a Cluster (Internet Connected)

Use this procedure to license your cluster. It also applies if a cluster is unlicensed (newly deployed or previously unlicensed by you). After you complete this procedure, you can enable 1-click licensing.

Before you begin

For this procedure, keep two browser windows open:

  • One browser window for the Prism Element (PE) or Prism Central (PC) web console
  • One browser on an Internet-connected machine for the Nutanix Support Portal

Procedure

  1. Log on to your PE or PC web console, click the gear icon, then select Licensing in the Settings panel.
  2. Click Update License , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. First Time Licensing Page

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
  5. Click Add Licenses in the Acropolis tile (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The Acropolis tile now shows the license level and other available license tiles. For example, available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  6. [Optional] Before continuing, you can also license any add-ons as part of this step by clicking Add Licenses in the add-on tile.
    Depending on the add-on, you license it with the same steps as the Acropolis workflow above (for example, Data Encryption). For add-ons that are based on capacity, like Nutanix Files, you might have to specify the disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes.

    If you choose not to license any add-ons at this time, you can license them later. See Licensing An Add-On (Internet Connected).

  7. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  8. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

What to do next

After you complete this procedure, you can now enable 1-click licensing. See Enable 1-Click Licensing.

Licensing An Add-On (Internet Connected)

Use this procedure after you have licensed your cluster and you did not license add-on features at the same time, or if you later purchased add-on features after you initially licensed your cluster. This procedure describes how to do this on a cluster connected to the Internet.

About this task

  • If you did not license any add-ons as part of Licensing a Cluster (Internet Connected), use this procedure.
  • The Nutanix corporate web site includes up-to-date information about add-ons like Nutanix Files, Calm, and so on.

Before you begin

  • Ensure that you have an active My Nutanix account with valid credentials.
  • Make sure you have licensed your cluster first. See Licensing a Cluster (Internet Connected) or Licensing a Cluster (Dark Site).

Procedure

  1. Log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. Download the cluster summary file.
    1. 1-click licensing enabled: Click License with Portal , then click Download to save a cluster summary file to your local machine.
    2. 1-click licensing not enabled or disabled: Click Update License , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    Available add-ons in your inventory, like Nutanix Files, are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Nutanix Calm add-on in your inventory.
  5. Click Add Licenses in the add-on tile. For example, the Data Encryption tile.
    1. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    2. Click Save .
      The Data Encryption tile now shows two links: Edit License , where you can make any changes, and Unselect License , which returns the add-on to an unlicensed state.
  6. Continue licensing any add-ons in your inventory by clicking Add Licenses in the add-on tile.
    Depending on the add-on, you license it with the same steps as the Data Encryption example above. Unrestricted add-ons do not have the Cluster Deployment Location? selection.

    For add-ons that are based on capacity or other metrics, like Nutanix Files, specify the cluster disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes, as described in Using Rebalance to Adjust Your Licenses.

  7. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  8. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

    Below each license tile is the 1-Click Licensing options menu, where you can select more licensing actions: Rebalance , Extend , or Unlicense . See 1-Click Licensing Actions.

Enable 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet. To enable 1-click licensing, you must create an API key and download an SSL key from your My Nutanix dashboard. 1-click licensing simplifies licensing by integrating the licensing workflow into a single interface in the web console. As this feature is disabled by default, you need to enable and configure it first.

Note: Your network must allow outbound traffic to portal.nutanix.com:443 to use this feature.
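
If you are unsure whether that requirement is met, a quick connectivity check such as the following sketch (standard Python, run from a machine on the same network; the host and port are the ones in the note above) can confirm that an outbound TLS connection succeeds:

  import socket
  import ssl

  # Confirm outbound HTTPS connectivity to the licensing portal.
  host, port = "portal.nutanix.com", 443
  ctx = ssl.create_default_context()
  with socket.create_connection((host, port), timeout=10) as sock:
      with ctx.wrap_socket(sock, server_hostname=host) as tls:
          print("Connected to", host, "using", tls.version())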

If your cluster is unlicensed, make sure you have licensed it first, then enable 1-click licensing. See Licensing a Cluster (Internet Connected).

1-click licensing simplifies licensing by integrating the licensing workflow into a single control plane in the Prism Element (PE) and Prism Central (PC) web consoles. Once you enable 1-click licensing, you can perform most licensing tasks from the web console Licensing settings panel without needing to explicitly log on to the Nutanix Support Portal.

1-click license management requires you to create an API key and download an SSL key from your My Nutanix dashboard to secure communications between the Nutanix Support Portal and the PE or PC web console.

With the API and SSL keys associated with your My Nutanix account, the web console can communicate with the Nutanix Support Portal to detect any changes or updates to your cluster license status.

  • The API key is initially bound to the user that creates the key on the Nutanix Support Portal. All subsequent licensing operations are done on behalf of the user who created the key and any operations written to an audit log include information identifying this user. That is, licensing task X was performed by user Y (assuming user Y created the key).
  • The API key is also bound to the Prism user that logs on to the web console and then registers the key. For example, if user admin registers the key, any licensing task performed by admin is performed on behalf of user Y (who created the key on the Support Portal). A Prism user (such as admin) must have access to the API key to use this feature. You can use Role Mapping or Lightweight Directory Access Protocol (LDAP) to manage key access or to provide an audit trail if more than one admin user manages licensing.

After enabling 1-click licensing, you can also disable it.

Enabling 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet. To enable 1-click licensing, first create a Licensing API key and download an SSL public key, then register both keys through the Prism Element or Prism Central web console. You might need to turn off any pop-up blockers in your browser to display dialog boxes.

Before you begin

  • Ensure that you have an active My Nutanix account with valid credentials.
  • Make sure you have licensed your cluster first. See Licensing a Cluster (Internet Connected) or Licensing a Cluster (Dark Site).

Procedure

  1. Create an API key and download the SSL key as described in Creating an API Key.
    Make sure you select the Licensing scope when creating the API key. If you are currently logged on to the Nutanix Support Portal, you can access the API Key page by selecting API Keys from your profile. See also API Key Management.
  2. When you create an API key, make sure you give it a unique name.
  3. Register the keys through the Prism Element or Prism Central web console.
    1. Log on to the web console, click the gear icon, then click Licensing .
    2. Click Enable 1-Click Licensing .
    3. Paste the API key in the API KEY field in the dialog box.
    4. Add the SSL public key by clicking Choose File and browsing to the public key file that you downloaded when you created the key.
    5. Click Save .
    Figure. Enable 1-click licensing

    Messages display showing that 1-click licensing is enabled. You can also check related task messages on the Tasks page in the web console.

    The Licensing page now shows two buttons: Disable 1-Click Licensing , which indicates 1-click licensing is enabled, and License With Portal , which lets you upgrade license tiers and add-ons by using the manual Nutanix Support Portal 3-step licensing workflow.

    Below each license tile is the 1-Click Licensing options menu, where you can select more licensing actions: Rebalance , Extend , or Unlicense . See 1-Click Licensing Actions.

    After enabling 1-click licensing, you can also disable it.

Disabling 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet. This procedure disables 1-click licensing through the web console. For security reasons, you cannot reuse your previously created API key after disabling 1-click licensing.

About this task

You might need to disable the 1-click licensing connection associated with the API and public keys. If you disable the connection as described here, you can enable it again by obtaining a new API key as described in Creating an API Key.

Procedure

  1. Log on to the web console, click the gear icon, then click Licensing .
  2. Click Disable 1-Click Licensing .
  3. Click Disable , then click Yes to confirm.
    A status message and dialog box are displayed.
  4. To close the dialog box, click Cancel .
    The Licensing page now shows two buttons: Enable 1-Click Licensing , which indicates 1-click licensing is disabled, and License With Portal , which lets you upgrade license tiers and license add-ons by using the manual Nutanix Support Portal 3-step licensing workflow.

Manage Licenses with 1-Click Licensing

After you license your cluster, 1-click licensing helps simplify licensing by integrating the licensing workflow into a single interface in the web console. As this feature is disabled by default, enable and configure it first.

This feature is not available to dark site clusters, which are not connected to the Internet.

Once you configure this feature, you can perform most tasks from the Prism web console without needing to explicitly log on to the Nutanix Support Portal.

When you open Licensing from the Prism Element or Prism Central web console for a licensed cluster, each license tile includes a drop-down menu so you can manage your licenses without leaving the web console. 1-click licensing communicates with the Nutanix Support Portal to detect any changes or updates to your cluster license status.

If you want to change your license tier by upgrading or downgrading your license, use the procedures in Upgrading or Downgrading (Changing Your License Tier).

Before You Begin
Before you begin, make sure your cluster and any add-ons are licensed.
  • See Before You License Your Cluster, License a Cluster and Add-On, and Enable 1-Click Licensing.
  • Your network must allow outbound traffic to portal.nutanix.com:443 to use this feature.

1-Click Licensing Actions

On the Licensing page in the Prism Element or Prism Central web console for a licensed cluster, each license tile includes a drop-down menu so you can manage your licenses directly from the web console.

Rebalance

If you have made changes to your cluster, choose Rebalance to help ensure your available licenses (including licensed add-ons) are applied correctly. Use Rebalance if you:

  • Have added a node and have an available license in your account.
  • Have removed one or more nodes from your cluster. Rebalancing reclaims the now-unused licenses and returns them to your license inventory.
  • Have moved nodes from one cluster to another.
  • Have reduced capacity in your cluster (number of nodes, Flash storage capacity, number of licenses, and so on).

Extend

Choose Extend to extend the term of expiring term-based licenses if you have purchased one or more extensions.

If your license has expired, you have to license the cluster as if it were unlicensed. See License a Cluster and Add-On.

Unlicense

Choose Unlicense to unlicense a cluster (including licensed add-ons) in one click. This action removes the licenses from a cluster and returns them to your license inventory. This action is sometimes referred to as reclaiming licenses.

  • Before destroying a cluster, you must unlicense your cluster to reclaim and return your licenses to your inventory.
  • You do not need to reclaim legacy-licensed life of device AOS Starter licenses for Nutanix and OEM AOS Appliance platforms. These licenses are embedded and are automatically applied whenever you create a cluster. You do need to reclaim AOS Pro and Ultimate licenses for these platforms.
  • You do need to reclaim AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms.

Using Rebalance to Adjust Your Licenses

Before you begin

If you have destroyed the cluster and did not reclaim the existing licenses by unlicensing the cluster first, contact Nutanix Support to help reclaim the licenses. For more information about how to destroy a cluster, see Destroying a Cluster in the Acropolis Advanced Administration Guide.

About this task

  • 1-Click Licensing Actions describes when to use Rebalance .
  • If License Manager does not detect any changes to your cluster after you click 1-Click Licensing > Rebalance , any subsequent dialog action buttons are grayed out or unavailable.

Procedure

  1. Log on to your Prism Element or Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. To rebalance Acropolis or Prism Central licenses, choose 1-Click Licensing > Rebalance from the Acropolis or Prism Central tile.
    1. If you have changed physical capacity in your cluster (for example, added or removed nodes or changed Flash storage capacity), the rebalance task begins.
    2. If your license type is based on the number of users (such as a VDI license) or VMs (such as a ROBO license), you can adjust the number of licenses allocated. Increase or decrease the number in the dialog box and click 1-Click Licensing > Rebalance .
    License Manager contacts the Nutanix Support portal and rebalances your cluster. The web console Licensing page is unavailable until the operation completes.
  3. To rebalance add-on licenses, choose 1-Click Licensing > Rebalance from the add-on tile. For example, Files for AOS.
    1. For add-ons that are based on capacity or other metrics, like Nutanix Files, specify the capacity to allocate. For example, to change the cluster disk capacity allocated to Files, increase or decrease the number in the dialog box and click 1-Click Licensing > Rebalance .
    License Manager contacts the Nutanix Support portal and rebalances your cluster. The web console Licensing page is unavailable until the operation completes.

Using Extend to Update Your License Term

About this task

Choose Extend to extend the term of expiring term-based licenses if you have purchased one or more extensions. 1-Click Licensing Actions describes Extend and other actions.

If your licenses have expired, you must license the cluster again as described in License a Cluster and Add-On.

Procedure

  1. Log on to your Prism Element or Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. To update AOS licenses, choose 1-Click Licensing > Extend from the Acropolis tile.
  3. To update add-on licenses, choose 1-Click Licensing > Extend from the add-on tile. For example, Files for AOS.
    License Manager contacts the Nutanix Support portal. The web console Licensing page is unavailable until the operation completes. Note that the Valid Until date text under the extended tile changes to the new expiration date.

Using Unlicense to Reclaim and Return Licenses to Inventory

About this task

Choose Unlicense to unlicense your cluster (sometimes referred to as reclaiming licenses). 1-Click Licensing Actions describes when to choose Unlicense and other actions.

Perform this task for each cluster that you want to unlicense. If you unlicense Prism Central (PC), the default Prism Central Starter license type is applied. Registered clusters other than the PC cluster remain licensed.

Procedure

  1. Log on to your Prism Element or PC web console, click the gear icon, then select Licensing in the Settings panel.
  2. To reclaim AOS or PC licenses and any add-on licenses (such as Nutanix Files, Data Encryption, and so on), choose 1-Click Licensing > Unlicense from the Acropolis or PC tile.
  3. To reclaim add-on licenses only, choose 1-Click Licensing > Unlicense from the add-on tile. For example, Files for AOS.
    License Manager contacts the Nutanix Support portal. The web console Licensing page is unavailable until the operation completes. Note that the tile text changes to No License or Starter , depending on your platform.

Applying An Upgrade License

Use this procedure when you have purchased an upgrade license. An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses.

Before you begin

This procedure uses the 3-Step Licensing method. You cannot apply this license type automatically with 1-click licensing.

Procedure

  1. Log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, do one of the following steps.
    1. If you have enabled 1-click licensing, click License with Portal , then click Download .
    2. If you have not enabled 1-click licensing, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page