Migration Guide

AOS 6.5

Product Release Date: 2022-07-25

Last updated: 2022-07-25

This Document Has Been Removed

Nutanix Move is the Nutanix-recommended tool for migrating a VM. Please see the Move documentation at the Nutanix Support portal.

vSphere Administration Guide for Acropolis

AOS 6.5

Product Release Date: 2022-07-25

Last updated: 2022-12-08

Overview

Nutanix Enterprise Cloud delivers a resilient, web-scale hyperconverged infrastructure (HCI) solution built for supporting your virtual and hybrid cloud environments. The Nutanix architecture runs a storage controller called the Nutanix Controller VM (CVM) on every Nutanix node in a cluster to form a highly distributed, shared-nothing infrastructure.

All CVMs work together to aggregate storage resources into a single global pool that guest VMs running on the Nutanix nodes can consume. The Nutanix Distributed Storage Fabric manages storage resources to preserve data and system integrity if there is node, disk, application, or hypervisor software failure in a cluster. Nutanix storage also enables data protection and High Availability that keep critical data and guest VMs protected.

This guide describes the procedures and settings required to deploy a Nutanix cluster running in the VMware vSphere environment. For more information about the VMware terms used in this document, see the VMware Documentation.

Hardware Configuration

See the Field Installation Guide for information about how to deploy and create a Nutanix cluster running ESXi for your hardware. After you create the Nutanix cluster by using Foundation, use this guide to perform the management tasks.

Limitations

For information about ESXi configuration limitations, see the Nutanix Configuration Maximums webpage.

Nutanix Software Configuration

The Nutanix Distributed Storage Fabric aggregates local SSD and HDD storage resources into a single global unit called a storage pool. In this storage pool, you can create several storage containers, which the system presents to the hypervisor and uses to host VMs. You can apply a different set of compression, deduplication, and replication factor policies to each storage container.

Storage Pools

A storage pool on Nutanix is a group of physical disks from one or more tiers. Nutanix recommends configuring only one storage pool for each Nutanix cluster.

Replication factor
Nutanix supports a replication factor of 2 or 3. Setting the replication factor to 3 instead of 2 adds an extra data protection layer at the cost of more storage space for the copy. For use cases where applications provide their own data protection or high availability, you can set a replication factor of 1 on a storage container.
Containers
The Nutanix storage fabric presents usable storage to the vSphere environment as an NFS datastore. The replication factor of a storage container determines its usable capacity and fault tolerance: replication factor 2 tolerates one component failure, and replication factor 3 tolerates two component failures. When you create a Nutanix cluster, three storage containers are created by default. Nutanix recommends that you do not delete these storage containers. You can rename the storage container named default - xxx and use it as the main storage container for hosting VM data.
Note: The available capacity and the vSphere maximum of 2,048 VMs limit the number of VMs a datastore can host.

Capacity Optimization

  • Nutanix recommends enabling inline compression unless otherwise advised.
  • Nutanix recommends disabling deduplication for all workloads except VDI.

    For mixed-workload Nutanix clusters, create a separate storage container for VDI workloads and enable deduplication on that storage container.

Nutanix CVM Settings

CPU
Keep the default settings as configured by Foundation during the hardware configuration.

Change the CPU settings only if Nutanix Support recommends it.

Memory
Most workloads use less than 32 GB of RAM per CVM. However, for mission-critical workloads with large working sets, Nutanix recommends more than 32 GB of CVM RAM.
Tip: You can increase CVM RAM up to 64 GB using the Prism one-click memory upgrade procedure. For more information, see Increasing the Controller VM Memory Size in the Prism Web Console Guide.
Networking
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
Caution: Do not use jumbo frames for the Nutanix CVM.
Caution: Do not change the vSwitchNutanix or the internal vmk (VMkernel) interface.

Nutanix Cluster Settings

Nutanix recommends that you do the following.

  • Map a Nutanix cluster to only one vCenter Server.

    Due to the way the Nutanix architecture distributes data, there is limited support for mapping a Nutanix cluster to multiple vCenter Servers. Some Nutanix products (Move, Era, Calm, Files, Prism Central) and features (such as disaster recovery) are unstable when a Nutanix cluster maps to multiple vCenter Servers.

  • Configure a Nutanix cluster with replication factor 2 or replication factor 3.
    Tip: Nutanix recommends using replication factor 3 for clusters with more than 16 nodes. Replication factor 3 requires at least five nodes so that the data remains online even if two nodes fail concurrently.
  • Use the advertised capacity feature to ensure that the resiliency capacity is equivalent to one node of usable storage for replication factor 2 or two nodes for replication factor 3.

    The advertised capacity of a storage container must equal the total usable cluster space minus the capacity of either one or two nodes. For example, in a 4-node cluster with 20 TB usable space per node with replication factor 2, the advertised capacity of the storage container must be 60 TB. That spares 20 TB capacity to sustain and rebuild one node for self-healing. Similarly, in a 5-node cluster with 20 TB usable space per node with replication factor 3, advertised capacity of the storage container must be 60 TB. That spares 40 TB capacity to sustain and rebuild two nodes for self-healing.

  • Use the default storage container and mount it on all the ESXi hosts in the Nutanix cluster.

    You can also create a single new storage container. If you create multiple storage containers, ensure that all of them follow the advertised capacity recommendation.

  • Configure the vSphere cluster according to settings listed in vSphere Cluster Settings Checklist.
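The advertised-capacity rule above reduces to simple arithmetic, as this sketch shows (the function name and the node and capacity values are illustrative, not taken from a live cluster):

```shell
# Advertised capacity = usable cluster space minus the capacity of the
# nodes reserved for self-healing (one node for RF2, two nodes for RF3).
advertised_capacity() {
  nodes=$1; tb_per_node=$2; rf=$3
  reserved=$(( rf - 1 ))
  echo $(( (nodes - reserved) * tb_per_node ))
}

advertised_capacity 4 20 2   # 4-node RF2 cluster -> prints 60 (TB)
advertised_capacity 5 20 3   # 5-node RF3 cluster -> prints 60 (TB)
```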

Software Acceptance Level

Foundation sets the software acceptance level of an ESXi image to CommunitySupported by default. If you must upgrade the software acceptance level, run the following command to upgrade it to the maximum acceptance level of PartnerSupported.

root@esxi# esxcli software acceptance set --level=PartnerSupported
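To confirm the result, you can read the acceptance level back with the companion `get` command (shown as a sketch; run it on the ESXi host):

```shell
root@esxi# esxcli software acceptance get
```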

Scratch Partition Settings

ESXi uses the scratch partition (/scratch) to store logs and core dumps, such as when the host encounters a purple screen of death (PSOD). Foundation automatically creates this partition on the SATA DOM or M.2 device with the ESXi installation. Moving the scratch partition to any location other than the SATA DOM or M.2 device can cause issues with LCM, 1-click hypervisor updates, and the general stability of the Nutanix node.
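If you need to confirm where the scratch partition currently points before troubleshooting, you can read the ScratchConfig advanced option on the host (a sketch; the option path assumes a standard ESXi build):

```shell
root@esxi# esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation
```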

vSphere Networking

vSphere on the Nutanix platform enables you to dynamically configure, balance, or share logical networking components across various traffic types. To ensure availability, scalability, performance, management, and security of your infrastructure, configure virtual networking when designing a network solution for Nutanix clusters.

You can configure networks according to your requirements. For detailed information about vSphere virtual networking and different networking strategies, refer to the Nutanix vSphere Storage Solution Document and the VMware Documentation. This chapter describes the configuration elements required to run VMware vSphere on the Nutanix Enterprise infrastructure.

Virtual Networking Configuration Options

vSphere on Nutanix supports the following types of virtual switches.

vSphere Standard Switch (vSwitch)
vSphere Standard Switch (vSS) with Nutanix vSwitch is the default configuration for Nutanix deployments and suits most use cases. A vSwitch detects which VMs are connected to each virtual port and uses that information to forward traffic to the correct VMs. You can connect a vSwitch to physical switches by using physical Ethernet adapters (also referred to as uplink adapters) to join virtual networks with physical networks. This type of connection is similar to connecting physical switches together to create a larger network.
Tip: A vSwitch works like a physical Ethernet switch.
vSphere Distributed Switch (vDS)

Nutanix recommends vSphere Distributed Switch (vDS) coupled with network I/O control (NIOC version 2) and load-based teaming. This combination provides simplicity, ensures traffic prioritization if there is contention, and reduces operational management overhead. A vDS acts as a single virtual switch across all associated hosts in a datacenter. It enables VMs to maintain a consistent network configuration as they migrate across multiple hosts. For more information about vDS, see NSX-T Support on Nutanix Platform.

Nutanix recommends setting all vNICs as active on the port group and dvPortGroup unless otherwise specified. The following table lists the naming convention, port groups, and the corresponding VLAN Nutanix uses for various traffic types.

Table 1. Port Groups and Corresponding VLAN

Port group        VLAN  Description
MGMT_10           10    VMkernel interface for host management traffic
VMOT_20           20    VMkernel interface for vMotion traffic
FT_30             30    Fault tolerance traffic
VM_40             40    VM traffic
VM_50             50    VM traffic
NTNX_10           10    Nutanix CVM to CVM cluster communication traffic (public interface)
Svm-iscsi-pg      N/A   Nutanix CVM to internal host traffic
VMK-svm-iscsi-pg  N/A   VMkernel port for CVM to hypervisor communication (internal)

All Nutanix configurations use an internal-only vSwitch for the NFS communication between the ESXi host and the Nutanix CVM. This vSwitch remains unmodified regardless of the virtual networking configuration for ESXi management, VM traffic, vMotion, and so on.

Caution: Do not modify the internal-only vSwitch (vSwitchNutanix). vSwitchNutanix facilitates internal communication between the CVM and the hypervisor.
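To verify that the internal vSwitch is present without modifying it, you can list the standard vSwitches on the host; vSwitchNutanix should appear with no uplinks (a sketch using the standard esxcli command):

```shell
root@esxi# esxcli network vswitch standard list
```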

VMware NSX Support

Running VMware NSX on Nutanix infrastructure ensures that VMs always have access to fast local storage and compute, along with consistent network addressing and security, without the burden of physical infrastructure constraints. The supported scenario connects the Nutanix CVM to a traditional VLAN network, with guest VMs inside NSX virtual networks. For more information, see the Nutanix vSphere Storage Solution Document.

NSX-T Support on Nutanix Platform

The Nutanix platform relies on communication with vCenter to work with networks backed by vSphere standard switch (vSS) or vSphere Distributed Switch (vDS). With the introduction of the NSX-T management plane, which enables network management agnostic to the compute manager (vCenter), network configuration information is available through the NSX-T Manager. Nutanix infrastructure workflows (AOS upgrades, LCM upgrades, and so on) are modified to collect the network configuration information from the NSX-T Manager.

Figure. Nutanix and the NSX-T Workflow Overview

The Nutanix platform supports the following in the NSX-T configuration.

  • ESXi hypervisor only.
  • vSS and vDS virtual switch configurations.
  • Nutanix CVM connection to VLAN backed NSX-T segments only.
  • The NSX-T Manager credentials registration using the CLI.

The Nutanix platform does not support the following in the NSX-T configuration.

  • Network segmentation with N-VDS.
  • Nutanix CVM connection to overlay NSX-T segments.
  • Link Aggregation/LACP for the uplinks backing the NVDS host switch connecting Nutanix CVMs.
  • The NSX-T Manager credentials registration through Prism.

NSX-T Segments

Nutanix supports NSX-T logical segments co-existing on Nutanix clusters running the ESXi hypervisor. All infrastructure workflows, including Foundation, 1-click upgrades, and AOS upgrades, are validated to work in NSX-T configurations where the CVM is backed by an NSX-T VLAN logical segment.

NSX-T has the following types of segments.

VLAN backed
VLAN backed segments operate similarly to a standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.
Overlay backed
Overlay backed segments use the Geneve overlay to create a logical L2 network over an L3 network. Encapsulation occurs at the transport layer (the NVDS module on the host).

Multicast Filtering

Enabling multicast snooping on a vDS with a Nutanix CVM attached affects the CVM's ability to discover and add new nodes to the Nutanix cluster (the cluster expand option in Prism and the Nutanix CLI).

Creating Segment for NVDS

This procedure provides details about creating a segment for nVDS.

About this task

To create a VLAN backed segment that supports the CVM port group, perform the following steps.

Procedure

  1. Log on to vCenter Server and go to the NSX-T Manager.
  2. Click Networking , and go to Connectivity > Segments in the left pane.
  3. Click ADD SEGMENT under the SEGMENTS tab on the right pane and specify the following information.
    Figure. Create New Segment

    1. Segment Name : Enter a name for the segment.
    2. Transport Zone : Select the VLAN-based transport zone.
      This transport zone is associated with the NVDS switch when you configure NSX.
    3. VLAN : Enter 0 for the native VLAN.
  4. Click Save to create a segment for NVDS.
  5. Click Yes when the system prompts to continue with configuring the segment.
    The newly created segment appears below the prompt.
    Figure. New Segment Created

Creating NVDS Switch on the Host by Using NSX-T Manager

This procedure provides instructions to create an NVDS switch on the ESXi host. The management and CVM external interfaces of the host are migrated to the NVDS switch.

About this task

To create an NVDS switch and configure the NSX-T Manager, do the following.

Procedure

  1. Log on to NSX-T Manager.
  2. Click System , and go to Configuration > Fabric > Nodes in the left pane.
    Figure. Add New Node

  3. Click ADD HOST NODE under the HOST TRANSPORT NODES in the right pane.
    1. Specify the following information in the Host Details dialog box.
      Figure. Add Host Details

        1. Name : Enter an identifiable ESXi host name.
        2. Host IP : Enter the IP address of the ESXi host.
        3. Username : Enter the username used to log on to the ESXi host.
        4. Password : Enter the password used to log on to the ESXi host.
        5. Click Next to move to the NSX configuration.
    2. Specify the following information in the Configure NSX dialog box.
      Figure. Configure NSX

        1. Mode : Select the Standard option.

          Nutanix recommends the Standard mode only.

        2. Name : Displays the default name of the virtual switch that appears on the host. You can edit the default name and provide an identifiable name as per your configuration requirements.
        3. Transport Zone : Select the transport zone that you selected in Creating Segment for NVDS.

          These segments operate similarly to a standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.

        4. Uplink Profile : Select an uplink profile for the new nVDS switch.

          This selected uplink profile represents the NICs connected to the host. For more information about uplink profiles, see the VMware Documentation .

        5. LLDP Profile : Select the LLDP profile for the new nVDS switch.

          For more information about LLDP profiles, see the VMware Documentation .

        6. Teaming Policy Uplink Mapping : Map the uplinks with the physical NICs of the ESXi host.
          Note: To verify the active physical NICs on the host, select ESXi host > Configure > Networking > Physical Adapters .

          Click the Edit icon and enter the name of the active physical NIC in the ESXi host selected for migration to the NVDS.

          Note: Always migrate one physical NIC at a time to avoid connectivity failure with the ESXi host.
        7. PNIC only Migration : Set the toggle to Yes if there are no VMkernel adapters (vmks) associated with the PNIC selected for migration from the vSS switch to the NVDS switch.
        8. Network Mapping for Install : Click Add Mapping to migrate the VMkernels (vmks) to the NVDS switch.
        9. Network Mapping for Uninstall : Add a mapping to revert the migration of the VMkernels when NSX is uninstalled.
  4. Click Finish to add the ESXi host to the NVDS switch.
    The newly created nVDS switch appears on the ESXi host.
    Figure. NVDS Switch Created

Registering NSX-T Manager with Nutanix

After migrating the external interfaces of the host and the CVM to the NVDS switch, you must inform Genesis that the cluster is registered with the NSX-T Manager. This registration helps Genesis communicate with the NSX-T Manager and avoid failures during LCM, 1-click, and AOS upgrades.

About this task

This procedure demonstrates the AOS upgrade error encountered when the NSX-T Manager is not registered with Nutanix, and how to register the NSX-T Manager with Nutanix to resolve the issue.

To register the NSX-T Manager with Nutanix, do the following.

Procedure

  1. Log on to the Prism Element web console.
  2. Select VM > Settings > Upgrade Software > Upgrade > Pre-upgrade to upgrade AOS on the host.
    Figure. Upgrade AOS

  3. The upgrade process throws an error if the NSX-T Manager is not registered with Nutanix.
    Figure. AOS Upgrade Error for Unregistered NSX-T

    The AOS upgrade determines whether NSX-T networks back the CVM and then attempts to get the VLAN information of those networks. To get the VLAN information for the CVM, the NSX-T Manager information must be configured in the Nutanix cluster.

  4. To fix this upgrade issue, log on to a CVM in the cluster using SSH.
  5. Access the cluster directory.
    nutanix@cvm$ cd ~/cluster/bin
  6. Verify if the NSX-T Manager was registered with the CVM earlier.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -l

    If the NSX-T Manager was not registered earlier, you get the following message.

    No NSX-T manager configured in the cluster
  7. Register the NSX-T Manager with the CVM if it was not registered earlier. Specify the credentials of the NSX-T Manager to the CVM.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -a
    IP address: 10.10.10.10
    Username: admin
    Password: 
    /usr/local/nutanix/cluster/lib/py/requests-2.12.0-py2.7.egg/requests/packages/urllib3/connectionpool.py:843:
     InsecureRequestWarning: Unverified HTTPS request is made. Adding certificate verification is strongly advised. 
    See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    Successfully persisted NSX-T manager information
  8. Verify the registration of NSX-T Manager with the CVM.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -l

    If there are no errors, the system displays a similar output.

    IP address: 10.10.10.10
    Username: admin
  9. In the Prism Element web console, click Pre-upgrade to continue the AOS upgrade procedure.

    The AOS upgrade completes successfully.

Networking Components

IP Addresses

All CVMs and ESXi hosts have two network interfaces.
Note: An empty interface eth2 is created on CVM during deployment by Foundation. The eth2 interface is used for backplane when backplane traffic isolation (Network Segmentation) is enabled in the cluster. For more information about backplane interface and traffic segmentation, see Security Guide.
Interface        IP address     vSwitch
ESXi host vmk0   User-defined   vSwitch0
CVM eth0         User-defined   vSwitch0
ESXi host vmk1   192.168.5.1    vSwitchNutanix
CVM eth1         192.168.5.2    vSwitchNutanix
CVM eth1:1       192.168.5.254  vSwitchNutanix
CVM eth2         User-defined   vSwitch0
Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet 192.168.5.0/24.
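The subnet restriction in the note above can be screened with a trivial check, sketched here (the helper name and sample addresses are hypothetical; a simple prefix match suffices because the reserved subnet is exactly 192.168.5.0/24):

```shell
# Hypothetical helper: flag addresses that overlap the reserved internal subnet.
in_internal_subnet() {
  case "$1" in
    192.168.5.*) return 0 ;;  # overlaps 192.168.5.0/24
    *)           return 1 ;;
  esac
}

in_internal_subnet 10.1.1.50   && echo conflict || echo ok   # prints ok
in_internal_subnet 192.168.5.7 && echo conflict || echo ok   # prints conflict
```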

vSwitches

A Nutanix node is configured with the following two vSwitches.

  • vSwitchNutanix

    Local communications between the CVM and the ESXi host use vSwitchNutanix. vSwitchNutanix has no uplinks.

    Caution: To manage network traffic between VMs with greater control, create more port groups on vSwitch0. Do not modify vSwitchNutanix.
    Figure. vSwitchNutanix Configuration

  • vSwitch0

    All other external communications, such as CVM traffic to a different host (in case of HA redirection), use vSwitch0, which has uplinks to the physical network interfaces. Since network segmentation is disabled by default, the backplane traffic uses vSwitch0.

    vSwitch0 has the following two networks.

    • Management Network

      HA, vMotion, and vCenter communications use the Management Network.

    • VM Network

      All VMs use the VM Network.

    Caution:
    • The Nutanix CVM uses the standard Ethernet maximum transmission unit (MTU) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
    • You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of ESXi hosts and guest VMs if the applications on your guest VMs require them. If you choose to use jumbo frames on hypervisor hosts, ensure that you enable them end-to-end in the desired network and consider both the physical and virtual network infrastructure impacted by the change.
    Figure. vSwitch0 Configuration

Configuring Host Networking (Management Network)

After you create the Nutanix cluster by using Foundation, configure networking for your ESXi hosts.

About this task

Figure. Configure Management Network

Procedure

  1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
  2. Press the down arrow key until Configure Management Network highlights and then press Enter .
  3. Select Network Adapters and then press Enter .
  4. Ensure that the connected network adapters are selected.
    If they are not selected, press the Space key to select them and press Enter to return to the previous screen.
    Figure. Network Adapters
  5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter . In the dialog box, provide the VLAN ID and press Enter .
    Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
  6. Select IP Configuration and press Enter .
    Figure. Configure Management Network
  7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting.
  8. Provide values for the following: IP Address , Subnet Mask , and Default Gateway fields based on your environment and then press Enter .
  9. Select DNS Configuration and press Enter .
  10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting.
  11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter .
  12. Press Esc and then Y to apply all changes and restart the management network.
  13. Select Test Management Network and press Enter .
  14. Press Enter to start the network ping test.
  15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter .

    Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured.

    Figure. Test Management Network

    Press Enter to close the test window.

  16. Press Esc to log off.

Changing a Host IP Address

About this task

To change a host IP address, perform the following steps once for each hypervisor host in the Nutanix cluster. Complete the entire procedure on a host before proceeding to the next host.
Caution: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping IP addresses between two hosts, temporarily change one host IP address to an interim unused IP address. This step avoids having two hosts with identical IP addresses on the cluster. Then complete the address change or swap on each host using the following steps.
Note: All CVMs and hypervisor hosts must be on the same subnet. The hypervisor can be multihomed provided that one interface is on the same subnet as the CVM.

Procedure

  1. Configure networking on the Nutanix node. For more information, see Configuring Host Networking (Management Network).
  2. Update the host IP addresses in vCenter. For more information, see Reconnecting a Host to vCenter.
  3. Log on to every CVM in the Nutanix cluster and restart Genesis service.
    nutanix@cvm$ genesis restart

    If the restart is successful, output similar to the following is displayed.

    Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
    Genesis started on pids [30378, 30379, 30380, 30381, 30403]

Reconnecting a Host to vCenter

About this task

If you modify the IP address of a host, you must reconnect the host to vCenter. To reconnect the host to vCenter, perform the following procedure.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the host with the changed IP address and select Disconnect .
  3. Right-click the host again and select Remove from Inventory .
  4. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Under New hosts , enter the IP address or fully qualified domain name (FQDN) of the host that you want to reconnect.
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
    You can see the host with the updated IP address in the left pane of vCenter.

Selecting a Management Interface

Nutanix tracks the management IP address for each host and uses that IP address to open an SSH session into the host to perform management activities. If the selected vmk interface is not accessible through SSH from the CVMs, activities that require interaction with the hypervisor fail.

If multiple vmk interfaces are present on a host, Nutanix uses the following rules to select a management interface.

  1. Assigns weight to each vmk interface.
    • If the vmk is configured for management traffic under the network settings of ESXi, the assigned weight is 4. Otherwise, the assigned weight is 0.
    • If the IP address of the vmk belongs to the same IP subnet as the CVM eth0 interface, 2 is added to its weight.
    • If the IP address of the vmk belongs to the same IP subnet as the CVM eth2 interface, 1 is added to its weight.
  2. The vmk interface that has the highest weight is selected as the management interface.

Example of Selection of Management Network

Consider an ESXi host with following configuration.

  • vmk0 IP address and mask: 2.3.62.204, 255.255.255.0
  • vmk1 IP address and mask: 192.168.5.1, 255.255.255.0
  • vmk2 IP address and mask: 2.3.63.24, 255.255.255.0

Consider a CVM with following configuration.

  • eth0 inet address and mask: 2.3.63.31, 255.255.255.0
  • eth2 inet address and mask: 2.3.62.12, 255.255.255.0

According to the rules, the following weights are assigned to the vmk interfaces.

  • vmk0 = 4 + 0 + 1 = 5
  • vmk1 = 0 + 0 + 0 = 0
  • vmk2 = 0 + 2 + 0 = 2

Since vmk0 has the highest weight assigned, vmk0 interface is used as a management IP address for the ESXi host.
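The weighting worked out above can be sketched as a small shell helper (illustrative only; the `weight` function and its 0/1 inputs are hypothetical, and in practice the values come from the host's vmk tags and the CVM subnet comparison):

```shell
# Weight = 4 (management tag) + 2 (same subnet as CVM eth0) + 1 (same subnet as CVM eth2).
# $1 = 1 if the vmk is tagged for management traffic, else 0
# $2 = 1 if the vmk shares an IP subnet with CVM eth0, else 0
# $3 = 1 if the vmk shares an IP subnet with CVM eth2, else 0
weight() {
  echo $(( $1 * 4 + $2 * 2 + $3 * 1 ))
}

weight 1 0 1   # vmk0: management tag + eth2 subnet -> prints 5
weight 0 0 0   # vmk1: internal only                -> prints 0
weight 0 1 0   # vmk2: eth0 subnet only             -> prints 2
```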

To verify that vmk0 interface is selected for management IP address, use the following command.

root@esx# esxcli network ip interface tag get -i vmk0

You see the following output.

Tags: Management, VMotion

For the other two interfaces, no tags are displayed.

If you want any other interface to act as the management IP address, enable management traffic on that interface by following the procedure described in Selecting a New Management Interface.

Selecting a New Management Interface

You can mark a vmk interface as the management interface on an ESXi host by using the following method.

Procedure

  1. Log on to vCenter with the web client.
  2. Do the following on the ESXi host.
    1. Go to Configure > Networking > VMkernel adapters .
    2. Select the interface on which you want to enable the management traffic.
    3. Click Edit settings of the port group to which the vmk belongs.
    4. Select Management check box from the Enabled services option to enable management traffic on the vmk interface.
  3. Open an SSH session to the ESXi host and enable the management traffic on the vmk interface.
    root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management

    Replace vmkN with the vmk interface where you want to enable the management traffic.

Updating Network Settings

After you configure networking of your vSphere deployments on Nutanix Enterprise Cloud, you may want to update the network settings.

  • To know about the best practice of ESXi network teaming policy, see Network Teaming Policy.

  • To migrate an ESXi host networking from a vSphere Standard Switch (vSwitch) to a vSphere Distributed Switch (vDS) with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

  • To migrate an ESXi host networking from a vSphere standard switch (vSwitch) to a vSphere Distributed Switch (vDS) without LACP, see Migrating to a New Distributed Switch without LACP/LAG.


Network Teaming Policy

On an ESXi host, the NIC teaming policy allows you to bundle two or more physical NICs into a single logical link to provide network bandwidth aggregation and link redundancy to a vSwitch. The NIC teaming policies in the ESXi networking configuration for a vSwitch consist of the following.

  • Route based on originating virtual port.
  • Route based on IP hash.
  • Route based on source MAC hash.
  • Explicit failover order.

In addition to the NIC teaming policies mentioned earlier, vDS offers one extra teaming policy: Route based on physical NIC load.

When Foundation or Phoenix imaging is performed on a Nutanix cluster, the following two standard virtual switches are created on ESXi hosts:

  • vSwitch0
  • vSwitchNutanix

For vSwitch0, the Nutanix best practice guide (see Nutanix vSphere Networking Solution Document) recommends the following NIC teaming policies:

  • vSwitch. Route based on originating virtual port
  • vDS. Route based on physical NIC load

Because vSwitchNutanix has no uplinks, no NIC teaming configuration is required for it.
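The recommended policy for a standard switch can also be applied from the ESXi shell. A hedged sketch using the esxcli standard-switch namespace, where the value portid corresponds to Route based on originating virtual port (these commands must run on the ESXi host):

```shell
# Show the current teaming/failover policy on vSwitch0
esxcli network vswitch standard policy failover get -v vSwitch0

# Apply the Nutanix-recommended policy for a standard switch:
# route based on originating virtual port ("portid")
esxcli network vswitch standard policy failover set -v vSwitch0 -l portid
```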

Migrate from a Standard Switch to a Distributed Switch

This topic provides detailed information about how to migrate from a vSphere Standard Switch (vSS) to a vSphere Distributed Switch (vDS).

The following are the two types of virtual switches (vSwitch) in vSphere.

  • vSphere standard switch (vSwitch) (see vSphere Standard Switch (vSwitch) in vSphere Networking).
  • vSphere Distributed Switch (vDS) (see vSphere Distributed Switch (vDS) in vSphere Networking).
Tip: For more information about vSwitches and the associated network concepts, see the VMware Documentation .

For migrating from a vSS to a vDS with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

For migrating from a vSS to a vDS without LACP/LAG configuration, see Migrating to a New Distributed Switch without LACP/LAG.

Standard Switch Configuration

The standard switch configuration consists of the following.

vSwitchNutanix
vSwitchNutanix handles internal communication between the CVM and the ESXi host. It has no uplink adapters, and the only members of its port group must be the CVM and the host. Do not modify the settings of this virtual switch or its port groups; doing so can disrupt communication between the host and the CVM.
vSwitch0
vSwitch0 consists of the vmk (VMkernel) management interface, vMotion interface, and VM port groups. This virtual switch connects to uplink network adapters that are plugged into a physical switch.

Planning the Migration

It is important to plan and understand the migration process. An incorrect configuration can disrupt communication, which can require downtime to resolve.

Consider the following points before planning the migration.

  • Read the Nutanix best practice guide for VMware vSphere networking.

  • Understand the various teaming and load-balancing algorithms on vSphere.

    For more information, see the VMware Documentation .

  • Confirm communication on the network through all the connected uplinks.
  • Confirm that you can access the host through IPMI in case network connectivity is lost during the migration.

    Use IPMI access to troubleshoot the network issue or to move the management network back to the standard switch, depending on the issue.

  • Confirm that the hypervisor external management IP address and the CVM IP address are in the same public subnet so that data path redundancy can function.
  • When migrating to the distributed switch, migrate one host at a time and verify that networking works as expected before proceeding.
  • Do not migrate the port groups and vmk (VMkernel) interfaces that are on vSwitchNutanix to the distributed switch (dvSwitch).
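The uplink and connectivity checks in the list above can be run from the ESXi shell before you begin. A sketch with example interface and target addresses (vmk0 and the gateway IP are placeholders; substitute your own):

```shell
# Confirm that all physical uplinks are present and their link is up
esxcli network nic list

# Test connectivity from a specific VMkernel interface
# (vmk0 and the gateway address are examples)
vmkping -I vmk0 192.168.1.1
```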

Unassigning Physical Uplink of the Host for Distributed Switch

All the physical adapters of the host connect to vSwitch0. A distributed switch must have at least one physical uplink connected to it to carry traffic. To give the new distributed switch an uplink, first unassign a physical adapter of the host from vSwitch0, and then assign it to the distributed switch.

About this task

To unassign the physical uplink of the host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Networking > Virtual Switches .
  4. Click the MANAGE PHYSICAL ADAPTERS tab and, under Assigned adapters , select the adapters that you want to unassign from the host.
    Figure. Managing Physical Adapters

  5. Click X on the top.
    The selected adapter is unassigned from the list of physical adapters of the host.
    Tip: Ping the host to confirm that you can still communicate with it through the remaining active physical adapters. If you lose network connectivity to the ESXi host during this test, review your network configuration.

Migrating to a New Distributed Switch without LACP/LAG

Migrating to a new distributed switch without LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Configuring Port Group Policies

Creating a Distributed Switch

Connect to vCenter and create a distributed switch.

About this task

To create a distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Distributed Switch Creation

  3. Right-click the host, select Distributed Switch > New Distributed Switch , and specify the following information in the New Distributed Switch dialog box.
    1. Name and Location : Enter a name for the distributed switch.
    2. Select Version : Select a distributed switch version that is compatible with all your hosts in that datacenter.
    3. Configure Settings : Select the number of uplinks you want to connect to the distributed switch.
      Select the Create a default port group check box to create a port group. To configure port groups later, see Creating Port Groups on the Distributed Switch.
    4. Ready to complete : Review the configuration and click Finish .
    A new distributed switch is created with the default uplink port group. The uplink port group is the port group to which the uplinks connect. This uplink is different from the vmk (VMkernel) or the VM port groups.
    Figure. New Distributed Switch Created in the Host

Creating Port Groups on the Distributed Switch

Create one or more vmk (VMkernel) port groups and VM port groups depending on the vSphere features you plan to use and the physical network layout. The best practice is to place the vmk Management, vMotion, and iSCSI interfaces on separate port groups.

About this task

To create port groups on the distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Creating Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > New Distributed Port Group , and follow the wizard to create the remaining distributed port groups (the vMotion interface and VM port groups).
    You need the following port groups because you are migrating from the standard switch to the distributed switch.
    • VMkernel Management interface . Use this port group to connect to the host for all management operations.
    • VMNetwork . Use this port group to connect new VMs.
    • vMotion . The host uses this internal interface for vMotion traffic during failover.
    Note: Nutanix recommends using static port binding instead of ephemeral port binding when you create a port group.
    Figure. Distributed Port Groups Created

    Note: The port group for vmk management interface is created during the distributed switch creation. See Creating a Distributed Switch for more information.

Configuring Port Group Policies

To configure port groups, you must configure VLANs, Teaming and failover, and other distributed port group policies at the port group layer or at the distributed switch layer. See the following topics to configure the port group policies.

  1. Configuring Policies on the Port Group Layer
  2. Configuring Policies on the Distributed Switch Layer
  3. Adding ESXi Host to the Distributed Switch

Configuring Policies on the Port Group Layer

Ensure that the distributed switch port groups have VLANs tagged if the physical adapters of the host have a VLAN tagged to them. Update the port group policies, VLANs, and teaming algorithms to match the configuration of the physical network switch. Configure the load balancing policy according to the network configuration requirements on the physical switch.

About this task

To configure the port group policies, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Configure Port Group Policies on the Distributed Switch

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Edit Settings , and follow the wizard to configure the VLAN, Teaming and failover, and other options.
    Note: For more information about configuring port group policies, see the VMware Documentation .
  4. Click OK to complete the configuration.
  5. Repeat steps 2–4 to configure the other port groups.
Configuring Policies on the Distributed Switch Layer

You can configure the same policy for all the port groups simultaneously.

About this task

To configure the same policy for all the port groups, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Manage Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Manage Distributed Port Groups , and specify the following information in Manage Distributed Port Group dialog box.
    1. In the Select port group policies tab, select the port group policies that you want to configure and click Next .
      Note: For more information about configuring port group policies, see the VMware Documentation .
    2. In the Select port groups tab, select the distributed port groups on which you want to configure the policy and click Next .
    3. In the Teaming and failover tab, configure the Load balancing policy, Active uplinks , and click Next .
    4. In the Ready to complete window, review the configuration and click Finish .
Adding ESXi Host to the Distributed Switch

Migrate the management interface and CVM of the host to the distributed switch.

About this task

To migrate the Management interface and CVM of the ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add a new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Figure. Select Physical Adapter for Uplinking

          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
          Figure. Select a Port Group

        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .
  4. Go to the Hosts and Clusters view in the vCenter web client and go to Hosts > Configure to review the network configuration for the host.
    Note: Run a ping test to confirm that the networking on the host works as expected.
  5. Repeat steps 2–4 to add the remaining hosts to the distributed switch and migrate the adapters.

Migrating to a New Distributed Switch with LACP/LAG

Migrating to a new distributed switch with LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Creating Link Aggregation Group on Distributed Switch
  4. Creating Port Groups to use the LAG
  5. Adding ESXi Host to the Distributed Switch

Creating Link Aggregation Group on Distributed Switch

Using a Link Aggregation Group (LAG) on a distributed switch, you can connect the ESXi host to the physical switch by using dynamic link aggregation. You can create multiple LAGs on a distributed switch to aggregate the bandwidth of physical NICs on ESXi hosts that are connected to LACP port channels.

About this task

To create a LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host, select Distributed Switch > Configure > LACP .
    Figure. Create LAG on Distributed Switch

  4. Click New and enter the following details in the New Link Aggregation Group dialog box.
    1. Name : Enter a name for the LAG.
    2. Number of Ports : Enter the number of ports.
      The number of ports must match the number of physical ports per host in the LACP LAG. For example, if Number of Ports is set to two, you can attach two physical ports per ESXi host to the LAG.
    3. Mode : Specify the state of the physical switch.
      Based on the configuration requirements, you can set the mode to Active or Passive .
    4. Load balancing mode : Specify the load balancing mode for the physical switch.
      For more information about the various load balancing options, see the VMware Documentation .
    5. VLAN trunk range : Specify the VLANs if you have VLANs configured in your environment.
  5. Click OK .
    LAG is created on the distributed switch.

Creating Port Groups to Use LAG

To use the LAG as the uplink, edit the settings of the port groups created on the distributed switch.

About this task

To edit the settings on the port group to use LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host, select Management port group > Edit Settings .
  4. Go to the Teaming and failover tab in the Edit Settings dialog box and specify the following information.
    Figure. Configure the Management Port Group

    1. Load Balancing : Select Route based on IP hash .
    2. Active uplinks : Move the LAG from the Unused uplinks section to the Active uplinks section.
    3. Unused uplinks : Select the physical uplinks ( Uplink 1 and Uplink 2 ) and move them to the Unused uplinks section.
  5. Repeat steps 2–4 to configure the other port groups.

Adding ESXi Host to the Distributed Switch

Add the ESXi host to the distributed switch and migrate the network from the standard switch to the distributed switch. Migrate the management interface and CVM of the ESXi host to the distributed switch.

About this task

To migrate the Management interface and CVM of ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add a new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the LAG Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
      Select the VMkernel adapter that is associated with vSwitch0 as your management VMkernel adapter. Migrate this adapter to the corresponding port group on the distributed switch.
      Note: Do not migrate the VMkernel adapter associated with vSwitchNutanix.
      Note: If there are any VLANs associated with the port group on the standard switch, ensure that the corresponding distributed port group also has the correct VLAN. Verify that the physical network is configured as required.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .

vCenter Configuration

VMware vCenter enables the centralized management of multiple ESXi hosts. You can either create a vCenter Server or use an existing vCenter Server. To create a vCenter Server, refer to the VMware Documentation .

This section assumes that you already have a vCenter Server and describes the operations you can perform on an existing vCenter Server. To deploy vSphere clusters running Nutanix Enterprise Cloud, perform the following steps in vCenter.

Tip: For single-window management of all your ESXi nodes, you can also integrate the vCenter Server with Prism Central. For more information, see Registering a Cluster to vCenter Server.

1. Create a cluster entity within the existing vCenter inventory and configure its settings according to Nutanix best practices. For more information, see Creating a Nutanix Cluster in vCenter.

2. Configure HA. For more information, see vSphere HA Settings.

3. Configure DRS. For more information, see vSphere DRS Settings.

4. Configure EVC. For more information, see vSphere EVC Settings.

5. Configure override. For more information, see VM Override Settings.

6. Add the Nutanix hosts to the new cluster. For more information, see Adding a Nutanix Node to vCenter.

Registering a Cluster to vCenter Server

To perform core VM management operations directly from Prism without switching to vCenter Server, you need to register your cluster with the vCenter Server.

Before you begin

Ensure that you have vCenter Server Extension privileges as these privileges provide permissions to perform vCenter registration for the Nutanix cluster.

About this task

Following are some of the important points about registering vCenter Server.

  • Nutanix does not store vCenter Server credentials.
  • Whenever a new node is added to a Nutanix cluster, vCenter Server registration for the new node is performed automatically.
  • Nutanix supports vCenter Enhanced Linked Mode.

    When registering a Nutanix cluster to a vCenter Enhanced Linked Mode (ELM) enabled ESXi environment, ensure that Prism is registered to the vCenter Server that contains the vSphere cluster and the Nutanix nodes (often the local vCenter Server). For more information about vCenter Enhanced Linked Mode, see vCenter Enhanced Linked Mode in the vCenter Server Installation and Setup documentation.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    The vCenter Server that is managing the hosts in the cluster is auto-discovered and displayed.
  3. Click the Register link.
    The IP address is auto-populated in the Address field. The port number field is also auto-populated with 443. Do not change the port number. For the complete list of required ports, see Port Reference.
  4. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
    Figure 1. vCenter Registration

  5. Click Register .
    During the registration process, a certificate is generated to communicate with the vCenter Server. If the registration is successful, a relevant message is displayed in the Tasks dashboard. The Host Connection field displays Connected, which means that all the hosts are being managed by the registered vCenter Server.
    Figure 2. vCenter Registration

Unregistering a Cluster from the vCenter Server

To unregister the vCenter Server from your cluster, perform the following procedure.

About this task

  • Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter Server. After you change the IP address of the vCenter Server, register the vCenter Server with the cluster again using the new IP address.
  • The vCenter Server Registration page displays the registered vCenter Server. If the Host Connection field changes to Not Connected , it implies that the hosts are being managed by a different vCenter Server. In this case, a new vCenter Server entry with the host connection status Connected appears, and you need to register to that vCenter Server.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    A message that the cluster is already registered to the vCenter Server is displayed.
  3. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
  4. Click Unregister .
    If the credentials are correct, the vCenter Server is unregistered from the cluster and a relevant message is displayed in the Tasks dashboard.

Creating a Nutanix Cluster in vCenter

Before you begin

Nutanix recommends creating a storage container in the Prism Element running on the cluster, or using the default container, and mounting it as an NFS datastore on all ESXi hosts.

About this task

To enable the vCenter to discover the Nutanix clusters, perform the following steps in the vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Do one of the following.
    • If you want the Nutanix cluster to be in an existing datacenter, proceed to step 3.
    • If you want the Nutanix cluster to be in a new datacenter or if there is no datacenter, perform the following steps to create a datacenter.
      Note: Nutanix clusters must be in a datacenter.
    1. Go to the Hosts and Clusters view and right-click the IP address of the vCenter Server in the left pane.
    2. Click New Datacenter .
    3. Enter a meaningful name for the datacenter (for example, NTNX-DC ) and click OK .
  3. Right-click the datacenter node and click New Cluster .
    1. Enter a meaningful name for the cluster in the Name field (for example, NTNX-Cluster ).
    2. Turn on the vSphere DRS switch.
    3. Turn on the vSphere HA switch.
    4. Uncheck Manage all hosts in the cluster with a single image .
    Nutanix cluster ( NTNX-Cluster ) is created with the default settings for vSphere HA and vSphere DRS.

What to do next

Add all the Nutanix nodes to the Nutanix cluster inventory in vCenter. For more information, see Adding a Nutanix Node to vCenter.

Adding a Nutanix Node to vCenter

Before you begin

Configure the Nutanix cluster according to Nutanix specifications given in Creating a Nutanix Cluster in vCenter and vSphere Cluster Settings Checklist.

About this task

Note: Lockdown mode forces all operations through the vCenter Server so that vCenter managed ESXi hosts are not directly accessible. However, Nutanix does not support lockdown mode, so ensure that it remains disabled on all hosts.
Tip: Refer to KB-1661 for the default credentials of all cluster components.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Enter the IP address or fully qualified domain name (FQDN) of the host you want to add in the IP address or FQDN field under New hosts .
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
  3. Select the host under the Nutanix cluster from the left pane and go to Configure > System > Security Profile .
    Ensure that Lockdown Mode is Disabled because Nutanix does not support lockdown mode.
  4. Configure DNS servers.
    1. Go to Configure > Networking > TCP/IP configuration .
    2. Click Default under TCP/IP stack and go to TCP/IP .
    3. Click the pencil icon to configure DNS servers and perform the following.
        1. Select Enter settings manually .
        2. Type the domain name in the Domain field.
        3. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK .
  5. Configure NTP servers.
    1. Go to Configure > System > Time Configuration .
    2. Click Edit .
    3. Select the Use Network Time Protocol (Enable NTP client) .
    4. Type the NTP server address in the NTP Servers text box.
    5. In the NTP Service Startup Policy, select Start and stop with host from the drop-down list.
      Add multiple NTP servers if necessary.
    6. Click OK .
  6. Click Configure > Storage and ensure that NFS datastores are mounted.
    Note: Nutanix recommends creating a storage container in Prism Element running on the host.
  7. If HA is not enabled, set the CVM to start automatically when the ESXi host starts.
    Note: Automatic VM start and stop is disabled in clusters where HA is enabled.
    1. Go to Configure > Virtual Machines > VM Startup/Shutdown .
    2. Click Edit .
    3. Ensure that Automatically start and stop the virtual machines with the system is checked.
    4. If the CVM is listed in Manual Startup , click the up arrow to move the CVM into the Automatic Startup section.
    5. Click OK .
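As a sketch of an alternative to the DNS portion of this procedure (step 4), the same settings can be applied from the ESXi shell; the domain and server addresses below are example values, and the commands must run on the ESXi host:

```shell
# Add the DNS search domain and name servers (example values)
esxcli network ip dns search add --domain=example.com
esxcli network ip dns server add --server=10.0.0.10
esxcli network ip dns server add --server=10.0.0.11

# Verify the configured DNS servers
esxcli network ip dns server list
```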

What to do next

Configure HA and DRS settings. For more information, see vSphere HA Settings and vSphere DRS Settings.

Nutanix Cluster Settings

To ensure the optimal performance of your vSphere deployment running on Nutanix cluster, configure the following settings from the vCenter.

vSphere General Settings

About this task

Configure the following general settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > General .
    1. Under General , set the Swap file location to Virtual machine directory .
      Setting the swap file location to the VM directory stores the VM swap files in the same directory as the VM.
    2. Under Default VM Compatibility , set the compatibility to Use datacenter setting and host version .
      Do not change the compatibility unless the cluster has to support previous versions of ESXi VMs.
      Figure. General Cluster Settings

vSphere HA Settings

If there is a node failure, vSphere HA (High Availability) settings ensure that there are sufficient compute resources available to restart all VMs that were running on the failed node.

About this task

Configure the following HA settings from vCenter.
Note: Nutanix recommends that you configure vSphere HA and DRS even if you do not use the features. The vSphere cluster configuration preserves the settings, so if you later decide to enable the features, the settings are in place and conform to Nutanix best practices.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere Availability .
  4. Click Edit next to the text showing vSphere HA status.
    Figure. vSphere Availability Settings: Failures and Responses

    1. Turn on the vSphere HA and Enable Host Monitoring switches.
    2. Specify the following information under the Failures and Responses tab.
        1. Host Failure Response : Select Restart VMs from the drop-down list.

          This setting determines how the cluster responds when a host fails.

        2. Response for Host Isolation : Select Power off and restart VMs from the drop-down list.
        3. Datastore with PDL : Select Disabled from the drop-down list.
        4. Datastore with APD : Select Disabled from the drop-down list.
          Note: To enable the VM component protection in vCenter, refer to the VMware Documentation.
        5. VM Monitoring : Select Disabled from the drop-down list.
    3. Specify the following information under the Admission Control tab.
      Note: If you are using replication factor 2 with cluster sizes up to 16 nodes, configure HA admission control settings to tolerate one node failure. For cluster sizes larger than 16 nodes, use replication factor 3 and configure HA admission control to sustain two node failures. vSphere 6.7 and newer versions automatically calculate the percentage of resources required for admission control.
      Figure. vSphere Availability Settings: Admission Control

        1. Host failures cluster tolerates : Enter 1 or 2 based on the number of nodes in the Nutanix cluster and the replication factor.
        2. Define host failover capacity by : Select Cluster resource Percentage from the drop-down list.
        3. Performance degradation VMs tolerate : Set the percentage to 100.

          For more information about settings of percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

    4. Specify the following information under the Heartbeat Datastores tab.
      Note: vSphere HA uses datastore heartbeating to distinguish between hosts that have failed and hosts that reside on a network partition. With datastore heartbeating, vSphere HA can monitor hosts when a management network partition occurs while continuing to respond to failures.
      Figure. vSphere Availability Settings: Heartbeat Datastores

        1. Select Use datastores only from the specified list .
        2. Select the named storage container mounted as the NFS datastore (Nutanix datastore).

          If you have more than one named storage container, select all that are applicable.

        3. If the cluster has only one datastore, click the Advanced Options tab and add das.ignoreInsufficientHbDatastore with a Value of true .
    5. Click OK .

vSphere HA Admission Control Settings for Nutanix Environment

Overview

If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA admission control settings with the appropriate percentage of CPU/RAM to achieve at least N+1 availability. For cluster sizes larger than 16 nodes, you must configure HA admission control with the appropriate percentage of CPU/RAM to achieve at least N+2 availability.
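The sizing rule above can be sketched as a small helper. This is an illustrative sketch only (the function name and the minimum-cluster check are invented for this example), not a Nutanix tool:

```python
def required_availability_level(num_nodes: int) -> int:
    """Return the number of node failures (the k in N+k) that vSphere HA
    admission control should tolerate, per the guidance above: clusters
    of up to 16 nodes need N+1, larger clusters need N+2."""
    if num_nodes < 3:
        raise ValueError("a Nutanix cluster requires at least 3 nodes")
    return 1 if num_nodes <= 16 else 2
```

For example, a 12-node cluster calls for N+1, while a 20-node cluster calls for N+2.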

N+2 Availability Configuration

The N+2 availability configuration can be achieved in the following two ways.

  • Redundancy factor 2 and N+2 vSphere HA admission control setting configured.

    Because the Nutanix distributed file system self-heals after a node failure, a second node failure can occur without data becoming unavailable, provided the Nutanix cluster has fully recovered before the subsequent failure. In this case, an N+2 vSphere HA admission control setting is required to ensure sufficient compute resources are available to restart all the VMs.

  • Redundancy factor 3 and N+2 vSphere HA admission control setting configured.
    If you want two concurrent node failures to be tolerated and the cluster has insufficient blocks to use block awareness, redundancy factor 3 is required in a cluster of five or more nodes. Redundancy factor 3 must be configured at the storage container layer. In either of these two options, the Nutanix storage pool must have sufficient free capacity to restore the configured redundancy factor (2 or 3); the percentage of free space required is the same as the required HA admission control percentage setting. An N+2 vSphere HA admission control setting is also required to ensure sufficient compute resources are available to restart all the VMs.
    Note: For redundancy factor 3, a minimum of five nodes is required, which allows two nodes to fail concurrently while data remains online. In this case, the same N+2 level of availability is required for the vSphere cluster to enable the VMs to restart following a failure.
Table 1. Minimum Reservation Percentage for vSphere HA Admission Control Setting. For redundancy factor 2 deployments, the recommended minimum HA admission control setting percentage is marked with a single asterisk (*) in the following table. For redundancy factor 2 or redundancy factor 3 deployments configured to tolerate multiple non-concurrent node failures, the minimum required HA admission control setting percentage is marked with two asterisks (**).
Nodes Availability Level
N+1 N+2 N+3 N+4
1 N/A N/A N/A N/A
2 N/A N/A N/A N/A
3 33* N/A N/A N/A
4 25* 50 75 N/A
5 20* 40** 60 80
6 18* 33** 50 66
7 15* 29** 43 56
8 13* 25** 38 50
9 11* 23** 33 46
10 10* 20** 30 40
11 9* 18** 27 36
12 8* 17** 25 34
13 8* 15** 23 30
14 7* 14** 21 28
15 7* 13** 20 26
16 6* 13** 19 25
17 6 12* 18** 24
18 6 11* 17** 22
19 5 11* 16** 22
20 5 10* 15** 20
21 5 10* 14** 20
22 4 9* 14** 18
23 4 9* 13** 18
24 4 8* 13** 16
25 4 8* 12** 16
26 4 8* 12** 16
27 4 7* 11** 14
28 4 7* 11** 14
29 3 7* 10** 14
30 3 7* 10** 14
31 3 6* 10** 12
32 3 6* 9** 12

The table also represents the percentage of the Nutanix storage pool that should remain free to ensure that the cluster can fully restore the redundancy factor in the event of one or more node failures, or even a block failure (where three or more blocks exist within a cluster).
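The percentages in Table 1 follow, to a close approximation, from reserving the capacity of the tolerated node failures as a share of the whole cluster. A hedged sketch (a few published table values round slightly differently, so treat the table itself as authoritative; the function name is invented):

```python
import math

def approx_reservation_pct(num_nodes: int, failures_tolerated: int) -> int:
    """Approximate HA admission-control reservation: the share of cluster
    CPU/RAM held by the nodes that may fail, rounded up to a whole percent."""
    return math.ceil(failures_tolerated / num_nodes * 100)
```

For example, 8 nodes at N+1 yields 13% and 8 nodes at N+2 yields 25%, matching the corresponding table entries.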

Block Awareness

For deployments of at least three blocks, block awareness automatically ensures data availability even when an entire block of up to four nodes, configured with redundancy factor 2, becomes unavailable.

If block awareness levels of availability are required, the vSphere HA admission control setting must ensure sufficient compute resources are available to restart all virtual machines. In addition, the Nutanix storage pool must have sufficient space to restore redundancy factor 2 to all data.

The vSphere HA minimum availability level must be equal to the number of nodes per block.

Note: For block awareness, each block must be populated with a uniform number of nodes. In the event of a failure, a non-uniform node count might compromise block awareness or the ability to restore the redundancy factor, or both.
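Combining the rules above, the block-aware HA sizing can be sketched as follows; the helper name and return convention are invented for illustration, and the percentage uses the same rounding approximation as the admission-control table:

```python
import math

def block_aware_ha_settings(blocks: int, nodes_per_block: int):
    """For block awareness, vSphere HA must tolerate the loss of an entire
    block, so the availability level (the k in N+k) equals the nodes per block."""
    if blocks < 3:
        raise ValueError("block awareness requires at least 3 blocks")
    total_nodes = blocks * nodes_per_block  # blocks must be uniformly populated
    level = nodes_per_block                 # k in N+k
    reservation_pct = math.ceil(level / total_nodes * 100)
    return level, reservation_pct
```

For example, three blocks of four nodes each require N+4 with roughly 34% of cluster resources reserved.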

Rack Awareness

Rack fault tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, data is replicated to nodes that are not in the same rack. Rack failure can occur in the following situations.

  • All power supplies in a rack fail.
  • Top-of-rack (TOR) switch fails.
  • Network partition occurs: one of the racks becomes inaccessible from the other racks.

With rack fault tolerance enabled, the cluster has rack awareness and guest VMs can continue to run even during the failure of one rack (with replication factor 2) or two racks (with replication factor 3). The redundant copies of guest VM data and metadata persist on other racks when one rack fails.

Table 2. Minimum Requirements for Rack Awareness
Replication factor Minimum number of nodes Minimum number of Blocks Minimum number of racks Data resiliency
2 3 3 3 Failure of 1 node, block, or rack
3 5 5 5 Failure of 2 nodes, blocks, or racks
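The requirements in Table 2 can be expressed as a simple lookup. A minimal sketch, with the constant and function names invented for illustration:

```python
# Minimums from Table 2, keyed by replication factor:
# (min_nodes, min_blocks, min_racks, failures_tolerated)
RACK_AWARENESS_MINIMUMS = {
    2: (3, 3, 3, 1),  # RF2: survives failure of 1 node, block, or rack
    3: (5, 5, 5, 2),  # RF3: survives failure of 2 nodes, blocks, or racks
}

def supports_rack_awareness(rf: int, nodes: int, blocks: int, racks: int) -> bool:
    """Check a cluster topology against the rack awareness minimums."""
    min_nodes, min_blocks, min_racks, _ = RACK_AWARENESS_MINIMUMS[rf]
    return nodes >= min_nodes and blocks >= min_blocks and racks >= min_racks
```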

vSphere DRS Settings

About this task

Configure the following DRS settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere DRS .
  4. Click Edit next to the text showing vSphere DRS status.
    Figure. vSphere DRS Settings: Automation

    1. Turn on the vSphere DRS switch.
    2. Specify the following information under the Automation tab.
        1. Automation Level : Select Fully Automated from the drop-down list.
        2. Migration Threshold : Set the bar between conservative and aggressive (value=3).

          The migration threshold at this setting provides optimal resource utilization while minimizing DRS migrations that offer little benefit. Data locality is managed automatically: whenever a VM moves, one replica of each new write is placed on the local node to maximize subsequent read performance.

          Nutanix recommends a migration threshold of 3 in a fully automated configuration.

        3. Predictive DRS : Leave the option disabled.

          The value of Predictive DRS depends on whether you use other VMware products such as vRealize Operations. Unless you use vRealize Operations, Nutanix recommends disabling Predictive DRS.

        4. Virtual Machine Automation : Enable VM automation.
    3. Specifying anything under the Additional Options tab is optional.
    4. Specify the following information under the Power Management tab.
      Figure. vSphere DRS Settings: Power Management

        1. DPM : Leave the option disabled.

          Enabling DPM causes nodes in the Nutanix cluster to go offline, affecting cluster resources.

    5. Click OK .

vSphere EVC Settings

vSphere enhanced vMotion compatibility (EVC) ensures that workloads can live migrate, using vMotion, between ESXi hosts in a Nutanix cluster that are running different CPU generations. The general recommendation is to enable EVC, because it helps when you later scale your Nutanix cluster with new hosts that might contain newer CPU models.

About this task

Enabling EVC in a brownfield scenario can be challenging. Configure the following EVC settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Shut down all the VMs on the hosts with feature sets greater than the EVC mode.
    Ensure that the Nutanix cluster contains hosts with CPUs from only one vendor, either Intel or AMD.
  4. Click Configure , and go to Configuration > VMware EVC .
  5. Click Edit next to the text showing VMware EVC.
  6. Enable EVC for the CPU vendor and feature set appropriate for the hosts in the Nutanix cluster, and click OK .
    If the Nutanix cluster contains nodes with different processor classes, enable EVC with the lower feature set as the baseline.
    Tip: To know the processor class of a node, perform the following steps.
      1. Log on to Prism Element running on the Nutanix cluster.
      2. Click Hardware from the menu and go to Diagram or Table view.
      3. Click the node and look for the Block Serial field in Host Details .
    Figure. VMware EVC

  7. Start the VMs in the Nutanix cluster to apply the EVC.
    If you try to enable EVC on a Nutanix cluster with mismatching host feature sets (mixed processor clusters), the lowest common feature set (lowest processor class) is selected. Hence, if VMs are already running on the new host and if you want to enable EVC on the host, you must first shut down the VMs and then enable EVC.
    Note: Do not shut down more than one CVM at the same time.

VM Override Settings

You must exclude Nutanix CVMs from vSphere availability and resource scheduling. Configure the following VM override settings to do so.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > VM Overrides .
  4. Select all the CVMs and click Next .
    If you do not have the CVMs listed, click Add to ensure that the CVMs are added to the VM Overrides dialog box.
    Figure. VM Override

  5. In the VM override section, configure override for the following parameters.
    • DRS Automation Level: Disabled
    • VM HA Restart Priority: Disabled
    • VM Monitoring: Disabled
  6. Click Finish .

Migrating a Nutanix Cluster from One vCenter Server to Another

About this task

Perform the following steps to migrate a Nutanix cluster from one vCenter Server to another vCenter Server.
Note: The following steps migrate a Nutanix cluster with vSphere Standard Switch (vSwitch). To migrate a Nutanix cluster with vSphere Distributed Switch (vDS), see the VMware Documentation.

Procedure

  1. Create a vSphere cluster in the vCenter Server where you want to migrate the Nutanix cluster. See Creating a Nutanix Cluster in vCenter.
  2. Configure HA, DRS, and EVC on the created vSphere cluster. See Nutanix Cluster Settings.
  3. Unregister the Nutanix cluster from the source vCenter Server. See Unregistering a Cluster from the vCenter Server.
  4. Move the nodes from the source vCenter Server to the new vCenter Server.
    See the VMware Documentation to know the process.
  5. Register the Nutanix cluster to the new vCenter Server. See Registering a Cluster to vCenter Server.

Storage I/O Control (SIOC)

SIOC controls the I/O usage of a virtual machine and gradually enforces the predefined I/O share levels. Nutanix converged storage architecture does not require SIOC. Therefore, while mounting a storage container on an ESXi host, the system disables SIOC in the statistics mode automatically.

Caution: While mounting a storage container on ESXi hosts running older versions (6.5 or below), the system enables SIOC in the statistics mode by default. Nutanix recommends disabling SIOC because an enabled SIOC can cause the following issues.
  • The storage can become unavailable because the hosts repeatedly create and delete the .lck-XXXXXXXX access files under the .iorm.sf subdirectory, located in the root directory of the storage container.
  • Site Recovery Manager (SRM) failover and failback does not run efficiently.
  • If you are using Metro Availability disaster recovery feature, activate and restore operations do not work.
    Note: For using Metro Availability disaster recovery feature, Nutanix recommends using an empty storage container. Disable SIOC and delete all the files from the storage container that are related to SIOC. For more information, see KB-3501 .
Run the NCC health check (see KB-3358 ) to verify if SIOC and SIOC in statistics mode are disabled on storage containers. If SIOC and SIOC in statistics mode are enabled on storage containers, disable them by performing the procedure described in Disabling Storage I/O Control (SIOC) on a Container.

Disabling Storage I/O Control (SIOC) on a Container

About this task

Perform the following procedure to disable storage I/O statistics collection.

Procedure

  1. Log on to vCenter with the web client.
  2. Click the Storage view in the left pane.
  3. Right-click the storage container under the Nutanix cluster and select Configure Storage I/O Controller .
    The properties for the storage container are displayed. By default, the Disable Storage I/O statistics collection option is unchecked, which means that SIOC is enabled.
    1. Select the Disable Storage I/O Control and statistics collection option to disable SIOC.
    2. Uncheck the Include I/O Statistics for SDRS option.
    3. Click OK .

Node Management

This chapter describes the management tasks you can do on a Nutanix node.

Node Maintenance (ESXi)

Gracefully place a node into maintenance mode (a non-operational state) when you need to make changes to the network configuration of a node, perform manual firmware upgrades or replacements, perform CVM maintenance, or carry out any other maintenance operation.

Entering and Exiting Maintenance Mode

With a minimum AOS release of 6.1.2, 6.5.1, or 6.6, you can place only one node at a time in maintenance mode for each cluster. When a host is in maintenance mode, the CVM is placed in maintenance mode as part of the node maintenance operation and any associated RF1 VMs are powered off. The cluster marks the host as unschedulable so that no new VM instances are created on it. When a node is placed in maintenance mode from the Prism web console, an attempt is made to evacuate VMs from the host. If the evacuation attempt fails, the host remains in the "entering maintenance mode" state, where it is marked unschedulable, waiting for user remediation.

Non-migratable VMs (for example, pinned or RF1 VMs that have affinity for a specific node) are powered off, while live-migratable or high-availability (HA) VMs are moved from the original host to other hosts in the cluster. After the node exits maintenance mode, all non-migratable guest VMs are powered on again and the live-migrated VMs are automatically restored on the original host.
Note: VMs with CPU passthrough or PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live-migrated.

See Putting a Node into Maintenance Mode (vSphere) to place a node under maintenance.

You can also place a host in maintenance mode, or remove it from maintenance mode, through the vCenter web client. See Putting the CVM and ESXi Host in Maintenance Mode Using vCenter.

Exiting a Node from Maintenance Mode

See Exiting a Node from the Maintenance Mode (vSphere) to remove a node from the maintenance mode.

Viewing a Node under Maintenance Mode

See Viewing a Node that is in Maintenance Mode to view the node under maintenance mode.

Guest VM Status when Node under Maintenance Mode

See Guest VM Status when Node is in Maintenance Mode to view the status of guest VMs when a node is undergoing maintenance operations.

Best Practices and Recommendations

Nutanix strongly recommends using the Enter Maintenance Mode option to place a node under maintenance.

Known Issues and Limitations (ESXi)

  • Maintenance operations initiated from the Prism web console (entering and exiting node maintenance) are currently supported on ESXi.
  • Entering or exiting a node under maintenance using the vCenter for ESXi is not equivalent to entering or exiting the node under maintenance from the Prism Element web console.
  • You cannot exit the node from maintenance mode from Prism Element web console if the node is placed under maintenance mode using vCenter (ESXi node). However, you can enter the node maintenance through the Prism Element web console and exit the node maintenance using the vCenter (ESXi node).

Putting a Node into Maintenance Mode (vSphere)

Before you begin

Check the cluster status and resiliency before putting a node under maintenance. You can also verify the status of the guest VMs. See Guest VM Status when Node is in Maintenance Mode for more information.

About this task

As the node enters the maintenance mode, the following high-level tasks are performed internally.
  • The host initiates entering the maintenance mode.
  • The HA VMs are live migrated.
  • The pinned and RF1 VMs are powered off.
  • The CVM enters the maintenance mode.
  • The CVM is shut down.
  • The host completes entering the maintenance mode.

For more information, see Guest VM Status when Node is in Maintenance Mode to view the status of the guest VMs.

Procedure

  1. Log on to the Prism Element web console.
  2. On the home page, select Hardware from the drop-down menu.
  3. Go to the Table > Host view.
  4. Select the node that you want to put under maintenance.
  5. Click the Enter Maintenance Mode option.
    Figure. Enter Maintenance Mode Option

  6. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next .
    Figure. Host Maintenance Window (vCenter Credentials)

  7. On the Host Maintenance window, select the Power off VMs that cannot migrate check box to enable the Enter Maintenance Mode button.
    Figure. Host Maintenance Window (Enter Maintenance Mode)

    Note: VMs with CPU passthrough, PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live-migrated.
  8. Click the Enter Maintenance Mode button.
    • A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates that the host is entering the maintenance mode.
    • The revolving icon disappears and the Exit Maintenance Mode option is enabled after the node completely enters the maintenance mode.
      Figure. Enter Node Maintenance (On-going)

    • You can also monitor the progress of the node maintenance operation through the newly created Host enter maintenance and Enter maintenance mode tasks which appear in the task tray.
    Note: In case of a node maintenance failure, certain roll-back operations are performed. For example, the CVM is rebooted. But the live-migrated VMs are not restored to the original host.

What to do next

Once the maintenance activity is complete, you can perform any of the following.
  • View the nodes under maintenance, see Viewing a Node that is in Maintenance Mode
  • View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode
  • Remove the node from maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere).

Viewing a Node that is in Maintenance Mode

About this task

Note: This procedure is the same for AHV and ESXi nodes.

Perform the following steps to view a node under maintenance.

Procedure

  1. Log on to the Prism Element web console.
  2. On the home page, select Hardware from the drop-down menu.
  3. Go to the Table > Host view.
  4. Observe the icon along with a tool tip that appears beside the node which is under maintenance. You can also view this icon in the host details view.
    Figure. Example: Node under Maintenance (Table and Host Details View) in AHV

  5. Alternatively, view the node under maintenance from the Hardware > Diagram view.
    Figure. Example: Node under Maintenance (Diagram and Host Details View) in AHV

What to do next

You can:
  • View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.
  • Remove the node from maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere).

Exiting a Node from the Maintenance Mode (vSphere)

After you perform any maintenance activity, exit the node from the maintenance mode.

About this task

As the node exits the maintenance mode, the following high-level tasks are performed internally.
  • The host is taken out of maintenance.
  • The CVM is powered on.
  • The CVM is taken out of maintenance.
After the host exits the maintenance mode, the RF1 VMs are powered on again and the live-migrated VMs move back to restore host locality.

For more information, see Guest VM Status when Node is in Maintenance Mode to view the status of the guest VMs.

Procedure

  1. On the Prism web console home page, select Hardware from the drop-down menu.
  2. Go to the Table > Host view.
  3. Select the node which you intend to remove from the maintenance mode.
  4. Click the Exit Maintenance Mode option.
    Figure. Exit Maintenance Mode Option - Table View

    Figure. Exit Maintenance Mode Option - Diagram View

  5. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next .
    Figure. Host Maintenance Window (vCenter Credentials)

  6. On the Host Maintenance window, click the Exit Maintenance Mode button.
    Figure. Host Maintenance Window (Exit Maintenance Mode)

    • A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates that the host is exiting the maintenance mode.
    • The revolving icon disappears and the Enter Maintenance Mode option is enabled after the node completely exits the maintenance mode.
    • You can also monitor the progress of the exit node maintenance operation through the newly created Host exit maintenance and Exit maintenance mode tasks which appear in the task tray.

What to do next

You can:
  • View the status of node under maintenance, see Viewing a Node that is in Maintenance Mode.
  • View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.

Guest VM Status when Node is in Maintenance Mode

The following scenarios demonstrate the behavior of three guest VM types - high availability (HA) VMs, pinned VMs, and RF1 VMs, when a node enters and exits a maintenance operation. The HA VMs are live VMs that can migrate across nodes if the host server goes down or reboots. The pinned VMs have the host affinity set to a specific node. The RF1 VMs have affinity towards a specific node or a CVM. To view the status of the guest VMs, go to VM > Table .

Note: The following scenarios are the same for AHV and ESXi nodes.

Scenario 1: Guest VMs before Node Entering Maintenance Mode

In this example, you can observe the status of the guest VMs on the node prior to the node entering the maintenance mode. All the guest VMs are powered-on and reside on the same host.

Figure. Example: Original State of VM and Hosts in AHV

Scenario 2: Guest VMs during Node Maintenance Mode

  • As the node enters the maintenance mode, the following high-level tasks are performed internally.
    1. The host initiates entering the maintenance mode.
    2. The HA VMs are live migrated.
    3. The pinned and RF1 VMs are powered off.
    4. The CVM enters the maintenance mode.
    5. The CVM is shut down.
    6. The host completes entering the maintenance mode.
Figure. Example: VM and Hosts before Entering Maintenance Mode

Scenario 3: Guest VMs after Node Exiting Maintenance Mode

  • As the node exits the maintenance mode, the following high-level tasks are performed internally.
    1. The CVM is powered on.
    2. The CVM is taken out of maintenance.
    3. The host is taken out of maintenance.
    After the host exits the maintenance mode, the RF1 VMs are powered on again and the live-migrated VMs move back to restore host locality.
Figure. Example: Original State of VM and Hosts in AHV

Nonconfigurable ESXi Components

The Nutanix manufacturing and installation processes, done by running Foundation on the Nutanix nodes, configure the following components. Do not modify any of these components except under the direction of Nutanix Support.

Nutanix Software

Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • Local datastore name.
  • Configuration and contents of any CVM (except memory configuration to enable certain features).
Important: Note the following important considerations about CVMs.
  • Do not delete the Nutanix CVM.
  • Do not take a snapshot of the CVM for backup.
  • Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
  • Do not create additional CVM user accounts.

    Use the default accounts ( admin or nutanix ), or use sudo to elevate to the root account.

  • Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in features.

    Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and monitor CVM memory.

  • Nutanix does not support the usage of third-party storage on the host part of Nutanix clusters.

    Normal cluster operations might be affected if there are connectivity issues with the third-party storage you attach to the hosts in a Nutanix cluster.

  • Do not run any commands on a CVM that are not in the Nutanix documentation.

ESXi

Modifying any of the following ESXi settings can inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • NFS datastore settings
  • VM swapfile location
  • VM startup/shutdown order
  • CVM name
  • CVM virtual hardware configuration file (.vmx file)
  • iSCSI software adapter settings
  • Hardware settings, including passthrough HBA settings
  • vSwitchNutanix standard virtual switch
  • vmk0 interface in Management Network port group
  • SSH
    Note: An SSH connection is necessary for various scenarios. For example, to establish connectivity with the ESXi server through a control plane that does not depend on additional management systems or processes. The SSH connection is also required to modify the networking and control paths in the case of a host failure to maintain High Availability. For example, the CVM autopathing (Ha.py) requires an SSH connection. In case a local CVM becomes unavailable, another CVM in the cluster performs the I/O operations over the 10GbE interface.
  • Open host firewall ports
  • CPU resource settings such as CPU reservation, limit, and shares of the CVM.
    Caution: Do not use the Reset System Configuration option.
  • ProductLocker symlink setting to point at the default datastore.

    Do not change the /productLocker symlink to point at a non-local datastore.

    Do not change the ProductLockerLocation advanced setting.

Putting the CVM and ESXi Host in Maintenance Mode Using vCenter

About this task

Nutanix recommends placing the CVM and ESXi host into maintenance mode while the Nutanix cluster undergoes maintenance or patch installations.
Caution: Verify the data resiliency status of your Nutanix cluster. Ensure that the replication factor (RF) supports putting the node in maintenance mode.

Procedure

  1. Log on to vCenter with the web client.
  2. If vSphere DRS is enabled on the Nutanix cluster, skip this step. If vSphere DRS is disabled, perform one of the following.
    • Manually migrate all the VMs except the CVM to another host in the Nutanix cluster.
    • Shut down VMs other than the CVM that you do not want to migrate to another host.
  3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
  4. In the Enter Maintenance Mode dialog box, check Move powered-off and suspended virtual machines to other hosts in the cluster and click OK .
    Note:

    In certain rare conditions, even when DRS is enabled, some VMs do not automatically migrate due to user-defined affinity rules or VM configuration settings. The VMs that do not migrate appear under cluster DRS > Faults when a maintenance mode task is in progress. To address the faults, either manually shut down those VMs or ensure the VMs can be migrated.

    Caution: When you put the host in maintenance mode, the maintenance mode process powers down or migrates all the VMs that are running on the host.
    The host gets ready to go into maintenance mode, which prevents VMs from running on this host. DRS automatically attempts to migrate all the VMs to another host in the Nutanix cluster.

    The host enters maintenance mode after its CVM is shut down.

Shutting Down an ESXi Node in a Nutanix Cluster

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster only has replication factor 2 (RF2), you can shut down only one node for each cluster. If more than one node in an RF2 cluster must be shut down, shut down the entire cluster instead.
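This constraint amounts to a one-line rule of thumb, sketched below; the helper is purely illustrative (the name is invented) and is no substitute for checking data resiliency in Prism before any shutdown:

```python
def max_nodes_down(replication_factor: int) -> int:
    """Maximum nodes that can safely be shut down at once in a healthy
    cluster: RF2 tolerates one node down, RF3 tolerates two."""
    if replication_factor not in (2, 3):
        raise ValueError("AOS supports replication factor 2 or 3")
    return replication_factor - 1
```

In an RF2 cluster, shut down at most one node at a time; to take down more nodes, shut down the entire cluster instead.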

About this task

You can put the ESXi host into maintenance mode and shut it down either from the web client or from the command line. For more information about shutting down a node from the command line, see Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. Log on to vCenter with the web client.
  2. Put the Nutanix node in the maintenance mode. For more information, see Putting the CVM and ESXi Host in Maintenance Mode Using vCenter.
    Note: If DRS is not enabled, manually migrate or shut down all the VMs excluding the CVM. Even when DRS is enabled, some VMs might not migrate automatically because of a VM configuration option that is not present on the target host.
  3. Right-click the host and select Shut Down .
    Wait until the vCenter displays that the host is not responding, which may take several minutes. If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line)

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the cluster has replication factor 2 (RF2), you can shut down only one node at a time. If more than one node in an RF2 cluster must be shut down, shut down the entire cluster instead.

About this task

Procedure

  1. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
  2. Log on to another CVM in the Nutanix cluster with SSH.
  3. Put the ESXi host into maintenance mode, and then shut down the host.
    nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command returns no output. If it fails with a message like the following, VMs are probably still running on the host.

    CRITICAL esx-enter-maintenance-mode:42 Command vim-cmd hostsvc/maintenance_mode_enter failed with ret=-1

    Ensure that all VMs are shut down or moved to another host and try again before proceeding.

    nutanix@cvm$ ~/serviceability/bin/esx-shutdown -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Alternatively, you can put the ESXi host into maintenance mode and shut it down using the vSphere web client. For more information, see Shutting Down an ESXi Node in a Nutanix Cluster.

    If the host shuts down, a message like the following is displayed.

    INFO esx-shutdown:67 Please verify if ESX was successfully shut down using ping hypervisor_ip_addr

    hypervisor_ip_addr is the IP address of the ESXi host.

  4. Confirm that the ESXi host has shut down.
    nutanix@cvm$ ping hypervisor_ip_addr

    Replace hypervisor_ip_addr with the IP address of the ESXi host.

    If no ping packets are answered, the ESXi host has shut down.
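The ping check in the last step can be scripted so you do not have to poll by hand. The helper below is a minimal sketch, not a Nutanix-supplied tool; the function name, retry count, and interval are illustrative.

```shell
# Illustrative helper (bash): poll until a host stops answering pings,
# that is, until it has finished shutting down.
wait_for_host_down() {
  local host="$1" tries="${2:-30}" interval="${3:-5}"
  local i
  for ((i = 1; i <= tries; i++)); do
    # One ping with a short timeout; failure means the host is unreachable.
    if ! ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
      echo "host $host is down"
      return 0
    fi
    sleep "$interval"
  done
  echo "host $host is still responding after $tries checks" >&2
  return 1
}

# Example (hypervisor IP address is a placeholder):
# wait_for_host_down 10.1.56.10
```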

Starting an ESXi Node in a Nutanix Cluster

About this task

You can start an ESXi host either from the web client or from the command line. For more information about starting a node from the command line, see Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. If the node is off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step.
  2. Log on to vCenter (or to the node if vCenter is not running) with the web client.
  3. Right-click the ESXi host and select Exit Maintenance Mode .
  4. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  5. Log on to another CVM in the Nutanix cluster with SSH.
  6. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up.

  7. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.
  8. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM:host IP-Address Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209, 28495, 28496, 28503, 28510,
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
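Scanning a listing this long by eye is error-prone. As a sketch (assuming the two-column `Service STATE` layout shown above; this is not a Nutanix-supplied tool), a short awk filter can flag any service reported DOWN:

```shell
# Illustrative filter: read `cluster status` text on stdin, print the name of
# any service whose state column is DOWN, and exit non-zero if one is found.
# Continuation lines of process IDs have no matching state column, so awk
# skips them. Usage: cluster status | check_services_up
check_services_up() {
  awk '$2 == "DOWN" { print $1; bad = 1 } END { exit bad }'
}
```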

Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line)

About this task

You can start an ESXi host either from the command line or from the web client. For more information about starting a node from the web client, see Starting an ESXi Node in a Nutanix Cluster.

Procedure

  1. Log on to a running CVM in the Nutanix cluster with SSH.
  2. Start the CVM.
    nutanix@cvm$ ~/serviceability/bin/esx-exit-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command produces no output. If it fails, wait 5 minutes and try again.

    nutanix@cvm$ ~/serviceability/bin/esx-start-cvm -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.


    If the CVM starts, a message like the following is displayed.

    INFO esx-start-cvm:67 CVM started successfully. Please verify using ping cvm_ip_addr

    cvm_ip_addr is the IP address of the CVM on the ESXi host.

    After starting, the CVM restarts once. Wait three to four minutes before you ping the CVM.

    Alternatively, you can take the ESXi host out of maintenance mode and start the CVM using the web client. For more information, see Starting an ESXi Node in a Nutanix Cluster.

  3. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM:host IP-Address Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209, 28495, 28496, 28503, 28510,
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
  4. Verify the storage.
    1. Log on to the ESXi host with SSH.
    2. Rescan for datastores.
      root@esx# esxcli storage core adapter rescan --all
    3. Confirm that cluster VMFS datastores, if any, are available.
      root@esx# esxcfg-scsidevs -m | awk '{print $5}'
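If you want to script the datastore check, the sketch below compares the mounted names from `esxcfg-scsidevs -m` against a list you expect. The function name and datastore names are illustrative, not Nutanix tools.

```shell
# Illustrative check: read mounted datastore names on stdin, one per line
# (e.g. from `esxcfg-scsidevs -m | awk '{print $5}'`), and report any
# expected datastore name that is missing from the list.
check_datastores() {
  local mounted missing=0 ds
  mounted=$(cat)
  for ds in "$@"; do
    # -x matches the whole line; -F treats the name as a fixed string.
    if ! printf '%s\n' "$mounted" | grep -qxF -- "$ds"; then
      echo "missing datastore: $ds" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example (datastore names are placeholders):
# esxcfg-scsidevs -m | awk '{print $5}' | check_datastores NTNX-ds1 NTNX-ds2
```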

Restarting an ESXi Node using CLI

Before you begin

Shut down the guest VMs that are running on the node, including vCenter, or move them to other nodes in the Nutanix cluster.

About this task

Procedure

  1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the web client.
  2. Right-click the host and select Maintenance mode > Enter Maintenance Mode .
    In the Confirm Maintenance Mode dialog box, click OK .
    The host is placed in maintenance mode, which prevents VMs from running on the host.
    Note: The host does not enter maintenance mode until the CVM is shut down.
  3. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command. This ensures that the cluster is aware that the CVM is unavailable.
  4. Right-click the node and select Power > Reboot .
    Wait until vCenter shows that the host is not responding and then responding again, which takes several minutes.

    If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

  5. Right-click the ESXi host and select Exit Maintenance Mode .
  6. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  7. Log on to the CVM with SSH.
  8. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up .

  9. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.

Rebooting an ESXi Node in a Nutanix Cluster

About this task

The Request Reboot operation in the Prism web console gracefully restarts the selected nodes one after the other.

Perform the following procedure to restart the nodes in the cluster.

Procedure

  1. Click the gear icon in the main menu and then select Reboot in the Settings page.
  2. In the Request Reboot window, select the nodes you want to restart, and click Reboot .
    Figure. Request Reboot of ESXi Node Click to enlarge

    A progress bar is displayed that indicates the progress of the restart of each node.

Changing an ESXi Node Name

After running a bare-metal Foundation, you can change the host (node) name from the command line or by using the vSphere web client.

To change the hostname, see the VMware Documentation.

Changing an ESXi Node Password

Although it is not required for the root user to have the same password on all hosts (nodes), doing so makes cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

To change the host password, see the VMware Documentation.

Changing the CVM Memory Configuration (ESXi)

About this task

You can increase the memory reserved for each CVM in your Nutanix cluster by using the 1-click CVM Memory Upgrade option available from the Prism Element web console.

Increase memory size depending on the workload type or to enable certain AOS features. For more information about CVM memory sizing recommendations and instructions about how to increase the CVM memory, see Increasing the Controller VM Memory Size in the Prism Web Console Guide .

VM Management

For the list of supported VMs, see Compatibility and Interoperability Matrix.

VM Management Using Prism Central

You can create and manage a VM on your ESXi from Prism Central. For more information, see Creating a VM through Prism Central (ESXi) and Managing a VM (ESXi).

Creating a VM through Prism Central (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

To create a VM, do the following:

Procedure

  1. Go to the List tab of the VMs dashboard (see VM Summary View in the Prism Central Guide ) and click the Create VM button.
    The Create VM wizard appears.
  2. In the Configuration step, do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Cluster : Select the target cluster from the pull-down list on which you intend to create the VM.
    4. Number of VMs : Enter the number of VMs you intend to create. The created VM names are suffixed sequentially.
    5. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    6. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    7. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
    Figure. Create VM Window (Configuration) Click to enlarge VM update window display

  3. In the Resources step, do the following.

    Disks: To attach a disk to the VM, click the Attach Disk button. The Add Disks dialog box appears. Do the following in the indicated fields:

    Figure. Add Disk Dialog Box Click to enlarge

    1. Type : Select the type of storage device, Disk or CD-ROM , from the pull-down list.
    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from NDSF file to copy any file from the cluster that can be used as an image onto the disk.

      • [ CD-ROM only] Select Empty CD-ROM to create a blank CD-ROM device. A CD-ROM device is needed when you intend to provide a system image from CD-ROM.
      • [Disk only] Select Allocate on Storage Container to allocate space without specifying an image. Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
      • Select Clone from Image to copy an image that you have imported by using image service feature onto the disk.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE , SCSI , PCI , or SATA .
    4. Path : Enter the path to the desired system image.
    5. Clone from Image : Select the image that you have created by using the image service feature.

      This field appears only when Clone from Image is selected. It specifies the image to copy.

      Note: If the image you created does not appear in the list, see KB-4892.
    6. Storage Container : Select the storage container to use from the pull-down list.

      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.

    7. Capacity : Enter the disk size in GiB.
    8. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    9. Repeat this step to attach additional devices to the VM.
    Figure. Create VM Window (Resources) Click to enlarge Create VM Window (Resources)

  4. In the Resources step, do the following.

    Networks: To create a network interface for the VM, click the Attach to Subnet button. The Attach to Subnet dialog box appears.

    Do the following in the indicated fields:

    Figure. Attach to Subnet Dialog Box Click to enlarge Create NIC window display

    1. Subnet : Select the target virtual LAN from the pull-down list.

      The list includes all defined networks (see Configuring Network Connections in the Prism Central Guide ).

    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see vCenter Server Integration in the Prism Central Guide .

    3. Network Connection State : Select the state for the network that you want it to operate in after VM creation. The options are Connected or Disconnected .
    4. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    5. Repeat this step to create more network interfaces for the VM.
  5. In the Management step, do the following.
    1. Categories : Search for the category to be assigned to the VM. The policies associated with the category value are assigned to the VM.
    2. Guest OS : Type and select the guest operating system.

      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. For information about the list of supported operating systems, see vCenter Server Integration in the Prism Central Guide .

      Figure. Create VM Window (Management) Click to enlarge VM update resources display - VM Flash Mode

  6. In the Review step, when all the field entries are correct, click the Create VM button to create the VM and close the Create VM dialog box.

    The new VM appears in the VMs entity page list.

Managing a VM (ESXi)

You can manage virtual machines (VMs) in an ESXi cluster through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

After creating a VM (see Creating a VM through Prism Central (ESXi)), you can use Prism Central to update the VM configuration, delete the VM, clone the VM, launch a console window, power on (or off) the VM, enable flash mode for a VM, assign the VM to a protection policy, create VM recovery point, add the VM to a recovery plan, run a playbook, manage categories, install and manage Nutanix Guest Tools (NGT), manage the VM ownership, or configure QoS settings.

You can perform these tasks by using any of the following methods:

  • Select the target VM in the List tab of the VMs dashboard (see VM Summary View in the Prism Central Guide ) and choose the required action from the Actions menu.
  • Right-click on the target VM in the List tab of the VMs dashboard and select the required action from the drop-down list.
  • Go to the details page of a selected VM (see VM Details View in the Prism Central Guide ) and select the desired action.
Note: The available actions appear in bold; other actions are unavailable. The available actions depend on the current state of the VM and your permissions.

Procedure

  • To modify the VM configuration, select Update .

    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Make the desired changes and then click the Save button in the Review step.

    Figure. Update VM Window (Resources) Click to enlarge VM update resources display - VM Flash Mode

  • Disks: You can add new disks to the VM using the Attach Disk option. You can also modify the existing disk attached to the VM using the controls under the actions column. See Creating a VM through Prism Central (ESXi) before you create a new disk for a VM. You can enable or disable the flash mode settings for the VM. To enable flash mode on the VM, click the Enable Flash Mode check box. After you enable this feature on the VM, the status is updated in the VM table view.
  • Networks: You can attach a new network to the VM using the Attach to Subnet option. You can also modify the existing subnet attached to the VM. See Creating a VM through Prism Central (ESXi) before you modify the NIC network or create a new NIC for the VM.
  • To delete the VM, select Delete . A window prompt appears; click the OK button to delete the VM.
  • To clone the VM, select Clone .

    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned VM inherits most of the configuration (except the name) of the source VM. Enter a name for the clone and then click the Save button to create the clone. You can optionally override some of the configurations before clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority, NICs, or the guest customization.

    Note:
    • You can clone up to 250 VMs at a time.
    • You cannot override the secure boot setting while cloning a VM, unless the source VM already had secure boot setting enabled.
    Figure. Clone VM Window Click to enlarge clone VM window display

  • To launch a console window, select Launch Console .

    This opens a Virtual Network Computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power On Actions (or Power Off Actions ) action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.

    Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser is Chrome. (Firefox typically works best.)
    Figure. Console Window (VNC) Click to enlarge VNC console window display

  • To power on (or off) the VM, select Power on (or Power off ).
  • To disable (or enable) efficiency measurement for the VM, select Disable Efficiency Measurement (or Enable Efficiency Measurement ).
  • To disable (or enable) anomaly detection for the VM, select Disable Anomaly Detection (or Enable Anomaly Detection ).
  • To assign the VM to a protection policy, select Protect . This opens a page to specify the protection policy to which this VM should be assigned. To remove the VM from a protection policy, select Unprotect .
    Note: You can create a protection policy for a VM or set of VMs that belong to one or more categories by enabling Leap and configuring the Availability Zone.
  • To migrate the VM to another host, select Migrate .

    This displays the Migrate VM dialog box. Select the target host from the pull-down list (or select the System will automatically select a host option to let the system choose the host) and then click the Migrate button to start the migration.

    Figure. Migrate VM Window Click to enlarge migrate VM window display

    Note: Nutanix recommends live migrating VMs while they are under light load. If they are migrated while heavily utilized, the migration may fail because of limited bandwidth.
  • To add this VM to a recovery plan you created previously, select Add to Recovery Plan . For more information, see Adding Guest VMs Individually to a Recovery Plan in the Leap Administration Guide .
  • To create VM recovery point, select Create Recovery Point .

    This displays the Create VM Recovery Point dialog box. Enter a name for the recovery point. You can choose to create an App Consistent recovery point by selecting the check box. The VM can later be restored or replicated, either locally or remotely, to the state of a chosen recovery point.

    Figure. Create VM Recovery Point Window Click to enlarge Create VM Recovery Point Window display

  • To run a playbook you created previously, select Run Playbook . For more information, see Running a Playbook (Manual Trigger) in the Prism Central Guide .
  • To assign the VM a category value, select Manage Categories .

    This displays the Manage VM Categories page. For more information, see Assigning a Category in the Prism Central Guide .

  • To install Nutanix Guest Tools (NGT), select Install NGT . For more information, see Installing NGT on Multiple VMs in the Prism Central Guide .
  • To enable (or disable) NGT, select Manage NGT Applications . For more information, see Managing NGT Applications in the Prism Central Guide .
    The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
    Note: If you clone a VM, NGT is not enabled on the cloned VM by default. You need to enable and mount NGT again on the cloned VM. To enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide.

    If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.

    ncli> ngt mount vm-id=virtual_machine_id

    For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

    ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
    Note:
    • Self-service restore feature is not enabled by default on a VM. You need to manually enable the self-service restore feature.
    • If you have created the NGT ISO CD-ROMs on AOS 4.6 or earlier releases, the NGT functionality will not work even if you upgrade your cluster because REST APIs have been disabled. You need to unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to a later AOS version.
  • To upgrade NGT, select Upgrade NGT . For more information, see Upgrading NGT in the Prism Central Guide .
  • To establish VM host affinity, select Configure VM Host Affinity .

    A window appears with the available hosts. Select (click the icon for) one or more of the hosts and then click the Save button. This creates an affinity between the VM and the selected hosts. If possible, it is recommended that you create an affinity to multiple hosts (at least two) to protect against downtime due to a node failure. For more information about VM affinity policies, see Affinity Policies Defined in Prism Central in the Prism Central Guide .

    Figure. Set VM Host Affinity Window Click to enlarge

  • To add a VM to the catalog, select Add to Catalog . This displays the Add VM to Catalog page. For more information, see Adding a Catalog Item in the Prism Central Guide .
  • To specify a project and user who own this VM, select Manage Ownership .
    In the Manage VM Ownership window, do the following in the indicated fields:
    1. Project : Select the target project from the pull-down list.
    2. User : Enter a user name. A list of matches appears as you enter a string; select the user name from the list when it appears.
    3. Click the Save button.
    Figure. VM Ownership Window Click to enlarge

  • To configure quality of service (QoS) settings, select Set QoS Attributes . For more information, see Setting QoS for an Individual VM in the Prism Central Guide .

VM Management using Prism Element

You can create and manage a VM on your ESXi from Prism Element. For more information, see Creating a VM (ESXi) and Managing a VM (ESXi).

Creating a VM (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through the web console.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter Server.

About this task

When creating a VM, you can configure all of its components, such as number of vCPUs and memory, but you cannot attach a volume group to the VM.

To create a VM, do the following:

Procedure

  1. In the VM dashboard (see VM Dashboard), click the Create VM button.
    The Create VM dialog box appears.
  2. Do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Guest OS : Type and select the guest operating system.
      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. For information about the list of supported operating systems, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .
    4. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    5. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    6. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
  3. To attach a disk to the VM, click the Add New Disk button.
    The Add Disks dialog box appears.
    Figure. Add Disk Dialog Box Click to enlarge configure a disk screen

    Do the following in the indicated fields:
    1. Type : Select the type of storage device, DISK or CD-ROM , from the pull-down list.
      The following fields and options vary depending on whether you choose DISK or CD-ROM .
    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
      • Select Allocate on Storage Container to allocate space without specifying an image. (This option appears only when DISK is selected in the previous field.) Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE or SCSI .
    4. ADSF Path : Enter the path to the desired system image.
      This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the path name as /storage_container_name/vmdk_name.vmdk . For example, to clone an image from myvm-flat.vmdk in a storage container named crt1 , enter /crt1/myvm-flat.vmdk . When you type the storage container name ( /storage_container_name/ ), a list of the VMDK files in that storage container appears (assuming one or more VMDK files were previously copied to that storage container).
      Note: Make sure you are copying from a flat file.
    5. Storage Container : Select the storage container to use from the pull-down list.
      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.
    6. Size : Enter the disk size in GiBs.
    7. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    8. Repeat this step to attach more devices to the VM.
  4. To create a network interface for the VM, click the Add New NIC button.
    The Create NIC dialog box appears. Do the following in the indicated fields:
    1. VLAN Name : Select the target virtual LAN from the pull-down list.
      The list includes all defined networks. For more information, see Network Configuration for VM Interfaces in the Prism Web Console Guide .
    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .

    3. Network UUID : This is a read-only field that displays the network UUID.
    4. Network Address/Prefix : This is a read-only field that displays the network IP address and prefix.
    5. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    6. Repeat this step to create more network interfaces for the VM.
  5. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog box.
    The new VM appears in the VM table view. For more information, see VM Table View in the Prism Web Console Guide .

Managing a VM (ESXi)

You can use the web console to manage virtual machines (VMs) in the ESXi clusters.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter Server.

About this task

After creating a VM, you can use the web console to manage guest tools, power operations, suspend, launch a VM console window, update the VM configuration, clone the VM, or delete the VM. To accomplish one or more of these tasks, do the following:

Note: Your available options depend on the VM status, type, and permissions. Unavailable options are grayed out.

Procedure

  1. In the VM dashboard (see VM Dashboard), click the Table view.
  2. Select the target VM in the table (top section of screen).
    The summary line (middle of screen) displays the VM name with a set of relevant action links on the right. You can also right-click on a VM to select a relevant action.

    The possible actions are Manage Guest Tools , Launch Console , Power on (or Power off actions ), Suspend (or Resume ), Clone , Update , and Delete . The following steps describe how to perform each action.

    Figure. VM Action Links Click to enlarge

  3. To manage Nutanix Guest Tools (NGT), click Manage Guest Tools .
    You can also enable NGT applications (self-service restore, Volume Snapshot Service, and application-consistent snapshots) as part of managing guest tools.
    1. Select the Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
    2. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
      Ensure that the VM has at least one empty IDE CD-ROM or SATA slot to attach the ISO.

    3. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
      The self-service restore feature is enabled on the VM. The guest VM administrator can restore the desired file or files from the VM. For more information about the self-service restore feature, see Self-Service Restore in the Data Protection and Recovery with Prism Element guide.

    4. After you select the Enable Nutanix Guest Tools check box, the VSS and application-consistent snapshot feature is enabled by default.
      After this feature is enabled, the Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent is used to take application-consistent snapshots for all the VMs that support VSS. This mechanism takes application-consistent snapshots without any VM stuns (temporarily unresponsive VMs) and also enables third-party backup providers like Commvault and Rubrik to take application-consistent snapshots on the Nutanix platform in a hypervisor-agnostic manner. For more information, see Conditions for Application-consistent Snapshots in the Data Protection and Recovery with Prism Element guide.

    5. To mount VMware guest tools, select the Mount VMware Guest Tools check box.
      The VMware guest tools are mounted on the VM.
      Note: You can mount both VMware guest tools and Nutanix Guest Tools at the same time on a particular VM provided the VM has sufficient empty CD-ROM slots.
    6. Click Submit .
      The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
      Note:
      • If you clone a VM, NGT is not enabled on the cloned VM by default. If the cloned VM is powered off, enable NGT from the UI and start the VM. If the cloned VM is powered on, enable NGT from the UI and restart the Nutanix guest agent service.
      • If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide .
      If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.
      ncli> ngt mount vm-id=virtual_machine_id

      For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

      ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
      Caution: In AOS 4.6, for powered-on Linux VMs on AHV, ensure that the NGT ISO is ejected or unmounted within the guest VM before disabling NGT by using the web console. This issue is specific to AOS 4.6 and does not occur in AOS 4.6.x or later releases.
      Note: If you created the NGT ISO CD-ROMs prior to AOS 4.6, the NGT functionality does not work after you upgrade your cluster because the REST APIs have been disabled. You must unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to version 4.6 or later.
  4. To launch a VM console window, click the Launch Console action link.
    This opens a virtual network computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power Off Actions action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.
    Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser is Google Chrome. (Firefox typically works best.)
  5. To start (or shut down) the VM, click the Power on (or Power off ) action link.

    Power on begins immediately. If you want to shut down the VM, you are prompted to select one of the following options:

    • Power Off . Hypervisor performs a hard shut down action on the VM.
    • Reset . Hypervisor performs an ACPI reset action through the BIOS on the VM.
    • Guest Shutdown . Operating system of the VM performs a graceful shutdown.
    • Guest Reboot . Operating system of the VM performs a graceful restart.
    Note: The Guest Shutdown and Guest Reboot options are available only when VMware guest tools are installed.
  6. To pause (or resume) the VM, click the Suspend (or Resume ) action link. This option is available only when the VM is powered on.
  7. To clone the VM, click the Clone action link.

    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned VM inherits most of the configuration (except the name) of the source VM. Enter a name for the clone and then click the Save button to create the clone. You can optionally override some of the configuration before clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority, NICs, or the guest customization.

    Note:
    • You can clone up to 250 VMs at a time.
    • In the Clone window, you cannot update the disks.
    Figure. Clone VM Window Click to enlarge clone VM window display

  8. To modify the VM configuration, click the Update action link.
    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Modify the configuration as needed (see Creating a VM (ESXi)), and in addition you can enable Flash Mode for the VM.
    Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with that vDisk is not reclaimed unless you also delete the VM snapshots.
    1. Click the Enable Flash Mode check box.
      • After you enable this feature on the VM, the status is updated in the VM table view. To view the status of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table view.
      • You can disable the Flash Mode feature for individual virtual disks. To update the Flash Mode for individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash Mode check box.
      Figure. Update VM Resources Click to enlarge VM update resources display - VM Flash Mode

      Figure. Update VM Resources - VM Disk Flash Mode Click to enlarge VM update resources display - VM Disk Flash Mode

  9. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the VM.
    The deleted VM disappears from the list of VMs in the table. You can also delete a VM that is already powered on.

vDisk Provisioning Types in VMware with Nutanix Storage

You can specify the vDisk provisioning policy when you perform certain VM management operations like creating a VM, migrating a VM, or cloning a VM.

Traditionally, a vDisk is provisioned either with its full space allocated up front (thick disk) or with space allocated on an as-needed basis (thin disk). Thick disks provision the space using either the lazy zeroed or the eager zeroed disk formatting method.

For traditional storage systems, thick eager zeroed disks provide the best performance of the three disk provisioning types, thick lazy zeroed disks provide the second best, and thin disks provide the least. However, this does not apply to the modern storage systems found in Nutanix platforms.

Nutanix uses a thick Virtual Machine Disk (VMDK) to reserve the storage space using the vStorage APIs for Array Integration (VAAI) reserve space API.

On a Nutanix system, there is no performance difference between thin and thick disks. This means that a thick eager zeroed virtual disk has no performance benefits over a thin virtual disk.

Nutanix uses thick disk (VMDK) in its configuration, and the resulting disk is the same whether the disk is thin or thick (despite the configuration differences).

Note: A thick-disk reservation only reserves disk space; on Nutanix, there is no performance reason to provision a thick disk. Even when a thick disk is provisioned on a Nutanix container, no disk space is allocated to write zeroes, so provisioning a thick disk is not required.

When using the up-to-date VAAI for cloning operations, the following behavior is expected:

  • When cloning any type of disk format (thin, thick lazy zeroed or thick eager zeroed) to the same Nutanix datastore, the resulting VM will have a thin disk regardless of the explicit choice of a disk format in the vSphere client.

    Nutanix uses a thin provisioned disk because a thin disk performs the same as a thick disk on the system while avoiding wasted disk space. In the cloning scenario, Nutanix does not propagate the reservation property from the source to the destination when creating a fast clone on the same datastore. This prevents space wastage due to unnecessary reservations.

  • When cloning a VM to a different datastore, the destination VM will have the disk format that you specified in the vSphere client.
    Important: A thick disk is shown as thick in ESXi, while within NDFS (Nutanix Distributed File System) it is stored as a thin disk with an extra reservation field in its configuration.

Nutanix recommends using thin disks over any other disk type.

VM Migration

You can migrate a VM to an ESXi host in a Nutanix cluster. Migration is typically done in the following cases:

  • Migrate VMs from an existing storage platform to Nutanix.
  • Keep VMs running during a disruptive upgrade or other downtime of a Nutanix cluster.

In migrating VMs between Nutanix clusters running vSphere, the source host and NFS datastore are the ones presently running the VM. The target host and NFS datastore are the ones where the VM runs after migration. The target ESXi host and datastore must be part of a Nutanix cluster.

To accomplish this migration, you have to mount the NFS datastores from the target on the source. After the migration is complete, you must unmount the datastores and block access.

Migrating a VM to Another Nutanix Cluster

Before you begin

Before migrating a VM to another Nutanix cluster running vSphere, verify that you have provisioned the target Nutanix environment.

About this task

The shared storage feature in vSphere allows you to move both compute and storage resources from the source legacy environment to the target Nutanix environment at the same time without disruption. This feature also removes the need to configure any file system allowlists on Nutanix.

You can use the shared storage feature through the migration wizard in the web client.

Procedure

  1. Log on to vCenter with the web client.
  2. Select the VM that you want to migrate.
  3. Right-click the VM and select Migrate .
  4. Under Select Migration Type , select Change both compute resource and storage .
  5. Select Compute Resource and then Storage and click Next .
    If necessary, change the disk format to the one that you want to use during the migration process.
  6. Select a destination network for all VM network adapters and click Next .
  7. Click Finish .
    Wait for the migration process to complete. The process performs the storage vMotion first, and then creates a temporary storage network over vmk0 for the period where the disk files are on Nutanix.

Cloning a VM

About this task

To clone a VM, you must enable the Nutanix VAAI plug-in. For steps to enable and verify the Nutanix VAAI plug-in, see KB-1868 .

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the VM and select Clone .
  3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.
  4. Select the datastore that contains the source VM and click Next .
    Note: If you choose a datastore other than the one that contains the source VM, the clone operation uses the VMware implementation and not the Nutanix VAAI plug-in.
  5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
  6. Click Finish .

vStorage APIs for Array Integration

To improve the vSphere cloning process, Nutanix provides a vStorage API for array integration (VAAI) plug-in. This plug-in is installed by default during the Nutanix factory process.

Without the Nutanix VAAI plug-in, the process of creating a full clone takes a significant amount of time because all the data that comprises a VM is duplicated. This duplication also results in an increase in storage consumption.

The Nutanix VAAI plug-in efficiently makes full clones without reserving space for the clone. Read requests for blocks shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the clone was created.

vSphere ESXi Hardening Settings

Configure the following settings in /etc/ssh/sshd_config to harden an ESXi hypervisor in a Nutanix cluster.
Caution: When hardening ESXi security, some settings may impact operations of a Nutanix cluster.
HostbasedAuthentication no
PermitTunnel no
AcceptEnv
GatewayPorts no
Compression no
StrictModes yes
KerberosAuthentication no
GSSAPIAuthentication no
PermitUserEnvironment no
PermitEmptyPasswords no
PermitRootLogin no

Match Address x.x.x.11,x.x.x.12,x.x.x.13,x.x.x.14,192.168.5.0/24
PermitRootLogin yes
PasswordAuthentication yes

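After applying these settings and restarting SSH on the host, you can sanity-check a copy of the file. The sketch below is illustrative only: it writes a stand-in for a copy of /etc/ssh/sshd_config and greps for a few of the hardening values; on a real system you would point the checks at the file actually copied off the host.

```shell
# Illustrative check of SSH hardening settings in a local copy of sshd_config.
# The file written here is a stand-in for /etc/ssh/sshd_config copied off the host.
cat > /tmp/sshd_config_copy <<'EOF'
HostbasedAuthentication no
PermitTunnel no
GatewayPorts no
StrictModes yes
PermitEmptyPasswords no
PermitRootLogin no
EOF

missing=0
for setting in "PermitRootLogin no" "PermitEmptyPasswords no" "StrictModes yes"; do
  # grep -x requires the whole line to match, so commented-out settings
  # such as "#PermitRootLogin no" do not satisfy the check
  grep -qx "$setting" /tmp/sshd_config_copy || { echo "MISSING: $setting"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "hardening settings present"
```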
ESXi Host Upgrade

You can upgrade your host either automatically through Prism Element (1-click upgrade) or manually. For more information about automatic and manual upgrades, see ESXi Upgrade and ESXi Host Manual Upgrade respectively.

This section describes the Nutanix hypervisor support policy for vSphere and Hyper-V hypervisor releases. Nutanix provides hypervisor compatibility and support statements that you should review before planning an upgrade to a new release or applying a hypervisor update or patch:
  • Compatibility and Interoperability Matrix
  • Hypervisor Support Policy- See Support Policies and FAQs for the supported Acropolis hypervisors.

Also review the Nutanix Field Advisory page for critical issues that Nutanix may have uncovered with the hypervisor release being considered.

Note: You may need to log in to the Support Portal to view the links above.

The Acropolis Upgrade Guide provides steps that can be used to upgrade the hypervisor hosts. However, as noted in the documentation, the customer is responsible for reviewing the guidance from VMware or Microsoft, respectively, on other component compatibility and upgrade order (e.g. vCenter), which needs to be planned first.

ESXi Upgrade

These topics describe how to upgrade your ESXi hypervisor hosts through the Prism Element web console Upgrade Software feature (also known as 1-click upgrade), which AOS supports for applying ESXi hypervisor upgrades. To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.

You can view the available upgrade options, start an upgrade, and monitor upgrade progress through the web console. In the main menu, click the gear icon, and then select Upgrade Software in the Settings panel. You can see the current status of your software versions and start an upgrade.

VMware ESXi Hypervisor Upgrade Recommendations and Limitations

  • To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.
  • Always consult the VMware web site for any vCenter and hypervisor installation dependencies. For example, a hypervisor version might require that you upgrade vCenter first.
  • If you have not enabled DRS in your environment and want to upgrade the ESXi host, you need to upgrade the ESXi host manually. For more information about upgrading ESXi hosts manually, see ESXi Host Manual Upgrade in the vSphere Administration Guide .
  • Disable Admission Control before upgrading ESXi on AOS; if it is enabled, the upgrade process fails. You can re-enable it for normal cluster operation after the upgrade.
Nutanix Support for ESXi Upgrades
Nutanix qualifies specific VMware ESXi hypervisor updates and provides a related JSON metadata upgrade file on the Nutanix Support Portal for one-click upgrade through the Prism web console Software Upgrade feature.

Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files. Obtain ESXi offline bundles (not ISOs) from the VMware web site.

Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ. For updates that are made available by VMware that do not have a Nutanix-provided JSON metadata upgrade file, obtain the offline bundle and md5sum checksum available from VMware, then use the web console Software Upgrade feature to upgrade ESXi.

Mixing nodes with different processor (CPU) types in the same cluster
If you are mixing nodes with different processor (CPU) types in the same cluster, you must enable enhanced vMotion compatibility (EVC) to allow vMotion (live migration) of VMs during the hypervisor upgrade. For example, if your cluster includes a node with a Haswell CPU and other nodes with Broadwell CPUs, open vCenter, enable the VMware enhanced vMotion compatibility (EVC) setting, and specifically enable EVC for Intel hosts.
CPU Level for Enhanced vMotion Compatibility (EVC)

AOS Controller VMs and Prism Central VMs require a minimum CPU micro-architecture version of Intel Sandy Bridge. For AOS clusters with ESXi hosts, or when deploying Prism Central VMs on any ESXi cluster: if you have set the vSphere cluster enhanced vMotion compatibility (EVC) level, the minimum level must be L4 - Sandy Bridge .

vCenter Requirements and Limitations
Note: You might be unable to log in to vCenter Server because the /storage/seat partition for vCenter Server version 7.0 and later might become full due to a large number of SSH-related events. See KB 10830 at the Nutanix Support portal for symptoms and solutions to this issue.
  • If your cluster is running the ESXi hypervisor and is also managed by VMware vCenter, you must provide vCenter administrator credentials and vCenter IP address as an extra step before upgrading. Ensure that ports 80 / 443 are open between your cluster and your vCenter instance to successfully upgrade.
  • Cluster just registered in vCenter. Do not perform any cluster upgrades (AOS, Controller VM memory, hypervisor, and so on) if you have just registered your cluster in vCenter. Wait at least one hour before performing upgrades to allow cluster settings to be updated. Also, do not register the cluster in vCenter and perform any upgrades at the same time.
  • Cluster mapped to two vCenters. Upgrading software through the web console (1-click upgrade) does not support configurations where a cluster is mapped to two vCenters or where it includes host-affinity must rules for VMs.

    Ensure that enough cluster resources are available for live migration to occur and to allow hosts to enter maintenance mode.

Mixing Different Hypervisor Versions
For ESXi hosts, mixing different hypervisor versions in the same cluster is temporarily allowed for deferring a hypervisor upgrade as part of an add-node/expand cluster operation, reimaging a node as part of a break-fix procedure, planned migrations, and similar temporary operations.

Upgrading ESXi Hosts by Uploading Binary and Metadata Files

About this task

Do the following steps to download Nutanix-qualified ESXi metadata .JSON files and upgrade the ESXi hosts through Upgrade Software in the Prism Element web console. Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files.

Procedure

  1. Before performing any upgrade procedure, make sure you are running the latest version of the Nutanix Cluster Check (NCC) health checks and upgrade NCC if necessary.
  2. Run NCC as described in Run NCC Checks .
  3. Log on to the Nutanix support portal and navigate to the Hypervisors Support page from the Downloads menu, then download the Nutanix-qualified ESXi metadata .JSON files to your local machine or media.
    1. The default view is All . From the drop-down menu, select Nutanix - VMware ESXi , which shows all available JSON versions.
    2. From the release drop-down menu, select the available ESXi version. For example, 7.0.0 u2a .
    3. Click Download to download the Nutanix-qualified ESXi metadata .JSON file.
    Figure. Downloads Page for ESXi Metadata JSON Click to enlarge This picture shows the portal page for ESXi metadata JSON downloads
  4. Log on to the Prism Element web console for any node in the cluster.
  5. Click the gear icon in the main menu, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  6. Click the upload the Hypervisor binary link.
  7. Click Choose File for the metadata JSON (obtained from Nutanix) and binary files (obtained from VMware), respectively, browse to the file locations, select the file, and click Upload Now .
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .
  10. On the LCM page, click Inventory > Perform Inventory to enable LCM to check, update and display the inventory information.
    For more information, see Performing Inventory With LCM in the Acropolis Upgrade Guide .
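As a local sanity check before the upload in this procedure, you can confirm that the downloaded metadata file is at least well-formed JSON. This is only a sketch: the file name and contents below are placeholders, not the real Nutanix metadata format.

```shell
# Write a placeholder file standing in for the real downloaded metadata JSON,
# then confirm it parses before uploading it through Upgrade Software.
METADATA="/tmp/esxi-metadata-example.json"
printf '{"version": "7.0.0 u2a"}\n' > "$METADATA"

if python3 -m json.tool "$METADATA" > /dev/null 2>&1; then
  echo "metadata JSON parses OK"
else
  echo "metadata JSON is malformed" >&2
fi
```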

Upgrading ESXi by Uploading An Offline Bundle File and Checksum

About this task

  • Do the following steps to download a non-Nutanix-qualified (patch) ESXi upgrade offline bundle from VMware, then upgrade ESXi through Upgrade Software in the Prism Element web console.
  • Typically you perform this procedure to patch your version of ESXi and Nutanix has not yet officially qualified that new patch version. Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases.

Procedure

  1. From the VMware web site, download the offline bundle (for example, update-from-esxi6.0-6.0_update02.zip ) and copy the associated MD5 checksum. Ensure that this checksum is obtained from the VMware web site, not manually generated from the bundle by you.
  2. Save the files to your local machine or media, such as a USB drive or other portable media.
  3. Log on to the Prism Element web console for any node in the cluster.
  4. Click the gear icon in the main menu of the Prism Element web console, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  5. Click the upload the Hypervisor binary link.
  6. Click enter md5 checksum and copy the MD5 checksum into the Hypervisor MD5 Checksum field.
  7. Scroll down and click Choose File for the binary file, browse to the offline bundle file location, select the file, and click Upload Now .
    Figure. ESXi 1-Click Upgrade, Unqualified Bundle Click to enlarge ESXi 1-Click Upgrade dialog box
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .
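Before uploading the bundle, it is worth confirming locally that its MD5 matches the checksum published by VMware. A minimal sketch; the file contents and checksum below are placeholders for illustration (for a real download you would paste VMware's published value):

```shell
# Placeholder file standing in for the real ESXi offline bundle download.
BUNDLE="/tmp/offline-bundle-example.zip"
printf 'hello\n' > "$BUNDLE"

# In practice, copy this value from the VMware download page.
PUBLISHED_MD5="b1946ac92492d2347c6235b4d2611184"

ACTUAL_MD5=$(md5sum "$BUNDLE" | awk '{print $1}')
if [ "$ACTUAL_MD5" = "$PUBLISHED_MD5" ]; then
  echo "checksum OK: safe to upload"
else
  echo "checksum MISMATCH: re-download the bundle" >&2
fi
```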

ESXi Host Manual Upgrade

If you have not enabled DRS in your environment and want to upgrade the ESXi host, you must upgrade the ESXi host manually. This topic describes all the requirements that you must meet before manually upgrading the ESXi host.

Tip: If you have enabled DRS and want to upgrade the ESXi host, use the one-click upgrade procedure from the Prism web console. For more information on the one-click upgrade procedure, see ESXi Upgrade.

Nutanix supports the ability to patch upgrade the ESXi hosts with the versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ.

Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading ESXi does not require cluster downtime.

  • If you want to avoid cluster interruption, complete the upgrade of one host and verify that its CVM is running before upgrading any other host. If two hosts in a cluster are down at the same time, data becomes unavailable.
  • If you want to minimize the duration of the upgrade activities and cluster downtime is acceptable, you can stop the cluster and upgrade all hosts at the same time.
Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate the failure of a single node or drive. Nutanix clusters with a configured option of redundancy factor 3 allow the Nutanix cluster to withstand the failure of two nodes or drives in different blocks.
  • Never shut down or restart multiple Controller VMs or hosts simultaneously.
  • Always run the cluster status command to verify that all Controller VMs are up before performing a Controller VM or host shutdown or restart.

ESXi Host Upgrade Process

Perform the following process to upgrade ESXi hosts in your environment.

Prerequisites and Requirements

Note: Use the following process only if you do not have DRS enabled in your Nutanix cluster.
  • If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the cluster with the cluster stop command.
    Caution: There is downtime if you upgrade all the nodes in the Nutanix cluster at once. If you do not want downtime in your environment, you must ensure that only one CVM is shut down at a time in a redundancy factor 2 configuration.
  • If you are upgrading the nodes while keeping the cluster running, ensure that all nodes are up by logging on to a CVM and running the cluster status command. If any nodes are not running, start them before proceeding with the upgrade. Shut down all guest VMs on the node or migrate them to other nodes in the Nutanix cluster.
  • Disable email alerts in the web console under Email Alert Services or with the nCLI command.
    ncli> alerts update-alert-config enable=false
  • Run the complete NCC health check by using the health check command.
    nutanix@cvm$ ncc health_checks run_all
  • Run the cluster status command to verify that all Controller VMs are up and running, before performing a Controller VM or host shutdown or restart.
    nutanix@cvm$ cluster status
  • Place the host in maintenance mode by using the web client.
  • Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command; this ensures that the cluster is aware that the CVM is unavailable.
  • Start the upgrade by following the vSphere Upgrade Guide or by using vCenter Update Manager (VUM).
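The preparation steps above can be sketched as a single script. This is a hedged sketch only: the command names come from this guide, but in practice you run them interactively and verify each result before continuing. The DRY_RUN guard is an illustrative convention invented here (not a Nutanix tool) that prints the commands instead of executing them.

```shell
#!/bin/sh
# Sketch of the manual pre-upgrade sequence from this guide. By default
# (DRY_RUN=1) the commands are printed, not executed; set DRY_RUN= (empty)
# on a CVM to run them for real. Review each command's output before the
# next step.
DRY_RUN="${DRY_RUN-1}"
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

run ncli alerts update-alert-config enable=false   # disable email alerts
run ncc health_checks run_all                      # full NCC health check
run cluster status                                 # confirm all CVMs are up
# ...place this host in maintenance mode from the vSphere client, then:
run cvm_shutdown -P now                            # shut down this CVM only
```

Running with the default dry-run guard prints the sequence so it can be reviewed before touching a cluster.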

Upgrading ESXi Host

  • See the VMware Documentation for information about the standard ESXi upgrade procedures. If any problem occurs with the upgrade process, an alert is raised in the Alert dashboard.

Post Upgrade

Run the complete NCC health check by using the following command.

nutanix@cvm$ ncc health_checks run_all

vSphere Cluster Settings Checklist

Review the following checklist of settings that you must configure to successfully deploy a vSphere virtual environment running Nutanix Enterprise Cloud.

vSphere Availability Settings

  • Enable host monitoring.
  • Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.

    For more information about the percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

  • Set the VM Restart Priority of all CVMs to Disabled .
  • Set the Host Isolation Response of the cluster to Power Off & Restart VMs .
  • Set the VM Monitoring for all CVMs to Disabled .
  • Enable datastore heartbeats by clicking Use datastores only from the specified list and choosing the Nutanix NFS datastore.

    If the cluster has only one datastore, click Advanced Options tab and add das.ignoreInsufficientHbDatastore with Value of true .

vSphere DRS Settings

  • Set the Automation Level on all CVMs to Disabled .
  • Select an automation level that accepts level 3 recommendations.
  • Leave power management disabled.

Other Cluster Settings

  • Configure advertised capacity for the Nutanix storage container (total usable capacity minus the capacity of one node for replication factor 2 or two nodes for replication factor 3).
  • Store VM swapfiles in the same directory as the VM.
  • Enable enhanced vMotion compatibility (EVC) in the cluster. For more information, see vSphere EVC Settings.
  • Configure Nutanix CVMs with the appropriate VM overrides. For more information, see VM Override Settings.
  • Check Nonconfigurable ESXi Components. Modifying the nonconfigurable components may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.
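The advertised-capacity rule in the first bullet is simple arithmetic: reserve the usable capacity of one node for replication factor 2, or of two nodes for replication factor 3. As a hedged sketch (the function name and the TiB figures are illustrative placeholders, not values from a real cluster):

```shell
#!/bin/sh
# Advertised capacity = total usable capacity minus one node's capacity for
# replication factor 2, or two nodes' capacity for replication factor 3.
# Figures below are illustrative placeholders, not from a real cluster.

advertised_tib() {  # args: total_usable_tib per_node_tib replication_factor
  echo $(( $1 - ($3 - 1) * $2 ))
}

advertised_tib 80 20 2   # 4 nodes x 20 TiB usable, RF2: advertise 60 TiB
advertised_tib 80 20 3   # same cluster at RF3: advertise 40 TiB
```

Sizing this way ensures the cluster can rebuild full redundancy after losing a node (RF2) or two nodes (RF3).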
Security Guide

AOS Security 5.20

Product Release Date: 2021-05-17

Last updated: 2022-12-14

Audience & Purpose

This Security Guide is intended for security-minded people responsible for architecting, managing, and supporting infrastructures, especially those who want to address security without adding more human resources or additional processes to their datacenters.

This guide offers an overview of the security development life cycle (SecDL) and the host of security features supported by Nutanix. It also demonstrates how Nutanix complies with security regulations to streamline infrastructure security management. In addition, this guide addresses technical requirements that are site specific or mandated by compliance standards and that are not enabled by default.

Note:

Hardening of the guest OS or any applications running on top of the Nutanix infrastructure is beyond the scope of this guide. We recommend that you refer to the documentation of the products that you have deployed in your Nutanix environment.

Nutanix Security Infrastructure

Nutanix takes a holistic approach to security with a secure platform, extensive automation, and a robust partner ecosystem. The Nutanix security development life cycle (SecDL) integrates security into every step of product development, rather than applying it as an afterthought. The SecDL is a foundational part of product design. The strong pervasive culture and processes built around security harden the Enterprise Cloud Platform and eliminate zero-day vulnerabilities. Efficient one-click operations and self-healing security models easily enable automation to maintain security in an always-on hyperconverged solution.

Since traditional manual configuration and checks cannot keep up with the ever-growing list of security requirements, Nutanix conforms to RHEL 7 Security Technical Implementation Guides (STIGs) that use machine-readable code to automate compliance against rigorous common standards. With Nutanix Security Configuration Management Automation (SCMA), you can quickly and continually assess and remediate your platform to ensure that it meets or exceeds all regulatory requirements.

Nutanix has standardized the security profile of the Controller VM to a security compliance baseline that meets or exceeds the standard high-governance requirements.

The most commonly used references in the United States that guide vendors in building products to a set of technical requirements are as follows.

  • The National Institute of Standards and Technology Special Publication Security and Privacy Controls for Federal Information Systems and Organizations (NIST 800-53)
  • The US Department of Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIG)

SCMA Implementation

The Nutanix platform and all products leverage the Security Configuration Management Automation (SCMA) framework to ensure that services are constantly inspected for variance to the security policy.

Nutanix has implemented security configuration management automation (SCMA) to check multiple security entities for both Nutanix storage and AHV. Nutanix automatically logs inconsistencies and reverts them to the baseline.

With SCMA, you can schedule the STIG to run hourly, daily, weekly, or monthly. STIG has the lowest system priority within the virtual storage controller, ensuring that security checks do not interfere with platform performance.
Note: Only the SCMA schedule can be modified; AIDE runs on a fixed weekly schedule. To change the SCMA schedule for AHV or the Controller VM, see Hardening Instructions (nCLI).

RHEL 7 STIG Implementation in Nutanix Controller VM

Nutanix leverages SaltStack and SCMA to self-heal any deviation from the security baseline configuration of the operating system and hypervisor to remain in compliance. If any component is found as non-compliant, then the component is set back to the supported security settings without any intervention. To achieve this objective, Nutanix has implemented the Controller VM to support STIG compliance with the RHEL 7 STIG as published by DISA.

The STIG rules are capable of securing the boot loader, packages, file system, booting and service control, file ownership, authentication, kernel, and logging.

Example: STIG rules for Authentication

Prohibit direct root login, lock system accounts other than root , enforce several password maintenance details, cautiously configure SSH, enable screen-locking, configure user shell defaults, and display warning banners.

Security Updates

Nutanix provides continuous fixes and updates to address threats and vulnerabilities. Nutanix Security Advisories provide detailed information on the available security fixes and updates, including the vulnerability description and affected product/version.

To see the list of security advisories or search for a specific advisory, log on to the Support Portal and select Documentation , and then Security Advisories .

Nutanix Security Landscape

This topic highlights the Nutanix security landscape. The following topics identify the security features offered out of the box in the Nutanix infrastructure.

  • Authentication and Authorization
  • Network segmentation: VLAN-based, data-driven segmentation
  • Security Policy Management: implement security policies using microsegmentation
  • Data security and integrity
  • Hardening Instructions
  • Log monitoring and analysis
  • Flow Networking: see the Flow Networking Guide
  • UEFI: see UEFI Support for VMs in the AHV Administration Guide
  • Secure Boot: see Secure Boot Support for VMs in the AHV Administration Guide
  • Windows Credential Guard support: see Windows Defender Credential Guard Support in AHV in the AHV Administration Guide
  • RBAC: see Controlling User Access (RBAC)

Hardening Instructions (nCLI)

This chapter describes how to implement security hardening features for Nutanix AHV and Controller VM.

Hardening AHV

You can use the Nutanix Command Line Interface (nCLI) to customize various AHV configuration settings, as described below.

Table 1. Configuration Settings to Harden the AHV
Description Command or Settings Output
Getting the cluster-wide configuration of the SCMA policy. Run the following command:
nutanix@cvm$ ncli cluster get-hypervisor-security-config
Enable Aide : false
Enable Core : false
Enable High Strength P... : false
Enable Banner : false
Schedule : DAILY
Enabling the Advanced Intrusion Detection Environment (AIDE) to run on a weekly basis. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-aide=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : false
Enable Banner : false
Schedule : DAILY 
Enabling the high-strength password policies (minlen=15, difok=8, maxclassrepeat=4). Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params \
enable-high-strength-password=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : false
Schedule : DAILY
Enabling the US Department of Defense (DoD) consent banner. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-banner=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : DAILY
Changing the default schedule for running the SCMA. The schedule can be hourly, daily, weekly, or monthly. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params schedule=hourly
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
Enabling the settings so that AHV can generate stack traces for any cluster issue. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-core=true
Note: Nutanix recommends that Core should not be set to true unless instructed by the Nutanix support team.
Enable Aide : true
Enable Core : true
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
Running the hardened configuration in a high-governance environment. The settings should be as follows:
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : false
Schedule : HOURLY
Running the hardened configuration in a federal environment. The settings should be as follows:
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
Note: A banner file can be modified to support non-DoD customer banners.
Backing up the DoD banner file. Run the following command on the AHV host:
[root@AHV-host ~]# sudo cp -a /srv/salt/security/KVM/sshd/DODbanner \
/srv/salt/security/KVM/sshd/DODbannerbak
Modifying the DoD banner file. Run the following command on the AHV host:
[root@AHV-host ~]# sudo vi /srv/salt/security/KVM/sshd/DODbanner
Note: Repeat all the above steps on every AHV in a cluster.
Setting the banner for all nodes through nCLI. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-banner=true

The following options are configured or customized to harden the AHV:

  • Enable AIDE : Advanced Intrusion Detection Environment (AIDE) is a Linux utility that monitors a given node. After you install the AIDE package, generate a database containing all the files selected in your configuration file by entering the aide --init command as the root user. You can move the database to a secure location on read-only media or on another machine. After you create the database, use the aide --check command to check the integrity of files and directories by comparing them with the snapshot in the database. If there are unexpected changes, a report is generated for you to review. If the changes to existing or added files are valid, use the aide --update command to update the database with the new changes.
  • Enable high strength password : You can run the command as shown in the table in this section to enable high-strength password policies (minlen=15, difok=8, maxclassrepeat=4).
    Note:
    • minlen is the minimum required length for a password.
    • difok is the minimum number of characters that must be different from the old password.
    • maxclassrepeat is the maximum number of consecutive characters of the same class that you can use in a password.
  • Enable Core : A core dump is the recorded state of the working memory of a computer program at a specific time, generally when the program crashes or terminates abnormally. Core dumps assist in diagnosing or debugging errors in computer programs. You can enable core dumps for troubleshooting purposes.
  • Enable Banner : You can set a banner to display a specific message. For example, set a banner to display a warning message that the system is available to authorized users only.
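To make the high-strength password parameters above concrete, here is a toy checker for two of them, minlen and maxclassrepeat. This is an illustration only, not the actual PAM or Nutanix implementation; the function names are invented for this sketch.

```shell
#!/bin/sh
# Toy illustration of two high-strength password parameters described above.
# Not the real PAM implementation; function names are invented here.

meets_minlen() {  # usage: meets_minlen <password>  (policy: minlen=15)
  [ "${#1}" -ge 15 ]
}

# Longest run of consecutive characters from the same class
# (lowercase / uppercase / digit / other); the policy caps this at 4.
max_class_repeat() {
  printf '%s\n' "$1" | fold -w1 | {
    best=0 count=0 prev=""
    while IFS= read -r c; do
      case $c in
        [a-z]) cls=l ;;
        [A-Z]) cls=u ;;
        [0-9]) cls=d ;;
        *)     cls=o ;;
      esac
      if [ "$cls" = "$prev" ]; then count=$((count + 1)); else count=1; fi
      [ "$count" -gt "$best" ] && best=$count
      prev=$cls
    done
    echo "$best"
  }
}
```

For example, "aaaa1" has a run of four lowercase characters, so it sits exactly at the maxclassrepeat=4 limit, while a password alternating classes keeps the run length at 1.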

Hardening Controller VM

You can use the Nutanix Command Line Interface (nCLI) to customize various Controller VM configuration settings, as described below.

  • Run the following command to get the cluster-wide configuration of the SCMA policy.

    nutanix@cvm$ ncli cluster get-cvm-security-config

    The current cluster configuration is displayed.

    Enable Aide : false
    Enable Core : false
    Enable High Strength P...: false
    Enable Banner : false
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the Advanced Intrusion Detection Environment (AIDE), which runs on a weekly basis.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-aide=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : false
    Enable Banner : false
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the strong password policy.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-high-strength-password=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : false
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the US Department of Defense (DoD) consent banner.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-banner=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the settings to allow only SNMP version 3.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-snmpv3-only=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : DAILY
  • Run the following command to change the default schedule for running the SCMA. The schedule can be hourly, daily, weekly, or monthly.

    nutanix@cvm$ ncli cluster edit-cvm-security-params schedule=hourly

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : HOURLY
  • Run the following command to enable the settings so that the Controller VM can generate stack traces for any cluster issue.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-core=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : true
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : HOURLY
    Note: Nutanix recommends that Core should not be set to true unless instructed by the Nutanix support team.
  • When running the hardened configuration in a high-governance environment, the settings should be as follows.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : false
    Enable SNMPv3 Only : true
    Schedule : HOURLY
  • When running the hardened configuration in a federal environment, the settings should be as follows.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : HOURLY
    Note: A banner file can be modified to support non-DoD customer banners.
  • Run the following command to back up the DoD banner file.

    nutanix@cvm$ sudo cp -a /srv/salt/security/CVM/sshd/DODbanner \
    /srv/salt/security/CVM/sshd/DODbannerbak
  • Run the following command to modify the DoD banner file.

    nutanix@cvm$ sudo vi /srv/salt/security/CVM/sshd/DODbanner
    Note: Repeat all the above steps on every CVM in a cluster.
  • Run the following command to back up the DoD banner file of the PCVM.

    nutanix@pcvm$ sudo cp -a /srv/salt/security/PC/sshd/DODbanner \
    /srv/salt/security/PC/sshd/DODbannerbak
  • Run the following command to modify the DoD banner file of the PCVM.

    nutanix@pcvm$ sudo vi /srv/salt/security/PC/sshd/DODbanner
  • Run the following command to set the banner for all nodes through nCLI.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-banner=true
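Taken together, the bullets above define a hardened profile. As a hedged sketch, the federal-profile settings can be applied in one pass; the DRY_RUN guard is an illustrative convention invented here (not a Nutanix tool) that prints the commands instead of running them.

```shell
#!/bin/sh
# Hedged sketch: apply the federal-profile CVM settings listed above in one
# pass. By default (DRY_RUN=1) the ncli commands are printed, not executed;
# set DRY_RUN= (empty) on a CVM to apply them for real.
DRY_RUN="${DRY_RUN-1}"
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

run ncli cluster edit-cvm-security-params enable-aide=true
run ncli cluster edit-cvm-security-params enable-high-strength-password=true
run ncli cluster edit-cvm-security-params enable-banner=true
run ncli cluster edit-cvm-security-params enable-snmpv3-only=true
run ncli cluster edit-cvm-security-params schedule=hourly
run ncli cluster get-cvm-security-config   # verify the resulting profile
```

Note that Enable Core stays false in this profile, consistent with the recommendation to leave core dumps off unless instructed by Nutanix support.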

TCP Wrapper Integration

The Nutanix Controller VM uses the tcp_wrappers package to let TCP-supported daemons control which network subnets can access the libwrapped daemons. By default, SCMA controls the /etc/hosts.allow file through /srv/salt/security/CVM/network/hosts.allow, which contains a generic entry to allow access to NFS, secure shell, and SNMP.

sshd: ALL : ALLOW
rpcbind: ALL : ALLOW
snmpd: ALL : ALLOW
snmptrapd: ALL : ALLOW

Nutanix recommends changing the above configuration to include only the localhost entries and the management network subnet for the restricted operations; this applies to both production and high-governance compliance environments. When restricting access, ensure that every subnet used to communicate with the CVMs is included in the /etc/hosts.allow file.
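As a hedged example, a restricted configuration along these lines limits each daemon to localhost plus a management subnet. The 10.10.0.0/24 network below is a placeholder for your own management network, and every subnet that must reach the CVMs needs an entry.

```
sshd: 127.0.0.1 10.10.0.0/255.255.255.0 : ALLOW
rpcbind: 127.0.0.1 10.10.0.0/255.255.255.0 : ALLOW
snmpd: 127.0.0.1 10.10.0.0/255.255.255.0 : ALLOW
snmptrapd: 127.0.0.1 10.10.0.0/255.255.255.0 : ALLOW
```

The net/mask form (network/netmask) is the portable tcp_wrappers subnet syntax.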

Common Criteria

Common Criteria is an international security certification that is recognized by many countries around the world. Nutanix AOS and AHV are Common Criteria certified by default and no additional configuration is required to enable the Common Criteria mode. For more information, see the Nutanix Trust website.
Note: Nutanix uses FIPS-validated cryptography by default.

Security Management Using Prism Element (PE)

Nutanix provides several mechanisms to maintain security in a cluster using Prism Element.

Configuring Authentication

About this task

Nutanix supports user authentication. To configure authentication types and directories, and to enable client authentication, do the following:
Caution: The web console (and nCLI) does not allow the use of the insecure SSLv2 and SSLv3 ciphers. An SSL fallback situation in some browsers can deny access to the web console. To eliminate this possibility, disable (uncheck) SSLv2 and SSLv3 in any browser used for access; however, TLS must remain enabled (checked).

Procedure

  1. Click the gear icon in the main menu and then select Authentication in the Settings page.
    The Authentication Configuration window appears.
    Note: The following steps combine three distinct procedures, enabling authentication (step 2), configuring one or more directories for LDAP/S authentication (steps 3-5), and enabling client authentication (step 6). Perform the steps for the procedures you need. For example, perform step 6 only if you intend to enforce client authentication.
  2. To enable server authentication, click the Authentication Types tab and then check the box for either Local or Directory Service (or both). After selecting the authentication types, click the Save button.
    The Local setting uses the local authentication provided by Nutanix (see User Management) . This method is employed when a user enters just a login name without specifying a domain (for example, user1 instead of user1@nutanix.com ). The Directory Service setting validates user@domain entries and validates against the directory specified in the Directory List tab. Therefore, you need to configure an authentication directory if you select Directory Service in this field.
    Figure. Authentication Types Tab
    Note: The Nutanix admin user can log on to the management interfaces, including the web console, even if the Local authentication type is disabled.
  3. To add an authentication directory, click the Directory List tab and then click the New Directory option.
    A set of fields is displayed. Do the following in the indicated fields:
    1. Directory Type : Select one of the following from the pull-down list.
      • Active Directory : Active Directory (AD) is a directory service implemented by Microsoft for Windows domain networks.
        Note:
        • Users with the "User must change password at next logon" attribute enabled cannot authenticate to the web console (or nCLI). Ensure that such users first log on to a domain workstation and change their password before accessing the web console. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open in the firewall).
        • An Active Directory user name or group name containing spaces is not supported for Prism Element authentication.
        • Active Directory domain created by using non-ASCII text may not be supported. For more information about usage of ASCII or non-ASCII text in Active Directory configuration, see the Internationalization (i18n) section.
        • Use of the "Protected Users" group is currently unsupported for Prism authentication. For more details on the "Protected Users" group, see “Guidance about how to configure protected accounts” on Microsoft documentation website.
        • The Microsoft AD is LDAP v2 and LDAP v3 compliant.
        • The Microsoft AD servers supported are Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.
      • OpenLDAP : OpenLDAP is a free, open source directory service, which uses the Lightweight Directory Access Protocol (LDAP), developed by the OpenLDAP project. Nutanix currently supports the OpenLDAP 2.4 release running on CentOS distributions only.
    2. Name : Enter a directory name.
      This is a name you choose to identify this entry; it need not be the name of an actual directory.
    3. Domain : Enter the domain name.
      Enter the domain name in DNS format, for example, nutanix.com .
    4. Directory URL : Enter the URL address to the directory.
      The URL format for an LDAP entry is ldap://host:ldap_port_num, where host is either the IP address or the fully qualified domain name. (In some environments, a simple domain name is sufficient.) The default LDAP port number is 389. Nutanix also supports LDAPS (port 636) and LDAP/S Global Catalog (ports 3268 and 3269). The following are example configurations appropriate for each port option:
      Note: LDAPS support does not require custom certificates or certificate trust import.
      • Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
        ldap://ad_server.mycompany.com:389
      • Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires all Active Directory Domain Controllers have properly installed SSL certificates.
        ldaps://ad_server.mycompany.com:636
        Note: The LDAP server SSL certificate must include a Subject Alternative Name (SAN) that matches the URL provided during the LDAPS setup.
      • Port 3268 (LDAP - GC). Use this port number when the configuration is multiple domain, single forest, and not using SSL.
      • Port 3269 (LDAPS - GC). Use this port number when the configuration is multiple domain, single forest, and using SSL.
        Note: When constructing your LDAP/S URL to use a Global Catalog server, ensure that the Domain Control IP address or name being used is a global catalog server within the domain being configured. If not, queries over 3268/3269 may fail.
        Note: When querying the global catalog, the users sAMAccountName field must be unique across the AD forest. If the sAMAccountName field is not unique across the subdomains, authentication may fail intermittently or consistently.
      Note: For the complete list of required ports, see Port Reference .
    5. (OpenLDAP only) Configure the following additional fields:
      1. User Object Class : Enter the value that uniquely identifies the object class of a user.
      2. User Search Base : Enter the base domain name in which the users are configured.
      3. Username Attribute : Enter the attribute to uniquely identify a user.
      4. Group Object Class : Enter the value that uniquely identifies the object class of a group.
      5. Group Search Base : Enter the base domain name in which the groups are configured.
      6. Group Member Attribute : Enter the attribute that identifies users in a group.
      7. Group Member Attribute Value : Enter the attribute that identifies the users provided as value for Group Member Attribute .
    6. Search Type . How to search your directory when authenticating. Choose Non Recursive if you experience slow directory logon performance. For this option, ensure that users listed in Role Mapping are listed flatly in the group (that is, not nested). Otherwise, choose the default Recursive option.
    7. Service Account Username : Enter the service account user name in the user_name@domain.com format that you want the web console to use to log in to the Active Directory.

      A service account is created to run only a particular service or application with the credentials specified for the account. According to the requirement of the service or application, the administrator can limit access to the service account.

      A service account is under the Managed Service Accounts in the Active Directory server. An application or service uses the service account to interact with the operating system. Enter your Active Directory service account credentials in this (username) and the following (password) field.

      Note: Be sure to update the service account credentials here whenever the service account password changes or when a different service account is used.
    8. Service Account Password : Enter the service account password.
    9. When all the fields are correct, click the Save button (lower right).
      This saves the configuration and redisplays the Authentication Configuration dialog box. The configured directory now appears in the Directory List tab.
    10. Repeat this step for each authentication directory you want to add.
    Note:
    • The Controller VMs need access to the Active Directory server, so open the standard Active Directory ports to each Controller VM in the cluster (and the virtual IP if one is configured).
    • No permissions are granted to the directory users by default. To grant permissions to the directory users, you must specify roles for the users in that directory (see Assigning Role Permissions).
    • The service account for both Active Directory and OpenLDAP must have full read permission on the directory service. Additionally, for successful Prism Element authentication, users must also have search or read privileges.
    Figure. Directory List Tab
  4. To edit a directory entry, click the Directory List tab and then click the pencil icon for that entry.
    After clicking the pencil icon, the Directory List fields reappear (see step 3). Enter the new information in the appropriate fields and then click the Save button.
  5. To delete a directory entry, click the Directory List tab and then click the X icon for that entry.
    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
  6. To enable client authentication, do the following:
    1. Click the Client tab.
    2. Select the Configure Client Chain Certificate check box.
      Client Chain Certificate is a list of certificates that includes all intermediate CA and root-CA certificates.
      Note: To authenticate on Prism Element with a client chain certificate, the 'Subject name' field must be present. The subject name must match the userPrincipalName (UPN) in AD. The UPN is the user name with the domain address, for example, user1@nutanix.com .
      Figure. Client Tab (1)
    3. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note: Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
      Figure. Client Tab (2)
    4. To enable client authentication, click Enable Client Authentication .
    5. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication to disable client authentication.
      • Click Remove to delete the current certificate. (This also disables client authentication.)
      • To enable OCSP or CRL based certificate revocation checking, see Certificate Revocation Checking.
      Figure. Authentication Window: Client Tab (3)

    Client authentication allows you to securely access Prism by exchanging a digital certificate. Prism validates that the certificate is signed by your organization's trusted signing certificate.

    Client authentication ensures that the Nutanix cluster gets a valid certificate from the user. Normally, a one-way authentication process occurs in which the server provides a certificate so the user can verify the authenticity of the server (see Installing an SSL Certificate). When client authentication is enabled, this becomes a two-way authentication in which the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console, either by installing the certificate on their local machine or by providing it through a smart card reader. Providing a valid certificate enables login from a client machine with the relevant user certificate without using a user name and password. If the user must log in from a client machine that does not have the certificate installed, authentication using a user name and password is still available.
    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
  7. To specify a service account that the web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box, and then do the following in the indicated fields:
    Figure. Common Access Card Authentication
    1. Directory : Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username : Enter the user name in the user name@domain.com format that you want the web console to use to log in to the Active Directory.
    3. Service Password : Enter the password for the service user name.
    4. Click Enable CAC Authentication .
      Note: For federal customers only.
      Note: The web console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:
    • Validates that the certificate has been signed by your organization’s trusted signing certificate.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of an account within the Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Element supports both certificate authentication and basic authentication so that it can handle both Prism Element login using a certificate and REST API access using basic authentication. The REST API cannot use CAC certificates. With this behavior, if a certificate is presented during Prism Element login, certificate authentication is used; if no certificate is presented, basic authentication is enforced.
    Note: Nutanix Prism does not support OpenLDAP as directory service for CAC.
    If you map a Prism role to a CAC user rather than to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page; the web console login page is not displayed.
    Note: If you have logged on to Prism by using CAC authentication, to successfully log out of Prism, close the browser after you click Log Out .
  8. Click the Close button to close the Authentication Configuration dialog box.
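
The client chain certificate uploaded in step 6 can be assembled with standard OpenSSL tooling. The following is a minimal sketch that uses self-signed throwaway certificates as stand-ins for the intermediate and root CA files (all file names and subject names here are hypothetical):

```shell
# Illustration only: create two self-signed stand-ins for the intermediate
# and root CA certificates (in practice these come from your CA).
openssl req -x509 -newkey rsa:2048 -nodes -keyout inter.key \
    -out intermediate-ca.pem -days 1 -subj "/CN=Example Intermediate CA"
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key \
    -out root-ca.pem -days 1 -subj "/CN=Example Root CA"

# A client chain certificate is the intermediate and root CA certificates
# concatenated into a single PEM file:
cat intermediate-ca.pem root-ca.pem > client-chain.pem

# Confirm the result is PEM encoded and lists the expected subject first:
openssl x509 -in client-chain.pem -noout -subject
```

In practice, concatenate the real intermediate and root CA certificates from your organization's CA in the same order before uploading the file on the Client tab.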

Assigning Role Permissions

About this task

When user authentication is enabled for a directory service (see Configuring Authentication), the directory users do not have any permissions by default. To grant permissions to the directory users, you must assign roles (with associated permissions) to organizational units (OUs), groups, or individual users within a directory.

If you are using Active Directory, you must also assign roles to directory entities or users, particularly before upgrading from a previous AOS version.

To assign roles, do the following:

Procedure

  1. In the web console, click the gear icon in the main menu and then select Role Mapping in the Settings page.
    The Role Mapping window appears.
    Figure. Role Mapping Window
  2. To create a role mapping, click the New Mapping button.

    The Create Role Mapping window appears. Do the following in the indicated fields:

    1. Directory : Select the target directory from the pull-down list.

      Only directories previously defined when configuring authentication appear in this list. If the desired directory does not appear, add that directory to the directory list (see Configuring Authentication) and then return to this procedure.

    2. LDAP Type : Select the desired LDAP entity type from the pull-down list.

      The entity types are GROUP , USER , and OU .

    3. Role : Select the user role from the pull-down list.
      There are three roles from which to choose:
      • Viewer : This role allows a user to view information only. It does not provide permission to perform any administrative tasks.
      • Cluster Admin : This role allows a user to view information and perform any administrative task (but not create or modify user accounts).
      • User Admin : This role allows the user to view information, perform any administrative task, and create or modify user accounts.
    4. Values : Enter the case-sensitive entity names (in a comma-separated list with no spaces) that should be assigned this role.
      The values are the actual names of the organizational units (meaning the role applies to all users in those OUs), groups (all users in those groups), or users (each named user) assigned this role. For example, entering the value "admin-gp,support-gp" when the LDAP type is GROUP and the role is Cluster Admin means all users in the admin-gp and support-gp groups are assigned the cluster administrator role.
      Note:
      • Do not include a domain in the value; for example, enter just admin-gp, not admin-gp@nutanix.com. However, when users log in to the web console, they must include the domain in their user name.
      • The AD user UPN must be in the user@domain_name format.
      • When an administrator defines a user role mapping using an Active Directory forest, the mapping can match a user with the same name from any domain in the forest. To avoid this, set up the user role mapping against an Active Directory configured for a specific domain.
    5. When all the fields are correct, click Save .
      This saves the configuration and redisplays the Role Mapping window. The new role map now appears in the list.
      Note: All users in an authorized service directory have full administrator permissions when role mapping is not defined for that directory. However, after creating a role map, any users in that directory that are not explicitly granted permissions through the role mapping are denied access (no permissions).
    6. Repeat this step for each role map you want to add.
      You can create a role map for each authorized directory. You can also create multiple maps that apply to a single directory. When there are multiple maps for a directory, the most specific rule for a user applies. For example, adding a GROUP map set to Cluster Admin and a USER map set to Viewer for select users in that group means all users in the group have administrator permission except those specified users who have viewing permission only.
    Figure. Create Role Mapping Window
  3. To edit a role map entry, click the pencil icon for that entry.
    After clicking the pencil icon, the Edit Role Mapping window appears, which contains the same fields as the Create Role Mapping window (see step 2). Enter the new information in the appropriate fields and then click the Save button.
  4. To delete a role map entry, click the "X" icon for that entry.
    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
  5. Click the Close button to close the Role Mapping window.

Certificate Revocation Checking

Enabling Certificate Revocation Checking using Online Certificate Status Protocol (nCLI)

About this task

OCSP is the recommended method for checking certificate revocation in client authentication. You can enable certificate revocation checking using the OCSP method through the command line interface (nCLI).

To enable certificate revocation checking using OCSP for client authentication, do the following.

Procedure

  1. Set the OCSP responder URL.
    ncli authconfig set-certificate-revocation set-ocsp-responder=<ocsp url>
    where <ocsp url> indicates the location of the OCSP responder.
  2. Verify if OCSP checking is enabled.
    ncli authconfig get-client-authentication-config

    The expected output if certificate revocation checking is enabled successfully is as follows.

    Auth Config Status: true
    File Name: ca.cert.pem
    OCSP Responder URI: http://<ocsp-responder-url>
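
As a sanity check before running set-ocsp-responder, you can confirm which responder URL a client certificate itself advertises in its Authority Information Access extension. A minimal sketch using a throwaway certificate (the responder URL and file names are hypothetical; -addext requires OpenSSL 1.1.1 or later):

```shell
# Illustration only: issue a throwaway certificate whose Authority
# Information Access extension names a hypothetical OCSP responder.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tmp.key -out tmp-cert.pem \
    -days 1 -subj "/CN=user1@nutanix.com" \
    -addext "authorityInfoAccess = OCSP;URI:http://ocsp.example.com"

# Print the OCSP responder URL the certificate advertises; this is the
# URL you would pass to set-ocsp-responder.
openssl x509 -noout -ocsp_uri -in tmp-cert.pem
```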

Enabling Certificate Revocation Checking using Certificate Revocation Lists (nCLI)

About this task

Note: OCSP is the recommended method for checking certificate revocation in client authentication.

You can use the CRL certificate revocation checking method if required, as described in this section.

To enable certificate revocation checking using CRL for client authentication, do the following.

Procedure

Specify all the CRLs that are required for certificate validation.
ncli authconfig set-certificate-revocation set-crl-uri=<uri 1>,<uri 2> set-crl-refresh-interval=<refresh interval in seconds>
  • The above command resets any previous OCSP or CRL configurations.
  • The URIs must be percent-encoded and comma separated.
  • The CRLs are updated periodically as specified by the crl-refresh-interval value. This interval is common for the entire list of CRL distribution points. The default value for this is 86400 seconds (1 day).
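
The percent-encoding requirement can be sketched as a small shell helper. This is an illustration only (the helper name and URIs are hypothetical) and covers just spaces and commas, the characters most likely to break the comma-separated list:

```shell
# Minimal sketch: percent-encode a CRL distribution point URI so it is
# safe inside the comma-separated set-crl-uri list.
encode_uri() {
    printf '%s' "$1" | sed -e 's/ /%20/g' -e 's/,/%2C/g'
}

encode_uri 'http://crl.example.com/root ca.crl'
# -> http://crl.example.com/root%20ca.crl
```

The encoded URIs can then be passed, comma separated, to set-crl-uri as shown in the procedure above.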

Authentication Best Practices

The authentication best practices listed here are guidance to secure the Nutanix platform by using the most common authentication security measures.

Emergency Local Account Usage

You must use the admin account as a local emergency account. The admin account ensures that both the Prism web console and the Controller VM remain available when an external service such as Active Directory is unavailable.

Note: The local emergency account does not support any external access mechanisms, specifically external application authentication or external REST API authentication.

For all external authentication, configure the cluster to use an external IAM service such as Active Directory. Create service accounts on the IAM service, and grant those accounts access to the cluster through the Prism web console user account management configuration.

Modifying Default Passwords

You must change the default Controller VM password for the nutanix user account, adhering to the password complexity requirements.

Procedure

  1. SSH to the Controller VM.
  2. Change the "nutanix" user account password.
    nutanix@cvm$ passwd nutanix
  3. Respond to the prompts and provide the current and new nutanix user password.
    Changing password for nutanix.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
    Note:
    • Changing the user account password on one of the Controller VMs is applied to all Controller VMs in the cluster.
    • Ensure that you preserve the modified nutanix user password, since the local authentication (PAM) module requires the previous password of the nutanix user to successfully start the password reset process.
    • For the root account, both the console and SSH direct login is disabled.
    • It is recommended to use the admin user as the administrative emergency account.

Controlling Cluster Access

About this task

Nutanix supports the Cluster lockdown feature. This feature enables key-based SSH access to the Controller VM and AHV on the Host (only for nutanix/admin users).

Enabling cluster lockdown mode ensures that password authentication is disabled and only the keys you have provided can be used to access the cluster resources, making the cluster more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.
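
A key pair can be generated with standard OpenSSH tooling. A minimal sketch using ECDSA, one of the key types Prism accepts (the file name is an example):

```shell
# Generate an ECDSA key pair for cluster lockdown (RSA and ECDSA are the
# key types Prism supports); the file name is an example.
ssh-keygen -t ecdsa -b 256 -N '' -f ./lockdown_key -q

# The contents of the .pub file are what you paste into the Key field:
cat ./lockdown_key.pub
```

Paste the printed public key into the Key field in the procedure below, and keep the private key only on machines that should retain SSH access.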

To control key-based SSH access to the cluster, do the following:
Note: Use this procedure to lock down access to the Controller VM. You can also lock down access to the hypervisor host.

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.
    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.
    Figure. Cluster Lockdown Window
  2. To disable (or enable) remote login access, clear (or select) the Enable Remote Login with Password check box.
    Remote login access is enabled by default.
  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    Note: Prism supports the following key types.
    • RSA
    • ECDSA
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.
    There are no public keys available by default, but you can add any number of public keys.
  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.

Setup Admin Session Timeout

By default, users are logged out automatically after being idle for 15 minutes. You can change the session timeout for users and override it by following the steps below.

Procedure

  1. Click the gear icon in the main menu and then select UI Settings in the Settings page.
  2. Select the session timeout for the current user from the Session Timeout For Current User drop-down list.
    Figure. Session Timeout Settings

  3. Select the appropriate option from the Session Timeout Override drop-down list to override the session timeout.

Password Retry Lockout

For enhanced security, Prism Element locks the admin account for 15 minutes after a preset number of unsuccessful login attempts. When the account is locked, the following message is displayed at the logon screen.

Account locked due to too many failed attempts

You can attempt entering the password again after the 15-minute lockout period, or contact Nutanix Support if you have forgotten your password.

Internationalization (i18n)

The following table lists all the supported and unsupported entities in UTF-8 encoding.

Table 1. Internationalization Support
Supported entities:
  • Cluster name
  • Storage Container name
  • Storage pool
  • VM name
  • Snapshot name
  • Volume group name
  • Protection domain name
  • Remote site name
  • User management
  • Chart name
Unsupported entities:
  • Acropolis file server
  • Share path
  • Internationalized domain names
  • E-mail IDs
  • Hostnames
  • Integers
  • Password fields
  • Any hardware-related names (for example, vSwitch, iSCSI initiator, VLAN name)
Caution: None of the above entities can be created on Hyper-V because of DR limitations.

Entities Support (ASCII or non-ASCII) for the Active Directory Server

  • In the New Directory Configuration, the Name field supports non-ASCII characters.
  • In the New Directory Configuration, the Domain field does not support non-ASCII characters.
  • In role mapping, the Values field supports non-ASCII characters.
  • User names and group names support non-ASCII characters.

User Management

Nutanix user accounts can be created or updated as needed using the Prism web console.

  • The web console allows you to add (see Creating a User Account), edit (see Updating a User Account), or delete (see Deleting a User Account) local user accounts at any time.
  • You can reset the local user account password using nCLI if you are locked out and cannot log in to the Prism Element or Prism Central web console (see Resetting Password (CLI)).
  • You can also configure user accounts through Active Directory and LDAP (see Configuring Authentication). Active Directory domain created by using non-ASCII text may not be supported.
Note: In addition to the Nutanix user account, there are IPMI, Controller VM, and hypervisor host users. Passwords for these accounts cannot be changed through the web console.

Creating a User Account

About this task

The admin user is created automatically when you get a Nutanix system, but you can add more users as needed. Note that you cannot delete the admin user. To create a user, do the following:
Note: You can also configure user accounts through Active Directory (AD) and LDAP (see Configuring Authentication).

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
    Figure. User Management Window
  2. To add a user, click the New User button and do the following in the displayed fields:
    1. Username : Enter a user name.
    2. First Name : Enter a first name.
    3. Last Name : Enter a last name.
    4. Email : Enter a valid user email address.
      Note: AOS uses the email address for client authentication and logging when the local user performs user and cluster tasks in the web console.
    5. Password : Enter a password (maximum of 255 characters).
      There is no second field to verify the password, so be sure to enter the password correctly in this field.
    6. Language : Select the language setting for the user.
      By default, English is selected. You can select Simplified Chinese or Japanese . Depending on the language that you select here, the cluster locale is updated for the new user. For example, if you select Simplified Chinese , the next time that the new user logs on to the web console, the user interface is displayed in Simplified Chinese.
    7. Roles : Assign a role to this user.
      • Select the User Admin box to allow the user to view information, perform any administrative task, and create or modify user accounts. (Checking this box automatically selects the Cluster Admin box to indicate that this user has full permissions. However, a user administrator has full permissions regardless of whether the cluster administrator box is checked.)
      • Select the Cluster Admin box to allow the user to view information and perform any administrative task (but not create or modify user accounts).
      • Select the Backup Admin box to allow the user to perform backup-related administrative tasks. This role does not have permission to perform cluster or user tasks.

        Note: Backup admin user is designed for Nutanix Mine integrations as of AOS version 5.19 and has minimal functionality in cluster management. This role has restricted access to the Nutanix Mine cluster.
        • Health , Analysis , and Tasks features are available in read-only mode.
        • The File server and Data Protection options in the web console are not available for this user.
        • The following features are available for Backup Admin users with limited functionality.
            • Home - The user cannot register a cluster with Prism Central. The registration widget is disabled. Other read-only data is displayed and available.
            • Alerts - Alerts and events are displayed. However, the user cannot resolve or acknowledge any alert or event. The user cannot configure Alert Policy or Email configuration .
            • Hardware - The user cannot expand the cluster or remove hosts from the cluster. Read-only data is displayed and available.
            • Network - Networking data or configuration is displayed but configuration options are not available.
            • Settings - The user can only upload a new image using the Settings page.
            • VM - The user cannot configure options like Create VM and Network Configuration in the VM page. The following options are available for the user in the VM page:
              • Launch console
              • Power On
              • Power Off
      • Leaving all the boxes unchecked allows the user to view information, but it does not provide permission to perform cluster or user tasks.
    8. When all the fields are correct, click Save .
      This saves the configuration, and the web console redisplays the dialog box with the new user appearing in the list.
    Figure. Create User Window

Updating a User Account

About this task

Update credentials and change the role for an existing user by using this procedure.
Note: To update your account credentials (that is, the user you are currently logged on as), see Updating My Account. Changing the password for a different user is not supported; you must log in as that user to change the password.

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
  2. Enable or disable the login access for a user by clicking the toggle text Yes (enabled) or No (disabled) in the Enabled column.
    A Yes value in the Enabled column means that the login is enabled; a No value in the Enabled column means it is disabled.
    Note: A user account is enabled (login access activated) by default.
  3. To edit the user credentials, click the pencil icon for that user and update one or more of the values in the displayed fields:
    1. Username : The username is fixed when the account is created and cannot be changed.
    2. First Name : Enter a different first name.
    3. Last Name : Enter a different last name.
    4. Email : Enter a different valid email address.
      Note: AOS Prism uses the email address for client authentication and logging when the local user performs user and cluster tasks in the web console.
    5. Roles : Change the role assigned to this user.
      • Select the User Admin box to allow the user to view information, perform any administrative task, and create or modify user accounts. (Checking this box automatically selects the Cluster Admin box to indicate that this user has full permissions. However, a user administrator has full permissions regardless of whether the cluster administrator box is checked.)
      • Select the Cluster Admin box to allow the user to view information and perform any administrative task (but not create or modify user accounts).
      • Select the Backup Admin box to allow the user to perform backup-related administrative tasks. This role does not have permission to perform cluster or user administrative tasks.
      • Leaving all the boxes unchecked allows the user to view information, but it does not provide permission to perform cluster or user administrative tasks.
    6. Reset Password : Change the password of this user.
      Enter the new password for Password and Confirm Password fields. Click the info icon to view the password complexity requirements.
    7. When all the fields are correct, click Save .
      This saves the configuration and redisplays the dialog box with the new user appearing in the list.
    Figure. Update User Window

Updating My Account

About this task

To update your account credentials (that is, credentials for the user you are currently logged in as), do the following:

Procedure

  1. To update your password, select Change Password from the user icon pull-down list in the web console.
    The Change Password dialog box appears. Do the following in the indicated fields:
    1. Current Password : Enter the current password.
    2. New Password : Enter a new password.
    3. Confirm Password : Re-enter the new password.
    4. When the fields are correct, click the Save button (lower right). This saves the new password and closes the window.
    Figure. Change Password Window
    Note: You can change the password for the "admin" account only once per day. Contact Nutanix Support if you need to update the password multiple times in one day.
  2. To update other details of your account, select Update Profile from the user icon pull-down list.
    The Update Profile dialog box appears. Update (as desired) one or more of the following fields:
    1. First Name : Enter a different first name.
    2. Last Name : Enter a different last name.
    3. Email : Enter a different valid user email address.
    4. Language : Select a language for your account.
    5. API Key : Enter the key value to use a new API key.
    6. Public Key : Click the Choose File button to upload a new public key file.
    7. When all the fields are correct, click the Save button (lower right). This saves the changes and closes the window.
    Figure. Update Profile Window

Deleting a User Account

About this task

To delete an existing user, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
    Figure. User Management Window
  2. Click the X icon for that user. Note that you cannot delete the admin user.
    A window prompt appears to verify the action; click the OK button. The user account is removed and the user no longer appears in the list.

Certificate Management

This chapter describes how to install and replace an SSL certificate for configuration and use on the Nutanix Controller VM.

Note: Nutanix recommends that you check for the validity of the certificate periodically, and replace the certificate if it is invalid.

Installing an SSL Certificate

About this task

Nutanix supports SSL certificate-based authentication for console access. To install a self-signed or custom SSL certificate, do the following:
Important: Ensure that SSL certificates are not password protected.
Note:
  • Nutanix recommends that customers replace the default self-signed certificate with a CA signed certificate.
  • An SSL certificate (self-signed or CA signed) can be installed only cluster-wide from Prism. SSL certificates cannot be customized for individual Controller VMs.

Procedure

  1. Click the gear icon in the main menu and then select SSL Certificate in the Settings page.
    The SSL Certificate dialog box appears.
    Figure. SSL Certificate Window
  2. To replace (or install) a certificate, click the Replace Certificate button.
  3. To create a new self-signed certificate, click the Regenerate Self Signed Certificate option and then click the Apply button.
    A dialog box appears to verify the action; click the OK button. This generates and applies a new RSA 2048-bit self-signed certificate for the Prism user interface.
    Figure. SSL Certificate Window: Regenerate
  4. To apply a custom certificate that you provide, do the following:
    1. Click the Import Key and Certificate option and then click the Next button.
      Figure. SSL Certificate Window: Import
    2. Do the following in the indicated fields, and then click the Import Files button.
      Note:
      • All three imported files for the custom certificate must be PEM encoded.
      • Ensure that the private key does not have any extra data (or custom attributes) before the beginning (-----BEGIN PRIVATE KEY-----) or after the end (-----END PRIVATE KEY-----) of the private key block.
      • Private Key Type : Select the appropriate type for the signed certificate from the pull-down list (RSA 4096 bit, RSA 2048 bit, EC DSA 256 bit, or EC DSA 384 bit).
      • Private Key : Click the Browse button and select the private key associated with the certificate to be imported.
      • Public Certificate : Click the Browse button and select the signed public portion of the server certificate corresponding to the private key.
      • CA Certificate/Chain : Click the Browse button and select the certificate or chain of the signing authority for the public certificate.
      Figure. SSL Certificate Window: Select Files
      To meet the high security standards of NIST SP800-131a compliance and the requirements of RFC 6460 for NSA Suite B, and to supply optimal performance for encryption, the certificate import process validates that the correct signature algorithm is used for a given key/certificate pair. Refer to the following table to ensure the proper set of key types, sizes/curves, and signature algorithms. The CA must sign all public certificates with the proper type, size/curve, and signature algorithm for the import process to validate successfully.
      Note: There is no specific requirement for the subject name of the certificates (subject alternative names (SAN) or wildcard certificates are supported in Prism).
      Table 1. Recommended Key Configurations
      Key Type Size/Curve Signature Algorithm
      RSA 4096 SHA256-with-RSAEncryption
      RSA 2048 SHA256-with-RSAEncryption
      EC DSA 256 prime256v1 ecdsa-with-sha256
      EC DSA 384 secp384r1 ecdsa-with-sha384
      EC DSA 521 secp521r1 ecdsa-with-sha512
      Note: RSA 4096 bit certificates might not work with certain AOS and Prism Central releases. Please see the release notes for your AOS and Prism Central versions. Specifying an RSA 4096 bit certificate might cause multiple cluster services to restart frequently. To work around the issue, see KB 12775.
      You can use the cat command to concatenate a list of CA certificates into a chain file.
      $ cat signer.crt inter.crt root.crt > server.cert
      Order is essential. The total chain should begin with the certificate of the signer and end with the root CA certificate as the final entry.
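
To see the ordering and verification rules end to end, the following sketch builds a throwaway chain and checks it before concatenation. All names are hypothetical stand-ins for the files you receive from your CA:

```shell
# Illustration only: build a small chain with throwaway certificates
# (in practice signer.crt, inter.crt, and root.crt come from your CA).
printf 'basicConstraints=CA:TRUE\n' > ca.ext

# Self-signed root CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
    -days 2 -subj "/CN=Demo Root CA"
# Intermediate CA signed by the root.
openssl req -newkey rsa:2048 -nodes -keyout inter.key -out inter.csr \
    -subj "/CN=Demo Intermediate CA"
openssl x509 -req -in inter.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -out inter.crt -days 1 -extfile ca.ext
# Server (signer) certificate signed by the intermediate.
openssl req -newkey rsa:2048 -nodes -keyout signer.key -out signer.csr \
    -subj "/CN=prism.example.com"
openssl x509 -req -in signer.csr -CA inter.crt -CAkey inter.key \
    -CAcreateserial -out signer.crt -days 1

# Sanity-check the chain before concatenating and uploading it:
openssl verify -CAfile root.crt -untrusted inter.crt signer.crt
# -> signer.crt: OK

# Concatenate signer first, root CA last:
cat signer.crt inter.crt root.crt > server.cert
```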

Results

After generating or uploading the new certificate, the interface gateway restarts. If the certificate and credentials are valid, the interface gateway uses the new certificate immediately, which means your browser session (and all other open browser sessions) becomes invalid until you reload the page and accept the new certificate. If anything is wrong with the certificate (such as a corrupted file or wrong certificate type), the new certificate is discarded, and the system reverts to the original default certificate provided by Nutanix.
Note: The system holds only one custom SSL certificate. If a new certificate is uploaded, it replaces the existing certificate. The previous certificate is discarded.

Replacing a Certificate

Nutanix simplifies the certificate replacement process to support Certificate Authority (CA) based chains of trust. Nutanix recommends that you replace the default self-signed certificate with a CA-signed certificate.

Procedure

  1. Log in to Prism and click the gear icon.
  2. Click SSL Certificate .
  3. Select Replace Certificate to replace the certificate.
  4. Do one of the following.
    • Select Regenerate self signed certificate to generate a new self-signed certificate.
      Note: This automatically generates and applies a certificate.
    • Select Import key and certificate to import the custom key and certificate. RSA 4096 bit, RSA 2048 bit, Elliptic Curve DSA 256 bit, and Elliptic Curve DSA 384 bit types of key and certificate are supported.

    The following files are required and should be PEM encoded to import the keys and certificate.

    • The private key associated with the certificate. The sections below describe generating a private key in detail.
    • The signed public portion of the server certificate corresponding to the private key.
    • The CA certificate or chain of the signing authority for the certificate.
    Note:

    You must obtain the Public Certificate and CA Certificate/Chain from the certificate authority.

    Figure. Importing Certificate Click to enlarge

    Generating an RSA 4096 and RSA 2048 private key

    Tip: You can run the OpenSSL commands for generating private key and CSR on a Linux client with OpenSSL installed.
    Note: Some OpenSSL command parameters might not be supported on older OpenSSL versions and require OpenSSL version 1.1.1 or above to work.
    • Run the following OpenSSL command to generate a RSA 4096 private key and the Certificate Signing Request (CSR).
      openssl req -out server.csr -new -newkey rsa:4096 \
              -nodes -sha256 -keyout server.key
    • Run the following OpenSSL command to generate an RSA 2048 private key and the Certificate Signing Request (CSR).
      openssl req -out server.csr -new -newkey rsa:2048 \
              -nodes -sha256 -keyout server.key

      After executing the openssl command, the system prompts you for additional details that are incorporated into your certificate. The mandatory fields are Country Name, State or Province Name, and Organization Name. The optional fields are Locality Name, Organizational Unit Name, Email Address, and Challenge Password.
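To avoid the interactive prompts, the subject fields can also be supplied on the command line with -subj. This is a sketch with placeholder values; substitute your own organization details.

```shell
# Non-interactive variant: -subj supplies the mandatory Country Name (C),
# State or Province Name (ST), and Organization Name (O) fields directly.
# The values shown are placeholders.
openssl req -out server.csr -new -newkey rsa:2048 \
  -nodes -sha256 -keyout server.key \
  -subj "/C=US/ST=California/O=Example Org"
```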

    Nutanix recommends including a DNS name for all CVMs in the certificate using the Subject Alternative Name (SAN) extension. This avoids SSL certificate errors when you access a CVM by direct DNS instead of the shared cluster IP. This example shows how to include a DNS name while generating an RSA 4096 private key:

    openssl req -out server.csr -new -newkey rsa:4096 -sha256 -nodes \
    -addext "subjectAltName = DNS:example.com" \
    -keyout server.key

    For a 3-node cluster you can provide DNS name for all three nodes in a single command. For example:

    openssl req -out server.csr -new -newkey rsa:4096 -sha256 -nodes \
    -addext "subjectAltName = DNS:example1.com,DNS:example2.com,DNS:example3.com" \
    -keyout server.key

    If you have added a SAN ( subjectAltName ) extension to your certificate, then every time you add or remove a node from the cluster, you must update the DNS names when you generate or sign a new certificate.
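Before sending the CSR to the CA, you can confirm that every DNS name made it into the request. The following is a sketch (it requires OpenSSL 1.1.1 or later for -addext, as noted above; the domain names and subject values are placeholders):

```shell
# Generate a CSR with SAN entries, then print the SAN section to verify them.
openssl req -out server.csr -new -newkey rsa:2048 -sha256 -nodes \
  -addext "subjectAltName = DNS:example1.com,DNS:example2.com,DNS:example3.com" \
  -keyout server.key -subj "/C=US/O=Example Org"
openssl req -in server.csr -noout -text | grep -A1 "Subject Alternative Name"
```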

    Generating an EC DSA 256 and EC DSA 384 private key

    • Run the following OpenSSL command to generate an EC DSA 256 private key and the Certificate Signing Request (CSR).
      openssl ecparam -out dsakey.pem -name prime256v1 -genkey
      openssl req -out dsacert.csr -new -key dsakey.pem -nodes -sha256
    • Run the following OpenSSL command to generate an EC DSA 384 private key and the Certificate Signing Request (CSR).
      openssl ecparam -out dsakey.pem -name secp384r1 -genkey
      openssl req -out dsacert.csr -new -key dsakey.pem -nodes -sha384
      
    Note: To meet the high security standards of NIST SP800-131a compliance and the RFC 6460 requirements for NSA Suite B, and to provide optimal encryption performance, the certificate import process validates that the correct signature algorithm is used for a given key/certificate pair.
  5. If the CA chain certificate provided by the certificate authority is not in a single file, then run the following command to concatenate the list of CA certificates into a chain file.
    cat signer.crt inter.crt root.crt > server.cert
    Note: The chain must start with the certificate of the signer and end with the root CA certificate.
  6. Browse and add the Private Key, Public Certificate, and CA Certificate/Chain.
  7. Click Import Files .

What to do next

Prism restarts, and you must log in again to use the application.

Exporting an SSL Certificate for Third-party Backup Applications

Nutanix allows you to export an SSL certificate for Prism Element on a Nutanix cluster and use it with third-party backup applications.

Procedure

  1. Log on to a Controller VM in the cluster using SSH.
  2. Run the following command to obtain the virtual IP address of the cluster:
    nutanix@cvm$ ncli cluster info

    The current cluster configuration is displayed.

        Cluster Id           : 0001ab12-abcd-efgh-0123-012345678m89::123456
        Cluster Uuid         : 0001ab12-abcd-efgh-0123-012345678m89
        Cluster Name         : three
        Cluster Version      : 6.0
        Cluster Full Version : el7.3-release-fraser-6.0-a0b1c2345d6789ie123456fg789h1212i34jk5lm6
        External IP address  : 10.10.10.10
        Node Count           : 3
        Block Count          : 1
        . . . . 
    Note: The external IP address in the output is the virtual IP address of the cluster.
  3. Run the following command to enter into the Python prompt:
    nutanix@cvm$ python

    The Python prompt appears.

  4. Run the following command to import the SSL library.
    >>> import ssl
  5. From the Python console, run the following command to print the SSL certificate.
    >>> print(ssl.get_server_certificate(('virtual_IP_address', 9440), ssl_version=ssl.PROTOCOL_TLSv1_2))
    Example: Refer to the following example, where the virtual_IP_address value is replaced by 10.10.10.10.
    >>> print(ssl.get_server_certificate(('10.10.10.10', 9440), ssl_version=ssl.PROTOCOL_TLSv1_2))
    The SSL certificate is displayed on the console.
    -----BEGIN CERTIFICATE-----
    0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
    23456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123
    456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz012345
    6789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567
    89ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
    ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789AB
    CDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCD
    EFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEF
    GHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGH
    IJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJ
    KLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKL
    MNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMN
    OPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOP
    QRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQR
    STUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRST
    UVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUV
    WXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWX
    YZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZab
    cdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcd
    efghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef
    ghij
    -----END CERTIFICATE-----
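The same certificate can also be fetched without the Python console, using openssl s_client from any machine that can reach the cluster. The following is a hedged equivalent, not an official Nutanix procedure; 10.10.10.10 stands in for the cluster virtual IP address from the example above, and prism.crt is an arbitrary output file name.

```shell
# Fetch the Prism certificate over TLS and save it in PEM format.
# Replace 10.10.10.10 with your cluster virtual IP address; 9440 is the Prism port.
echo | openssl s_client -connect 10.10.10.10:9440 2>/dev/null \
  | openssl x509 -outform PEM > prism.crt
```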

Controlling Cluster Access

About this task

Nutanix supports the cluster lockdown feature, which enables key-based SSH access to the Controller VM and to AHV on the host (only for the nutanix/admin users).

Enabling cluster lockdown mode disables password authentication so that only the keys you have provided can be used to access cluster resources, making the cluster more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.
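A key pair can be generated on your workstation with ssh-keygen; the contents of the public (.pub) file are what you paste into Prism. A minimal sketch follows; the file name and comment are arbitrary choices, not required values.

```shell
# Generate an ECDSA key pair with no passphrase (-N ""); RSA (-t rsa) also works,
# since Prism accepts RSA and ECDSA public keys.
ssh-keygen -t ecdsa -b 256 -f ./cluster_lockdown_key -N "" -C "ops-admin"
# The public key to paste into the Prism "Key" field:
cat ./cluster_lockdown_key.pub
```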

To control key-based SSH access to the cluster, do the following:
Note: Use this procedure to lock down access to the Controller VM and the hypervisor host.

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.
    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.
    Figure. Cluster Lockdown Window Click to enlarge
  2. To disable (or enable) remote login access, uncheck (check) the Enable Remote Login with Password box.
    Remote login access is enabled by default.
  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    Note: Prism supports the following key types.
    • RSA
    • ECDSA
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.
    There are no public keys available by default, but you can add any number of public keys.
  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.

Data-at-Rest Encryption

Nutanix provides an option to secure data while it is at rest using either self-encrypted drives or software-only encryption and key-based access management (cluster's native or external KMS for software-only encryption).

Encryption Methods

Nutanix provides you with the following options to secure your data.

  • Self Encrypting Drives (SED) Encryption - You can use a combination of SEDs and an external KMS to secure your data while it is at rest.
  • Software-only Encryption - Nutanix AOS uses the AES-256 encryption standard to encrypt your data. Once enabled, software-only data-at-rest encryption cannot be disabled, thus protecting against accidental data leaks due to human errors. Software-only encryption supports both Nutanix Native Key Manager (local and remote) and External KMS to secure your keys.

Note the following points regarding data-at-rest encryption.

  • Encryption is supported for AHV, ESXi, and Hyper-V.
    • For ESXi and Hyper-V, software-only encryption can be implemented at a cluster level or container level. For AHV, encryption can be implemented at the cluster level only.
  • Nutanix recommends using cluster-level encryption. With the cluster-level encryption, the administrative overhead of selecting different containers for the data storage gets eliminated.
  • Encryption cannot be disabled once it is enabled at a cluster level or container level.
  • Encryption can be implemented on an existing cluster with data that exists. If encryption is enabled on an existing cluster (AHV, ESXi, or Hyper-V), the unencrypted data is transformed into an encrypted format in a low priority background task that is designed not to interfere with other workloads running in the cluster.
  • Data can be encrypted using either self-encrypted drives (SEDs) or software-only encryption. You can change the encryption method from SEDs to software-only. You can perform the following configurations.
    • For ESXi and Hyper-V clusters, you can switch from SEDs and External Key Management (EKM) combination to software-only encryption and EKM combination. First, you must disable the encryption in the cluster where you want to change the encryption method. Then, select the cluster and enable encryption to transform the unencrypted data into an encrypted format in the background.
    • For AHV, background encryption is supported.
  • Once the task to encrypt a cluster begins, you cannot cancel the operation. Even if you stop and restart the cluster, the system resumes the operation.
  • In the case of mixed clusters with ESXi and AHV nodes, where the AHV nodes are used for storage only, the encryption policies consider the cluster as an ESXi cluster. So, the cluster-level and container-level encryption are available.
  • You can use a combination of SED and non-SED drives in a cluster. After you encrypt a cluster using the software-only encryption, all the drives are considered as unencrypted drives. In case you switch from the SED encryption to the software-only encryption, you can add SED or non-SED drives to the cluster.
  • Data is not encrypted when it is replicated to another cluster. You must enable the encryption for each cluster. Data is encrypted as a part of the write operation and decrypted as a part of the read operation. During the replication process, the system reads, decrypts, and then sends the data over to the other cluster. You can use a third-party network solution if there is a requirement to encrypt the data during the transmission to another cluster.
  • Software-only encryption does not impact the data efficiency features such as deduplication, compression, erasure coding, zero block suppression, and so on. The software encryption is the last data transformation performed. For example, during the write operation, compression is performed first, followed by encryption.

Key Management

Nutanix supports a Native Key Management Server, also called the Local Key Manager (LKM), which avoids dependency on an External Key Manager (EKM). The cluster-localized Key Management Service requires a minimum of three nodes in a cluster and is supported only for software-only encryption. Therefore, 1-node and 2-node clusters can use either the Native KMS (remote) option or an EKM.

The following types of keys are used for encryption.

  • Data Encryption Key (DEK) - A symmetric key, such as AES-256, that is used to encrypt the data.
  • Key Encryption Key (KEK) - This key is used to encrypt or decrypt the DEK.

Note the following points regarding the key management.

  • Nutanix does not support the use of the Local Key Manager with a third party External Key Manager.
  • Dual encryption (both SED and software-only encryption) requires an EKM. For more information, see Configuring Dual Encryption.
  • You can switch from an EKM to LKM, and inversely. For more information, see Switching between Native Key Manager and External Key Manager.
  • Rekey of keys stored in the Native KMS is supported for the Leader Keys. For more information, see Changing Key Encryption Keys (SEDs) and Changing Key Encryption Keys (Software Only).
  • You must back up the keys stored in the Native KMS. For more information, see Backing up Keys.
  • You must back up the encryption keys whenever you create a new container or remove an existing container. Nutanix Cluster Check (NCC) checks the status of the backup and sends an alert if you do not take a backup at the time of creating or removing a container.

Data-at-Rest Encryption (SEDs)

For customers who require enhanced data security, Nutanix provides a data-at-rest security option using Self Encrypting Drives (SEDs) included in the Ultimate license.

Note: If you are running the AOS Pro License on G6 platforms and above, you can use SED encryption by installing an add-on license.

The following features are supported:

  • Data is encrypted on all drives at all times.
  • Data is inaccessible in the event of drive or node theft.
  • Data on a drive can be securely destroyed.
  • A key authorization method allows password rotation at arbitrary times.
  • Protection can be enabled or disabled at any time.
  • No performance penalty is incurred despite encrypting all data.
  • Re-key of the leader encryption key (MEK) at arbitrary times is supported.
Note: If an SED cluster is present, then while configuring data-at-rest encryption, you can choose between data-at-rest encryption using SEDs and data-at-rest encryption using AOS.
Figure. SED and AOS Options Click to enlarge

Note: This solution provides enhanced security for data on a drive, but it does not secure data in transit.

Data Encryption Model

To accomplish these goals, Nutanix implements a data security configuration that uses SEDs with keys maintained through a separate key management device. Nutanix uses open standards (TCG and KMIP protocols) and FIPS validated SED drives for interoperability and strong security.

Figure. Cluster Protection Overview Click to enlarge Graphical overview of the Nutanix data encryption methodology

This configuration involves the following workflow:

  1. The security implementation begins by installing SEDs for all data drives in a cluster.

    The drives are FIPS 140-2 validated and use FIPS 140-2 validated cryptographic modules.

    Creating a new cluster that includes only SEDs is straightforward, but you can convert an existing cluster to support data-at-rest encryption by replacing the existing drives with SEDs (after migrating all VMs and vDisks off the cluster while the drives are replaced).

    Note: Contact Nutanix customer support for assistance before attempting to convert an existing cluster. A non-protected cluster can contain both SED and standard drives, but Nutanix does not support a mixed cluster when protection is enabled. All the disks in a protected cluster must be SED drives.
  2. Data on the drives is always encrypted but read or write access to that data is open. By default, the access to data on the drives is protected by the in-built manufacturer key. However, when data protection for the cluster is enabled, the Controller VM must provide the proper key to access data on a SED. The Controller VM communicates with the SEDs through a Trusted Computing Group (TCG) Security Subsystem Class (SSC) Enterprise protocol.

    A symmetric data encryption key (DEK) such as AES 256 is applied to all data being written to or read from the disk. The key is known only to the drive controller and never leaves the physical subsystem, so there is no way to access the data directly from the drive.

    Another key, known as a key encryption key (KEK), is used to encrypt/decrypt the DEK and authenticate to the drive. (Some vendors call this the authentication key or PIN.)

    Each drive has a separate KEK that is generated through the FIPS compliant random number generator present in the drive controller. The KEK is 32 bytes long to resist brute force attacks. The KEKs are sent to the key management server for secure storage and later retrieval; they are not stored locally on the node (even though they are generated locally).

    In addition to the above, the leader encryption key (MEK) is used to encrypt the KEKs.

    Each node maintains a set of certificates and keys in order to establish a secure connection with the external key management server.

  3. Keys are stored in a key management server that is outside the cluster, and the Controller VM communicates with the key management server using the Key Management Interoperability Protocol (KMIP) to upload and retrieve drive keys.

    Only one key management server device is required, but it is recommended that multiple devices are employed so the key management server is not a potential single point of failure. Configure the key manager server devices to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.

  4. When a node experiences a full power off and power on (and cluster protection is enabled), the controller VM retrieves the drive keys from the key management server and uses them to unlock the drives.

    If the Controller VM cannot get the correct keys from the key management server, it cannot access data on the drives.

    If a drive is re-seated, it becomes locked.

    If a drive is stolen, the data is inaccessible without the KEK (which cannot be obtained from the drive). If a node is stolen, the key management server can revoke the node certificates to ensure they cannot be used to access data on any of the drives.

Preparing for Data-at-Rest Encryption (External KMS for SEDs and Software Only)

About this task

Caution: DO NOT HOST A KEY MANAGEMENT SERVER VM ON THE ENCRYPTED CLUSTER THAT IS USING IT!

Doing so could result in complete data loss if there is a problem with the VM while it is hosted in that cluster.

If you are using an external KMS for encryption using AOS, preparation steps outside the web console are required. The information in this section is applicable if you choose to use an external KMS for configuring encryption.

You must install the license of the external key manager for all nodes in the cluster. See Compatibility and Interoperability Matrix for a complete list of the supported key management servers. For instructions on how to configure a key management server, refer to the documentation from the appropriate vendor.

The system accesses the EKM under the following conditions:

  • Starting a cluster

  • Regenerating a key (key regeneration occurs automatically every year by default)

  • Adding or removing a node (only when Self Encrypting Drives is used for encryption)

  • Switching from Native to EKM or from EKM to Native

  • Starting and restarting a service (only if software-based encryption is used)

  • Upgrading AOS (only if Software-based encryption is used)

  • NCC heartbeat check to verify that the EKM is alive
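Because the cluster contacts the EKM at each of these points, it is worth confirming basic reachability of the KMIP port before configuration. The following is a hedged sketch using bash's built-in /dev/tcp redirection; kms.example.com is a placeholder host name, and 5696 is the default KMIP port mentioned later in this guide.

```shell
# Test TCP reachability of the key management server on the KMIP port.
# This checks only the socket, not the KMIP protocol or the TLS handshake.
if timeout 5 bash -c 'exec 3<>/dev/tcp/kms.example.com/5696'; then
  echo "KMIP port reachable"
else
  echo "KMIP port unreachable"
fi
```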

Procedure

  1. Configure a key management server.

    The key management server devices must be configured into the network so the cluster has access to those devices. For redundant protection, it is recommended that you employ at least two key management server devices, either in active-active cluster mode or stand-alone.

    Note: The key management server must support KMIP version 1.0 or later.
    • SafeNet

      Ensure that Security > High Security > Key Security > Disable Creation and Use of Global Keys is checked.

    • Vormetric

      Set the appliance to compatibility mode. Suite B mode causes the SSL handshake to fail.

  2. Generate a certificate signing request (CSR) for each node in the cluster.
    • The Common Name field of the CSR is populated automatically with unique_node_identifier .nutanix.com to identify the node associated with the certificate.
      Tip: After generating the certificate from Prism, (if required) you can update the custom common name (CN) setting by running the following command using nCLI.
      ncli data-at-rest-encryption-certificate update-csr-information domain-name=abcd.test.com

      In the above command example, replace "abcd.test.com" with the actual domain name.

    • A UID field is populated with a value of Nutanix . This can be useful when configuring a Nutanix group for access control within a key management server, since it is based on fields within the client certificates.
    Note: Some vendors when doing client certificate authentication expect the client username to be a field in the CSR. While the CN and UID are pre-generated, many of the user populated fields can be used instead if desired. If a node-unique field such as CN is chosen, users must be created on a per node basis for access control. If a cluster-unique field is chosen, customers must create a user for each cluster.
  3. Send the CSRs to a certificate authority (CA) and get them signed.
    • Safenet

      The SafeNet KeySecure key management server includes a local CA option to generate signed certificates, or you can use other third-party vendors to create the signed certificates.

      To enable FIPS compliance, add user nutanix to the CA that signed the CSR. Under Security > High Security > FIPS Compliance click Set FIPS Compliant .

    Note: Some CAs strip the UID field when returning a signed certificate.
    To comply with FIPS, Nutanix does not support the creation of global keys.

    In the SafeNet KeySecure management console, go to Device > Key Server > Key Server > KMIP Properties > Authentication Settings .

    Then do the following:

    • Set the Username Field in Client Certificate option to UID (User ID) .
    • Set the Client Certificate Authentication option to Used for SSL session and username .

    If you do not perform these settings, the KMS creates global keys and fails to encrypt the clusters or containers using the software only method.

  4. Upload the signed SSL certificates (one for each node) and the certificate for the CA to the cluster. These certificates are used to authenticate with the key management server.
  5. Generate keys (KEKs) for the SED drives and upload those keys to the key management server.

Configuring Data-at-Rest Encryption (SEDs)

Nutanix offers an option to use self-encrypting drives (SEDs) to store data in a cluster. When SEDs are used, there are several configuration steps that must be performed to support data-at-rest encryption in the cluster.

Before you begin

A separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key manager server devices be configured to work in clustered mode so they can be added to the cluster configuration as a single entity (see step 5) that is resilient to a single failure.

About this task

To configure cluster encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
    The Data at Rest Encryption dialog box appears. Initially, encryption is not configured, and a message to that effect appears.
    Figure. Data at Rest Encryption Screen (initial) Click to enlarge initial screen of the data-at-rest encryption window

  2. Click the Create Configuration button.
    Clicking the Continue Configuration button, the configure it link, or the Edit Config button does the same thing: it displays the Data-at-Rest Encryption configuration page.
  3. Select the Encryption Type as Drive-based Encryption . This option is displayed only when SEDs are detected.
  4. In the Certificate Signing Request Information section, do the following:
    Figure. Certificate Signing Request Section Click to enlarge section of the data-at-rest encryption window for configuring a certificate signing request

    1. Enter appropriate credentials for your organization in the Email , Organization , Organizational Unit , Country Code , City , and State fields and then click the Save CSR Info button.
      The entered information is saved and is used when creating a certificate signing request (CSR). To specify more than one Organization Unit name, enter a comma separated list.
      Note: You can update this information until an SSL certificate for a node is uploaded to the cluster, at which point the information cannot be changed (the fields become read only) without first deleting the uploaded certificates.
    2. Click the Download CSRs button, and then in the new screen click the Download CSRs for all nodes to download a file with CSRs for all the nodes or click a Download link to download a file with the CSR for that node.
      Figure. Download CSRs Screen Click to enlarge screen to download a certificate signing request

    3. Send the files with the CSRs to the desired certificate authority.
      The certificate authority creates the signed certificates and returns them to you. Store the returned SSL certificates and the CA certificate where you can retrieve them in step 6.
      • The certificates must be X.509 format. (DER, PKCS, and PFX formats are not supported.)
      • The certificate and the private key should be in separate files.
  5. In the Key Management Server section, do the following:
    Figure. Key Management Server Section Click to enlarge section of the data-at-rest encryption window for configuring a key management server

    1. Click the Add New Key Management Server button.
    2. In the Add a New Key Management Server screen, enter a name, IP address, and port number for the key management server in the appropriate fields.
      The port is where the key management server is configured to listen for the KMIP protocol. The default port number is 5696. For the complete list of required ports, see Port Reference.
      • If you have configured multiple key management servers in cluster mode, click the Add Address button to provide the addresses for each key management server device in the cluster.
      • If you have stand-alone key management servers, click the Save button. Repeat this step ( Add New Key Management Server button) for each key management server device to add.
        Note: If your key management servers are configured into a leader/follower (active/passive) relationship and the architecture is such that the follower cannot accept write requests, do not add the follower into this configuration. The system sends requests (read or write) to any configured key management server, so both read and write access is needed for key management servers added here.
        Note: To prevent potential configuration problems, always use the Add Address button for key management servers configured into cluster mode. Only a stand-alone key management server should be added as a new server.
      Figure. Add Key Management Server Screen Click to enlarge screen to provide an address for a key management server

    3. To edit any settings, click the pencil icon for that entry in the key management server list to redisplay the add page and then click the Save button after making the change. To delete an entry, click the X icon.
  6. In the Add a New Certificate Authority section, enter a name for the CA, click the Upload CA Certificate button, and select the certificate for the CA used to sign your node certificates (see step 4c). Repeat this step for all CAs that were used in the signing process.
    Figure. Certificate Authority Section Click to enlarge screen to identify and upload a certificate authority certificate

  7. Go to the Key Management Server section (see step 5) and do the following:
    1. Click the Manage Certificates button for a key management server.
    2. In the Manage Signed Certificates screen, upload the node certificates either by clicking the Upload Files button to upload all the certificates in one step or by clicking the Upload link (not shown in the figure) for each node individually.
    3. Test that the certificates are correct either by clicking the Test all nodes button to test the certificates for all nodes in one step or by clicking the Test CS (or Re-Test CS ) link for each node individually. A status of Verified indicates the test was successful for that node.
    Note: Before removing a drive or node from an SED cluster, ensure that the testing is successful and the status is Verified . Otherwise, the drive or node will be locked.
    4. Repeat this step for each key management server.
    Figure. Upload Signed Certificates Screen Click to enlarge screen to upload and test signed certificates

  8. When the configuration is complete, click the Protect button on the opening page to enable encryption protection for the cluster.
    A clear key icon appears on the page.
    Figure. Data-at-Rest Encryption Screen (unprotected)

    The key turns gold when cluster encryption is enabled.
    Note: If changes are made to the configuration after protection has been enabled, such as adding a new key management server, you must rekey the disks for the modification to take full effect (see Changing Key Encryption Keys (SEDs)).
    Figure. Data-at-Rest Encryption Screen (protected)

Enabling/Disabling Encryption (SEDs)

Data on a self-encrypting drive (SED) is always encrypted, but enabling or disabling data-at-rest encryption for the cluster determines whether a separate (and secured) key is required to access that data.

About this task

To enable or disable data-at-rest encryption after it has been configured for the cluster (see Configuring Data-at-Rest Encryption (SEDs)), do the following:
Note: The key management server must be accessible to disable encryption.

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, do one of the following:
    • If cluster encryption is enabled currently, click the Unprotect button to disable it.
    • If cluster encryption is disabled currently, click the Protect button to enable it.
    Enabling cluster encryption enforces the use of secured keys to access data on the SEDs in the cluster; disabling cluster encryption means the data can be accessed without providing a key.

Changing Key Encryption Keys (SEDs)

The key encryption key (KEK) can be changed at any time. This can be useful as a periodic key-rotation security precaution or when a key management server or node becomes compromised. If the key management server is compromised, only the KEK needs to be changed, because the KEK is independent of the drive encryption key (DEK). There is no need to re-encrypt any data; only the DEK is re-encrypted.
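The independence of the KEK and DEK described above can be sketched with a toy envelope-encryption model. The XOR "cipher" and key handling here are illustrative only, not Nutanix's actual cryptography; the point is that a rekey re-wraps the DEK without touching the encrypted data.

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a key-derived stream. Illustrative only -- NOT real encryption."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# The drive encrypts data with a DEK that never leaves the controller.
dek = secrets.token_bytes(32)
ciphertext = toy_encrypt(dek, b"guest VM data")

# The DEK itself is wrapped (encrypted) with the KEK held by the key manager.
kek_old = secrets.token_bytes(32)
wrapped_dek = toy_encrypt(kek_old, dek)

# Rekey: re-wrap the DEK under a new KEK. The stored data is never touched.
kek_new = secrets.token_bytes(32)
rewrapped_dek = toy_encrypt(kek_new, toy_decrypt(kek_old, wrapped_dek))

# Data remains readable through the new key chain.
recovered_dek = toy_decrypt(kek_new, rewrapped_dek)
assert toy_decrypt(recovered_dek, ciphertext) == b"guest VM data"
```

Because only the small wrapped-DEK blob changes, the rekey is cheap regardless of how much data sits on the drive.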

About this task

To change the KEKs for a cluster, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select Manage Keys and click the Rekey All Disks button under Hardware Encryption .
    Rekeying a cluster under heavy workloads may result in higher-than-normal IO latency, and some data may become temporarily unavailable. To continue with the rekey operation, click Confirm Rekey .
    This step resets the KEKs for all the self-encrypting disks in the cluster.
    Note:
    • The Rekey All Disks button appears only when cluster protection is active.
    • If the cluster is already protected and a new key management server is added, you must press the Rekey All Disks button to use this new key management server for storing secrets.
    Figure. Cluster Encryption Screen

Destroying Data (SEDs)

Data on a self-encrypting drive (SED) is always encrypted, and the data encryption key (DEK) used to read the encrypted data is known only to the drive controller. All data on the drive can effectively be destroyed (that is, become permanently unreadable) by having the controller change the DEK. This is known as a crypto-erase.
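The crypto-erase idea can be illustrated with a minimal sketch. A toy XOR "cipher" stands in for the drive's real encryption (this is not Nutanix's or the drive vendor's actual algorithm): once the controller replaces the DEK, reads of the old ciphertext no longer yield the original data.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a key-derived stream. Illustrative only -- NOT real encryption."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

old_dek = secrets.token_bytes(32)
on_disk = toy_cipher(old_dek, b"sensitive records")

# Crypto-erase: the controller generates a new DEK; the old one is discarded.
new_dek = secrets.token_bytes(32)

# Reads through the new DEK no longer decode the original plaintext.
assert toy_cipher(new_dek, on_disk) != b"sensitive records"
```

No data is overwritten; the ciphertext simply becomes undecipherable because the only key that could decode it has been destroyed.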

About this task

To crypto-erase an SED, do the following:

Procedure

  1. In the web console, go to the Hardware dashboard and select the Diagram tab.
  2. Select the target disk in the diagram (upper section of screen) and then click the Remove Disk button (at the bottom right of the following diagram).

    As part of the disk removal process, the DEK for that disk is automatically cycled on the drive controller. The previous DEK is lost and all new disk reads are indecipherable. The key encryption key (KEK) is unchanged, and the new DEK is protected using the current KEK.

    Note: When a node is removed, all SEDs in that node are crypto-erased automatically as part of the node removal process.
    Figure. Removing a Disk

Data-at-Rest Encryption (Software Only)

For customers who require enhanced data security, Nutanix provides a software-only encryption option for data-at-rest security (SEDs not required) included in the Ultimate license.
Note: On G6 platforms running the AOS Pro license, you can use software encryption by installing an add-on license.
Software encryption using a local key manager (LKM) supports the following features:
  • For AHV, the data can be encrypted on a cluster level. This is applicable to an empty cluster or a cluster with existing data.
  • For ESXi and Hyper-V, the data can be encrypted on a cluster or container level. The cluster or container can be empty or contain existing data. Consider the following points for container level encryption.
    • Once you enable container level encryption, you cannot change the encryption type to cluster level encryption later.
    • After the encryption is enabled, the administrator needs to enable encryption for every new container.
  • Data is encrypted at all times.
  • Data is inaccessible in the event of drive or node theft.
  • Data on a drive can be securely destroyed.
  • Re-key of the leader encryption key at arbitrary times is supported.
  • The cluster's native KMS is supported.
Note: In case of mixed hypervisors, only the following combinations are supported.
  • ESXi and AHV
  • Hyper-V and AHV
Note: This solution provides enhanced security for data on a drive, but it does not secure data in transit.

Data Encryption Model

To accomplish these goals, Nutanix implements a data security configuration that uses AOS functionality along with the cluster's native or an external key management server. Nutanix uses open standards (the KMIP protocol) for interoperability and strong security.

Figure. Cluster Protection Overview

This configuration involves the following workflow:

  • For software encryption, data protection must be enabled for the cluster before any data is encrypted. Also, the Controller VM must provide the proper key to access the data.
  • A symmetric data encryption key (DEK), such as an AES-256 key, is applied to all data being written to or read from the disk. The key is known only to AOS, so there is no way to access the data directly from the drive.
  • In case of an external KMS:

    Each node maintains a set of certificates and keys in order to establish a secure connection with the key management server.

    Only one key management server device is required, but it is recommended that multiple devices be employed so that the key management server is not a single point of failure. Configure the key management server devices to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.

Configuring Data-at-Rest Encryption (Software Only)

Nutanix offers a software-only option to perform data-at-rest encryption in a cluster or container.

Before you begin

  • Nutanix provides the option to choose the KMS type as the Native KMS (local), Native KMS (remote), or External KMS.
  • Cluster Localised Key Management Service (Native KMS (local)) requires a cluster of at least three nodes. 1-node and 2-node clusters are not supported.
  • Software encryption for remote office/branch office (ROBO) deployments is supported through the Native KMS (remote) KMS type.
  • For external KMS, a separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key management server devices be configured to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.
    Caution: Do not host a key management server VM on the encrypted cluster that is using it.

    Doing so could result in complete data loss if there is a problem with the VM while it is hosted in that cluster.

    Note: You must install the license of the external key manager for all nodes in the cluster. See Compatibility and Interoperability Matrix for a complete list of the supported key management servers. For instructions on how to configure a key management server, refer to the documentation from the appropriate vendor.
  • This feature requires an Ultimate license, or an add-on to the Pro license (for the latest generation of products). Ensure that you have procured the add-on license key before using data-at-rest encryption with AOS; contact the Nutanix Sales team to procure the license.
  • Caution: For security, you cannot disable software-only data-at-rest encryption once it is enabled.

About this task

To configure cluster or container encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
    The Data at Rest Encryption dialog box appears. Initially, encryption is not configured, and a message to that effect appears.
    Figure. Data at Rest Encryption Screen (initial)

  2. Click the Create Configuration button.
    Clicking the Continue Configuration button, the configure it link, or the Edit Config button does the same thing: it displays the Data-at-Rest Encryption configuration page.
  3. Select the Encryption Type as Encrypt the entire cluster or Encrypt storage containers . Then click Save Encryption Type .
    Caution: You can enable encryption for the entire cluster or just a container. However, if you enable encryption on a container and an encryption key issue occurs (such as the loss of an encryption key), you can encounter the following:
    • The data of the entire cluster is affected, not just the encrypted container.
    • None of the user VMs in the cluster are able to access the data.
    The hardware option is displayed only when SEDs are detected. Otherwise, the software-based encryption type is used by default.
    Figure. Select encryption type

    Note: For ESXi and Hyper-V, the data can be encrypted on a cluster or container level. The cluster or container can be empty or contain existing data. Consider the following points for container level encryption.
    • Once you enable container level encryption, you cannot change the encryption type to cluster level encryption later.
    • After the encryption is enabled, the administrator needs to enable encryption for every new container.
    To enable encryption for every new storage container, do the following:
    1. In the web console, select Storage from the pull-down main menu (upper left of screen) and then select the Table and Storage Container tabs.
    2. To enable encryption, select the target storage container and then click the Update link.
      The Update Storage Container window appears.
    3. In the Advanced Settings area, select the Enable check box to enable encryption for the storage container you selected.
      Figure. Update storage container

    4. Click Save to complete.
  4. Select the Key Management Service.
    To keep the keys safe with the native KMS, select Native KMS (local) or Native KMS (remote) and click Save KMS type . If you select this option, skip to step 9 to complete the configuration.
    Note:
    • Cluster Localised Key Management Service ( Native KMS (local) ) requires a cluster of at least three nodes. 1-node and 2-node clusters are not supported.
    • For enhanced security of ROBO environments (typically, 1 or 2 node clusters), select the Native KMS (remote) for software based encryption of ROBO clusters managed by Prism Central.
      Note: This option is available only if the cluster is registered to Prism Central.
    For external KMS type, select the External KMS option and click Save KMS type . Continue to step 5 for further configuration.
    Figure. Select KMS Type

    Note: You can switch between the KMS types at a later stage if the specific KMS prerequisites are met; see Switching between Native Key Manager and External Key Manager.
  5. In the Certificate Signing Request Information section, do the following:
    Figure. Certificate Signing Request Section

    1. Enter appropriate credentials for your organization in the Email , Organization , Organizational Unit , Country Code , City , and State fields and then click the Save CSR Info button.
      The entered information is saved and used when creating a certificate signing request (CSR). To specify more than one Organizational Unit name, enter a comma-separated list.
      Note: You can update this information until an SSL certificate for a node is uploaded to the cluster, at which point the information cannot be changed (the fields become read only) without first deleting the uploaded certificates.
    2. Click the Download CSRs button, and then in the new screen click the Download CSRs for all nodes to download a file with CSRs for all the nodes or click a Download link to download a file with the CSR for that node.
      Figure. Download CSRs Screen

    3. Send the files with the CSRs to the desired certificate authority.
      The certificate authority creates the signed certificates and returns them to you. Store the returned SSL certificates and the CA certificate where you can retrieve them in steps 7 and 8.
      • The certificates must be in X.509 format. (DER, PKCS, and PFX formats are not supported.)
      • The certificate and the private key should be in separate files.
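As a quick sanity check for the two format requirements above, a short sketch can confirm that a returned file is PEM-encoded certificate text with no private key mixed in. The helper names and sample contents here are hypothetical, not part of the product; note that binary DER/PKCS/PFX files would not contain these text markers at all.

```python
def pem_blocks(text: str) -> list[str]:
    """Return the labels of the PEM BEGIN blocks found in a string."""
    return [line.split("-----")[1].removeprefix("BEGIN ")
            for line in text.splitlines()
            if line.startswith("-----BEGIN ")]

def is_cert_only(text: str) -> bool:
    """True when the file holds certificate blocks and no private key material."""
    blocks = pem_blocks(text)
    return bool(blocks) and all(b == "CERTIFICATE" for b in blocks)

# Hypothetical file contents (base64 payloads elided for brevity).
cert_file = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
mixed_file = cert_file + "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n"

assert is_cert_only(cert_file)        # certificate alone: acceptable
assert not is_cert_only(mixed_file)   # key bundled with the cert: keep them in separate files
```

A file that fails this check should be split so that the certificate and the private key live in separate files before upload.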
  6. In the Key Management Server section, do the following:
    Figure. Key Management Server Section

    1. Click the Add New Key Management Server button.
    2. In the Add a New Key Management Server screen, enter a name, IP address, and port number for the key management server in the appropriate fields.
      The port is where the key management server is configured to listen for the KMIP protocol. The default port number is 5696. For the complete list of required ports, see Port Reference.
      • If you have configured multiple key management servers in cluster mode, click the Add Address button to provide the addresses for each key management server device in the cluster.
      • If you have stand-alone key management servers, click the Save button. Repeat this step (the Add New Key Management Server button) for each key management server device you want to add.
        Note: If your key management servers are configured in a master/slave (active/passive) relationship in which the passive server cannot accept write requests, do not add the passive server to this configuration. The system sends requests (read or write) to any configured key management server, so every key management server added here needs both read and write access.
        Note: To prevent potential configuration problems, always use the Add Address button for key management servers configured into cluster mode. Only a stand-alone key management server should be added as a new server.
      Figure. Add Key Management Server Screen

    3. To edit any settings, click the pencil icon for that entry in the key management server list to redisplay the add page and then click the Save button after making the change. To delete an entry, click the X icon.
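Before testing certificates, it can be useful to confirm that each configured key management server is actually listening on its KMIP port (5696 by default, as noted above). The sketch below is an illustration rather than a Nutanix tool: a plain TCP connect only shows that a listener is up, not that KMIP or the certificates are valid. The function name and the local stand-in listener are assumptions for the demo.

```python
import socket

def first_reachable(addresses, timeout=2.0):
    """Return the first (host, port) pair that accepts a TCP connection, else None."""
    for host, port in addresses:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)
        except OSError:
            continue  # refused or timed out; try the next configured server
    return None

# Demo against a local stand-in listener; a real check would list the
# configured key management server addresses with port 5696.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()
assert first_reachable([("127.0.0.1", 1), addr]) == addr  # falls through to the live listener
server.close()
```

Trying each address in order also mirrors the failover expectation above: every server you add must be able to answer requests.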
  7. In the Add a New Certificate Authority section, enter a name for the CA, click the Upload CA Certificate button, and select the certificate for the CA used to sign your node certificates (see step 5c). Repeat this step for all CAs that were used in the signing process.
    Figure. Certificate Authority Section

  8. Go to the Key Management Server section (see step 6) and do the following:
    1. Click the Manage Certificates button for a key management server.
    2. In the Manage Signed Certificates screen, upload the node certificates either by clicking the Upload Files button to upload all the certificates in one step or by clicking the Upload link (not shown in the figure) for each node individually.
    3. Test that the certificates are correct either by clicking the Test all nodes button to test the certificates for all nodes in one step or by clicking the Test CS (or Re-Test CS ) link for each node individually. A status of Verified indicates the test was successful for that node.
    4. Repeat this step for each key management server.
    Note: Before removing a drive or node from an SED cluster, ensure that the testing is successful and the status is Verified . Otherwise, the drive or node will be locked.
    Figure. Upload Signed Certificates Screen

  9. When the configuration is complete, click the Enable Encryption button.
    The Enable Encryption window is displayed.
    Figure. Data-at-Rest Encryption Screen (unprotected)

    Caution: To help ensure that your data is secure, you cannot disable software-only data-at-rest encryption once it is enabled. Nutanix recommends regularly backing up your data, encryption keys, and key management server.
  10. Enter ENCRYPT .
  11. Click the Encrypt button.
    The data-at-rest encryption is enabled. To view the status of the encrypted cluster or container, go to Data at Rest Encryption in the Settings menu.

    When you enable encryption, a low-priority background task runs to encrypt all the unencrypted data. This task is designed to take advantage of any available CPU capacity to encrypt the unencrypted data within a reasonable time. If the system is occupied with other workloads, the background task consumes less CPU. Depending on the amount of data in the cluster, the background task can take 24 to 36 hours to complete.

    Note: If changes are made to the configuration after protection has been enabled, such as adding a new key management server, you must perform the rekey operation for the modification to take full effect. In the case of an EKM, rekey to change the KEKs stored in the EKM. In the case of the LKM, rekey to change the leader key used by the native key manager. See Changing Key Encryption Keys (Software Only) for details.
    Note: Once the task to encrypt a cluster begins, you cannot cancel the operation. Even if you stop and restart the cluster, the system resumes the operation.
    Figure. Data-at-Rest Encryption Screen (protected)

Switching between Native Key Manager and External Key Manager

After Software Encryption has been established, Nutanix supports switching the KMS type from the External Key Manager to the Native Key Manager, or from the Native Key Manager to an External Key Manager, without any downtime.

Note:
  • The Native KMS requires a cluster of at least three nodes.
  • For external KMS, a separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key management server devices be configured to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.
  • It is recommended that you back up and save the encryption keys with identifiable names before and after changing the KMS type. For backing up keys, see Backing up Keys.
To change the KMS type, change the KMS selection by editing the encryption configuration. For details, see step 4 in the Configuring Data-at-Rest Encryption (Software Only) section.
Figure. Select KMS type

After you change the KMS type and save the configuration, the encryption keys are regenerated on the selected KMS storage medium and the data is re-encrypted with the new keys. The old keys are destroyed.
Note: This operation completes in a few minutes, depending on the number of encrypted objects and network speed.

Changing Key Encryption Keys (Software Only)

The key encryption key (KEK) can be changed at any time. This can be useful as a periodic key-rotation security precaution or when a key management server or node becomes compromised. If the key management server is compromised, only the KEK needs to be changed, because the KEK is independent of the drive encryption key (DEK). There is no need to re-encrypt any data; only the DEK is re-encrypted.

About this task

To change the KEKs for a cluster, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select Manage Keys and click the Rekey button under Software Encryption .
    Note: The Rekey button appears only when cluster protection is active.
    Note: If the cluster is already protected and a new key management server is added, you must press the Rekey button to use this new key management server for storing secrets.
    Figure. Cluster Encryption Screen

    Note: The system automatically regenerates the leader key yearly.

Destroying Data (Software Only)

Data on the AOS cluster is always encrypted, and the data encryption key (DEK) used to read the encrypted data is known only to AOS. All data on the drive can effectively be destroyed (that is, become permanently unreadable) by deleting the container or cluster. This is known as a crypto-erase.

About this task

Note: To help ensure that your data is secure, you cannot disable software-only data-at-rest encryption once it is enabled. Nutanix recommends regularly backing up your data, encryption keys, and key management server.

To crypto-erase the container or cluster, do the following:

Procedure

  1. Delete the storage container or destroy the cluster.
    • For information on how to delete a storage container, see Modifying a Storage Container in the Prism Web Console Guide .
    • For information on how to destroy a cluster, see Destroying a Cluster in the Acropolis Advanced Administration Guide .
    Note:

    When you delete a storage container, Curator scans for and deletes the DEKs and KEKs automatically.

    When you destroy a cluster, then:

    • the Native Key Manager (local) destroys the master key shares and the encrypted DEKs/KEKs.
    • the Native Key Manager (remote) retains the root key on the PC if the cluster is still registered to a PC when it is destroyed. You must unregister a cluster from the PC and then destroy the cluster to delete the root key.
    • the External Key Manager deletes the encrypted DEKs. However, the KEKs remain on the EKM. You must use an external key manager UI to delete the KEKs.
  2. Delete the key backup files, if any.

Switching from SED-EKM to Software-LKM

This section describes the steps to switch from SED and External KMS combination to software-only and LKM combination.

About this task

To switch from SED-EKM to Software-LKM, do the following.

Procedure

  1. Perform the steps for the software-only encryption with External KMS. For more information, see Configuring Data-at-Rest Encryption (Software Only).
    After the background task completes, all the data gets encrypted by the software. The time taken to complete the task depends on the amount of data and foreground I/O operations in the cluster.
  2. Disable the SED encryption. Ensure that all the disks are unprotected.
    For more information, see Enabling/Disabling Encryption (SEDs).
  3. Switch the key management server from the External KMS to Local Key Manager. For more information, see Switching between Native Key Manager and External Key Manager.

Configuring Dual Encryption

About this task

Dual Encryption protects the data on the clusters using both SED and software-only encryption. An external key manager is used to store the keys for dual encryption; the Native KMS is not supported.

To configure dual encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select the check boxes to enable both Drive-based and Software-based encryption.
  3. Click Save Encryption Type .
    Figure. Dual Encryption

  4. Continue with the rest of the encryption configuration, see:
    • Configuring Data-at-Rest Encryption (Software Only)
    • Configuring Data-at-Rest Encryption (SEDs)

Backing up Keys

About this task

You can take a backup of encryption keys:

  • when you enable Software-only Encryption for the first time
  • after you regenerate the keys

Backing up encryption keys is critical for the very unlikely situation in which keys become corrupted.

You can download the key backup file for a single cluster from Prism Element (PE) or for all clusters from Prism Central (PC). To download the key backup file for all clusters, see Taking a Consolidated Backup of Keys (Prism Central).

To download the key backup file for a cluster, do the following:

Procedure

  1. Log on to the Prism Element web console.
  2. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  3. In the Cluster Encryption page, select Manage Keys .
  4. Enter and confirm the password.
  5. Click the Download Key Backup button.

    The backup file is saved in the default download location on your local machine.

    Note: Ensure you move the backup key file to a safe location.

Taking a Consolidated Backup of Keys (Prism Central)

If you are using the Native KMS option with software encryption for your clusters, you can take a consolidated backup of all the keys from Prism Central.

About this task

To take a consolidated backup of keys for software encryption-enabled clusters (Native KMS-only), do the following:

Procedure

  1. Log on to the Prism Central web console.
  2. Click the hamburger icon, then select Clusters > List view.
  3. Select a cluster, go to Actions , then select Manage & Backup Keys .
  4. Download the backup keys:
    1. In Password , enter your password.
    2. In Confirm Password , reenter your password.
    3. To change the encryption key, select the Rekey Encryption Key (KEK) box .
    4. To download the backup key, click Backup Key .
    Note: Ensure that you move the backup key file to a safe location.

Importing Keys

You can import the encryption keys from backup. Note the specific commands in this topic if you backed up your keys to an external key manager (EKM).

About this task

Note: Nutanix recommends that you contact Nutanix Support for this operation. Extended cluster downtime might result if you perform this task incorrectly.

Procedure

  1. Log on to any Controller VM in the cluster with SSH.
  2. Retrieve the encryption keys stored on the cluster and verify that all the keys you want to retrieve are listed.
    In this example, the password is Nutanix.123, and date is the timestamp portion of the backup file name.
    mantle_recovery_util --backup_file_path=/home/nutanix/encryption_key_backup_date \
    --password=Nutanix.123 --list_key_ids=true 
  3. Import the keys into the cluster.
    mantle_recovery_util --backup_file_path=/home/nutanix/key_backup \
    --password=Nutanix.123 --interactive_mode 
  4. If you are using an external key manager such as IBM Security Key Lifecycle Manager, Gemalto Safenet, or Vormetric Data Security Manager, use the --store_kek_remotely option to import the keys into the cluster.
    In this example, date is the timestamp portion of the backup file name.
    mantle_recovery_util --backup_file_path path/encryption_key_backup_date \
     --password key_password --store_kek_remotely

Securing Traffic Through Network Segmentation

Network segmentation enhances security, resilience, and cluster performance by isolating a subset of traffic to its own network.

You can achieve traffic isolation in one or more of the following ways:

Isolating Backplane Traffic by using VLANs (Logical Segmentation)
You can separate management traffic from storage replication (or backplane) traffic by creating a separate network segment (LAN) for storage replication. For more information about the types of traffic seen on the management plane and the backplane, see Traffic Types In a Segmented Network.

To enable the CVMs in a cluster to communicate over these separated networks, the CVMs are multihomed. Multihoming is facilitated by the addition of a virtual network interface card (vNIC) to the Controller VM and placing the new interface on the backplane network. Additionally, the hypervisor is assigned an interface on the backplane network.

The traffic associated with the CVM interfaces and host interfaces on the backplane network can be secured further by placing those interfaces on a separate VLAN.

In this type of segmentation, both network segments continue to use the same external bridge and therefore use the same set of physical uplinks. For physical separation, see Isolating the Backplane Traffic Physically on an Existing Cluster.

Isolating backplane traffic from management traffic requires minimal configuration through the Prism web console. No manual host (hypervisor) configuration steps are required.

For information about isolating backplane traffic, see Isolating the Backplane Traffic Logically on an Existing Cluster (VLAN-Based Segmentation Only).

Isolating Backplane Traffic Physically (Physical Segmentation)

You can physically isolate the backplane traffic (intra-cluster traffic) from the management traffic (Prism, SSH, SNMP) by moving it to a separate vNIC on the CVM and using a dedicated virtual network that has its own physical NICs. This type of segmentation therefore offers true physical separation of the backplane traffic from the management traffic.

You can use Prism to configure the vNIC on the CVM and configure the backplane traffic to communicate over the dedicated virtual network. However, you must first manually configure the virtual network on the hosts and associate it with the physical NICs that it requires for true traffic isolation.

For more information about physically isolating backplane traffic, see Isolating the Backplane Traffic Physically on an Existing Cluster.

Isolating service-specific traffic
You can also secure traffic associated with a service (for example, Nutanix Volumes) by confining its traffic to a separate vNIC on the CVM and using a dedicated virtual network that has its own physical NICs. This type of segmentation therefore offers true physical separation for service-specific traffic.

You can use Prism to create the vNIC on the CVM and configure the service to communicate over the dedicated virtual network. However, you must first manually configure the virtual network on the hosts and associate it with the physical NICs that it requires for true traffic isolation. You need one virtual network for each service you want to isolate. For a list of the services whose traffic you can isolate in the current release, see Cluster Services That Support Traffic Isolation.

For information about isolating service-specific traffic, see Isolating Service-Specific Traffic.

Isolating Stargate-to-Stargate traffic over RDMA
Some Nutanix platforms support remote direct memory access (RDMA) for Stargate-to-Stargate service communication. You can create a separate virtual network for RDMA-enabled network interface cards. If a node has RDMA-enabled NICs, Foundation passes the NICs through to the CVMs during imaging. The CVMs use only the first of the two RDMA-enabled NICs for Stargate-to-Stargate communications. The virtual NIC on the CVM is named rdma0. Foundation does not configure the RDMA LAN. After creating a cluster, you need to enable RDMA by creating an RDMA LAN from the Prism web console. For more information about RDMA support, see Remote Direct Memory Access in the NX Series Hardware Administration Guide .

For information about isolating backplane traffic on an RDMA cluster, see Isolating the Backplane Traffic on an Existing RDMA Cluster.

Traffic Types In a Segmented Network

The traffic entering and leaving a Nutanix cluster can be broadly classified into the following types:

Backplane traffic
Backplane traffic is intra-cluster traffic that is necessary for the cluster to function, and it comprises traffic between CVMs and traffic between CVMs and hosts for functions such as storage RF replication, host management, high availability, and so on. This traffic uses eth2 on the CVM. In AHV, VM live migration traffic is also backplane, and uses the AHV backplane interface, VLAN, and virtual switch when configured. For nodes that have RDMA-enabled NICs, the CVMs use a separate RDMA LAN for Stargate-to-Stargate communications.
Management traffic
Management traffic is administrative traffic, or traffic associated with Prism and SSH connections, remote logging, SNMP, and so on. The current implementation simplifies the definition of management traffic to be any traffic that is not on the backplane network, and therefore also includes communications between user VMs and CVMs. This traffic uses eth0 on the CVM.

Traffic on the management plane can be further isolated per service or feature. An example of this type of traffic is the traffic that the cluster receives from external iSCSI initiators (Nutanix Volumes iSCSI traffic). For a list of services supported in the current release, see Cluster Services That Support Traffic Isolation.

Segmented and Unsegmented Networks

In the default unsegmented network in a Nutanix cluster (ESXi and AHV), the Controller VM has two virtual network interfaces—eth0 and eth1.

Interface eth0 is connected to the default external virtual switch, which is in turn connected to the external network through a bond or NIC team that contains the host physical uplinks.

Interface eth1 is connected to an internal network that enables the CVM to communicate with the hypervisor.

In the unsegmented networks shown below (see the Unsegmented Network - ESXi Cluster and Unsegmented Network - AHV Cluster figures), all external CVM traffic, whether backplane or management, uses interface eth0. These interfaces are on the default VLAN on the default virtual switch.

Figure. Unsegmented Network - ESXi Cluster

The following figure shows an unsegmented network in an AHV cluster.


Figure. Unsegmented Network - AHV Cluster

If you further isolate service-specific traffic, additional vNICs are created on the CVM. Each service requiring isolation is assigned a dedicated virtual NIC on the CVM. The NICs are named ntnx0, ntnx1, and so on. Each service-specific NIC is placed on a configurable existing or new virtual network (vSwitch or bridge) and a VLAN and IP subnet are specified.

Network with Segmentation

In a segmented network, management traffic uses CVM interface eth0, and additional services can be isolated to different VLANs or virtual switches. In backplane segmentation, the backplane traffic uses interface eth2. The backplane network uses either the default VLAN or, optionally, a separate VLAN that you specify when segmenting the network. In ESXi, you must select a port group for the new vmkernel interface. In AHV, this internal interface is created automatically in the selected virtual switch. For physical separation of the backplane network, create this new port group on a separate virtual switch in ESXi, or select the desired virtual switch in the AHV GUI.

If you want to isolate service-specific traffic such as Volumes or Disaster Recovery as well as backplane traffic, then additional vNICs are needed on the CVM, but no new vmkernel adapters or internal interfaces are required. AOS creates additional vNICs on the CVM. Each service that requires isolation is assigned a dedicated vNIC on the CVM. The NICs are named ntnx0, ntnx1, and so on. Each service-specific NIC is placed on a configurable existing or new virtual network (vSwitch or bridge) and a VLAN and IP subnet are specified.

You can perform backplane segmentation alone, with no other forms of segmentation. You can also use one or more types of service-specific segmentation, with or without backplane segmentation. In all of these cases, you can segment any service to either the existing virtual switch or a new one for further physical traffic isolation. The combination you select is driven by the security and networking requirements of the deployment. In most cases, the default configuration with no segmentation of any kind is recommended for its simplicity and ease of deployment.

The following figure shows an implementation scenario where backplane and service-specific segmentation are configured with two vSwitches on ESXi hypervisors.

Figure. Backplane and Service-Specific Segmentation Configured with Two vSwitches on an ESXi Cluster

Here are the CVM to ESXi hypervisor connection details:

  • The eth0 vNIC on the CVM and vmk0 on the host are carrying management traffic and connected to the hypervisor through the existing PGm (portgroup) on vSwitch0.
  • The eth2 vNIC on the CVM and vmk2 on the host are carrying backplane traffic and connected to the hypervisor through a new user created PGb on the existing vSwitch.
  • The ntnx0 vNIC on the CVM is carrying iSCSI traffic and is connected to the hypervisor through PGi on vSwitch1. No new vmkernel adapter is required.
  • The ntnx1 vNIC on the CVM is carrying DR traffic and is connected to the hypervisor through PGd on vSwitch2. Again, no new vmkernel adapter is required.

The following figure shows an implementation scenario where backplane and service-specific segmentation are configured with two vSwitches on AHV hypervisors.

Figure. Backplane and Service-Specific Segmentation Configured with Two vSwitches on an AHV Cluster

Here are the CVM to AHV hypervisor connection details:

  • The eth0 vNIC on the CVM is carrying management traffic and connected to the hypervisor through the existing vnet0.
  • Other vNICs such as eth2, ntnx0, and ntnx1 are connected to the hypervisor through the auto created interfaces on either the existing or new vSwitch.
Note: In the figure above, the interface name 'br0-bp' stands for 'br0-backplane'.

The following table describes the vNIC, port group (PG), VMkernel (vmk), virtual network (vnet), and virtual switch connections between the CVM and the hypervisor in different implementation scenarios, for both ESXi and AHV hypervisors:

Table 1. CVM vNIC Connections in Different Implementation Scenarios

Backplane Segmentation with 1 vSwitch
  • eth0 (DR, iSCSI, and Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • eth2 (Backplane traffic): ESXi: new vmk2 via PGb on vSwitch0, CVM vNIC via PGb on vSwitch0; AHV: auto-created interfaces on bridge br0

Backplane Segmentation with 2 vSwitches
  • eth0 (Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • eth2 (Backplane traffic): ESXi: new vmk2 via PGb on the new vSwitch, CVM vNIC via PGb on the new vSwitch; AHV: auto-created interfaces on the new virtual switch

Service-Specific Segmentation for Volumes with 1 vSwitch
  • eth0 (DR, Backplane, and Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • ntnx0 (iSCSI (Volumes) traffic): ESXi: CVM vNIC via PGi on vSwitch0; AHV: auto-created interface on the existing br0

Service-Specific Segmentation for Volumes with 2 vSwitches
  • eth0 (DR, Backplane, and Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • ntnx0 (iSCSI (Volumes) traffic): ESXi: CVM vNIC via PGi on the new vSwitch; AHV: auto-created interface on the new virtual switch

Service-Specific Segmentation for DR with 1 vSwitch
  • eth0 (iSCSI, Backplane, and Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • ntnx1 (DR traffic): ESXi: CVM vNIC via PGd on vSwitch0; AHV: auto-created interface on the existing br0

Service-Specific Segmentation for DR with 2 vSwitches
  • eth0 (iSCSI, Backplane, and Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • ntnx1 (DR traffic): ESXi: CVM vNIC via PGd on the new vSwitch; AHV: auto-created interface on the new virtual switch

Backplane and Service-Specific Segmentation with 1 vSwitch
  • eth0 (Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • eth2 (Backplane traffic): ESXi: new vmk2 via PGb on vSwitch0, CVM vNIC via PGb on vSwitch0; AHV: auto-created interfaces on br0
  • ntnx0 (iSCSI traffic): ESXi: CVM vNIC via PGi on vSwitch0; AHV: auto-created interface on br0
  • ntnx1 (DR traffic): ESXi: CVM vNIC via PGd on vSwitch0; AHV: auto-created interface on br0

Backplane and Service-Specific Segmentation with 2 vSwitches
  • eth0 (Management traffic): ESXi: vmk0 via the existing PGm on vSwitch0; AHV: existing vnet0
  • eth2 (Backplane traffic): ESXi: new vmk2 via PGb on the new vSwitch, CVM vNIC via PGb on the new vSwitch; AHV: auto-created interfaces on the new virtual switch
  • ntnx0 (iSCSI traffic): ESXi: CVM vNIC via PGi on vSwitch1 (no new user-defined vmkernel adapter is required); AHV: auto-created interface on the new virtual switch
  • ntnx1 (DR traffic): ESXi: CVM vNIC via PGd on vSwitch2 (no new user-defined vmkernel adapter is required); AHV: auto-created interface on the new virtual switch

Implementation Considerations

Supported Environment

Network segmentation is supported in the following environment:

  • The hypervisor must be one of the following:
    • For network segmentation by traffic type (separating backplane traffic from management traffic):
      • AHV
      • ESXi
      • Hyper-V
        Note: Only logical segmentation (or VLAN-based segmentation) is supported on Hyper-V. Physical network segmentation is not supported on Hyper-V.
    • For service-specific traffic isolation:
      • AHV
      • ESXi
  • For logical network segmentation, AOS version must be 5.5 or later. For physical segmentation and service-specific traffic isolation, the AOS version must be 5.11 or later.
  • From the 5.11.1 release, you cannot enable network segmentation on mixed-hypervisor clusters. However, if you enable network segmentation on a mixed-hypervisor cluster running a release earlier than 5.11.1 and you upgrade that cluster to 5.11.1 or later, network segmentation continues to work seamlessly.
  • RDMA requirements:
    • Network segmentation is supported with RDMA for AHV and ESXi hypervisors only.
    • For more information about RDMA, see Remote Direct Memory Access in the NX Series Hardware Administration Guide.

Prerequisites

For Nutanix Volumes

Stargate does not monitor the health of a segmented network, so if physical network segmentation is configured, network failures or connectivity issues are not automatically tolerated. To mitigate this risk, configure redundancy in the network: use two or more uplinks in a fault-tolerant configuration, connected to two separate physical switches.

For Disaster Recovery

  • Ensure that the VLAN and subnet that you plan to use for the network segment are routable.
  • Make sure that you have a pool of IP addresses to specify when configuring segmentation. For each cluster, you need n+1 IP addresses, where n is the number of nodes in the cluster. The additional IP address is for the virtual IP address requirement.
  • Enable network segmentation for disaster recovery at both sites (local and remote) before configuring remote sites at those sites.
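The n+1 sizing rule above can be sanity-checked with a short Python sketch; the subnet and node count below are hypothetical examples, not Nutanix tooling:

```python
import ipaddress

def dr_pool_size(num_nodes):
    """IP addresses needed for DR segmentation: one per node plus one
    additional address for the virtual IP requirement."""
    return num_nodes + 1

def subnet_can_hold(subnet, num_nodes):
    """Check whether a candidate subnet has enough usable host addresses."""
    net = ipaddress.ip_network(subnet, strict=True)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    return usable >= dr_pool_size(num_nodes)

# A 4-node cluster needs 4 + 1 = 5 addresses; a /29 offers 6 usable hosts.
print(dr_pool_size(4))                      # 5
print(subnet_can_hold("192.0.2.0/29", 4))   # True
```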

Limitations

For Nutanix Volumes

  • If network segmentation is enabled for Volumes, volume group attachments are not recovered during VM recovery.
  • Nutanix service VMs such as Files and Buckets continue to communicate with the CVM eth0 interface when using Volumes for iSCSI traffic. Other external clients use the new service-specific CVM interface.

For Disaster Recovery

The system does not support configuring Leap DR together with DR service-specific traffic isolation.

Cluster Services That Support Traffic Isolation

In this release, you can isolate traffic associated with the following services to its own virtual network:

  • Management (The default network that cannot be moved from CVM eth0)

  • Backplane

  • RDMA

  • Service Specific Disaster Recovery

  • Service Specific Volumes

Configurations in Which Network Segmentation Is Not Supported

Network segmentation is not supported in the following configurations:

  • Clusters on which the CVMs have a manually created eth2 interface.
  • Clusters on which the eth2 interface on one or more CVMs have been assigned an IP address manually. During an upgrade to an AOS release that supports network segmentation, an eth2 interface is created on each CVM in the cluster. Even though the cluster does not use these interfaces until you configure network segmentation, you must not manually configure these interfaces in any way.
Caution:

Nutanix has deprecated support for manual multi-homed CVM network interfaces in AOS 5.15 and later. Such a manual configuration can lead to unexpected issues on these releases. If you have configured an eth2 interface on the CVM manually, see KB-9479 and Nutanix Field Advisory #78 for details on how to remove the eth2 interface.

Configuring the Network on an AHV Host

These steps describe how to configure host networking for physical and service-specific network segmentation on an AHV host. They are prerequisites: perform them before you configure physical or service-specific traffic isolation. If you are configuring networking on an ESXi host, perform the equivalent steps by referring to the ESXi documentation; on ESXi, you create vSwitches and port groups to achieve the same results.

About this task

For information about the procedures to create, update, and delete a virtual switch in the Prism Element web console, see Configuring a Virtual Network for Guest VMs in the Prism Web Console Guide.

Note: The term unconfigured node in this procedure refers to a node that is not part of a cluster and is being prepared for cluster expansion.

To configure host networking for physical and service-specific network segmentation, do the following:

Note: If you are segmenting traffic on nodes that are already part of a cluster, perform the first step. If you are segmenting traffic on an unconfigured node that is not part of a cluster, perform the second step directly.

Procedure

  1. If you are segmenting traffic on nodes that are already part of a cluster, do the following:
    1. Update the default virtual switch vs0 to remove the uplinks that you want to assign to the virtual switch that you create in the next step.

      For information about updating the default virtual switch vs0 to remove the uplinks, see Creating or Updating a Virtual Switch in the Prism Web Console Guide.

    2. Create a virtual switch for the backplane traffic or service whose traffic you want to isolate.
      Add the uplinks to the new virtual switch.

      For information about creating a new virtual switch, see Creating or Updating a Virtual Switch in the Prism Web Console Guide.

  2. If you are segmenting traffic on an unconfigured node (new host) that is not part of a cluster, do the following:
    1. Create a bridge for the backplane traffic or service whose traffic you want to isolate by logging on to the new host.
      ovs-vsctl add-br br1
    2. Log on to the host CVM and update the default bridge br0 to keep only eth0 and eth1 in br0.
      manage_ovs --bridge_name br0 --interfaces eth0,eth1 --bond_name br0-up --bond_mode active-backup update_uplinks
    3. Log on to the host CVM and then add eth2 and eth3 to the uplink bond of br1.
      manage_ovs --bridge_name br1 --interfaces eth2,eth3 --bond_name br1-up --bond_mode active-backup update_uplinks
      Note: If this step is not done correctly, a network loop can be created that causes a network outage. Ensure that no other uplink interfaces exist on this bridge before adding the new interfaces, and always add interfaces into a bond.
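The loop warning in the note above reduces to a simple invariant: each physical uplink must belong to exactly one bridge and bond. A minimal Python sketch of that check, using hypothetical bridge-to-uplink mappings:

```python
def find_overlapping_uplinks(bridges):
    """Return the set of uplinks assigned to more than one bridge.
    Any overlap risks a layer-2 loop, as the note above warns.
    `bridges` maps bridge names to lists of uplink NIC names."""
    seen, dupes = set(), set()
    for uplinks in bridges.values():
        for nic in uplinks:
            if nic in seen:
                dupes.add(nic)
            seen.add(nic)
    return dupes

# Matches the example above: br0 keeps eth0/eth1, br1 takes eth2/eth3.
print(find_overlapping_uplinks({"br0": ["eth0", "eth1"],
                                "br1": ["eth2", "eth3"]}))  # set()
```

An empty result means no uplink is claimed by two bridges; a non-empty result flags the NICs that would create a loop.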

What to do next

Prism can configure a VLAN only on AHV hosts. Therefore, if the hypervisor is ESXi, in addition to configuring the VLAN on the physical switch, make sure to configure the VLAN on the port group.

If you are performing physical network segmentation, see Isolating the Backplane Traffic Physically on an Existing Cluster.

If you are performing service-specific traffic isolation, see Service-Specific Traffic Isolation.

Network Segmentation for Traffic Types (Backplane, Management, and RDMA)

You can segment the network on a Nutanix cluster in the following ways:

  • You can segment the network on an existing cluster by using the Prism web console.
  • You can segment the network when creating a cluster by using Nutanix Foundation 3.11.2 or later.

The following topics describe network segmentation procedures for existing clusters and changes during AOS upgrade and cluster expansion. For more information about segmenting the network when creating a cluster, see the Field Installation Guide.

Isolating the Backplane Traffic Logically on an Existing Cluster (VLAN-Based Segmentation Only)

You can segment the network on an existing cluster by using the Prism web console. To achieve logical segmentation, you must configure a separate VLAN for the backplane network. The network segmentation process creates a separate network for backplane communications on the existing default virtual switch, and then places the eth2 interfaces (which the process creates on the CVMs during upgrade) and the host interfaces on the newly created network. This method achieves logical segmentation of traffic over the selected VLAN. IP addresses are assigned to each new interface from the specified subnet, so you need two IP addresses per node. When you specify the VLAN ID, AHV places the newly created interfaces on the specified VLAN.

Before you begin

If your cluster has RDMA-enabled NICs, follow the procedure in Isolating the Backplane Traffic on an Existing RDMA Cluster.

  • For ESXi clusters, you must create and manage the port groups that are used for CVM and backplane networking. Ensure that you create port groups on the default virtual switch vs0 for the ESXi hosts and CVMs.

    Since backplane traffic segmentation is logical, it is based on the VLAN that is tagged on the port groups. Therefore, while creating the port groups, ensure that you tag the new port groups created for the ESXi hosts and CVMs with the appropriate VLAN ID. Consult your networking team to acquire the necessary VLANs for use with Nutanix nodes.

  • For new backplane networks, you must specify a non-routable subnet. The interfaces on the backplane network are automatically assigned IP addresses from this subnet, so reserve the entire subnet for the backplane network segmentation.
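The requirements in this topic, a non-routable subnet sized for two IP addresses per node plus room for future nodes, can be checked up front with a short Python sketch (the subnet, node count, and headroom values are hypothetical):

```python
import ipaddress

def backplane_ips_needed(num_nodes):
    """Backplane segmentation uses two IPs per node: one for the CVM eth2
    interface and one for the host backplane interface."""
    return 2 * num_nodes

def validate_backplane_subnet(subnet, num_nodes, headroom=0):
    """Check that the subnet is private (non-routable) and large enough,
    including optional headroom for nodes added later."""
    net = ipaddress.ip_network(subnet, strict=True)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    return net.is_private and usable >= backplane_ips_needed(num_nodes + headroom)

# An 8-node cluster with headroom for 4 more nodes needs 24 addresses.
print(validate_backplane_subnet("192.168.5.0/24", 8, headroom=4))  # True
```

Because resizing the backplane subnet later involves cluster downtime, validating headroom in advance is worthwhile.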

About this task

You need separate VLANs for Management network and Backplane network. For example, configure VLAN 100 as Management network VLAN and VLAN 200 as Backplane network VLAN on the Ethernet links that connect the Nutanix nodes to the physical switch.
Note: Nutanix does not control these VLAN IDs. Consult your networking team to acquire VLANs for the Management and Backplane networks.

To segment the network on an existing cluster for a backplane LAN, do the following:

Note:

In this method, for AHV nodes, logical segmentation (VLAN-based segmentation) is done on the default bridge. The process creates the host backplane interface on the Backplane Network port group on ESXi or br0-backplane (interface) on br0 bridge in case of AHV. The eth2 interface on the CVM is on CVM Backplane Network by default.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
    The Network Configuration dialog box appears.
  2. In the Network Configuration > Controller VM Interfaces > Backplane LAN row, click Configure .
    The Create Interface dialog box appears.
  3. In the Create Interface dialog box, provide the necessary information.
    • In Subnet IP , specify a non-routable subnet.

      Ensure that the subnet has sufficient IP addresses. The segmentation process requires two IP addresses per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.

    • In Netmask , specify the netmask.
    • If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.

      Nutanix recommends that you use a VLAN. If you do not specify a VLAN ID, the default VLAN on the virtual switch is used.

  4. Click Verify and Save .
    The network segmentation process creates the backplane network if the network settings that you specified pass validation.

Isolating the Backplane Traffic on an Existing RDMA Cluster

Segment the network on an existing RDMA cluster by using the Prism web console.

About this task

The network segmentation process creates a separate network for RDMA communications on the existing default virtual switch and places the rdma0 interface (created on the CVMs during upgrade) and the host interfaces on the newly created network. From the specified subnet, IP addresses are assigned to each new interface. Two IP addresses are therefore required per node. If you specify the optional VLAN ID, the newly created interfaces are placed on the VLAN. A separate VLAN is highly recommended for the RDMA network to achieve true segmentation.

Before you begin

  • For new RDMA networks, you must specify a non-routable subnet. The interfaces on the RDMA network are automatically assigned IP addresses from this subnet, so reserve the entire subnet for the RDMA network alone.
  • If you plan to specify a VLAN for the RDMA network, make sure that the VLAN is configured on the physical switch ports to which the nodes are connected.
  • Configure the switch interface as a Trunk port.
  • Ensure that the cluster was configured to support RDMA during installation with Foundation.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
    The Network Configuration dialog box is displayed.
  2. Click the Internal Interfaces tab.
  3. Click Configure in the RDMA row.
    Ensure that you have configured the switch interface as a trunk port.
    The Create Interface dialog box is displayed.
    Figure. Create Interface Dialog Box

  4. In the Create Interface dialog box, do the following:
    1. In Subnet IP and Netmask , specify a non-routable subnet and netmask, respectively. Make sure that the subnet can accommodate cluster expansion in the future.
    2. In VLAN , specify a VLAN ID for the RDMA LAN.
      A VLAN ID is optional but highly recommended for true network segmentation and enhanced security.
    3. From the PFC list, select the priority flow control value configured on the physical switch port.
  5. Click Verify and Save .
  6. Click Close .

Isolating the Backplane Traffic Physically on an Existing Cluster

By using the Prism web console, you can configure the eth2 interface on a separate vSwitch (ESXi) or bridge (AHV). The network segmentation process creates a separate network for backplane communications on the new bridge or vSwitch and places the eth2 interfaces (that are created on the CVMs during upgrade) and the host interfaces on the newly created network. From the specified subnet, IP addresses are assigned to each new interface. Two IP addresses are therefore required per node. If you specify the optional VLAN ID, the newly created interfaces are placed on the VLAN. A separate VLAN is highly recommended for the backplane network to achieve true segmentation.

Before you begin

The physical isolation of backplane traffic is supported from the AOS 5.11.1 release.

Before you enable physical isolation of the backplane traffic, configure the network (port groups or bridges) on the hosts and associate the network with the required physical NICs.

Note: Nutanix does not support physical network segmentation on Hyper-V.

On the AHV hosts, do the following:

  1. Create a bridge for the backplane traffic.
  2. From the default bridge br0, remove the uplinks (physical NICs) that you want to add to the bridge you created for the backplane traffic.
  3. Add the uplinks to a new bond.
  4. Add the bond to the new bridge.

See Configuring the Network on an AHV Host for instructions about how to perform these tasks on a host.

On the ESXi hosts, do the following:

  1. Create a vSwitch for the backplane traffic.
  2. From vSwitch0, remove the uplinks (physical NICs) that you want to add to the vSwitch you created for the backplane traffic.
  3. On the backplane vSwitch, create one port group for the CVM and another for the host. Ensure that at least one uplink is present in the Active Adaptors list for each port group if you have overridden the failover order.

See the ESXi documentation for instructions about how to perform these tasks.

Note: Before you perform the following procedure, ensure that the uplinks you added to the vSwitch or bridge are in the UP state.

About this task

Perform the following procedure to physically segment the backplane traffic.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
  2. On the Controller VM Interfaces tab, in the Backplane LAN row, click Configure .
  3. In the Backplane LAN dialog box, do the following:
    1. In Subnet IP , specify a non-routable subnet.
      Make sure that the subnet has a sufficient number of IP addresses. Two IP addresses are required per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.
    2. In Netmask , specify the netmask.
    3. If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.
      A VLAN is strongly recommended. If you do not specify a VLAN ID, the default VLAN on the virtual switch is implied.
    4. (AHV only) In the Host Node list, select the bridge you created for the backplane traffic.
    5. (ESXi only) In the Host Port Group list, select the port group you created for the host.
    6. (ESXi only) In the CVM Port Group list, select the port group you created for the CVM.
    Note:

    Nutanix clusters support both vSphere Standard Switches and vSphere Distributed Switches. However, you must configure only one type of virtual switch in a cluster. Configure all the backplane and management traffic in one cluster on either vSphere Standard Switches or vSphere Distributed Switches; do not mix standard and distributed vSwitches in a single cluster.

  4. Click Verify and Save .
    If the network settings you specified pass validation, the backplane network is created and the CVMs perform a reboot in a rolling fashion (one at a time), after which the services use the new backplane network. The progress of this operation can be tracked on the Prism tasks page.
    Note: Segmenting backplane traffic can involve up to two rolling reboots of the CVMs. The first rolling reboot is done to move the backplane interface (eth2) of the CVM to the selected port group or bridge. This is done only for CVM(s) whose backplane interface is not already connected to the selected port group or bridge. The second rolling reboot is done to migrate the cluster services to the newly configured backplane interface.
  5. Restart the Acropolis service on all the nodes in the cluster.
    Note: Perform this step only if your AOS version is 5.17.0.x. This step is not required if your AOS version is 5.17.1 or later.
    1. Log on to any CVM in the cluster with SSH.
    2. Stop the Acropolis service.
      nutanix@cvm$ allssh genesis stop acropolis
      Note: You cannot manage your guest VMs after the Acropolis service is stopped.
    3. Verify if the Acropolis service is DOWN on all the CVMs.
      nutanix@cvm$ cluster status | grep -v UP 

      An output similar to the following is displayed:

      2019-09-04 14:43:18 INFO zookeeper_session.py:143 cluster is attempting to connect to Zookeeper 
      2019-09-04 14:43:18 INFO cluster:2774 Executing action status on SVMs X.X.X.1, X.X.X.2, X.X.X.3 
      The state of the cluster: start 
      Lockdown mode: Disabled 
              CVM: X.X.X.1 Up 
                                 Acropolis DOWN       [] 
              CVM: X.X.X.2 Up, ZeusLeader 
                                 Acropolis DOWN       [] 
              CVM: X.X.X.3 Maintenance
    4. From any CVM in the cluster, start the Acropolis service.
      nutanix@cvm$ cluster start 
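When scripting this maintenance step, the verification that Acropolis is DOWN on all CVMs can be automated. The sketch below parses sample `cluster status | grep -v UP` output (format trimmed from the example above, with placeholder addresses) and confirms that every Acropolis entry reports DOWN:

```python
import re

SAMPLE = """\
The state of the cluster: start
Lockdown mode: Disabled
        CVM: X.X.X.1 Up
                           Acropolis DOWN       []
        CVM: X.X.X.2 Up, ZeusLeader
                           Acropolis DOWN       []
"""

def acropolis_down_everywhere(status_output):
    """Return True if at least one Acropolis line is present and every
    Acropolis line reports DOWN. `grep -v UP` already filters services
    that are up, so any remaining Acropolis lines should all say DOWN."""
    lines = [l for l in status_output.splitlines() if "Acropolis" in l]
    return bool(lines) and all(re.search(r"\bDOWN\b", l) for l in lines)

print(acropolis_down_everywhere(SAMPLE))  # True
```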

Reconfiguring the Backplane Network

Backplane network reconfiguration is a CLI-driven procedure that you perform on any one of the CVMs in the cluster. The change is propagated to the remaining CVMs.

About this task

Caution: At the end of this procedure, the cluster stops and restarts, even if only the VLAN is changed, and therefore involves cluster downtime.

To reconfigure the cluster, do the following:

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Reconfigure the backplane network.
    nutanix@cvm$ backplane_ip_reconfig [--backplane_vlan=vlan-id] \
    [--backplane_subnet=subnet_ip_address --backplane_netmask=netmask]

    Replace vlan-id with the new VLAN ID, subnet_ip_address with the new subnet IP address, and netmask with the new netmask.

    For example, reconfigure the backplane network to use VLAN ID 10 and subnet 172.30.25.0 with netmask 255.255.255.0 .

    nutanix@cvm$ backplane_ip_reconfig --backplane_vlan=10 \
    --backplane_subnet=172.30.25.0 --backplane_netmask=255.255.255.0

    Output similar to the following is displayed:

    This operation will do a 'cluster stop', resulting in disruption of 
    cluster services. Do you still want to continue? (Type "yes" (without quotes) 
    to continue)
    Caution: During the reconfiguration process, you might receive an error message similar to the following.
    Failed to reach a node.
    You can safely ignore this error message; do not stop the script manually.
    Note: The backplane_ip_reconfig command is not supported on ESXi clusters with vSphere Distributed Switches. To reconfigure the backplane network on a vSphere Distributed Switch setup, disable the backplane network (see Disabling Network Segmentation on an ESXi and Hyper-V Cluster) and enable again with a different subnet or VLAN.
  3. Type yes to confirm that you want to reconfigure the backplane network.
    The reconfiguration procedure takes a few minutes and includes a cluster restart. If you type anything other than yes , network reconfiguration is aborted.
  4. After the process completes, verify that the backplane was reconfigured.
    1. Verify that the IP addresses of the eth2 interfaces on the CVM are set correctly.
      nutanix@cvm$ svmips -b
      Output similar to the following is displayed:
      172.30.25.1 172.30.25.3 172.30.25.5
    2. Verify that the IP addresses of the backplane interfaces of the hosts are set correctly.
      nutanix@cvm$ hostips -b
      Output similar to the following is displayed:
      172.30.25.2 172.30.25.4 172.30.25.6
    The svmips and hostips commands, when used with the -b option, display the IP addresses assigned to the interfaces on the backplane.
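If you script this verification, a few lines of Python can confirm that every address reported by svmips -b and hostips -b falls inside the backplane subnet configured earlier (the addresses and subnet below are the examples from this procedure):

```python
import ipaddress

def all_in_subnet(ips, subnet):
    """Verify that the space-separated output of `svmips -b` or `hostips -b`
    falls inside the configured backplane subnet."""
    net = ipaddress.ip_network(subnet)
    return all(ipaddress.ip_address(ip) in net for ip in ips.split())

# Example outputs from the procedure above, checked against 172.30.25.0/24.
print(all_in_subnet("172.30.25.1 172.30.25.3 172.30.25.5", "172.30.25.0/24"))  # True
print(all_in_subnet("172.30.25.2 172.30.25.4 172.30.25.6", "172.30.25.0/24"))  # True
```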

Disabling Network Segmentation on an ESXi and Hyper-V Cluster

Disabling the backplane network is a CLI-driven procedure that you perform on any one of the CVMs in the cluster. The change is propagated to the remaining CVMs.

About this task

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Disable the backplane network.
    • Use this CLI to disable network segmentation on an ESXi and Hyper-V cluster:
      nutanix@cvm$ network_segmentation --backplane_network --disable

      Output similar to the following appears:

      Operation type : Disable
      Network type : kBackplane
      Params : {}
      Please enter [Y/y] to confirm or any other key to cancel the operation

      Type Y/y to confirm that you want to reconfigure the backplane network.

      If you type Y/y, network segmentation is disabled and the cluster restarts in a rolling manner, one CVM at a time. If you type anything other than Y/y, network segmentation is not disabled.

  3. Verify that network segmentation was successfully disabled. You can verify this in one of two ways:
    • Verify that the backplane is disabled.
      nutanix@cvm$ network_segment_status

      Output similar to the following is displayed:

      2017-11-23 06:18:23 INFO zookeeper_session.py:110 network_segment_status is attempting to connect to Zookeeper

      Network segmentation is disabled

    • Verify that the commands that show the backplane IP addresses of the CVMs and hosts list the management IP addresses (run the svmips and hostips commands once without the -b option and once with it, and then compare the IP addresses shown in the output).
      nutanix@cvm$ svmips
      192.127.3.2 192.127.3.3 192.127.3.4
      nutanix@cvm$ svmips -b
      192.127.3.2 192.127.3.3 192.127.3.4
      nutanix@cvm$ hostips
      192.127.3.5 192.127.3.6 192.127.3.7
      nutanix@cvm$ hostips -b
      192.127.3.5 192.127.3.6 192.127.3.7

      In the example above, the outputs of the svmips and hostips commands with and without the -b option are the same, indicating that backplane network segmentation is disabled.
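The comparison described above can be expressed as a short sketch. This is illustrative only (the function name and sample outputs are assumptions, not a Nutanix tool); segmentation is disabled when the management and backplane address lists match:

```python
def segmentation_disabled(mgmt_ips: str, backplane_ips: str) -> bool:
    """Backplane segmentation is disabled when svmips and svmips -b
    (or hostips and hostips -b) report the same set of addresses."""
    return sorted(mgmt_ips.split()) == sorted(backplane_ips.split())

# The sample outputs above are identical, so segmentation is disabled.
print(segmentation_disabled("192.127.3.2 192.127.3.3 192.127.3.4",
                            "192.127.3.2 192.127.3.3 192.127.3.4"))  # True
```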

Disabling Network Segmentation on an AHV Cluster

About this task

You perform the backplane network reconfiguration procedure on any one of the CVMs in the cluster. The change propagates to the remaining CVMs.

Procedure

  1. Shut down all the guest VMs in the cluster from within the guest OS or by using the Prism Element web console.
  2. Place all nodes of the cluster into maintenance mode.
    1. Use SSH to log on to a Controller VM in the cluster.
    2. Determine the IP address of the node you want to put into the maintenance mode:
      nutanix@cvm$ acli host.list
      Note the value of Hypervisor IP for the node you want to put in the maintenance mode.
    3. Put the node into the maintenance mode:
      nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-IP-address [wait="{ true | false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]
      Note: Never put the Controller VM and AHV host of a single-node cluster into maintenance mode. Shut down user VMs before proceeding with disruptive changes.

      Replace hypervisor-IP-address with either the IP address or host name of the AHV host you want to put into maintenance mode.

      The following are optional parameters for running the acli host.enter_maintenance_mode command:

      • wait
      • non_migratable_vm_action

      Do not continue if the host has failed to enter the maintenance mode.

    4. Verify that the host is in maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals EnteredMaintenanceMode and schedulable equals False.

  3. Disable backplane network segmentation from the Prism Web Console.
    1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration under Settings.
    2. In the Internal Interfaces tab, in the Backplane LAN row, click Disable .
      Figure. Disable Network Configuration

    3. Click Yes to disable Backplane LAN.

      This involves a rolling reboot of CVMs to migrate the cluster services back to the external interface.

  4. Log on to a CVM in the cluster with SSH and stop Acropolis cluster-wide:
    nutanix@cvm$ allssh genesis stop acropolis 
  5. Restart Acropolis cluster-wide:
    nutanix@cvm$ cluster start 
  6. Remove all nodes from the maintenance mode.
    1. From any CVM in the cluster, run the following command to exit the AHV host from the maintenance mode:
      nutanix@cvm$ acli host.exit_maintenance_mode host-ip

      Replace host-ip with the IP address of the host.

      This command migrates (live migration) all the VMs that were previously running on the host back to the host.

    2. Verify that the host has exited maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals kAcropolisNormal or AcropolisNormal and schedulable equals True.

  7. Power on the guest VMs from the Prism Element web console.

Service-Specific Traffic Isolation

Isolating the traffic associated with a specific service is a two-step process. The process is as follows:

  • Configure the networks and uplinks on each host manually. Prism only creates the VNIC that the service requires, and it places that VNIC on the bridge or port group that you specify. Therefore, you must manually create the bridge or port group on each host and add the required physical NICs as uplinks to that bridge or port group.
  • Configure network segmentation for the service by using Prism. Create an extra VNIC for the service, specify any additional parameters that are required (for example, IP address pools), and specify the bridge or port group that you want to dedicate to the service.

Isolating Service-Specific Traffic

Before you begin

  • Ensure that you configure each host as described in Configuring the Network on an AHV Host.
  • Review Prerequisites.

About this task

To isolate a service to a separate virtual network, do the following:

Procedure

  1. Log on to the Prism web console and click the gear icon at the top-right corner of the page.
  2. In the left pane, click Network Configuration .
  3. In the details pane, on the Internal Interfaces tab, click Create New Interface .
    The Create New Interface dialog box is displayed.
  4. On the Interface Details tab, do the following:
    1. Specify a descriptive name for the network segment.
    2. (On AHV) Optionally, in VLAN ID , specify a VLAN ID.
      Make sure that the VLAN ID is configured on the physical switch.
    3. In Bridge (on AHV) or CVM Port Group (on ESXi), select the bridge or port group that you created for the network segment.
    4. To specify an IP address pool for the network segment, click Create New IP Pool , and then, in the IP Pool dialog box, do the following:
      • In Name , specify a name for the pool.
      • In Netmask , specify the network mask for the pool.
      • Click Add an IP Range , and then specify the start and end IP addresses in the IP Range dialog box that is displayed.
      • Use Add an IP Range to add as many IP address ranges as you need.
        Note: Add at least n+1 IP addresses to an IP range, where n is the number of nodes in the cluster.
      • Click Save .
      • Use Add an IP Pool to add more IP address pools. You can use only one IP address pool at any given time.
      • Select the IP address pool that you want to use, and then click Next .
        Note: You can also use an existing unused IP address pool.
  5. On the Feature Selection tab, do the following:
    You cannot enable network segmentation for multiple services at the same time. Complete the configuration for one service before you enable network segmentation for another service.
    1. Select the service whose traffic you want to isolate.
    2. Configure the settings for the selected service.
      The settings on this page depend on the service you select. For information about service-specific settings, see Service-Specific Settings and Configurations.
    3. Click Save .
  6. In the Create Interface dialog box, click Save .
    The CVMs are rebooted multiple times, one after another. This procedure might trigger more tasks on the cluster. For example, if you configure network segmentation for disaster recovery, the firewall rules are added on the CVM to allow traffic on the specified ports through the new CVM interface and updated when a new recovery cluster is added or an existing cluster is modified.
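The pool-sizing note in step 4 (at least n+1 addresses for n nodes) can be checked with a short sketch. This is illustrative only; the function name and sample addresses are assumptions, not part of the product:

```python
import ipaddress

def pool_is_large_enough(start_ip: str, end_ip: str, num_nodes: int) -> bool:
    """Check the step 4 note: an IP range needs at least n+1 addresses,
    where n is the number of nodes in the cluster."""
    size = (int(ipaddress.ip_address(end_ip))
            - int(ipaddress.ip_address(start_ip)) + 1)
    return size >= num_nodes + 1

# A range of 5 addresses is enough for a 4-node cluster, but not a 5-node one.
print(pool_is_large_enough("10.0.0.10", "10.0.0.14", num_nodes=4))  # True
```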

What to do next

See Service-Specific Settings and Configurations for any additional tasks that are required after you segment the network for a service.

Modifying Network Segmentation Configured for a Service

To modify network segmentation configured for a service, you must first disable network segmentation for that service and then create the network interface again for that service with the new IP address pool and VLAN.

About this task

For example, if the interface of the service you want to modify is ntnx0, after the reconfiguration, the same interface (ntnx0) is assigned to that service if that interface is not assigned to any other service. If ntnx0 is assigned to another service, a new interface (for example ntnx1) is created and assigned to that service.

Perform the following to reconfigure network segmentation configured for a service.

Procedure

  1. Disable the network segmentation configured for a service by following the instructions in Disabling Network Segmentation Configured for a Service.
  2. Create the network again by following the instructions in Isolating Service-Specific Traffic.

Disabling Network Segmentation Configured for a Service

To disable network segmentation configured for a service, you must disable the dedicated vNIC. Disabling network segmentation frees up the vNIC’s name, which is reused in a subsequent network segmentation configuration.

About this task

At the end of this procedure, the cluster performs a rolling restart. Disabling network segmentation might also disrupt the functioning of the associated service. To restore normal operations, you might have to perform other tasks immediately after the cluster has completed the rolling restart. For information about the follow-up tasks, see Service-Specific Settings and Configurations.

To disable the network segmentation configured for a service, do the following:

Procedure

  1. Log on to the Prism web console and click the gear icon at the top-right corner of the page.
  2. In the left pane, click Network Configuration .
  3. On the Internal Interfaces tab, for the interface that you want to disable, click Disable .
    Note: The defined IP address pool is available even after disabling the network segmentation.

Deleting a vNIC Configured for a Service

If you disable network segmentation for a service, the vNIC for that service is not deleted. AOS reuses the vNIC if you enable network segmentation again. However, you can manually delete a vNIC by logging into any CVM in the cluster with SSH.

Before you begin

Ensure that the following prerequisites are met before you delete the vNIC configured for a Service:
  • Disable the network segmentation configured for a service by following the instructions in Disabling Network Segmentation Configured for a Service.
  • Observe the limitation specified in the Limitation for vNIC Hot-Unplugging topic in the AHV Admin Guide .

About this task

Perform the following to delete a vNIC.

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Delete the vNIC.
    nutanix@cvm$ network_segmentation --service_network --interface="interface-name" --delete

    Replace interface-name with the name of the interface you want to delete. For example, ntnx0.

Service-Specific Settings and Configurations

The following sections describe the settings required by the services that support network segmentation.

Nutanix Volumes

Network segmentation for Volumes also requires you to migrate iSCSI client connections to the new segmented network. If you no longer require segmentation for Volumes traffic, you must also migrate connections back to eth0 after disabling the vNIC used for Volumes traffic.

You can create two different networks for Nutanix Volumes with different IP pools, VLANs, and data services IP addresses. For example, you can create two iSCSI networks for production and non-production traffic on the same Nutanix cluster.

Follow the instructions in Isolating Service-Specific Traffic again to create the second network for Volumes after you create the first network.

Table 1. Settings to be Specified When Configuring Traffic Isolation
Parameter or Setting Description
Virtual IP (Optional) Virtual IP address for the service. If specified, the IP address must be picked from the specified IP address pool. If not specified, an IP address from the specified IP address pool is selected for you.
Client Subnet The network (in CIDR notation) that hosts the iSCSI clients. Required if the vNIC created for the service on the CVM is not on the same network as the clients.
Gateway Gateway to the subnetwork that hosts the iSCSI clients. Required if you specify the client subnet.
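The condition in the Client Subnet row can be expressed as a short sketch: the Client Subnet and Gateway settings are required only when the clients are not on the same network as the CVM's service vNIC. The function name and sample values below are illustrative assumptions, not part of the product:

```python
import ipaddress

def needs_client_subnet(cvm_vnic_ip: str, client_ip: str, netmask: str) -> bool:
    """Return True when an iSCSI client is outside the network of the
    CVM vNIC created for Volumes, i.e. when Client Subnet and Gateway
    must be specified."""
    net = ipaddress.ip_network(f"{cvm_vnic_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(client_ip) not in net

# A client on the same /24 as the service vNIC needs no Client Subnet entry.
print(needs_client_subnet("172.30.25.10", "172.30.25.50", "255.255.255.0"))  # False
```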
Migrating iSCSI Connections to the Segmented Network

After you enable network segmentation for Volumes, you must manually migrate connections from existing iSCSI clients to the newly segmented network.

Before you begin

Make sure that the task for enabling network segmentation for the service succeeds.

About this task

Note: Even though support is available to run iSCSI traffic on both the segmented and management networks at the same time, Nutanix recommends that you move the iSCSI traffic for guest VMs to the segmented network to achieve true isolation.

To migrate iSCSI connections to the segmented network, do the following:

Procedure

  1. Log out from all the clients connected to iSCSI targets that are using CVM eth0 or the Data Service IP address.
  2. Optionally, remove all the discovery records for the Data Services IP address (DSIP) on eth0.
  3. If the clients are allowlisted by their IP address, remove the client IP address that is on the management network from the allowlist, and then add the client IP address on the new network to the allowlist.
    nutanix@cvm$ acli vg.detach_external vg_name initiator_network_id=old_vm_IP
    nutanix@cvm$ acli vg.attach_external vg_name initiator_network_id=new_vm_IP
    

    Replace vg_name with the name of the volume group and old_vm_IP and new_vm_IP with the old and new client IP addresses, respectively.

  4. Discover the virtual IP address specified for Volumes.
  5. Connect to the iSCSI targets from the client.
Migrating Existing iSCSI Connections to the Management Network (Controller VM eth0)

About this task

To migrate existing iSCSI connections to eth0, do the following:

Procedure

  1. Log out from all the clients connected to iSCSI targets using the CVM vNIC dedicated to Volumes.
  2. Remove all the discovery records for the DSIP on the new interface.
  3. Discover the DSIP for eth0.
  4. Connect the clients to the iSCSI targets.
Disaster Recovery with Protection Domains

The settings for configuring network segmentation for disaster recovery apply to all Asynchronous, NearSync, and Metro Availability replication schedules. You can use disaster recovery with Asynchronous, NearSync, and Metro Availability replications only if both the primary site and the recovery site are configured with network segmentation. Before enabling or disabling network segmentation on a cluster, disable all the disaster recovery replication schedules running on that cluster.

Note: Network segmentation does not support disaster recovery with Leap.
Table 1. Settings to be Specified When Configuring Traffic Isolation
Parameter or Setting Description
Virtual IP (Optional) Virtual IP address for the service. If specified, the IP address must be picked from the specified IP address pool. If not specified, an IP address from the specified IP address pool is selected for you.
Note: Virtual IP address is different from the external IP address and the data services IP address of the cluster.
Gateway Gateway to the subnetwork.
Remote Site Configuration

After configuring network segmentation for disaster recovery, configure remote sites at both locations. You also need to reconfigure remote sites if you disable network segmentation.

For information about configuring remote sites, see Remote Site Configuration in the Data Protection and Recovery with Prism Element Guide.

Segmenting a Stretched Layer 2 Network for Disaster Recovery

A stretched Layer 2 network configuration allows the source and remote metro clusters to be in the same broadcast domain and communicate without a gateway.

About this task

You can enable network segmentation for disaster recovery on a stretched Layer 2 network that does not have a gateway. Such a network is typically configured across physically remote clusters, as in a metro availability deployment.

See the AOS 5.19.2 Release Notes for the minimum AOS version required to configure a stretched Layer 2 network between metro clusters.

To configure a network segment as a stretched L2 network, do the following.

Procedure

Run the following command:
network_segmentation --service_network --service_name=kDR --ip_pool="DR-ip-pool-name" --service_vlan="DR-vlan-id" --desc_name="Description" --host_physical_network='"portgroup/bridge"' --stretched_metro

Replace the following (see Isolating Service-Specific Traffic for more information):

  • DR-ip-pool-name with the name of the IP Pool created for the DR service or any existing unused IP address pool.
  • DR-vlan-id with the VLAN ID being used for the DR service.
  • Description with a suitable description of this stretched L2 network segment.
  • portgroup/bridge with the details of Bridge or CVM Port Group used for the DR service.

For more information about the network_segmentation command, see the Command Reference guide.

Network Segmentation During Cluster Expansion

When you expand a cluster on which service-specific traffic isolation is configured, ensure that the following prerequisites are met before adding new (unconfigured) nodes to the cluster:

  • Manually configure bridges for segmented networks on the hypervisor host of the new nodes. The bridges used for the segmented network must be identical to those on the other nodes. For information about the steps to perform on an unconfigured node, see Configuring the Network on a Host . The steps involve logging on to the host by using SSH and running the ovs-vsctl commands. For instructions about how to add nodes to your Nutanix cluster, see Expanding a Cluster in the Prism Web Console Guide .
  • Ensure that the network settings on the physical switch to which the new nodes connect are identical to those of the other nodes in the cluster. New nodes must be able to communicate with existing nodes using the same VLAN ID for segmented networks.
  • Prepare new nodes imaged with the same AOS and hypervisor versions as the other nodes of the cluster. Reimaging the nodes as part of the cluster expansion is not supported if the cluster network is segmented.
Note: For ESXi clusters with vSphere Distributed Switches (DVS):
  • Before you expand the cluster, ensure that the node you want to add is part of the same vCenter cluster and the same DVS as the other nodes in the cluster, and is not in a disconnected state.

  • Ensure that the nodes that you add have more than 20 GB of memory.

Network Segmentation–Related Changes During an AOS Upgrade

When you upgrade from an AOS version which does not support network segmentation to an AOS version that does, the eth2 interface (used to segregate backplane traffic) is automatically created on each CVM. However, the network remains unsegmented, and the cluster services on the CVM continue to use eth0 until you configure network segmentation.

The vNICs ntnx0, ntnx1, and so on, are not created during an upgrade to a release that supports service-specific traffic isolation. They are created when you configure traffic isolation for a service.

Note:

Do not delete the eth2 interface that is created on the CVMs, even if you are not using the network segmentation feature.

Firewall Requirements

Ports and Protocols describes detailed port information (like protocol, service description, source, destination, and associated service) for Nutanix products and services. It includes port and protocol information for 1-click upgrades and LCM updates.

Log Management

This chapter describes how to configure the cluster-wide setting for log forwarding and how to document the log fingerprint.

Log Forwarding

The Nutanix Controller VM provides a method for log integrity by using a cluster-wide setting to forward all the logs to a central log host. Due to the appliance form factor of the Controller VM, system and audit logs do not support local log retention periods, because a significant increase in log traffic can be used to orchestrate a distributed denial-of-service (DDoS) attack.

Nutanix recommends deploying a central log host in the management enclave to adhere to any compliance or internal policy requirement for log retention. In case of any system compromise, a central log host serves as a defense mechanism to preserve log integrity.

Note: The audit daemon in the Controller VM uses the audisp plugin by default to ship all the audit logs to the rsyslog daemon (stored in /home/log/messages ). Searching for audispd in the central log host provides the entire content of the audit logs from the Controller VM. The audit daemon is configured with a rules engine that adheres to the auditing requirements of the Operating System Security Requirements Guide (OS SRG), and is embedded as part of the Controller VM STIG.

Use the nCLI to enable forwarding of system, audit, aide, and SCMA logs of all the Controller VMs in a cluster at the required log level. For more information, see Send Logs to Remote Syslog Server in the Acropolis Advanced Administration Guide .

Documenting the Log Fingerprint

For forensic analysis, non-repudiation is established by verifying the fingerprint of the public key for the log file entry.

Procedure

  1. Log on to the CVM.
  2. Run the following command to document the fingerprint for each public key assigned to an individual admin.
    nutanix@cvm$ ssh-keygen -lf /<location of>/id_rsa.pub

    The fingerprint is then compared to the SSH daemon log entries and forwarded to the central log host ( /home/log/secure in the Controller VM).

    Note: After the SSH public key is added in Prism and connectivity is verified, disable password authentication for all the Controller VMs and AHV hosts. From the gear icon drop-down list in the Prism main menu, select Cluster Lockdown and clear the Enable Remote Login with password check box.
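When comparing forwarded log entries against the documented fingerprints, the SHA256 fingerprint that ssh-keygen -lf prints can also be recomputed programmatically (it is the unpadded base64 of the SHA-256 digest of the key blob). The following Python sketch is illustrative; the placeholder key blob is not a real key:

```python
import base64
import hashlib

def sha256_fingerprint(pubkey_line: str) -> str:
    """Recompute the SHA256 fingerprint that `ssh-keygen -lf` prints for
    an OpenSSH public key line ("ssh-rsa <base64-blob> comment")."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # ssh-keygen prints the digest base64-encoded without '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Demonstration with a placeholder blob (not a real key).
sample = "ssh-rsa " + base64.b64encode(b"not-a-real-key-blob").decode() + " admin@cvm"
print(sha256_fingerprint(sample))
```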

Security Management Using Prism Central (PC)

Prism Central provides several mechanisms and features to enforce security of your multi-cluster environment.

If you enable Identity and Access Management (IAM), see Security Management Using Identity and Access Management (Prism Central).

Configuring Authentication

Caution: Prism Central does not allow the use of the (not secure) SSLv2 and SSLv3 ciphers. To eliminate the possibility of an SSL Fallback situation and denied access to Prism Central, disable (uncheck) SSLv2 and SSLv3 in any browser used for access. However, TLS must be enabled (checked).

Prism Central supports user authentication with these authentication options:

  • Active Directory authentication. Users can authenticate using their Active Directory (or OpenLDAP) credentials when Active Directory support is enabled for Prism Central.
  • Local user authentication. Users can authenticate if they have a local Prism Central account. For more information, see Managing Local User Accounts.
  • SAML authentication. Users can authenticate through a supported identity provider when SAML support is enabled for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between two parties: an identity provider (IDP) and Prism Central as the service provider.

    If you do not enable Nutanix Identity and Access Management (IAM) on Prism Central, ADFS is the only supported IDP for Single Sign-on. If you enable IAM, additional IDPs are available. For more information, see Security Management Using Identity and Access Management (Prism Central) and Updating ADFS When Using SAML Authentication.


Adding an Authentication Directory (Prism Central)

Before you begin

Caution: Prism Central does not allow the use of the (not secure) SSLv2 and SSLv3 ciphers. To eliminate the possibility of an SSL Fallback situation and denied access to Prism Central, disable (uncheck) SSLv2 and SSLv3 in any browser used for access. However, TLS must be enabled (checked).

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.

    The Authentication Configuration window appears.

    Figure. Authentication Configuration Window

  2. To add an authentication directory, click the New Directory button.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Directory Type : Select one of the following from the pull-down list.
      • Active Directory : Active Directory (AD) is a directory service implemented by Microsoft for Windows domain networks.
        Note:
        • Users with the "User must change password at next logon" attribute enabled cannot authenticate to Prism Central. Ensure that users with this attribute first log on to a domain workstation and change their password before accessing Prism Central. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open in the firewall).
        • Use of the "Protected Users" group is currently unsupported for Prism authentication. For more details on the "Protected Users" group, see “Guidance about how to configure protected accounts” on Microsoft documentation website.
        • An Active Directory user name or group name containing spaces is not supported for Prism Central authentication.
        • The Microsoft AD is LDAP v2 and LDAP v3 compliant.
        • The Microsoft AD servers supported are Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.
      • OpenLDAP : OpenLDAP is a free, open source directory service, which uses the Lightweight Directory Access Protocol (LDAP), developed by the OpenLDAP project.
        Note: Prism Central uses a service account to query OpenLDAP directories for user information and does not currently support certificate-based authentication with the OpenLDAP directory.
    2. Name : Enter a directory name.

      This is a name you choose to identify this entry; it need not be the name of an actual directory.

    3. Domain : Enter the domain name.

      Enter the domain name in DNS format, for example, nutanix.com .

    4. Directory URL : Enter the URL address to the directory.

      The URL format for an LDAP entry is ldap://host:ldap_port_num . The host value is either the IP address or fully qualified domain name. (In some environments, a simple domain name is sufficient.) The default LDAP port number is 389. Nutanix also supports LDAPS (port 636) and LDAP/S Global Catalog (ports 3268 and 3269). The following are example configurations appropriate for each port option:

      Note: LDAPS support does not require custom certificates or certificate trust import.
      • Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
        ldap://ad_server.mycompany.com:389
      • Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires all Active Directory Domain Controllers have properly installed SSL certificates.
        ldaps://ad_server.mycompany.com:636
      • Port 3268 (LDAP - GC). Use this port number when the configuration is multiple domain, single forest, and not using SSL.
      • Port 3269 (LDAPS - GC). Use this port number when the configuration is multiple domain, single forest, and using SSL.
        Note:
        • When constructing your LDAP/S URL to use a Global Catalog server, ensure that the Domain Control IP address or name being used is a global catalog server within the domain being configured. If not, queries over 3268/3269 may fail.
        • Cross-forest trust between multiple AD forests is not supported.

      For the complete list of required ports, see Port Reference.
    5. [OpenLDAP only] Configure the following additional fields:
      • User Object Class : Enter the value that uniquely identifies the object class of a user.
      • User Search Base : Enter the base domain name in which the users are configured.
      • Username Attribute : Enter the attribute to uniquely identify a user.
      • Group Object Class : Enter the value that uniquely identifies the object class of a group.
      • Group Search Base : Enter the base domain name in which the groups are configured.
      • Group Member Attribute : Enter the attribute that identifies users in a group.
      • Group Member Attribute Value : Enter the attribute that identifies the users provided as value for Group Member Attribute .
    6. Search Type . How to search your directory when authenticating. Choose Non Recursive if you experience slow directory logon performance. For this option, ensure that users listed in Role Mapping are listed flatly in the group (that is, not nested). Otherwise, choose the default Recursive option.
    7. Service Account Username : Depending on the directory type that you selected in step 2.a, enter the service account user name in one of the following formats:
      • For Active Directory , enter the service account user name in the user_name@domain.com format.
      • For OpenLDAP , enter the service account user name in the following Distinguished Name (DN) format:

        cn=username, dc=company, dc=com

        A service account is created to run only a particular service or application with the credentials specified for the account. The administrator can limit access to the service account according to the requirements of the service or application.

        A service account resides under Managed Service Accounts in Active Directory or on the OpenLDAP server. An application or service uses the service account to interact with the directory. Enter your Active Directory or OpenLDAP service account credentials in this (username) field and the following (password) field.

        Note: Be sure to update the service account credentials here whenever the service account password changes or when a different service account is used.
    8. Service Account Password : Enter the service account password.
    9. When all the fields are correct, click the Save button (lower right).

      This saves the configuration and redisplays the Authentication Configuration dialog box. The configured directory now appears in the Directory List tab.

    10. Repeat these steps for each authentication directory you want to add.
    Note:
    • No permissions are granted to the directory users by default. To grant permissions to the directory users, you must specify roles for the users in that directory (see Configuring Role Mapping).
    • The service account for both Active Directory and OpenLDAP must have full read permission on the directory service.
    Figure. Directory List Fields

  3. To edit a directory entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete a directory entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
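The difference between the Recursive and Non Recursive search types in step 6 is whether nested group membership is expanded. The following Python sketch illustrates the distinction with hypothetical in-memory data (the group names, user names, and `group:` prefix are invented for illustration; a real deployment resolves membership through LDAP queries, not a dictionary):

```python
# Hypothetical directory data: group -> members; entries prefixed with
# "group:" are nested groups. Illustration only.
GROUPS = {
    "prism-admins": ["alice", "group:ops-team"],
    "ops-team": ["bob", "carol"],
}

def non_recursive_members(group):
    """Flat lookup: nested groups are not expanded."""
    return [m for m in GROUPS.get(group, []) if not m.startswith("group:")]

def recursive_members(group, seen=None):
    """Expand nested groups, guarding against membership cycles."""
    seen = seen or set()
    if group in seen:
        return []
    seen.add(group)
    members = []
    for m in GROUPS.get(group, []):
        if m.startswith("group:"):
            members.extend(recursive_members(m[len("group:"):], seen))
        else:
            members.append(m)
    return members

print(non_recursive_members("prism-admins"))  # ['alice'] -- nested members are invisible
print(recursive_members("prism-admins"))      # ['alice', 'bob', 'carol']
```

This is why, with Non Recursive selected, users mapped through a nested group fail role mapping unless they are also listed flatly in the group.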

Adding a SAML-based Identity Provider

About this task

If you do not enable Nutanix Identity and Access Management (IAM) on Prism Central, ADFS is the only supported identity provider (IDP) for Single Sign-on and only one IDP is allowed at a time. If you enable IAM, additional IDPs are available. See Security Management Using Identity and Access Management (Prism Central) and also Updating ADFS When Using SAML Authentication.

Before you begin

  • An identity provider (typically a server or other computer) is the system that provides authentication through a SAML request. There are various implementations that can provide authentication services in line with the SAML standard.
  • If you enable IAM by enabling CMSP, you can specify other tested standards-compliant IDPs in addition to ADFS. See also the Prism Central release notes topic Identity and Access Management Software Support for specific support requirements.

    Only one identity provider is allowed at a time, so if one was already configured, the + New IDP link does not appear.

  • You must configure the identity provider to return the NameID attribute in SAML response. The NameID attribute is used by Prism Central for role mapping. See Configuring Role Mapping for details.

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. To add a SAML-based identity provider, click the + New IDP link.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Configuration name : Enter a name for the identity provider. This name will appear in the log in authentication screen.
    2. Import Metadata : Click this radio button to upload a metadata file that contains the identity provider information.

      Identity providers typically provide an XML file on their website that includes metadata about that identity provider, which you can download from that site and then upload to Prism Central. Click + Import Metadata to open a search window on your local system and then select the target XML file that you downloaded previously. Click the Save button to save the configuration.

      Figure. Identity Provider Fields (metadata configuration)

    This completes configuring an identity provider in Prism Central, but you must also configure the callback URL for Prism Central on the identity provider. To do this, click the Download Metadata link just below the Identity Providers table to download an XML file that describes Prism Central and then upload this metadata file to the identity provider.
  3. To edit an identity provider entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete an identity provider entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.

Enabling and Configuring Client Authentication

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. Click the Client tab, then do the following steps.
    1. Select the Configure Client Chain Certificate check box.

      The Client Chain Certificate is a list of certificates that includes all intermediate CA and root-CA certificates.

    2. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note:
      • Client and CAC authentication support only RSA 2048-bit certificates.
      • Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
    3. To enable client authentication, click Enable Client Authentication .
    4. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication again to disable client authentication.
      • Click Remove to delete the current certificate. (This also disables client authentication.)
      • To enable OCSP or CRL based certificate revocation checking, see Certificate Revocation Checking.

    Client authentication allows you to access Prism securely by exchanging a digital certificate. Prism validates that the certificate is signed by your organization’s trusted signing certificate.

    Client authentication ensures that the Nutanix cluster gets a valid certificate from the user. Normally, a one-way authentication process occurs where the server provides a certificate so the user can verify the authenticity of the server (see Installing an SSL Certificate). When client authentication is enabled, this becomes a two-way authentication where the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console either by installing the certificate on the local machine or by providing it through a smart card reader.
    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
  3. To specify a service account that the Prism Central web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box, and then do the following in the indicated fields:
    1. Directory : Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username : Enter the user name in the user name@domain.com format that you want the web console to use to log in to the Active Directory.
    3. Service Password : Enter the password for the service user name.
    4. Click Enable CAC Authentication .
      Note: For federal customers only.
      Note: The Prism Central console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:

    • Validates that the certificate has been signed by your organization’s trusted signing certificate.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of an account within the Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Central supports both certificate authentication and basic authentication, so that Prism Central login can use a certificate while the REST API uses basic authentication. The REST API cannot use CAC certificates. With this behavior, if a certificate is present during Prism Central login, certificate authentication is used; if no certificate is present, basic authentication is enforced.
    Note: Nutanix Prism does not support OpenLDAP as directory service for CAC.
    If you map a Prism Central role to a CAC user and not to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page. The web console login page is not displayed.
    Note: If you have logged on to Prism Central by using CAC authentication, to successfully log out of Prism Central, close the browser after you click Log Out .
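    Because the REST API cannot present a CAC certificate, REST clients fall back to HTTP basic authentication as described above. A minimal Python sketch of constructing the basic-auth header for a request (the address, credentials, and endpoint path below are hypothetical illustrations, not values from this guide):

```python
import base64
import urllib.request

# Hypothetical Prism Central address and credentials; illustration only.
PRISM = "https://prism-central.example.com:9440"
USER, PASSWORD = "admin", "secret"

# HTTP basic authentication: base64-encode "user:password" and send it in
# the Authorization header. No certificate is involved on this path.
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
request = urllib.request.Request(
    f"{PRISM}/api/nutanix/v3/users/me",  # example endpoint path; an assumption
    headers={"Authorization": f"Basic {token}"},
)
print(request.get_header("Authorization"))  # Basic YWRtaW46c2VjcmV0
```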

Certificate Revocation Checking

Enabling Certificate Revocation Checking using Online Certificate Status Protocol (nCLI)

About this task

OCSP is the recommended method for checking certificate revocation in client authentication. You can enable certificate revocation checking using the OCSP method through the command line interface (nCLI).

To enable certificate revocation checking using OCSP for client authentication, do the following.

Procedure

  1. Set the OCSP responder URL.
    ncli authconfig set-certificate-revocation set-ocsp-responder=<ocsp url>

    <ocsp url> indicates the location of the OCSP responder.
  2. Verify if OCSP checking is enabled.
    ncli authconfig get-client-authentication-config

    The expected output if certificate revocation checking is enabled successfully is as follows.

    Auth Config Status: true
    File Name: ca.cert.pem
    OCSP Responder URI: http://<ocsp-responder-url>

Enabling Certificate Revocation Checking using Certificate Revocation Lists (nCLI)

About this task

Note: OCSP is the recommended method for checking certificate revocation in client authentication.

You can use the CRL certificate revocation checking method if required, as described in this section.

To enable certificate revocation checking using CRL for client authentication, do the following.

Procedure

Specify all the CRLs that are required for certificate validation.
ncli authconfig set-certificate-revocation set-crl-uri=<uri 1>,<uri 2> set-crl-refresh-interval=<refresh interval in seconds>
  • The above command resets any previous OCSP or CRL configurations.
  • The URIs must be percent-encoded and comma separated.
  • The CRLs are updated periodically as specified by the crl-refresh-interval value. This interval is common for the entire list of CRL distribution points. The default value for this is 86400 seconds (1 day).
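The percent-encoding requirement exists because the set-crl-uri argument is itself comma separated, so any literal comma or space inside a URI must be escaped. A small Python sketch of encoding distribution-point URIs before joining them (the URIs shown are invented examples):

```python
from urllib.parse import quote

# Hypothetical CRL distribution points; note the embedded space in the
# first URI and the commas in the LDAP distinguished name of the second.
crl_uris = [
    "http://pki.example.com/crl/issuing ca.crl",
    "ldap://pki.example.com/cn=Root CA,dc=example,dc=com",
]

# Percent-encode every reserved character (including "," and spaces),
# then join with the comma that separates nCLI list entries.
encoded = ",".join(quote(u, safe="") for u in crl_uris)
print(encoded)

# Resulting nCLI invocation (illustrative):
#   ncli authconfig set-certificate-revocation \
#       set-crl-uri=<encoded> set-crl-refresh-interval=86400
```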

User Management

Managing Local User Accounts

About this task

The Prism Central admin user is created automatically, but you can add more (locally defined) users as needed. To add, update, or delete a user account, do the following:

Note:
  • To add user accounts through Active Directory, see Configuring Authentication. If you enable the Prism Self Service feature, an Active Directory is assigned as part of that process.
  • Changing the Prism Central admin user password does not impact registration (re-registering clusters is not required).

Procedure

  • Click the gear icon in the main menu and then select Local User Management in the Settings page.

    The Local User Management dialog box appears.

    Figure. User Management Window

  • To add a user account, click the New User button and do the following in the displayed fields:
    1. Username : Enter a user name.
    2. First Name : Enter a first name.
    3. Last Name : Enter a last name.
    4. Email : Enter a valid user email address.
    5. Password : Enter a password (maximum of 255 characters).
      Note: A second field to verify the password is not included, so be sure to enter the password correctly in this field.
    6. Language : Select the language setting for the user.

      English is selected by default. You have an option to select Simplified Chinese or Japanese . If you select either of these, the cluster locale is updated for the new user. For example, if you select Simplified Chinese , the user interface is displayed in Simplified Chinese when the new user logs in.

    7. Roles : Assign a role to this user.

      There are three options:

      • Checking the User Admin box allows the user to view information, perform any administrative task, and create or modify user accounts.
      • Checking the Prism Central Admin (formerly "Cluster Admin") box allows the user to view information and perform any administrative task, but it does not provide permission to manage (create or modify) other user accounts.
      • Leaving both boxes unchecked allows the user to view information, but it does not provide permission to perform any administrative tasks or manage other user accounts.
    8. When all the fields are correct, click the Save button (lower right).

      This saves the configuration and redisplays the dialog box with the new user appearing in the list.

    Figure. Create User Window

  • To modify a user account, click the pencil icon for that user and update one or more of the values as desired in the Update User window.
    Figure. Update User Window

  • To disable login access for a user account, click the Yes value in the Enabled field for that user; to enable the account, click the No value.

    A Yes value means the login is enabled; a No value means it is disabled. A user account is enabled (login access activated) by default.

  • To delete a user account, click the X icon for that user.
    A window prompt appears to verify the action; click the OK button. The user account is removed and the user no longer appears in the list.

Updating My Account

About this task

To update your account credentials (that is, credentials for the user you are currently logged in as), do the following:

Procedure

  1. To update your password, select Change Password from the user icon pull-down list of the main menu.
    The Change Password dialog box appears. Do the following in the indicated fields:
    1. Current Password : Enter the current password.
    2. New Password : Enter a new password.
    3. Confirm Password : Re-enter the new password.
    4. When the fields are correct, click the Save button (lower right). This saves the new password and closes the window.
    Note: Password complexity requirements might appear above the fields; if they do, your new password must comply with these rules.
    Figure. Change Password Window

  2. To update other details of your account, select Update Profile from the user icon pull-down list.
    The Update Profile dialog box appears. Do the following in the indicated fields for any parameters you want to change:
    1. First Name : Enter a different first name.
    2. Last Name : Enter a different last name.
    3. Email Address : Enter a different valid user email address.
    4. Language : Select a different language for your account from the pull-down list.
    5. API Key : Enter a new API key.
      Note: You can manage your keys from the API Keys page on the Nutanix support portal. Your connection is secure without the optional public key (the following field); the public key option is provided in case your default public key expires.
    6. Public Key : Click the Choose File button to upload a new public key file.
    7. When all the fields are correct, click the Save button (lower right). This saves the changes and closes the window.
    Figure. Update Profile Window

Resetting Password (CLI)

This procedure describes how to reset a local user’s password on the Prism Element or the Prism Central web consoles.

About this task

To reset the password using nCLI, do the following:

Note:

Only a user with admin privileges can reset a password for other users.

Procedure

  1. Access the CVM via SSH.
  2. Log in with the admin credentials.
  3. Use the ncli user reset-password command and specify the username and password of the user whose password is to be reset:
    nutanix@cvm$ ncli user reset-password user-name=xxxxx password=yyyyy
    
    • Replace user-name=xxxxx with the name of the user whose password is to be reset.

    • Replace password=yyyyy with the new password.

What to do next

You can relaunch the Prism Element or the Prism Central web console and verify the new password setting.

Deleting a Directory User Account

About this task

To delete a directory-authenticated user, do the following:

Procedure

  1. Click the hamburger icon, and go to Administration > Projects .
    The Projects page appears. This page lists all existing projects.
  2. Select the project that the user is associated with and go to Actions > Update Projects .
    The Edit Projects page appears.
  3. Go to the Users, Groups, Roles tab.
  4. Click the X icon to delete the user.
    Figure. Edit Project Window

  5. Click Save .

    Prism deletes the user account and also removes the user from any associated projects.

    Repeat the same steps if the user is associated with multiple projects.

Controlling User Access (RBAC)

Prism Central supports role-based access control (RBAC) that you can configure to provide customized access permissions for users based on their assigned roles. The roles dashboard allows you to view information about all defined roles and the users and groups assigned to those roles.

  • Prism Central includes a set of predefined roles (see Built-in Role Management).
  • You can also define additional custom roles (see Custom Role Management).
  • Configuring authentication confers default user permissions that vary depending on the type of authentication (full permissions from a directory service or no permissions from an identity provider). You can configure role maps to customize these user permissions (see Configuring Role Mapping).
  • You can refine access permissions even further by assigning roles to individual users or groups that apply to a specified set of entities (see Assigning a Role).
    Note: Entities are treated as separate instances. For example, to grant a user or a group permission to manage both a cluster and images, an administrator must add both entities to the list of assignments.
  • With RBAC, user roles do not depend on the project membership. You can use RBAC and log in to Prism Central even without a project membership.
Note: Defining custom roles and assigning roles are supported on AHV only.

Built-in Role Management

The following built-in roles are defined by default. You can see a more detailed list of permissions for any of the built-in roles through the details view for that role (see Displaying Role Permissions). The Project Admin, Developer, Consumer, and Operator roles are available when assigning roles in a project.

Role Privileges
Super Admin Full administrator privileges
Prism Admin Full administrator privileges except for creating or modifying the user accounts
Prism Viewer View-only privileges
Self-Service Admin Manages all cloud-oriented resources and services
Note: This is the only cloud administration role available.
Project Admin Manages cloud objects (roles, VMs, Apps, Marketplace) belonging to a project
Note: You can specify a role for a user when you assign a user to a project, so individual users or groups can have different roles in the same project.
Developer Develops, troubleshoots, and tests applications in a project
Consumer Accesses the applications and blueprints in a project
Operator Accesses the applications in a project
Note: Previously, the Super Admin role was called User Admin , the Prism Admin role was called Prism Central Admin and Cluster Admin , and the Prism Viewer was called Viewer .

Custom Role Management

If the built-in roles are not sufficient for your needs, you can create one or more custom roles (AHV only).

Creating a Custom Role

About this task

To create a custom role, do the following:

Procedure

  1. Go to the roles dashboard (select Administration > Roles in the pull-down menu) and click the Create Role button.

    The Roles page appears. See Custom Role Permissions for a list of the permissions available for each custom role option.

    Figure. Roles Page

  2. In the Roles page, do the following in the indicated fields:
    1. Role Name : Enter a name for the new role.
    2. Description (optional): Enter a description of the role.
      Note: All entity types are listed by default, but you can display just a subset by entering a string in the Filter Entities search field.
      Figure. Filter Entities

    3. App : Click the radio button for the desired application permissions ( No Access , Basic Access , or Set Custom Permissions ). If you specify custom permissions, click the Change link to display the Custom App Permissions window, check all the permissions you want to enable, and then click the Save button.
      Figure. Custom App Permissions Window

    4. VM : Click the radio button for the desired VM permissions ( No Access , View Access , Basic Access , Edit Access , or Set Custom Permissions ). Check the Allow VM Creation box to allow this role to create VMs. If you specify custom permissions, click the Change link to display the Custom VM Permissions window, check all the permissions you want to enable, and then click the Save button.
      Figure. Custom VM Permissions Window

    5. Recovery Plan : Click the radio button for the desired permissions for recovery plan operations ( No Access , View Access , Test Execution Access , Full Execution Access , or Set Custom Permissions ). If you specify custom permissions, click the Change link to display the Custom Recovery Plan Permissions window, check all the permissions you want to enable (see Custom Role Permissions), and then click the Save button.
      Figure. Custom Recovery Plan Permissions Window

    6. Blueprint : Click the radio button for the desired blueprint permissions ( No Access , View Access , Basic Access , or Set Custom Permissions ). Check the Allow Blueprint Creation box to allow this role to create blueprints. If you specify custom permissions, click the Change link to display the Custom Blueprint Permissions window, check all the permissions you want to enable, and then click the Save button.
      Figure. Custom Blueprint Permissions Window

    7. Marketplace Item : Click the radio button for the desired marketplace permissions ( No Access , View marketplace and published blueprints , View marketplace and publish new blueprints , or Set custom permissions ). If you specify custom permissions, click the Change link to display the Custom Marketplace Item Permissions window, check all the permissions you want to enable, and then click the Save button.
      Note: The permission you enable for a Marketplace Item implicitly applies to a Catalog Item entity. For example, if you select No Access permission for the Marketplace Item entity while creating the custom role, the custom role will not have access to the Catalog Item entity as well.

      Figure. Custom Marketplace Permissions Window

    8. Report : Click the radio button for the desired report permissions ( No Access , View Only , Edit Access , or Set Custom Permissions ). If you specify custom permissions, click the Change link to display the Custom Report Permissions window, check all the permissions you want to enable, and then click the Save button.
      Figure. Custom Report Permissions Window

    9. Cluster : Click the radio button for the desired cluster permissions ( No Access or Cluster Access ).
    10. Subnet : Click the radio button for the desired subnet permissions ( No Access or Subnet Access ).
    11. Image : Click the radio button for the desired image permissions ( No Access , View Only , or Set Custom Permissions ). If you specify custom permissions, click the Change link to display the Custom Image Permissions window, check all the permissions you want to enable, and then click the Save button.
      Figure. Custom Image Permissions Window

  3. Click Save to add the role. The page closes and the new role appears in the Roles view list.
Modifying a Custom Role

About this task

Perform the following procedure to modify or delete a custom role.

Procedure

  1. Go to the roles dashboard and select (check the box for) the desired role from the list.
  2. Do one of the following:
    • To modify the role, select Update Role from the Actions pull-down list. The Roles page for that role appears. Update the field values as desired and then click Save . See Creating a Custom Role for field descriptions.
    • To delete the role, select Delete from the Action pull-down list. A confirmation message is displayed. Click OK to delete and remove the role from the list.
Custom Role Permissions

A selection of permission options is available when creating a custom role.

The following table lists the permissions you can grant when creating or modifying a custom role. When you select an option for an entity, the permissions listed for that option are granted. If you select Set custom permissions , a complete list of available permissions for that entity appears. Select the desired permissions from that list.

Entity Option Permissions
App (application) No Access (none)
Basic Access Abort App Runlog, Access Console VM, Action Run App, Clone VM, Create AWS VM, Create Image, Create VM, Delete AWS VM, Delete VM, Download App Runlog, Update AWS VM, Update VM, View App, View AWS VM, View VM
Set Custom Permissions (select from list) Abort App Runlog, Access Console VM, Action Run App, Clone VM, Create App, Create AWS VM, Create Image, Create VM, Delete App, Delete AWS VM, Delete VM, Download App Runlog, Update App, Update AWS VM, Update VM, View App, View AWS VM, View VM
VM Recovery Point No Access (none)
View Only View VM Recovery Point
Full Access Delete VM Recovery Point, Restore VM Recovery Point, Snapshot VM, Update VM Recovery Point, View VM Recovery Point, Allow VM Recovery Point creation
Set Custom Permissions (Change) Abort App Runlog, Access Console VM, Action Run App, Clone VM, Create App, Create AWS VM, Create Image, Create VM, Delete App, Delete AWS VM, Delete VM, Download App Runlog, Update App, Update AWS VM, Update VM, View App, View AWS VM, View VM
Note:

You can assign permissions for the VM Recovery Point entity to users or user groups in the following two ways.

  • Manually assign permission for each VM where the recovery point is created.
  • Assign permission using Categories in the Role Assignment workflow.
Tip: When a recovery point is created, it is associated with the same category as the VM.
VM No Access (none)
View Access Access Console VM, View VM
Basic Access Access Console VM, Update VM Power State, View VM
Edit Access Access Console VM, Update VM, View Subnet, View VM
Set Custom Permissions (select from list) Access Console VM, Clone VM, Create VM, Delete VM, Update VM, Update VM Boot Config, Update VM CPU, Update VM Categories, Update VM Disk List, Update VM GPU List, Update VM Memory, Update VM NIC List, Update VM Owner, Update VM Power State, Update VM Project, View Cluster, View Subnet, View VM
Allow VM creation (additional option) (n/a)
Blueprint No Access (none)
View Access View Account, View AWS Availability Zone, View AWS Elastic IP, View AWS Image, View AWS Key Pair, View AWS Machine Type, View AWS Region, View AWS Role, View AWS Security Group, View AWS Subnet, View AWS Volume Type, View AWS VPC, View Blueprint, View Cluster, View Image, View Project, View Subnet
Basic Access Access Console VM, Clone VM, Create App,Create Image, Create VM, Delete VM, Launch Blueprint, Update VM, View Account, View App, View AWS Availability Zone, View AWS Elastic IP, View AWS Image, View AWS Key Pair, View AWS Machine Type, View AWS Region, View AWS Role, View AWS Security Group, View AWS Subnet, View AWS Volume Type, View AWS VPC, View Blueprint, View Cluster, View Image, View Project, View Subnet, View VM
Set Custom Permissions (select from list) Access Console VM, Clone VM, Create App, Create Blueprint, Create Image, Create VM, Delete Blueprint, Delete VM, Download Blueprint, Export Blueprint, Import Blueprint, Launch Blueprint, Render Blueprint, Update Blueprint, Update VM, Upload Blueprint, View Account, View App, View AWS Availability Zone, View AWS Elastic IP, View AWS Image, View AWS Key Pair, View AWS Machine Type, View AWS Region, View AWS Role, View AWS Security Group, View AWS Subnet, View AWS Volume Type, View AWS VPC, View Blueprint, View Cluster, View Image, View Project, View Subnet, View VM
Marketplace Item No Access (none)
View marketplace and published blueprints View Marketplace Item
View marketplace and publish new blueprints Update Marketplace Item, View Marketplace Item
Set Custom Permissions (select from list) Config Marketplace Item, Create Marketplace Item, Delete Marketplace Item, Render Marketplace Item, Update Marketplace Item, View Marketplace Item
Report No Access (none)
View Only Notify Report Instance, View Common Report Config, View Report Config, View Report Instance
Edit Access Create Common Report Config, Create Report Config, Create Report Instance, Delete Common Report Config, Delete Report Config, Delete Report Instance, Notify Report Instance, Update Common Report Config, Update Report Config, View Common Report Config, View Report Config, View Report Instance
Set Custom Permissions (select from list) Create Common Report Config, Create Report Config, Create Report Instance, Delete Common Report Config, Delete Report Config, Delete Report Instance, Notify Report Instance, Update Common Report Config, Update Report Config, View Common Report Config, View Report Config, View Report Instance
Cluster No Access (none)
View Access View Cluster
Subnet No Access (none)
View Access View Subnet
Image No Access (none)
View Only View Image
Set Custom Permissions (select from list) Copy Image Remote, Create Image, Delete Image, Migrate Image, Update Image, View Image

The following table describes the permissions.

Note: By default, assigning certain permissions to a user role might implicitly assign more permissions to that role. However, the implicitly assigned permissions will not be displayed in the details page for that role. These permissions are displayed only if you manually assign them to that role.
Permission Description Assigned Implicitly By
Create App Allows to create an application.
Delete App Allows to delete an application.
View App Allows to view an application.
Action Run App Allows to run action on an application.
Download App Runlog Allows to download an application runlog.
Abort App Runlog Allows to abort an application runlog.
Access Console VM Allows to access the console of a virtual machine.
Create VM Allows to create a virtual machine.
View VM Allows to view a virtual machine.
Clone VM Allows to clone a virtual machine.
Delete VM Allows to delete a virtual machine.
Export VM Allows to export a virtual machine.
Snapshot VM Allows to snapshot a virtual machine.
View VM Recovery Point Allows to view a vm_recovery_point.
Update VM Recovery Point Allows to update a vm_recovery_point.
Delete VM Recovery Point Allows to delete a vm_recovery_point.
Restore VM Recovery Point Allows to restore a vm_recovery_point.
Update VM Allows to update a virtual machine.
Update VM Boot Config Allows to update a virtual machine's boot configuration. Update VM
Update VM CPU Allows to update a virtual machine's CPU configuration. Update VM
Update VM Categories Allows to update a virtual machine's categories. Update VM
Update VM Description Allows to update a virtual machine's description. Update VM
Update VM GPU List Allows to update a virtual machine's GPUs. Update VM
Update VM NIC List Allows to update a virtual machine's NICs. Update VM
Update VM Owner Allows to update a virtual machine's owner. Update VM
Update VM Project Allows to update a virtual machine's project. Update VM
Update VM NGT Config Allows updates to a virtual machine's Nutanix Guest Tools configuration. Update VM
Update VM Power State Allows updates to a virtual machine's power state. Update VM
Update VM Disk List Allows to update a virtual machine's disks. Update VM
Update VM Memory Allows to update a virtual machine's memory configuration. Update VM
Update VM Power State Mechanism Allows updates to a virtual machine's power state mechanism. Update VM or Update VM Power State
Allow VM Power Off Allows power off and shutdown operations on a virtual machine. Update VM or Update VM Power State
Allow VM Power On Allows power on operation on a virtual machine. Update VM or Update VM Power State
Allow VM Reboot Allows reboot operation on a virtual machine. Update VM or Update VM Power State
Expand VM Disk Size Allows to expand a virtual machine's disk size. Update VM or Update VM Disk List
Mount VM CDROM Allows to mount an ISO to virtual machine's CDROM. Update VM or Update VM Disk List
Unmount VM CDROM Allows to unmount ISO from virtual machine's CDROM. Update VM or Update VM Disk List
Update VM Memory Overcommit Allows to update a virtual machine's memory overcommit configuration. Update VM or Update VM Memory
Allow VM Reset Allows reset (hard reboot) operation on a virtual machine. Update VM, Update VM Power State, or Allow VM Reboot
View Cluster Allows to view a cluster.
Update Cluster Allows to update a cluster.
Create Image Allows to create an image.
View Image Allows to view an image.
Copy Image Remote Allows to copy an image from the local Prism Central to a remote Prism Central.
Delete Image Allows to delete an image.
Migrate Image Allows to migrate an image from Prism Element to Prism Central.
Update Image Allows to update an image.
Create Image Placement Policy Allows to create an image placement policy.
View Image Placement Policy Allows to view an image placement policy.
Delete Image Placement Policy Allows to delete an image placement policy.
Update Image Placement Policy Allows to update an image placement policy.
Create AWS VM Allows to create an AWS virtual machine.
View AWS VM Allows to view an AWS virtual machine.
Update AWS VM Allows to update an AWS virtual machine.
Delete AWS VM Allows to delete an AWS virtual machine.
View AWS AZ Allows to view AWS Availability Zones.
View AWS Elastic IP Allows to view an AWS Elastic IP.
View AWS Image Allows to view an AWS image.
View AWS Key Pair Allows to view AWS keypairs.
View AWS Machine Type Allows to view AWS machine types.
View AWS Region Allows to view AWS regions.
View AWS Role Allows to view AWS roles.
View AWS Security Group Allows to view an AWS security group.
View AWS Subnet Allows to view an AWS subnet.
View AWS Volume Type Allows to view AWS volume types.
View AWS VPC Allows to view an AWS VPC.
Create Subnet Allows to create a subnet.
View Subnet Allows to view a subnet.
Update Subnet Allows to update a subnet.
Delete Subnet Allows to delete a subnet.
Create Blueprint Allows to create the blueprint of an application.
View Blueprint Allows to view the blueprint of an application.
Launch Blueprint Allows to launch the blueprint of an application.
Clone Blueprint Allows to clone the blueprint of an application.
Delete Blueprint Allows to delete the blueprint of an application.
Download Blueprint Allows to download the blueprint of an application.
Export Blueprint Allows to export the blueprint of an application.
Import Blueprint Allows to import the blueprint of an application.
Render Blueprint Allows to render the blueprint of an application.
Update Blueprint Allows to update the blueprint of an application.
Upload Blueprint Allows to upload the blueprint of an application.
Create OVA Allows to create an OVA.
View OVA Allows to view an OVA.
Update OVA Allows to update an OVA.
Delete OVA Allows to delete an OVA.
Create Marketplace Item Allows to create a marketplace item.
View Marketplace Item Allows to view a marketplace item.
Update Marketplace Item Allows to update a marketplace item.
Config Marketplace Item Allows to configure a marketplace item.
Render Marketplace Item Allows to render a marketplace item.
Delete Marketplace Item Allows to delete a marketplace item.
Create Report Config Allows to create a report_config.
View Report Config Allows to view a report_config.
Run Report Config Allows to run a report_config.
Share Report Config Allows to share a report_config.
Update Report Config Allows to update a report_config.
Delete Report Config Allows to delete a report_config.
Create Common Report Config Allows to create a common report_config.
View Common Report Config Allows to view a common report_config.
Update Common Report Config Allows to update a common report_config.
Delete Common Report Config Allows to delete a common report_config.
Create Report Instance Allows to create a report_instance.
View Report Instance Allows to view a report_instance.
Notify Report Instance Allows to notify a report_instance.
Share Report Instance Allows to share a report_instance.
Delete Report Instance Allows to delete a report_instance.
View Account Allows to view an account.
View Project Allows to view a project.
View User Allows to view a user.
View User Group Allows to view a user group.
View Name Category Allows to view a category's name.
View Value Category Allows to view a category's value.
View Virtual Switch Allows to view a virtual switch.
Granting Restore Permission to Project User

About this task

By default, only a self service admin or a cluster admin can view and restore the recovery points. However, a self service admin or cluster admin can grant permission to the project user to restore the VM from a recovery point.

To grant restore permission to a project user, do the following:

Procedure

  1. Log on to Prism Central with cluster admin or self service admin credentials.
  2. Go to the roles dashboard (select Administration > Roles in the pull-down menu) and do one of the following:
    • Click the Create Role button.
    • Select an existing role of a project user and then select Duplicate from the Actions drop-down menu. To modify the duplicate role, select Update Role from the Actions pull-down list.
  3. The Roles page for that role appears. In the Roles page, do the following in the indicated fields:
    1. Role Name : Enter a name for the new role.
    2. Description (optional): Enter a description of the role.
    3. Expand VM Recovery Point and do one of the following:
      • Select Full Access and then select Allow VM recovery point creation .
      • Click Change next to Set Custom Permissions to customize the permissions. Enable Restore VM Recovery Point permission. This permission also grants the permission to view the VM created from the restore process.
    4. Click Save to add the role. The page closes and the new role appears in the Roles view list.
  4. In the Roles view, select the newly created role and click Manage Assignment to assign the user to this role.
  5. In the Add New dialog, do the following:
    • Under Select Users or User Groups or OUs , enter the target user name. The search box displays the matched records. Select the required listing from the records.
    • Under Entities , select VM Recovery Point , select Individual Entry from the drop-down list, and then select All VM Recovery Points.
    • Click Save to finish.

Configuring Role Mapping

About this task

After user authentication is configured (see Configuring Authentication), users and authorized directories are not assigned any permissions by default. You must explicitly assign the required permissions to users, authorized directories, or organizational units through role mapping.

You can refine the authentication process by assigning a role with associated permissions to users, groups, and organizational units. This procedure allows you to map users to the predefined roles in Prism Central, such as User Admin , Cluster Admin , and Viewer . To assign roles, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Role Mapping from the Settings page.

    The Role Mapping window appears.

    Figure. Role Mapping Window

  2. To create a role mapping, click the New Mapping button.

    The Create Role Mapping window appears. Enter the required information in the following fields.

    Figure. Create Role Mapping Window

  3. Directory or Provider : Select the target directory or identity provider from the pull-down list.
    Only directories and identity providers previously configured in the authentication settings are available. If the desired directory or provider does not appear in the list, add that directory or provider, and then return to this procedure.
  4. Type : Select the desired LDAP entity type from the pull-down list.
    This field appears only if you have selected a directory from the Directory or Provider pull-down list. The following entity types are available:
    • User : A named user. For example, dev_user_1.
    • Group : A group of users. For example, dev_grp1, dev_grp2, sr_dev_1, and staff_dev_1.
    • OU : An organizational unit with one or more users, groups, and even other organizational units. For example, all_dev consists of user dev_user_1 and groups dev_grp1, dev_grp2, sr_dev_1, and staff_dev_1.
  5. Role : Select a user role from the pull-down list.
    You can choose one of the following roles:
    • Viewer : Allows view-only access to the information; users with this role cannot perform any administrative tasks.
    • Cluster Admin (Formerly Prism Central Admin): Allows users to view and perform all administrative tasks except creating or modifying user accounts.
    • User Admin : Allows users to view information, perform administrative tasks, and to create and modify user accounts.
  6. Values : Enter the entity names. The entity names are assigned with the respective roles that you have selected.
    The entity names are case-sensitive. To provide more than one entity name, separate the names with commas (,) without any spaces between them.

    LDAP-based authentication

    • For AD

      Enter the actual names used by the organizational units (it applies to all users and groups in those OUs), groups (all users in those groups), or users (each named user) used in LDAP in the Values field.

      For example, entering sr_dev_1,staff_dev_1 in the Values field when the LDAP type is Group and the role is Cluster Admin means that all users in the sr_dev_1 and staff_dev_1 groups are assigned the administrative role for the cluster.

      Do not include the domain name in the value. For example, enter all_dev , not all_dev@<domain_name> . However, when users log in, they must include the domain along with the username.

      User : Enter the sAMAccountName or userPrincipalName in the values field.

      Group : Enter common name (cn) or name.

      OU : Enter name.

    • For OpenLDAP

      User : Use the username attribute (that was configured while adding the directory) value.

      Group : Use the group name attribute (cn) value.

      OU : Use the OU attribute (ou) value.

    SAML-based authentication:

    You must configure the NameID attribute in the identity provider. You can enter the NameID returned in the SAML response in the Values field.

    For SAML, only the User type is supported. Other types, such as Group and OU, are not supported.

    If you enable Identity and Access Management, see Security Management Using Identity and Access Management (Prism Central)
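The formatting rules for the Values field can be checked before pasting a string into the UI. The following is a hypothetical helper (not part of Prism Central; the function name is an illustrative choice) that rejects the common mistakes called out above, namely spaces and domain suffixes:

```shell
# Hypothetical helper: sanity-check a Role Mapping "Values" string.
# Names are case-sensitive, comma-separated, no spaces, no domain suffix.
validate_values() {
    case "$1" in
        "")    echo "invalid: empty value";          return 1 ;;
        *" "*) echo "invalid: no spaces allowed";    return 1 ;;
        *@*)   echo "invalid: omit the domain name"; return 1 ;;
    esac
    # Print one entity name per line for review
    printf '%s\n' "$1" | tr ',' '\n'
}

validate_values "sr_dev_1,staff_dev_1"     # prints the two group names
```

A value such as all_dev@corp.example.com would be rejected, matching the rule that the domain belongs in the login name, not in the mapping value.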

  7. Click Save .

    The role mapping configurations are saved, and the new role is listed in the Role Mapping window.

    You can create a role map for each authorized directory. You can also create multiple role maps that apply to a single directory. When there are multiple maps for a directory, the most specific rule for a user applies.

    For example, adding a Group map set to Cluster Admin and a User map set to Viewer for a few specific users in that group means all users in the group have administrator permission except those few specific users who have only viewing permission.

  8. To edit a role map entry, click the pencil icon for that entry.
    After clicking the pencil icon, the Edit Role Mapping window appears which is similar to the Create Role Mapping window. Edit the required information in the required fields and click the Save button to update the changes.
  9. To delete a role map entry, click the X icon for that entry and click the OK button to confirm the role map entry deletion.
    The role map entry is removed from the list.

Assigning a Role

About this task

In addition to configuring basic role maps (see Configuring Role Mapping), you can configure more precise role assignments (AHV only). To assign a role to selected users or groups that applies just to a specified set of entities, do the following:

Procedure

  1. Select the desired role in the roles dashboard and then click the Role Assignment button in the details page.
  2. Click the New Users button and enter the user or group name you want assigned to this role.

    Entering text in the field displays a list of users from which you can select, and you can enter multiple user names in this field.

  3. Click the New Entities button, select the entity type from the pull-down list, and then enter the entity name in the field.

    Entering text in the field displays a list of entities from which you can select, and you can enter multiple entity names in the field. You can choose from the following entity types:

    • AHV VM —allows management of VMs including create and edit
    • Category —custom role permissions
    • AHV Subnet —allows user to view subnet details
    • AHV Cluster —allows user to view cluster details and manage cluster details per permissions assigned
  4. Repeat for any combination of users/entities you want to define.

    You can specify various user/entity relationships when configuring the role assignment. To illustrate, in the following example the first line assigns the my_custom_role to a single user (ssp_admin) for two VMs (normal_vm and test_andrey). The second line assigns the role to two users (locus1 and locus2) for a single category (4gcC1Z). The third line again assigns the role to the user locus1 but this time for all subnets.

    Note: To allow users to create certain entities like a VM, you may also need to grant them access to related entities like clusters, networks, and images that the VM requires.
    Figure. Role Assignment Page

  5. Click the Save button (lower right) to save the role assignments.

Displaying Role Permissions

About this task

Do the following to display the privileges associated with a role.

Procedure

  1. Go to the roles dashboard and select the desired role from the list.

    For example, if you click the Consumer role, the details page for that role appears, and you can view all the privileges associated with the Consumer role.

    Figure. Role Summary Tab

  2. Click the Users tab to display the users that are assigned this role.
    Figure. Role Users Tab

  3. Click the User Groups tab to display the groups that are assigned this role.
  4. Click the Role Assignment tab to display the user/entity pairs assigned this role (see Assigning a Role).

Installing an SSL Certificate

About this task

Prism Central supports SSL certificate-based authentication for console access. To install a self-signed or custom SSL certificate, do the following:
Important: Ensure that SSL certificates are not password protected.
Note: Nutanix recommends that you replace the default self-signed certificate with a CA signed certificate.

Procedure

  1. Click the gear icon in the main menu and then select SSL Certificate in the Settings page.
  2. To replace (or install) a certificate, click the Replace Certificate button.
    Figure. SSL Certificate Window

  3. To create a new self-signed certificate, click the Replace Certificate option and then click the Apply button.

    A dialog box appears to verify the action; click the OK button. This generates and applies a new RSA 2048-bit self-signed certificate for Prism Central.

    Figure. SSL Certificate Window: Regenerate

  4. To apply a custom certificate that you provide, do the following:
    1. Click the Import Key and Certificate option and then click the Next button.
      Figure. SSL Certificate Window: Import
    2. Do the following in the indicated fields, and then click the Import Files button.
      Note:
      • All three imported files for the custom certificate must be PEM encoded.
      • Ensure that the private key does not have any extra data (or custom attributes) before the beginning (-----BEGIN PRIVATE KEY-----) or after the end (-----END PRIVATE KEY-----) of the private key block.
      • Private Key Type : Select the appropriate type for the signed certificate from the pull-down list (RSA 4096 bit, RSA 2048 bit, EC DSA 256 bit, or EC DSA 384 bit).
      • Private Key : Click the Browse button and select the private key associated with the certificate to be imported.
      • Public Certificate : Click the Browse button and select the signed public portion of the server certificate corresponding to the private key.
      • CA Certificate/Chain : Click the Browse button and select the certificate or chain of the signing authority for the public certificate.
      Figure. SSL Certificate Window: Select Files

      To meet the high security standards of NIST SP800-131a compliance and the RFC 6460 requirements for NSA Suite B, and to provide optimal encryption performance, the certificate import process validates that the correct signature algorithm is used for a given key/cert pair. Refer to the following table to ensure the proper combination of key type, size/curve, and signature algorithm. The CA must sign all public certificates with the proper type, size/curve, and signature algorithm for the import process to validate successfully.
      Note: Prism does not have any specific requirement or enforcement logic for the subject name of the certificates (subject alternative names (SAN)) or wildcard certificates.
      Table 1. Supported Key Configurations
      Key Type Size/Curve Signature Algorithm
      RSA 4096 SHA256-with-RSAEncryption
      RSA 2048 SHA256-with-RSAEncryption
      EC DSA 256 prime256v1 ecdsa-with-sha256
      EC DSA 384 secp384r1 ecdsa-with-sha384
      EC DSA 521 secp521r1 ecdsa-with-sha512
      You can use the cat command to concatenate a list of CA certificates into a chain file.
      $ cat signer.crt inter.crt root.crt > server.cert
      Order is essential: the chain must begin with the signer's certificate and end with the root CA certificate as the final entry.
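Before importing, you can also sanity-check that an RSA private key actually belongs to its public certificate by comparing their moduli with standard openssl commands. The sketch below generates a throwaway self-signed pair purely for demonstration; in practice, substitute your real key.pem and cert.pem (the filenames and CN here are placeholders):

```shell
set -e
# Demo stand-ins: a throwaway RSA 2048-bit key and self-signed certificate
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=pc.example.com" 2>/dev/null

# An RSA key/cert pair matches when their moduli are identical
cert_mod=$(openssl x509 -noout -modulus -in cert.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in key.pem | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "key matches certificate"
```

If the two digests differ, the import would fail, so this check saves a round trip through the upload dialog.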

Results

After generating or uploading the new certificate, the interface gateway restarts. If the certificate and credentials are valid, the interface gateway uses the new certificate immediately, which means your browser session (and all other open browser sessions) will be invalid until you reload the page and accept the new certificate. If anything is wrong with the certificate (such as a corrupted file or wrong certificate type), the new certificate is discarded, and the system reverts to the original default certificate provided by Nutanix.

Note: The system holds only one custom SSL certificate. If a new certificate is uploaded, it replaces the existing certificate. The previous certificate is discarded.

Controlling Remote (SSH) Access

About this task

Nutanix supports key-based SSH access to Prism Central. Enabling key-based SSH access disables password authentication, so only the keys you provide can be used to access Prism Central (for the nutanix and admin users only). This makes Prism Central more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.

To control key-based SSH access to Prism Central, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.

    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.

    Figure. Cluster Lockdown Window

  2. To disable (or enable) remote login access, uncheck (check) the Enable Remote Login with Password box.

    Remote login access is enabled by default.

  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.

    There are no public keys available by default, but you can add any number of public keys.

  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.
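A key pair suitable for the New Public Key dialog can be created with the standard ssh-keygen utility. This is a sketch; the filename pc_access_key and the -C comment are arbitrary labels for the example, not Nutanix conventions:

```shell
set -e
# Generate an ed25519 key pair with no passphrase; the filename and
# the -C comment are arbitrary labels chosen for this example
rm -f pc_access_key pc_access_key.pub
ssh-keygen -q -t ed25519 -N "" -C "pc-lockdown-access" -f pc_access_key

# Paste the contents of the .pub file into the Key field in Prism Central
cat pc_access_key.pub
```

The private half (pc_access_key) stays on your workstation; only the .pub contents go into the Cluster Lockdown dialog.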

Password Retry Lockout

For enhanced security, Prism Central and Prism Element lock out the default 'admin' account for 15 minutes after five unsuccessful login attempts. Once the account is locked out, the following message is displayed on the login screen.

Account locked due to too many failed attempts

You can try entering the password again after the 15-minute lockout period, or contact Nutanix Support if you have forgotten your password.
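The lockout policy above (five failures, 15-minute window) can be modeled with a short sketch. This is purely illustrative; the real enforcement is internal to Prism, and the times here are plain integer seconds:

```shell
# Toy model: 5 failed attempts lock the account for 900 seconds
MAX_FAILURES=5
LOCKOUT_SECONDS=900
failures=0
locked_until=0

record_failed_login() {    # $1 = time of the failed attempt (seconds)
    if [ "$1" -lt "$locked_until" ]; then return 0; fi   # already locked
    failures=$((failures + 1))
    if [ "$failures" -ge "$MAX_FAILURES" ]; then
        locked_until=$(($1 + LOCKOUT_SECONDS))
        failures=0
    fi
}

is_locked() {              # $1 = current time (seconds)
    if [ "$1" -lt "$locked_until" ]; then echo locked; else echo open; fi
}

for t in 0 1 2 3 4; do record_failed_login "$t"; done
is_locked 10      # prints: locked
is_locked 904     # prints: open
```

After five failures ending at t=4, the account stays locked until t=904 (4 + 900 seconds), matching the 15-minute window described above.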

Security Policies using Flow

Nutanix Flow includes a policy-driven security framework that inspects traffic within the data center. For more information, see the Flow Microsegmentation Guide.

Security Management Using Identity and Access Management (Prism Central)

Enabled and administered from Prism Central, Identity and Access Management (IAM) is an authentication and authorization feature that uses attribute-based access control (ABAC). It is disabled by default. This section describes Prism Central IAM prerequisites, enablement, and SAML-based standard-compliant identity provider (IDP) configuration.

After you enable the Micro Services Infrastructure (CMSP) on Prism Central, IAM is automatically enabled. You can configure a wider selection of identity providers, including Security Assertion Markup Language (SAML) based identity providers. The Prism Central web console presents an updated sign-on/authentication page.

The enablement process migrates existing directory, identity provider, and user configurations, including Common Access Card (CAC) client authentication configurations. After enabling IAM, if you want to enable a client to authenticate by using certificates, you must also enable CAC authentication. For more information, see Identity and Access Management Prerequisites and Considerations. Also, see the Identity and Access Management Software Support topic in the Prism Central Release Notes for specific support requirements.

The workflows for creating authentication configurations and providing user and role access described in Configuring Authentication are the same whether IAM is enabled or not.

IAM Features

Highly Scalable Architecture

Based on the Kubernetes open source platform, IAM uses independent pods for authentication (AuthN), authorization (AuthZ), and IAM data storage and replication.

  • Each pod automatically scales independently of Prism Central when required. No user intervention or control is required.
  • When new features or functions are available, you can update IAM pods independently of Prism Central updates through Life Cycle Manager (LCM).
  • IAM uses a rolling upgrade method to help ensure zero downtime.
Secure by Design
  • Mutual TLS authentication (mTLS) secures IAM component communication.
  • The Micro Services infrastructure (CMSP) on Prism Central provisions certificates for mTLS.
More SAML Identity Providers (IDP)

Without CMSP/IAM enabled on Prism Central, Active Directory Federation Services (ADFS) is the only supported IDP for single sign-on. After you enable CMSP and IAM, IAM supports more IDPs. Nutanix has tested the following IDPs with SAML IDP authentication configured for Prism Central.

  • Active Directory Federation Services (ADFS)
  • Azure Active Directory Federation Services (Azure ADFS)
  • Okta
  • PingOne
  • Shibboleth
  • KeyCloak

Users can log on from the Prism Central web console only. IDP-initiated authentication workflows are not supported. That is, logging on or signing on from an IDP web page or site is not supported.

Updated Authentication Page

After enabling IAM, the Prism Central login page is updated depending on your configuration. For example, if you have configured local user account and Active Directory authentication, this default page appears for directory users as follows. To log in as a local user, click the Log In with your Nutanix Local Account link.

Figure. Sample Default Prism Central IAM Logon Page, Active Directory and Local User Authentication

In another example, if you have configured SAML authentication instances named Shibboleth and AD2, Prism Central displays this page.

Figure. Sample Prism Central IAM Logon Page, Active Directory, Identity Provider, and Local User Authentication

Note: After upgrading to pc.2022.9, if a Security Assertion Markup Language (SAML) IDP is configured, you must download the Prism Central metadata and reconfigure the SAML IDP to recognize Prism Central as the service provider. See Updating ADFS When Using SAML Authentication to create the required rules for ADFS.

Identity and Access Management Prerequisites and Considerations

IAM Prerequisites

For specific minimum software support and requirements for IAM, see the Prism Central release notes.

For microservices infrastructure requirements, see Enabling Microservices Infrastructure in the Prism Central Guide .

Prism Central
  • Ensure that Prism Central is hosted on an AOS cluster running AHV.
  • Ensure that you have created a Virtual IP address (VIP) for Prism Central. The Acropolis Upgrade Guide describes how to set the VIP for the Prism Central VM. Once set, do not change this address.
  • Ensure that you have created a fully qualified domain name (FQDN) for Prism Central. Once the Prism Central FQDN is set, do not change it. For more information about how to set the FQDN in the Cluster Details window, see Managing Prism Central in the Prism Central Guide .
  • When microservices infrastructure is enabled on a Prism Central scale-out three-node deployment, reconfiguring the IP address and gateway of the Prism Central VMs is not supported.
  • Ensure connectivity between Prism Central and its managed Prism Element clusters.
  • Enable Microservices Infrastructure on Prism Central (CMSP) first to enable and use IAM. For more information, see Enabling Microservices Infrastructure in the Prism Central Guide .
  • IAM supports small or large single PC VM deployments. However, you cannot expand the single VM deployment to a scale-out three-node deployment.
  • IAM supports scale-out three-node PC VMs deployments. Reverting this deployment to a single PC VM deployment is not supported.
  • Make sure Prism Central is managing at least one Prism Element cluster. For more information about how to register a cluster, see Register (Unregister) Cluster with Prism Central in the Prism Central Guide .
  • You cannot unregister the Prism Element cluster that is hosting the Prism Central deployment where you have enabled CMSP and IAM. You can unregister other clusters being managed by this Prism Central deployment.
Prism Element Clusters

Ensure that you have configured the following cluster settings. For more information, see Modifying Cluster Details in Prism Web Console Guide .

  • Virtual IP address (VIP). Once set, do not change this address.
  • iSCSI data services IP address (DSIP). Once set, do not change this address.
  • NTP server
  • Name server

IAM Considerations

Existing Authentication and Authorization Migrated After Enabling IAM
  • When you enable IAM by enabling CMSP, IAM migrates existing authentication and authorization configurations, including Common Access Card client authentication configurations.
Upgrading Prism Central After Enabling IAM
  • After you upgrade Prism Central, if CMSP (and therefore IAM) was previously enabled, both services remain enabled by default. Contact Nutanix Support for any custom requirements.
Note: After upgrading to pc.2022.9, if a Security Assertion Markup Language (SAML) IDP is configured, you must download the Prism Central metadata and reconfigure the SAML IDP to recognize Prism Central as the service provider. See Updating ADFS When Using SAML Authentication to create the required rules for ADFS.
User Session Lifetime
  • Each session has a maximum lifetime of 8 hours.
  • Session idle time is 15 minutes. After 15 minutes, a user or client is logged out and must re-authenticate.
Client Authentication and Common Access Card (CAC) Support
  • IAM supports deployments where CAC authentication and client authentication are enabled on Prism Central. After enabling IAM, however, Prism Central supports client authentication only if CAC authentication is also enabled. You can enable client authentication if you also enable CAC authentication.
  • Ensure that port 9441 is open in your firewall if you are using CAC client authentication.
Hypervisor Support
  • You can deploy IAM on an on-premises Prism Central (PC) deployment hosted on an AOS cluster running AHV. Clusters running other hypervisors are not supported.

Enabling IAM

Before you begin

  • IAM on Prism Central is disabled by default. When you enable the Micro Services Infrastructure on Prism Central, IAM is automatically enabled.
  • See Enabling Microservices Infrastructure in the Prism Central Guide .
  • See Identity and Access Management Prerequisites and Considerations and also the Identity and Access Management Software Support topic in the Prism Central release notes for specific support requirements.

Procedure

  1. Enable Micro Services Infrastructure on Prism Central as described in Enabling Micro Services Infrastructure in the Prism Central Guide .
  2. To view task status:
    1. Open a web browser and log in to the Prism Central web console.
    2. Go to the Activity > Tasks dashboard and find the IAM Migration & Bootstrap task.
    The task takes up to 60 minutes to complete. Part of the task is migrating existing authentication configurations.
  3. After the enablement tasks are completed, including the IAM Migration & Bootstrap task, log out of Prism Central. Wait at least 15 minutes before logging on to Prism Central.

    The Prism Central web console shows a new login page as shown below. This confirms that IAM is enabled.

    Note:

    Depending on your existing authentication configuration, the login page might look different.

    Also, you can go to the Settings > Prism Central Management page to verify whether Prism Central on Microservices Infrastructure (CMSP) is enabled. CMSP and IAM enablement happen together.

    Figure. Sample Prism Central IAM Logon Page, showing the new credential fields

What to do next

Configure authentication and access. If you are implementing SAML authentication with Active Directory Federated Services (ADFS), see Updating ADFS When Using SAML Authentication.

Configuring Authentication

Caution: Prism Central does not allow the use of the insecure SSLv2 and SSLv3 protocols. To eliminate the possibility of an SSL fallback situation, which can result in denied access to Prism Central, disable (uncheck) SSLv2 and SSLv3 in any browser used for access. TLS must remain enabled (checked).
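The same restriction applies to scripted clients. As an illustrative sketch (an assumption about client-side tooling, not a Nutanix requirement beyond the caution above), Python's `ssl` module can build a TLS-only context that refuses the legacy protocols:

```python
import ssl

# create_default_context() disables SSLv2/SSLv3 and verifies certificates
# by default; pinning the minimum version to TLS 1.2 makes the intent explicit.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any connection attempt through this context negotiates TLS 1.2 or later, or fails.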

Prism Central supports user authentication with these authentication options:

  • SAML authentication. Users can authenticate through a supported identity provider when SAML support is enabled for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between two parties: an identity provider (IDP) and Prism Central as the service provider.

    With IAM, in addition to ADFS, other IDPs are available. For more information, see Security Management Using Identity and Access Management (Prism Central) and Updating ADFS When Using SAML Authentication.

  • Local user authentication. Users can authenticate if they have a local Prism Central account. For more information, see Managing Local User Accounts.
  • Active Directory authentication. Users can authenticate using their Active Directory (or OpenLDAP) credentials when Active Directory support is enabled for Prism Central.

Enabling and Configuring Client Authentication/CAC

Before you begin

  • If you have enabled Identity and Access Management (IAM) on Prism Central as described in Enabling IAM and want to enable a client to authenticate by using certificates, you must also enable CAC authentication.
  • Ensure that port 9441 is open in your firewall if you are using CAC client authentication. After enabling CAC client authentication, your CAC logon redirects the browser to use port 9441.
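A quick way to confirm that port 9441 is reachable from a client machine is a TCP connect test. The helper below is an illustrative sketch, not a Nutanix tool:

```python
import socket

def port_open(host: str, port: int = 9441, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("prism-central.example.com")` returning False (the hostname here is a placeholder) suggests a firewall is still blocking the CAC client authentication port.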

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. Click the Client tab, then do the following steps.
    1. Select the Configure Client Chain Certificate check box.
    2. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note: Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
    3. To enable client authentication, click Enable Client Authentication.
    4. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication to disable client authentication.
      • Click Remove to delete the current certificate. (This deletion also disables client authentication.)
      • To enable OCSP or CRL-based certificate revocation checking, see Certificate Revocation Checking.

    Client authentication allows you to access Prism securely by exchanging a digital certificate. Prism validates that the certificate is signed by your organization's trusted signing certificate.

    Client authentication ensures that the Nutanix cluster gets a valid certificate from the user. Normally, a one-way authentication process occurs where the server provides a certificate so the user can verify the authenticity of the server. When client authentication is enabled, this becomes a two-way authentication where the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console either by installing the certificate on the local machine or by providing it through a smart card reader.

    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
  3. To specify a service account that the Prism Central web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box, and then do the following in the indicated fields:
    1. Directory: Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username: Enter the user name, in the username@domain.com format, that you want the web console to use to log in to the Active Directory.
    3. Service Password: Enter the password for the service user name.
    4. Click Enable CAC Authentication.
      Note: For federal customers only.
      Note: The Prism Central console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:

    • Validates that the certificate has been signed by your organization’s trusted signing certificate.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of an account within the Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Central supports both certificate authentication and basic authentication so that it can handle Prism Central login using a certificate while still allowing the REST API to use basic authentication. The REST API cannot use CAC certificates. With this behavior, if a certificate is presented during Prism Central login, certificate authentication is used; if no certificate is presented, basic authentication is enforced.
    If you map a Prism Central role to a CAC user and not to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page. The web console login page is not displayed.
    Note: If you have logged on to Prism Central by using CAC authentication, to successfully log out of Prism Central, close the browser after you click Log Out.
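Uploaded client chain certificates must be PEM encoded (step 2 above). A minimal sanity check for that format, an illustrative sketch and not how Prism validates uploads, looks for a base64 body between BEGIN/END markers:

```python
import re

# A PEM block: BEGIN marker, base64 body, matching END marker.
_PEM_RE = re.compile(
    r"-----BEGIN ([A-Z ]+)-----\r?\n[A-Za-z0-9+/=\r\n]+-----END \1-----")

def looks_pem_encoded(data: bytes) -> bool:
    """True if the bytes contain at least one PEM block (e.g. a certificate)."""
    try:
        text = data.decode("ascii")
    except UnicodeDecodeError:
        # Binary content (such as DER) is not PEM.
        return False
    return bool(_PEM_RE.search(text))
```

Running this against a certificate file before uploading can catch the common mistake of exporting a binary DER file instead of PEM.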

Updating ADFS When Using SAML Authentication

With Nutanix IAM enabled, to maintain compatibility with new and existing IDP/SAML authentication configurations, update your Active Directory Federated Services (ADFS) configuration, specifically the Prism Central Relying Party Trust settings. In these configurations, SAML is the open standard used for exchanging authentication and authorization data between ADFS as the identity provider (IDP) and Prism Central as the service provider.

About this task

In your ADFS Server configuration, update the Prism Central Relying Party Trust settings by creating claim rules to send the selected LDAP attribute as the SAML NameID in Email address format. For example, map the User Principal Name to NameID in the SAML assertion claims.

As an example, this topic uses UPN as the LDAP Attribute to map. You could also map the email address attribute to NameID. See the Microsoft Active Directory Federation Services documentation for details about creating a claims aware Relying Party Trust and claims rules.

Procedure

  1. In the Relying Party Trust for Prism Central, configure a claims issuance policy with two rules.
    1. One rule based on the Send LDAP Attributes as Claims template.
    2. One rule based on the Transform an Incoming Claim template.
  2. For the rule using the Send LDAP Attributes as Claims template, select the LDAP Attribute as User-Principal-Name and set Outgoing Claim Type to UPN.
    For user group configuration using the Send LDAP Attributes as Claims template, select the LDAP Attribute as Token-Groups - Unqualified-Names and set Outgoing Claim Type to Group.
  3. For the rule using the Transform an Incoming Claim template:
    1. Set Incoming claim type to UPN.
    2. Set the Outgoing claim type to Name ID.
    3. Set the Outgoing name ID format to Email.
    4. Select Pass through all claim values.
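The net effect of these rules is an assertion whose NameID carries the UPN in Email format. What a service provider checks when consuming such an assertion can be sketched as follows; this is illustrative only (the element names follow the SAML 2.0 assertion schema, and the helper is not Nutanix code):

```python
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
EMAIL_FMT = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"

def extract_nameid(assertion_xml: str):
    """Return the NameID value if present in Email format, else None."""
    root = ET.fromstring(assertion_xml)
    nid = root.find(f".//{{{SAML_NS}}}NameID")
    if nid is None or nid.get("Format") != EMAIL_FMT:
        return None
    return nid.text
```

If the claim rules are misconfigured, the NameID element is absent or carries the wrong Format attribute, and role mapping against it fails.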

Adding a SAML-based Identity Provider

About this task

If you do not enable Nutanix Identity and Access Management (IAM) on Prism Central, ADFS is the only supported identity provider (IDP) for Single Sign-on and only one IDP is allowed at a time. If you enable IAM, additional IDPs are available. See Security Management Using Identity and Access Management (Prism Central) and also Updating ADFS When Using SAML Authentication.

Before you begin

  • An identity provider (typically a server or other computer) is the system that provides authentication through a SAML request. There are various implementations that can provide authentication services in line with the SAML standard.
  • If you enable IAM by enabling CMSP, you can specify other tested standard-compliant IDPs in addition to ADFS. See also the Prism Central release notes topic Identity and Access Management Software Support for specific support requirements.

    Only one identity provider is allowed at a time, so if one was already configured, the + New IDP link does not appear.

  • You must configure the identity provider to return the NameID attribute in SAML response. The NameID attribute is used by Prism Central for role mapping. See Configuring Role Mapping for details.

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. To add a SAML-based identity provider, click the + New IDP link.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Configuration name: Enter a name for the identity provider. This name appears on the login authentication screen.
    2. Import Metadata: Click this radio button to upload a metadata file that contains the identity provider information.

      Identity providers typically provide an XML file on their website that includes metadata about that identity provider, which you can download from that site and then upload to Prism Central. Click + Import Metadata to open a search window on your local system and then select the target XML file that you downloaded previously. Click the Save button to save the configuration.

      Figure. Identity Provider Fields (metadata configuration) Click to enlarge

    This completes configuring an identity provider in Prism Central, but you must also configure the callback URL for Prism Central on the identity provider. To do this, click the Download Metadata link just below the Identity Providers table to download an XML file that describes Prism Central and then upload this metadata file to the identity provider.
  3. To edit an identity provider entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete an identity provider entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
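The metadata XML exchanged in the steps above has a predictable shape. Pulling out the two fields that matter most, the IDP entityID and its single sign-on URL, can be sketched as below (illustrative only; real metadata also carries signing certificates and binding attributes):

```python
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

def idp_summary(metadata_xml: str) -> dict:
    """Extract the entityID and first SingleSignOnService URL from IDP metadata."""
    root = ET.fromstring(metadata_xml)
    sso = root.find(f".//{{{MD_NS}}}SingleSignOnService")
    return {
        "entity_id": root.get("entityID"),
        "sso_url": None if sso is None else sso.get("Location"),
    }
```

Inspecting a downloaded metadata file this way before uploading it can confirm you exported the file from the intended IDP.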

Restoring Identity and Access Management Configuration Settings

Prism Central regularly backs up the Identity and Access Management (IAM) database, typically every 15 minutes. This procedure describes how to restore a specific IAM backup instance.

About this task

The IAM restore process restores any authentication and authorization configuration settings from the IAM database. You can choose an available time-stamped backup instance when you run the shell script in this procedure.

Procedure

  1. Log in to the Prism Central VM through an SSH session as the nutanix user.
  2. Run the restore shell script restore_iamv2.sh
    nutanix@pcvm$ sh /home/nutanix/cluster/bin/restore_iamv2.sh
    The script displays a numbered list of available backups, including the backup file time-stamp.
    Enter the Backup No. from the backup list (default is 1):
  3. Select a backup by number to start the restore process.
    The script displays a series of messages indicating restore progress, similar to:
    You Selected the Backup No 1
    Stopping the IAM services
    Waiting to stop all the IAM services and to start the restore process
    Restore Process Started
    Restore Process Completed
    ...
    Restarting the IAM services
    IAM Services Restarted Successfully

    After the script runs successfully, the command shell prompt returns and your IAM configuration is restored.

  4. To validate that your settings have been restored, log on to the Prism Central web console and go to Settings > Authentication and check the settings.

Accessing a List of Open Source Software Running on a Cluster

Use this procedure to access a text file that lists all of the open source software running on a cluster.

Procedure

  1. Log on to any Controller VM in the cluster as the admin user by using SSH.
  2. Access the text file by using the following command.
    less /usr/local/nutanix/license/blackduck_version_license.txt
Security Guide

AOS Security 6.5

Product Release Date: 2022-07-25

Last updated: 2022-12-14

Audience & Purpose

This Security Guide is intended for security-minded people responsible for architecting, managing, and supporting infrastructures, especially those who want to address security without adding more human resources or additional processes to their datacenters.

This guide offers an overview of the security development life cycle (SecDL) and the security features supported by Nutanix. It also demonstrates how Nutanix complies with security regulations to streamline infrastructure security management. In addition, this guide addresses site-specific technical requirements and compliance standards that must be adhered to but are not enabled by default.

Note:

Hardening of the guest OS or any applications running on top of the Nutanix infrastructure is beyond the scope of this guide. We recommend that you refer to the documentation of the products that you have deployed in your Nutanix environment.

Nutanix Security Infrastructure

Nutanix takes a holistic approach to security with a secure platform, extensive automation, and a robust partner ecosystem. The Nutanix security development life cycle (SecDL) integrates security into every step of product development, rather than applying it as an afterthought. The SecDL is a foundational part of product design. The strong pervasive culture and processes built around security harden the Enterprise Cloud Platform and eliminate zero-day vulnerabilities. Efficient one-click operations and self-healing security models easily enable automation to maintain security in an always-on hyperconverged solution.

Since traditional manual configuration and checks cannot keep up with the ever-growing list of security requirements, Nutanix conforms to RHEL 7 Security Technical Implementation Guides (STIGs) that use machine-readable code to automate compliance against rigorous common standards. With Nutanix Security Configuration Management Automation (SCMA), you can quickly and continually assess and remediate your platform to ensure that it meets or exceeds all regulatory requirements.

Nutanix has standardized the security profile of the Controller VM to a security compliance baseline that meets or exceeds the standard high-governance requirements.

The most commonly used references in the United States that guide vendors to build products according to a set of technical requirements are as follows.

  • The National Institute of Standards and Technology Special Publications Security and Privacy Controls for Federal Information Systems and Organizations (NIST 800.53)
  • The US Department of Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIG)

SCMA Implementation

The Nutanix platform and all products leverage the Security Configuration Management Automation (SCMA) framework to ensure that services are constantly inspected for variance to the security policy.

Nutanix has implemented security configuration management automation (SCMA) to check multiple security entities for both Nutanix storage and AHV. Nutanix automatically reports inconsistencies and reverts them to the baseline.

With SCMA, you can schedule the STIG to run hourly, daily, weekly, or monthly. STIG has the lowest system priority within the virtual storage controller, ensuring that security checks do not interfere with platform performance.
Note: Only the SCMA schedule can be modified. The AIDE schedule is run on a fixed weekly schedule. To change the SCMA schedule for AHV or the Controller VM, see Hardening Instructions (nCLI).

RHEL 7 STIG Implementation in Nutanix Controller VM

Nutanix leverages SaltStack and SCMA to self-heal any deviation from the security baseline configuration of the operating system and hypervisor to remain in compliance. If any component is found to be non-compliant, the component is set back to the supported security settings without any intervention. To achieve this objective, Nutanix has implemented the Controller VM to support STIG compliance with the RHEL 7 STIG as published by DISA.

The STIG rules are capable of securing the boot loader, packages, file system, booting and service control, file ownership, authentication, kernel, and logging.

Example: STIG rules for Authentication

Prohibit direct root login, lock system accounts other than root , enforce several password maintenance details, cautiously configure SSH, enable screen-locking, configure user shell defaults, and display warning banners.

Security Updates

Nutanix provides continuous fixes and updates to address threats and vulnerabilities. Nutanix Security Advisories provide detailed information on the available security fixes and updates, including the vulnerability description and affected product/version.

To see the list of security advisories or search for a specific advisory, log on to the Support Portal and select Documentation, and then Security Advisories.

Nutanix Security Landscape

This topic highlights the Nutanix security landscape. The following list identifies the security features offered out of the box in the Nutanix infrastructure.

  • Authentication and Authorization
  • Network segmentation: VLAN-based, data-driven segmentation
  • Security Policy Management: Implement security policies using Microsegmentation.
  • Data security and integrity
  • Hardening Instructions
  • Log monitoring and analysis
  • Flow Networking: See the Flow Networking Guide
  • UEFI: See UEFI Support for VMs in the AHV Administration Guide
  • Secure Boot: See Secure Boot Support for VMs in the AHV Administration Guide
  • Windows Credential Guard support: See Windows Defender Credential Guard Support in AHV in the AHV Administration Guide
  • RBAC: See Controlling User Access (RBAC)

Hardening Instructions (nCLI)

This chapter describes how to implement security hardening features for Nutanix AHV and Controller VM.

Hardening AHV

You can use the Nutanix Command Line Interface (nCLI) to customize the various configuration settings related to AHV, as described below.

Table 1. Configuration Settings to Harden the AHV
Description Command or Settings Output
Getting the cluster-wide configuration of the SCMA policy. Run the following command:
nutanix@cvm$ ncli cluster get-hypervisor-security-config
Enable Aide : false
Enable Core : false
Enable High Strength P... : false
Enable Banner : false
Schedule : DAILY
Enabling the Advanced Intrusion Detection Environment (AIDE) to run on a weekly basis. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-aide=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : false
Enable Banner : false
Schedule : DAILY 
Enabling the high-strength password policies (minlen=15, difok=8, maxclassrepeat=4). Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params \
enable-high-strength-password=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : false
Schedule : DAILY
Enabling the US Department of Defense (DoD) consent banner. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-banner=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : DAILY
Changing the default schedule of running the SCMA. The schedule can be hourly, daily, weekly, or monthly. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params schedule=hourly
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
Enabling the settings so that AHV can generate stack traces for any cluster issue. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-core=true
Note: Nutanix recommends that Core should not be set to true unless instructed by the Nutanix support team.
Enable Aide : true
Enable Core : true
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
Configuring security levels for the nutanix user for ssh login to the Nutanix Cluster. Run the following command:
nutanix@cvm$ ncli cluster edit-cvm-security-params ssh-security-level=limited
Enabling locking of the security configuration. Run the following command:
nutanix@cvm$ ncli cluster edit-cvm-security-params enable-lock-status=true
When a high-governance official needs to run the hardened configuration, the settings should be as follows:
Enable Aide               : true
    Enable Core               : false
    Enable High Strength P... : true
    Enable Banner             : false
    Enable SNMPv3 Only        : true
    Schedule                  : HOURLY
    Enable Kernel Mitigations : false
    SSH Security Level        : LIMITED
    Enable Lock Status        : true
    Enable Kernel Core        : true
When a federal official needs to run the hardened configuration, the settings should be as follows:
Enable Aide               : true
    Enable Core               : false
    Enable High Strength P... : true
    Enable Banner             : true
    Enable SNMPv3 Only        : true
    Schedule                  : HOURLY
    Enable Kernel Mitigations : false
    SSH Security Level        : LIMITED
    Enable Lock Status        : true
    Enable Kernel Core        : true
Note: A banner file can be modified to support non-DoD customer banners.
Backing up the DoD banner file. Run the following command on the AHV host:
[root@AHV-host ~]# cp -a /etc/puppet/modules/kvm/files/issue.DoD \
/etc/puppet/modules/kvm/files/issue.DoD.bak
Important: Any changes in the banner file are not preserved across upgrades.
Modifying the DoD banner file. Run the following command on the AHV host:
[root@AHV-host ~]# vi /etc/puppet/modules/kvm/files/issue.DoD
Note: Repeat all the above steps on every AHV host in the cluster.
Important: Any changes in the banner file are not preserved across upgrades.
Setting the banner for all nodes through nCLI. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-banner=true

The following options are configured or customized to harden the AHV:

  • Enable AIDE: Advanced Intrusion Detection Environment (AIDE) is a Linux utility that monitors a given node. After you install the AIDE package, generate a database that contains all the files you selected in your configuration file by entering the aide --init command as the root user. You can move the database to a secure location on read-only media or on another machine. After you create the database, use the aide --check command to check the integrity of the files and directories by comparing them with the snapshot in the database. If there are unexpected changes, a report is generated for you to review. If the changes to existing files, or any added files, are valid, use the aide --update command to update the database with the new changes.
  • Enable high-strength password: You can run the command shown in the table in this section to enable high-strength password policies (minlen=15, difok=8, maxclassrepeat=4).
    Note:
    • minlen is the minimum required length for a password.
    • difok is the minimum number of characters that must be different from the old password.
    • maxclassrepeat is the maximum number of consecutive characters of the same class that you can use in a password.
  • Enable Core: A core dump consists of the recorded state of the working memory of a computer program at a specific time, generally when the program crashes or terminates abnormally. Core dumps assist in diagnosing or debugging errors in computer programs. You can enable core dumps for troubleshooting purposes.
  • Enable Banner: You can set a banner to display a specific message. For example, set a banner to display a warning message that the system is available to authorized users only.
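The three password-policy parameters above can be made concrete with a checker. The sketch below is a simplified approximation of the pam_pwquality semantics (in particular, difok is approximated by counting distinct new characters absent from the old password); it is not the implementation Nutanix ships:

```python
def meets_policy(new: str, old: str,
                 minlen: int = 15, difok: int = 8,
                 maxclassrepeat: int = 4) -> bool:
    """Approximate check of the minlen/difok/maxclassrepeat password rules."""
    # minlen: minimum password length
    if len(new) < minlen:
        return False
    # difok (approximated): distinct characters not present in the old password
    if sum(1 for c in set(new) if c not in old) < difok:
        return False
    # maxclassrepeat: no run of more than N consecutive same-class characters
    def char_class(c: str) -> str:
        if c.islower():
            return "lower"
        if c.isupper():
            return "upper"
        if c.isdigit():
            return "digit"
        return "other"
    run = 1
    for prev, cur in zip(new, new[1:]):
        run = run + 1 if char_class(prev) == char_class(cur) else 1
        if run > maxclassrepeat:
            return False
    return True
```

A password of fifteen all-lowercase characters, for example, fails on maxclassrepeat even though it satisfies minlen.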

Hardening Controller VM

You can use the Nutanix Command Line Interface (nCLI) to customize the various configuration settings related to the Controller VM, as described below.

For the complete list of cluster security parameters, see Edit the security params of a Cluster in the Command Reference guide.

  • Run the following command to view the cluster-wide configuration of the SCMA policy.

    nutanix@cvm$ ncli cluster get-cvm-security-config

    The current cluster configuration is displayed.

    Enable Aide               : false
        Enable Core               : false
        Enable High Strength P... : false
        Enable Banner             : false
        Enable SNMPv3 Only        : false
        Schedule                  : DAILY
        Enable Kernel Mitigations : false
        SSH Security Level        : DEFAULT
        Enable Lock Status        : false
        Enable Kernel Core        : false
  • Run the following command to enable the Advanced Intrusion Detection Environment (AIDE), which runs on a fixed weekly schedule.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-aide=true

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : false
        Enable Banner             : false
        Enable SNMPv3 Only        : false
        Schedule                  : DAILY
        Enable Kernel Mitigations : false
        SSH Security Level        : DEFAULT
        Enable Lock Status        : false
        Enable Kernel Core        : false
  • Run the following command to enable the strong password policy.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-high-strength-password=true

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : false
        Enable SNMPv3 Only        : false
        Schedule                  : DAILY
        Enable Kernel Mitigations : false
        SSH Security Level        : DEFAULT
        Enable Lock Status        : false
        Enable Kernel Core        : false
  • Run the following command to enable the US Department of Defense (DoD) consent banner.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-banner=true

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : true
        Enable SNMPv3 Only        : false
        Schedule                  : DAILY
        Enable Kernel Mitigations : false
        SSH Security Level        : DEFAULT
        Enable Lock Status        : false
        Enable Kernel Core        : false
  • Run the following command to enable the settings to allow only SNMP version 3.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-snmpv3-only=true

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : true
        Enable SNMPv3 Only        : true
        Schedule                  : DAILY
        Enable Kernel Mitigations : false
        SSH Security Level        : DEFAULT
        Enable Lock Status        : false
        Enable Kernel Core        : false
  • Run the following command to change the default schedule of running the SCMA. The schedule can be hourly, daily, weekly, or monthly.

    nutanix@cvm$ ncli cluster edit-cvm-security-params schedule=hourly

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : true
        Enable SNMPv3 Only        : true
        Schedule                  : HOURLY
        Enable Kernel Mitigations : false
        SSH Security Level        : DEFAULT
        Enable Lock Status        : false
        Enable Kernel Core        : false
  • Run the following command to enable the settings so that the Controller VM can generate stack traces for any cluster issue.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-core=true

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : true
        Enable SNMPv3 Only        : true
        Schedule                  : HOURLY
        Enable Kernel Mitigations : false
        SSH Security Level        : DEFAULT
        Enable Lock Status        : false
        Enable Kernel Core        : true
    Note: Nutanix recommends that Core should not be set to true unless instructed by the Nutanix support team.
  • Run the following command to configure security levels for the nutanix user for ssh login to the Nutanix Cluster.

    nutanix@cvm$ ncli cluster edit-cvm-security-params ssh-security-level=limited

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : true
        Enable SNMPv3 Only        : true
        Schedule                  : HOURLY
        Enable Kernel Mitigations : false
        SSH Security Level        : LIMITED
        Enable Lock Status        : true
        Enable Kernel Core        : true
  • Run the following command to enable locking of the security configuration.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-lock-status=true

    The following output is displayed.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : true
        Enable SNMPv3 Only        : true
        Schedule                  : HOURLY
        Enable Kernel Mitigations : false
        SSH Security Level        : LIMITED
        Enable Lock Status        : true
        Enable Kernel Core        : true
    Note: If set to true, the configuration settings cannot be edited by the user, and you must contact Nutanix Support to unlock the configuration.

Scenario-Based Hardening

  • For a high-governance compliance environment, the hardened configuration settings should be as follows.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : false
        Enable SNMPv3 Only        : true
        Schedule                  : HOURLY
        Enable Kernel Mitigations : false
        SSH Security Level        : LIMITED
        Enable Lock Status        : true
        Enable Kernel Core        : true
  • For a federal compliance environment, the hardened configuration settings should be as follows.

    Enable Aide               : true
        Enable Core               : false
        Enable High Strength P... : true
        Enable Banner             : true
        Enable SNMPv3 Only        : true
        Schedule                  : HOURLY
        Enable Kernel Mitigations : false
        SSH Security Level        : LIMITED
        Enable Lock Status        : true
        Enable Kernel Core        : true

DoD Banner Configuration

  • Note: A banner file can be modified to support non-DoD customer banners.
  • Run the following command to back up the DoD banner file.

    nutanix@cvm$ sudo cp -a /srv/salt/security/CVM/sshd/DODbanner \
    /srv/salt/security/CVM/sshd/DODbannerbak
  • Run the following command to modify the DoD banner file.

    nutanix@cvm$ sudo vi /srv/salt/security/CVM/sshd/DODbanner
    Note: Repeat the above steps on every Controller VM in the cluster.
  • Run the following command to back up the DoD banner file of the Prism Central VM.

    nutanix@pcvm$ sudo cp -a /srv/salt/security/PC/sshd/DODbanner \
    /srv/salt/security/PC/sshd/DODbannerbak
  • Run the following command to modify the DoD banner file of the Prism Central VM.

    nutanix@pcvm$ sudo vi /srv/salt/security/PC/sshd/DODbanner
  • Run the following command to set the banner for all nodes through nCLI.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-banner=true
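
A custom banner is plain text. As a sketch, a non-DoD banner file (the wording here is hypothetical; substitute your organization's approved notice) might contain:

    *****************************************************************
    * This system is for authorized use only. Activity on this     *
    * system may be monitored and recorded.                        *
    *****************************************************************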

TCP Wrapper Integration

The Nutanix Controller VM uses the tcp_wrappers package so that TCP-supported daemons can control which network subnets access the libwrapped daemons. By default, SCMA controls the /etc/hosts.allow file (sourced from /srv/salt/security/CVM/network/hosts.allow), which contains generic entries that allow access to NFS, secure shell, and SNMP.

sshd: ALL : ALLOW
rpcbind: ALL : ALLOW
snmpd: ALL : ALLOW
snmptrapd: ALL : ALLOW

Nutanix recommends changing the above configuration to include only the localhost entries and the management network subnet for restricted operations; this applies to both production and high-governance compliance environments. Ensure that all subnets used to communicate with the CVMs are included in the /etc/hosts.allow file.
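
For example, assuming a hypothetical management subnet of 10.10.20.0/24, a restricted configuration that allows only localhost and the management subnet might look like the following (the subnet value is an assumption; substitute your own):

    sshd: 127.0.0.1 : ALLOW
    sshd: 10.10.20.0/255.255.255.0 : ALLOW
    rpcbind: 10.10.20.0/255.255.255.0 : ALLOW
    snmpd: 10.10.20.0/255.255.255.0 : ALLOW
    snmptrapd: 10.10.20.0/255.255.255.0 : ALLOW
    ALL: ALL : DENY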

Common Criteria

Common Criteria is an international security certification that is recognized by many countries around the world. Nutanix AOS and AHV are Common Criteria certified by default and no additional configuration is required to enable the Common Criteria mode. For more information, see the Nutanix Trust website.
Note: Nutanix uses FIPS-validated cryptography by default.

Securing AHV VMs with Virtual Trusted Platform Module (vTPM)

Overview

A Trusted Platform Module (TPM) is used to manage cryptographic keys for security services such as encryption and hardware (and software) integrity protection. AHV vTPM is a software-based emulation of the TPM 2.0 specification that works as a virtual device.
Note: AHV vTPM does NOT require OR use a hardware TPM.
You can use the AHV vTPM feature to secure virtual machines running on AHV.
Note: You can enable vTPM using aCLI only.

vTPM Use Cases

AHV vTPM provides virtualization-based security support for the following primary use cases.

  • Support for storing cryptographic keys and certificates for Microsoft Windows BitLocker
  • TPM protection for storing VBS encryption keys for Windows Defender Credential Guard
See Microsoft Documentation for details on Microsoft Windows Defender Credential Guard and Microsoft Windows BitLocker.
Tip: Windows 11 installation requires TPM 2.0; see the Microsoft website for Windows 11 specifications, features, and computer requirements.

Considerations for Enabling vTPM in AHV VMs

Requirements

Supported Software Versions:

  • AHV version 20220304.242 or above
  • AOS version 6.5.1 or above

VM Requirements:

  • You must enable UEFI on the VM on which you want to enable vTPM, see UEFI Support for VM.
  • You must enable Secure Boot (applicable if using Microsoft Windows BitLocker), see Secure Boot Support for VMs.

Limitations

  • All Secure Boot limitations apply to vTPM VMs.
  • Disaster recovery is not supported for vTPM VMs.

Creating AHV VMs with vTPM (aCLI)

About this task

You can create a virtual machine with the vTPM configuration enabled using the following aCLI procedure.

Procedure

  1. Log on to any Controller VM in the cluster with SSH.
  2. At the CVM prompt, type acli to enter the Acropolis CLI mode.
  3. Create a VM with the required configuration using one of the following methods.
    • Create a VM using the Prism Element or Prism Central web console. If you create the VM using Prism Element or Prism Central, proceed to step 4.
      Note: For simplicity, it is recommended to use Prism Element or Prism Central web console to create VMs, see Creating a VM.
    • Create a VM using aCLI. You can enable vTPM at the time of creating the VM. To enable vTPM during VM creation, do the following and proceed to step 5 (skip step 4).

      Use the "vm.create" command with required arguments to create a VM. For details on VM creation command ("vm.create") and supported arguments using aCLI, see "vm" in the Command Reference Guide.

      acli> vm.create <vm-name> machine_type=q35 uefi_boot=true secure_boot=true virtual_tpm=true <argument(s)>

      Replace <vm-name> with the name of the VM and <argument(s)> with one or more arguments as needed for your VM.

  4. Enable vTPM.
    acli> vm.update <vm-name> virtual_tpm=true

    In the above command, replace <vm-name> with the name of the newly created VM.

  5. Start the VM.
    acli> vm.on <vm-name>
    Replace <vm-name> with the name of the VM.
    vTPM is enabled on the VM.

Enabling vTPM for Existing AHV VMs (aCLI)

About this task

You can update the settings of an existing virtual machine (that satisfies vTPM requirements) to enable vTPM using the following aCLI procedure.

Procedure

  1. Log on to any Controller VM in the cluster with SSH.
  2. At the CVM prompt, type acli to enter the Acropolis CLI mode.
  3. Shut down the VM to enforce an update on the VM.
    acli> vm.shutdown <vm-name>
    Replace <vm-name> with the name of the VM.
  4. Enable vTPM.
    acli> vm.update <vm-name> virtual_tpm=true

    Replace <vm-name> with the name of the VM.

  5. Start the VM.
    acli> vm.on <vm-name>
    Replace <vm-name> with the name of the VM.
    vTPM is enabled on the VM.

Security Management Using Prism Element (PE)

Nutanix provides several mechanisms to maintain security in a cluster using Prism Element.

Configuring Authentication

About this task

Nutanix supports user authentication. To configure authentication types and directories, and optionally to enable client authentication, do the following:
Caution: The web console (and nCLI) does not allow the use of the insecure SSLv2 and SSLv3 protocols. An SSL fallback situation in some browsers can deny access to the web console. To eliminate this possibility, disable (uncheck) SSLv2 and SSLv3 in any browser used for access. However, TLS must be enabled (checked).

Procedure

  1. Click the gear icon in the main menu and then select Authentication in the Settings page.
    The Authentication Configuration window appears.
    Note: The following steps combine three distinct procedures, enabling authentication (step 2), configuring one or more directories for LDAP/S authentication (steps 3-5), and enabling client authentication (step 6). Perform the steps for the procedures you need. For example, perform step 6 only if you intend to enforce client authentication.
  2. To enable server authentication, click the Authentication Types tab and then check the box for either Local or Directory Service (or both). After selecting the authentication types, click the Save button.
    The Local setting uses the local authentication provided by Nutanix (see User Management). This method is employed when a user enters just a login name without specifying a domain (for example, user1 instead of user1@nutanix.com). The Directory Service setting validates user@domain entries against the directory specified in the Directory List tab. Therefore, you must configure an authentication directory if you select Directory Service in this field.
    Figure. Authentication Types Tab
    Note: The Nutanix admin user can log on to the management interfaces, including the web console, even if the Local authentication type is disabled.
  3. To add an authentication directory, click the Directory List tab and then click the New Directory option.
    A set of fields is displayed. Do the following in the indicated fields:
    1. Directory Type : Select one of the following from the pull-down list.
      • Active Directory : Active Directory (AD) is a directory service implemented by Microsoft for Windows domain networks.
        Note:
        • Users with the "User must change password at next logon" attribute enabled cannot authenticate to the web console (or nCLI). Ensure that users with this attribute first log in to a domain workstation and change their password before accessing the web console. Also, if SSL is enabled on the Active Directory server, ensure that Nutanix has access to that port (open it in the firewall).
        • An Active Directory user name or group name containing spaces is not supported for Prism Element authentication.
        • Active Directory domain created by using non-ASCII text may not be supported. For more information about usage of ASCII or non-ASCII text in Active Directory configuration, see the Internationalization (i18n) section.
        • Use of the "Protected Users" group is currently unsupported for Prism authentication. For more details on the "Protected Users" group, see “Guidance about how to configure protected accounts” on Microsoft documentation website.
        • The Microsoft AD is LDAP v2 and LDAP v3 compliant.
        • The Microsoft AD servers supported are Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.
      • OpenLDAP : OpenLDAP is a free, open source directory service, which uses the Lightweight Directory Access Protocol (LDAP), developed by the OpenLDAP project. Nutanix currently supports the OpenLDAP 2.4 release running on CentOS distributions only.
    2. Name : Enter a directory name.
      This is a name you choose to identify this entry; it need not be the name of an actual directory.
    3. Domain : Enter the domain name.
      Enter the domain name in DNS format, for example, nutanix.com .
    4. Directory URL : Enter the URL address to the directory.
      The URL format for an LDAP entry is: ldap://host:ldap_port_num. The host value is either the IP address or the fully qualified domain name. (In some environments, a simple domain name is sufficient.) The default LDAP port number is 389. Nutanix also supports LDAPS (port 636) and LDAP/S Global Catalog (ports 3268 and 3269). The following are example configurations appropriate for each port option:
      Note: LDAPS support does not require custom certificates or certificate trust import.
      • Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
        ldap://ad_server.mycompany.com:389
      • Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires all Active Directory Domain Controllers have properly installed SSL certificates.
        ldaps://ad_server.mycompany.com:636
        Note: The LDAP server SSL certificate must include a Subject Alternative Name (SAN) that matches the URL provided during the LDAPS setup.
      • Port 3268 (LDAP - GC). Use this port number when the configuration is multiple domain, single forest, and not using SSL.
      • Port 3269 (LDAPS - GC). Use this port number when the configuration is multiple domain, single forest, and using SSL.
        Note: When constructing your LDAP/S URL to use a Global Catalog server, ensure that the Domain Control IP address or name being used is a global catalog server within the domain being configured. If not, queries over 3268/3269 may fail.
        Note: When querying the global catalog, the users sAMAccountName field must be unique across the AD forest. If the sAMAccountName field is not unique across the subdomains, authentication may fail intermittently or consistently.
      Note: For the complete list of required ports, see Port Reference .
    5. (OpenLDAP only) Configure the following additional fields:
      1. User Object Class : Enter the value that uniquely identifies the object class of a user.
      2. User Search Base : Enter the base domain name in which the users are configured.
      3. Username Attribute : Enter the attribute to uniquely identify a user.
      4. Group Object Class : Enter the value that uniquely identifies the object class of a group.
      5. Group Search Base : Enter the base domain name in which the groups are configured.
      6. Group Member Attribute : Enter the attribute that identifies users in a group.
      7. Group Member Attribute Value : Enter the attribute that identifies the users provided as value for Group Member Attribute .
    6. Search Type : Select how to search your directory when authenticating. Choose Non Recursive if you experience slow directory logon performance. For this option, ensure that users listed in Role Mapping are listed flatly in the group (that is, not nested). Otherwise, choose the default Recursive option.
    7. Service Account Username : Enter the service account user name in the user_name@domain.com format that you want the web console to use to log in to the Active Directory.

      A service account is created to run only a particular service or application with the credentials specified for the account. According to the requirement of the service or application, the administrator can limit access to the service account.

      A service account is under the Managed Service Accounts in the Active Directory server. An application or service uses the service account to interact with the operating system. Enter your Active Directory service account credentials in this (username) and the following (password) field.

      Note: Be sure to update the service account credentials here whenever the service account password changes or when a different service account is used.
    8. Service Account Password : Enter the service account password.
    9. When all the fields are correct, click the Save button (lower right).
      This saves the configuration and redisplays the Authentication Configuration dialog box. The configured directory now appears in the Directory List tab.
    10. Repeat this step for each authentication directory you want to add.
    Note:
    • The Controller VMs need access to the Active Directory server, so open the standard Active Directory ports to each Controller VM in the cluster (and the virtual IP if one is configured).
    • No permissions are granted to the directory users by default. To grant permissions to the directory users, you must specify roles for the users in that directory (see Assigning Role Permissions).
    • The service account for both Active Directory and OpenLDAP must have full read permissions on the directory service. Additionally, for successful Prism Element authentication, users must also have search or read privileges.
    Figure. Directory List Tab
  4. To edit a directory entry, click the Directory List tab and then click the pencil icon for that entry.
    After clicking the pencil icon, the Directory List fields reappear (see step 3). Enter the new information in the appropriate fields and then click the Save button.
  5. To delete a directory entry, click the Directory List tab and then click the X icon for that entry.
    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
  6. To enable client authentication, do the following:
    1. Click the Client tab.
    2. Select the Configure Client Chain Certificate check box.
      Client Chain Certificate is a list of certificates that includes all intermediate CA and root-CA certificates.
      Note: To authenticate on Prism Element with a client chain certificate, the 'Subject name' field must be present in the certificate. The subject name must match the userPrincipalName (UPN) in AD. The UPN is a user name with a domain address, for example, user1@nutanix.com.
      Figure. Client Tab (1)
    3. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note: Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
      Figure. Client Tab (2)
    4. To enable client authentication, click Enable Client Authentication .
    5. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication to disable client authentication.
      • Click Remove to delete the current certificate. (This also disables client authentication.)
      • To enable OCSP or CRL based certificate revocation checking, see Certificate Revocation Checking.
      Figure. Authentication Window: Client Tab (3)

    Client authentication allows you to securely access Prism by exchanging a digital certificate. Prism validates that the certificate is signed by your organization’s trusted signing certificate.

    Client authentication ensures that the Nutanix cluster gets a valid certificate from the user. Normally, a one-way authentication process occurs where the server provides a certificate so the user can verify the authenticity of the server (see Installing an SSL Certificate). When client authentication is enabled, this becomes a two-way authentication where the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console, either by installing the certificate on their local machine or by providing it through a smart card reader. Providing a valid certificate enables login from a client machine with the relevant user certificate without using a user name and password. If the user must log in from a client machine that does not have the certificate installed, authentication using a user name and password is still available.
    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
  7. To specify a service account that the web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box, and then do the following in the indicated fields:
    Figure. Common Access Card Authentication
    1. Directory : Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username : Enter the user name in the user name@domain.com format that you want the web console to use to log in to the Active Directory.
    3. Service Password : Enter the password for the service user name.
    4. Click Enable CAC Authentication .
      Note: For federal customers only.
      Note: The web console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:
    • Validates that the certificate has been signed by your organization’s trusted signing certificate.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of an account within the Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Element supports both certificate authentication and basic authentication, so that Prism Element login can use a certificate while the REST API uses basic authentication. The REST API cannot use CAC certificates. With this behavior, if a certificate is presented during Prism Element login, certificate authentication is used; if no certificate is present, basic authentication is enforced.
    Note: Nutanix Prism does not support OpenLDAP as directory service for CAC.
    If you map a Prism role to a CAC user and not to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page. The web console login page is not displayed.
    Note: If you have logged on to Prism by using CAC authentication, to successfully log out of Prism, close the browser after you click Log Out .
  8. Click the Close button to close the Authentication Configuration dialog box.

Assigning Role Permissions

About this task

When user authentication is enabled for a directory service (see Configuring Authentication), the directory users do not have any permissions by default. To grant permissions to the directory users, you must specify roles for the users (with associated permissions) to organizational units (OUs), groups, or individuals within a directory.

If you are using Active Directory, you must also assign roles to entities or users, especially before upgrading from a previous AOS version.

To assign roles, do the following:

Procedure

  1. In the web console, click the gear icon in the main menu and then select Role Mapping in the Settings page.
    The Role Mapping window appears.
    Figure. Role Mapping Window
  2. To create a role mapping, click the New Mapping button.

    The Create Role Mapping window appears. Do the following in the indicated fields:

    1. Directory : Select the target directory from the pull-down list.

      Only directories previously defined when configuring authentication appear in this list. If the desired directory does not appear, add that directory to the directory list (see Configuring Authentication) and then return to this procedure.

    2. LDAP Type : Select the desired LDAP entity type from the pull-down list.

      The entity types are GROUP , USER , and OU .

    3. Role : Select the user role from the pull-down list.
      There are three roles from which to choose:
      • Viewer : This role allows a user to view information only. It does not provide permission to perform any administrative tasks.
      • Cluster Admin : This role allows a user to view information and perform any administrative task (but not create or modify user accounts).
      • User Admin : This role allows the user to view information, perform any administrative task, and create or modify user accounts.
    4. Values : Enter the case-sensitive entity names (in a comma separated list with no spaces) that should be assigned this role.
      The values are the actual names of the organizational units (meaning it applies to all users in those OUs), groups (all users in those groups), or users (each named user) assigned this role. For example, entering value " admin-gp,support-gp " when the LDAP type is GROUP and the role is Cluster Admin means all users in the admin-gp and support-gp groups should be assigned the cluster administrator role.
      Note:
      • Do not include a domain in the value, for example enter just admin-gp , not admin-gp@nutanix.com . However, when users log into the web console, they need to include the domain in their user name.
      • The AD user UPN must be in the user@domain_name format.
      • When an admin defines user role mapping using an AD with forest setup, the admin can map to the user with the same name from any domain in the forest setup. To avoid this case, set up the user-role mapping with AD that has a specific domain setup.
    5. When all the fields are correct, click Save .
      This saves the configuration and redisplays the Role Mapping window. The new role map now appears in the list.
      Note: All users in an authorized service directory have full administrator permissions when role mapping is not defined for that directory. However, after creating a role map, any users in that directory that are not explicitly granted permissions through the role mapping are denied access (no permissions).
    6. Repeat this step for each role map you want to add.
      You can create a role map for each authorized directory. You can also create multiple maps that apply to a single directory. When there are multiple maps for a directory, the most specific rule for a user applies. For example, adding a GROUP map set to Cluster Admin and a USER map set to Viewer for select users in that group means all users in the group have administrator permission except those specified users who have viewing permission only.
    Figure. Create Role Mapping Window
  3. To edit a role map entry, click the pencil icon for that entry.
    After clicking the pencil icon, the Edit Role Mapping window appears, which contains the same fields as the Create Role Mapping window (see step 2). Enter the new information in the appropriate fields and then click the Save button.
  4. To delete a role map entry, click the "X" icon for that entry.
    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
  5. Click the Close button to close the Role Mapping window.

Certificate Revocation Checking

Enabling Certificate Revocation Checking using Online Certificate Status Protocol (nCLI)

About this task

OCSP is the recommended method for checking certificate revocation in client authentication. You can enable certificate revocation checking using the OCSP method through the command line interface (nCLI).

To enable certificate revocation checking using OCSP for client authentication, do the following.

Procedure

  1. Set the OCSP responder URL.
    ncli authconfig set-certificate-revocation set-ocsp-responder=<ocsp url>
    Replace <ocsp url> with the location of the OCSP responder.
  2. Verify if OCSP checking is enabled.
    ncli authconfig get-client-authentication-config

    The expected output if certificate revocation checking is enabled successfully is as follows.

    Auth Config Status: true
    File Name: ca.cert.pem
    OCSP Responder URI: http://<ocsp-responder-url>

Enabling Certificate Revocation Checking using Certificate Revocation Lists (nCLI)

About this task

Note: OCSP is the recommended method for checking certificate revocation in client authentication.

You can use the CRL certificate revocation checking method if required, as described in this section.

To enable certificate revocation checking using CRL for client authentication, do the following.

Procedure

Specify all the CRLs that are required for certificate validation.
ncli authconfig set-certificate-revocation set-crl-uri=<uri 1>,<uri 2> set-crl-refresh-interval=<refresh interval in seconds> set-crl-expiration-interval=<expiration interval in seconds>
  • The above command resets any previous OCSP or CRL configurations.
  • The URIs must be percent-encoded and comma separated.
  • The CRLs are updated periodically as specified by the crl-refresh-interval value. This interval is common for the entire list of CRL distribution points. The default value for this is 86400 seconds (1 day).
  • The periodically refreshed CRLs are cached in memory for the duration specified by the set-crl-expiration-interval value and expire after that duration if a particular CRL distribution point is unreachable. This duration is configured for the entire list of CRL distribution points. The default value is 604800 seconds (7 days).
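
As noted above, the URIs must be percent-encoded. As a sketch (the URLs below are hypothetical), python3 can serve as a convenient encoder when building the comma-separated set-crl-uri value:

```shell
# Percent-encode two hypothetical CRL distribution point URLs; ':' and '/'
# are kept literal so the scheme and path separators survive encoding.
URI1=$(python3 -c "import urllib.parse; print(urllib.parse.quote('http://crl.example.com/root ca.crl', safe=':/'))")
URI2=$(python3 -c "import urllib.parse; print(urllib.parse.quote('http://crl.example.com/issuing.crl', safe=':/'))")
# Comma-separated value suitable for the set-crl-uri parameter:
echo "${URI1},${URI2}"
```

Here the space in "root ca.crl" becomes %20, and the already-safe second URL passes through unchanged.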

Authentication Best Practices

The authentication best practices listed here are guidance to secure the Nutanix platform by using the most common authentication security measures.

Emergency Local Account Usage

You must use the admin account as the local emergency account. The admin account ensures that both the Prism web console and the Controller VM remain available when external services such as Active Directory are unavailable.

Note: Local emergency account usage does not support any external access mechanisms, specifically external application authentication or external REST API authentication.

For all external authentication, you must configure the cluster to use an external IAM service such as Active Directory. You must create service accounts on the IAM service, and those accounts must be granted access to the cluster through the Prism web console user account management configuration.

Modifying Default Passwords

You must change the default Controller VM password for the nutanix user account, adhering to the password complexity requirements.

Procedure

  1. SSH to the Controller VM.
  2. Change the "nutanix" user account password.
    nutanix@cvm$ passwd nutanix
  3. Respond to the prompts and provide the current and new password for the nutanix user.
    Changing password for nutanix.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
    Note:
    • Changing the user account password on one of the Controller VMs is applied to all Controller VMs in the cluster.
    • Ensure that you preserve the modified nutanix user password, since the local authentication (PAM) module requires the previous password of the nutanix user to successfully start the password reset process.
    • For the root account, both the console and SSH direct login is disabled.
    • It is recommended to use the admin user as the administrative emergency account.

Controlling Cluster Access

About this task

Nutanix supports the cluster lockdown feature. This feature enables key-based SSH access to the Controller VM and to AHV on the host (for nutanix/admin users only).

Enabling cluster lockdown mode ensures that password authentication is disabled and that only the keys you have provided can be used to access cluster resources, making the cluster more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.

To control key-based SSH access to the cluster, do the following:
Note: Use this procedure to lock down access to the Controller VM and the hypervisor host.

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.
    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.
    Figure. Cluster Lockdown Window
  2. To disable (or enable) remote login access, uncheck (check) the Enable Remote Login with Password box.
    Remote login access is enabled by default.
  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    Note: Prism supports the following key types.
    • RSA
    • ECDSA
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.
    There are no public keys available by default, but you can add any number of public keys.
  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.
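As a sketch of preparing a key pair for cluster lockdown (the file path and key comment below are illustrative), you can generate an ECDSA key pair with ssh-keygen and paste the contents of the .pub file into the Key field:

```shell
# Generate an ECDSA key pair for cluster lockdown (file path is illustrative).
ssh-keygen -t ecdsa -b 256 -N "" -f ./cluster_lockdown_key -C "prism-lockdown"

# The public half is what you paste into the Key field in Prism.
cat ./cluster_lockdown_key.pub
```

RSA keys work the same way (ssh-keygen -t rsa -b 4096). Keep the private half on your SSH client, not on the cluster.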

Setup Admin Session Timeout

By default, users are logged out automatically after being idle for 15 minutes. You can change the session timeout for users, and override it, by following the steps below.

Procedure

  1. Click the gear icon in the main menu and then select UI Settings in the Settings page.
  2. Select the session timeout for the current user from the Session Timeout For Current User drop-down list.
    Figure. Session Timeout Settings

  3. Select the appropriate option from the Session Timeout Override drop-down list to override the session timeout.

Password Retry Lockout

For enhanced security, Prism Element locks out the admin account for a period of 15 minutes after a default number of unsuccessful login attempts. Once the account is locked out, the following message is displayed at the logon screen.

Account locked due to too many failed attempts

You can retry entering the password after the 15-minute lockout period, or contact Nutanix Support if you have forgotten your password.

Internationalization (i18n)

The following table lists all the supported and unsupported entities in UTF-8 encoding.

Table 1. Internationalization Support

Supported Entities:
  • Cluster name
  • Storage Container name
  • Storage pool
  • VM name
  • Snapshot name
  • Volume group name
  • Protection domain name
  • Remote site name
  • User management
  • Chart name

Unsupported Entities:
  • Acropolis file server
  • Share path
  • Internationalized domain names
  • E-mail IDs
  • Hostnames
  • Integers
  • Password fields
  • Any hardware-related names (for example, vSwitch, iSCSI initiator, VLAN name)

Caution: Creation of the above entities is not supported on Hyper-V because of DR limitations.

Entities Support (ASCII or non-ASCII) for the Active Directory Server

  • In the New Directory Configuration, the Name field supports non-ASCII characters.
  • In the New Directory Configuration, the Domain field does not support non-ASCII characters.
  • In Role mapping, the Values field supports non-ASCII characters.
  • User names and group names support non-ASCII characters.

User Management

Nutanix user accounts can be created or updated as needed using the Prism web console.

  • The web console allows you to add (see Creating a User Account), edit (see Updating a User Account), or delete (see Deleting a User Account (Local)) local user accounts at any time.
  • You can reset the local user account password using nCLI if you are locked out and cannot log in to the Prism Element or Prism Central web console (see Resetting Password (CLI)).
  • You can also configure user accounts through Active Directory and LDAP (see Configuring Authentication). An Active Directory domain created by using non-ASCII text might not be supported.
Note: In addition to the Nutanix user account, there are IPMI, Controller VM, and hypervisor host users. Passwords for these accounts cannot be changed through the web console.

Creating a User Account

About this task

The admin user is created automatically when you get a Nutanix system, but you can add more users as needed. Note that you cannot delete the admin user. To create a user, do the following:
Note: You can also configure user accounts through Active Directory (AD) and LDAP (see Configuring Authentication).

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
    Figure. User Management Window
  2. To add a user, click the New User button and do the following in the displayed fields:
    1. Username : Enter a user name.
    2. First Name : Enter a first name.
    3. Last Name : Enter a last name.
    4. Email : Enter a valid user email address.
      Note: AOS uses the email address for client authentication and logging when the local user performs user and cluster tasks in the web console.
    5. Password : Enter a password (maximum of 255 characters).
      A second field to verify the password is not included, so be sure to enter the password correctly in this field.
    6. Language : Select the language setting for the user.
      By default, English is selected. You can select Simplified Chinese or Japanese. Depending on the language that you select here, the cluster locale is updated for the new user. For example, if you select Simplified Chinese, the next time the new user logs on to the web console, the user interface is displayed in Simplified Chinese.
    7. Roles : Assign a role to this user.
      • Select the User Admin box to allow the user to view information, perform any administrative task, and create or modify user accounts. (Checking this box automatically selects the Cluster Admin box to indicate that this user has full permissions. However, a user administrator has full permissions regardless of whether the cluster administrator box is checked.)
      • Select the Cluster Admin box to allow the user to view information and perform any administrative task (but not create or modify user accounts).
      • Select the Backup Admin box to allow the user to perform backup-related administrative tasks. This role does not have permission to perform cluster or user tasks.

        Note: Backup admin user is designed for Nutanix Mine integrations as of AOS version 5.19 and has minimal functionality in cluster management. This role has restricted access to the Nutanix Mine cluster.
        • Health , Analysis , and Tasks features are available in read-only mode.
        • The File server and Data Protection options in the web console are not available for this user.
        • The following features are available for Backup Admin users with limited functionality.
            • Home - The user cannot register a cluster with Prism Central. The registration widget is disabled. Other read-only data is displayed and available.
            • Alerts - Alerts and events are displayed. However, the user cannot resolve or acknowledge any alert or event. The user cannot configure Alert Policy or Email configuration .
            • Hardware - The user cannot expand the cluster or remove hosts from the cluster. Read-only data is displayed and available.
            • Network - Networking data or configuration is displayed but configuration options are not available.
            • Settings - The user can only upload a new image using the Settings page.
            • VM - The user cannot configure options like Create VM and Network Configuration in the VM page. The following options are available for the user in the VM page:
              • Launch console
              • Power On
              • Power Off
      • Leaving all the boxes unchecked allows the user to view information, but it does not provide permission to perform cluster or user tasks.
    8. When all the fields are correct, click Save .
      This saves the configuration and the web console redisplays the dialog box with the new user appearing in the list.
    Figure. Create User Window

Updating a User Account

About this task

Update credentials and change the role for an existing user by using this procedure.
Note: To update your account credentials (that is, the user you are currently logged on as), see Updating My Account. Changing the password for a different user is not supported; you must log in as that user to change the password.

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
  2. Enable or disable the login access for a user by clicking the toggle text Yes (enabled) or No (disabled) in the Enabled column.
    A Yes value in the Enabled column means that the login is enabled; a No value in the Enabled column means it is disabled.
    Note: A user account is enabled (login access activated) by default.
  3. To edit the user credentials, click the pencil icon for that user and update one or more of the values in the displayed fields:
    1. Username : The username is fixed when the account is created and cannot be changed.
    2. First Name : Enter a different first name.
    3. Last Name : Enter a different last name.
    4. Email : Enter a different valid email address.
      Note: AOS uses the email address for client authentication and logging when the local user performs user and cluster tasks in the web console.
    5. Roles : Change the role assigned to this user.
      • Select the User Admin box to allow the user to view information, perform any administrative task, and create or modify user accounts. (Checking this box automatically selects the Cluster Admin box to indicate that this user has full permissions. However, a user administrator has full permissions regardless of whether the cluster administrator box is checked.)
      • Select the Cluster Admin box to allow the user to view information and perform any administrative task (but not create or modify user accounts).
      • Select the Backup Admin box to allow the user to perform backup-related administrative tasks. This role does not have permission to perform cluster or user administrative tasks.
      • Leaving all the boxes unchecked allows the user to view information, but it does not provide permission to perform cluster or user administrative tasks.
    6. Reset Password : Change the password of this user.
      Enter the new password for Password and Confirm Password fields. Click the info icon to view the password complexity requirements.
    7. When all the fields are correct, click Save .
      This saves the configuration and redisplays the dialog box with the new user appearing in the list.
    Figure. Update User Window

Updating My Account

About this task

To update your account credentials (that is, credentials for the user you are currently logged in as), do the following:

Procedure

  1. To update your password, select Change Password from the user icon pull-down list in the web console.
    The Change Password dialog box appears. Do the following in the indicated fields:
    1. Current Password : Enter the current password.
    2. New Password : Enter a new password.
    3. Confirm Password : Re-enter the new password.
    4. When the fields are correct, click the Save button (lower right). This saves the new password and closes the window.
    Note: You can change the password for the "admin" account only once per day. Contact Nutanix Support if you need to update the password multiple times in one day.
    Figure. Change Password Window
  2. To update other details of your account, select Update Profile from the user icon pull-down list.
    The Update Profile dialog box appears. Update (as desired) one or more of the following fields:
    1. First Name : Enter a different first name.
    2. Last Name : Enter a different last name.
    3. Email : Enter a different valid user email address.
    4. Language : Select a language for your account.
    5. API Key : Enter the key value to use a new API key.
    6. Public Key : Click the Choose File button to upload a new public key file.
    7. When all the fields are correct, click the Save button (lower right). This saves the changes and closes the window.
    Figure. Update Profile Window

Resetting Password (CLI)

This procedure describes how to reset a local user's password on the Prism Element or the Prism Central web consoles.

About this task

To reset the password using nCLI, do the following:

Note:

Only a user with admin privileges can reset a password for other users.

Procedure

  1. Access the CVM via SSH.
  2. Log in with the admin credentials.
  3. Use the ncli user reset-password command and specify the username and password of the user whose password is to be reset:
    nutanix@cvm$ ncli user reset-password user-name=xxxxx password=yyyyy
    
    • Replace user-name=xxxxx with the name of the user whose password is to be reset.

    • Replace password=yyyyy with the new password.

What to do next

You can relaunch the Prism Element or the Prism Central web console and verify the new password setting.

Exporting an SSL Certificate for Third-party Backup Applications

Nutanix allows you to export an SSL certificate for Prism Element on a Nutanix cluster and use it with third-party backup applications.

Procedure

  1. Log on to a Controller VM in the cluster using SSH.
  2. Run the following command to obtain the virtual IP address of the cluster:
    nutanix@cvm$ ncli cluster info

    The current cluster configuration is displayed.

        Cluster Id           : 0001ab12-abcd-efgh-0123-012345678m89::123456
        Cluster Uuid         : 0001ab12-abcd-efgh-0123-012345678m89
        Cluster Name         : three
        Cluster Version      : 6.0
        Cluster Full Version : el7.3-release-fraser-6.0-a0b1c2345d6789ie123456fg789h1212i34jk5lm6
        External IP address  : 10.10.10.10
        Node Count           : 3
        Block Count          : 1
        . . . . .
    Note: The external IP address in the output is the virtual IP address of the cluster.
  3. Run the following command to enter into the Python prompt:
    nutanix@cvm$ python

    The Python prompt appears.

  4. Run the following command to import the SSL library.
    >>> import ssl
  5. From the Python console, run the following command to print the SSL certificate.
    >>> print ssl.get_server_certificate(('virtual_IP_address',9440), ssl_version=ssl.PROTOCOL_TLSv1_2)
    Example: Refer to the following example where the virtual_IP_address value is replaced by 10.10.10.10.
    >>> print ssl.get_server_certificate(('10.10.10.10', 9440), ssl_version=ssl.PROTOCOL_TLSv1_2)
    The SSL certificate is displayed on the console.
    -----BEGIN CERTIFICATE-----
    0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
    23456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123
    456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz012345
    6789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567
    89ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
    ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789AB
    CDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCD
    EFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEF
    GHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGH
    IJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJ
    KLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKL
    MNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMN
    OPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOP
    QRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQR
    STUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRST
    UVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUV
    WXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWX
    YZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZab
    cdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcd
    efghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef
    ghij
    -----END CERTIFICATE-----
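If you save the printed PEM text to a file, you can inspect the certificate with openssl before configuring the backup application. This is a sketch; the file name prism.pem is illustrative:

```shell
# Inspect a saved Prism certificate before handing it to a backup application.
# prism.pem is an illustrative file name for the PEM text printed above.
openssl x509 -in prism.pem -noout -subject -dates -fingerprint -sha256
```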

Deleting a User Account (Local)

About this task

To delete an existing user, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
    Figure. User Management Window
  2. Click the X icon for that user. Note that you cannot delete the admin user.
    A window prompt appears to verify the action; click the OK button. The user account is removed and the user no longer appears in the list.

Certificate Management

This chapter describes how to install and replace an SSL certificate on the Nutanix Controller VM.

Note: Nutanix recommends that you check for the validity of the certificate periodically, and replace the certificate if it is invalid.

Installing an SSL Certificate

About this task

Nutanix supports SSL certificate-based authentication for console access. To install a self-signed or custom SSL certificate, do the following:
Important: Ensure that SSL certificates are not password protected.
Note:
  • Nutanix recommends that customers replace the default self-signed certificate with a CA signed certificate.
  • An SSL certificate (self-signed or CA signed) can be installed only cluster-wide from Prism. SSL certificates cannot be customized for individual Controller VMs.

Procedure

  1. Click the gear icon in the main menu and then select SSL Certificate in the Settings page.
    The SSL Certificate dialog box appears.
    Figure. SSL Certificate Window
  2. To replace (or install) a certificate, click the Replace Certificate button.
  3. To create a new self-signed certificate, click the Regenerate Self Signed Certificate option and then click the Apply button.
    A dialog box appears to verify the action; click the OK button. This generates and applies a new RSA 2048-bit self-signed certificate for the Prism user interface.
    Figure. SSL Certificate Window: Regenerate
  4. To apply a custom certificate that you provide, do the following:
    1. Click the Import Key and Certificate option and then click the Next button.
      Figure. SSL Certificate Window: Import
    2. Do the following in the indicated fields, and then click the Import Files button.
      Note:
      • All the three imported files for the custom certificate must be PEM encoded.
      • Ensure that the private key does not have any extra data (or custom attributes) before the beginning (-----BEGIN PRIVATE KEY-----) or after the end (-----END PRIVATE KEY-----) of the private key block.
      • See Recommended Key Configurations to ensure proper set of key types, sizes/curves, and signature algorithms.
      • Private Key Type : Select the appropriate type for the signed certificate from the pull-down list (RSA 4096 bit, RSA 2048 bit, EC DSA 256 bit, or EC DSA 384 bit).
      • Private Key : Click the Browse button and select the private key associated with the certificate to be imported.
      • Public Certificate : Click the Browse button and select the signed public portion of the server certificate corresponding to the private key.
      • CA Certificate/Chain : Click the Browse button and select the certificate or chain of the signing authority for the public certificate.
      Figure. SSL Certificate Window: Select Files
      To meet the high security standards of NIST SP800-131a compliance and the requirements of RFC 6460 for NSA Suite B, and to supply optimal performance for encryption, the certificate import process validates that the correct signature algorithm is used for a given key/cert pair. See Recommended Key Configurations to ensure a proper set of key types, sizes/curves, and signature algorithms. The CA must sign all public certificates with the proper type, size/curve, and signature algorithm for the import process to validate successfully.
      Note: There is no specific requirement for the subject name of the certificates (subject alternative names (SAN) or wildcard certificates are supported in Prism).
      You can use the cat command to concatenate a list of CA certificates into a chain file.
      $ cat signer.crt inter.crt root.crt > server.cert
      Order is essential. The total chain should begin with the certificate of the signer and end with the root CA certificate as the final entry.
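As a sketch (using the illustrative file names above), you can confirm the resulting order by printing the subject and issuer of every certificate in the chain file; the first entry should be the signer's certificate and the last entry the root CA:

```shell
# Build the chain: signer first, root CA last (file names are illustrative).
cat signer.crt inter.crt root.crt > server.cert

# Print subject/issuer of every certificate in the chain, in file order,
# to confirm the ordering before importing the chain in Prism.
openssl crl2pkcs7 -nocrl -certfile server.cert | openssl pkcs7 -print_certs -noout
```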

Results

After generating or uploading the new certificate, the interface gateway restarts. If the certificate and credentials are valid, the interface gateway uses the new certificate immediately, which means your browser session (and all other open browser sessions) will be invalid until you reload the page and accept the new certificate. If anything is wrong with the certificate (such as a corrupted file or wrong certificate type), the new certificate is discarded, and the system reverts to the original default certificate provided by Nutanix.
Note: The system holds only one custom SSL certificate. If a new certificate is uploaded, it replaces the existing certificate. The previous certificate is discarded.

Recommended Key Configurations

This table provides the Nutanix recommended set of key types, sizes/curves, and signature algorithms.

Note:
  • Client and CAC authentication only supports RSA 2048 bit certificate.
  • RSA 4096 bit certificates might not work with certain AOS and Prism Central releases. Please see the release notes for your AOS and Prism Central versions. Specifying an RSA 4096 bit certificate might cause multiple cluster services to restart frequently. To work around the issue, see KB 12775.
  • Certificate import fails if you attempt to upload SHA-1 certificate (including root CA).

Replacing a Certificate

Nutanix simplifies the certificate replacement process to support Certificate Authority (CA) based chains of trust. Nutanix recommends that you replace the default self-signed certificate with a CA signed certificate.

Procedure

  1. Log in to Prism and click the gear icon.
  2. Click SSL Certificate .
  3. Select Replace Certificate to replace the certificate.
  4. Do one of the following.
    • Select Regenerate self signed certificate to generate a new self-signed certificate.
      Note:
      • This automatically generates and applies a certificate.
    • Select Import key and certificate to import the custom key and certificate.

    The following files are required and should be PEM encoded to import the keys and certificate.

    • The private key associated with the certificate. The section below describes generating a private key in detail.
    • The signed public portion of the server certificate corresponding to the private key.
    • The CA certificate or chain of the signing authority for the certificate.
    Note:

    You must obtain the Public Certificate and CA Certificate/Chain from the certificate authority.

    Figure. Importing Certificate

    Generating an RSA 4096 and RSA 2048 private key

    Tip: You can run the OpenSSL commands for generating private key and CSR on a Linux client with OpenSSL installed.
    Note: Some OpenSSL command parameters might not be supported on older OpenSSL versions and require OpenSSL version 1.1.1 or above to work.
    • Run the following OpenSSL command to generate a RSA 4096 private key and the Certificate Signing Request (CSR).
      openssl req -out server.csr -new -newkey rsa:4096 \
              -nodes -sha256 -keyout server.key
    • Run the following OpenSSL command to generate an RSA 2048 private key and the Certificate Signing Request (CSR).
      openssl req -out server.csr -new -newkey rsa:2048 \
              -nodes -sha256 -keyout server.key

      After you run the openssl command, the system prompts you to provide more details that are incorporated into your certificate. The mandatory fields are Country Name, State or Province Name, and Organization Name. The optional fields are Locality Name, Organizational Unit Name, Email Address, and Challenge Password.
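To avoid the interactive prompts, you can supply the subject fields on the command line instead. This is a sketch; the -subj values (country, organization, CN) are illustrative placeholders:

```shell
# Generate an RSA 2048 key and CSR non-interactively (subject values are
# illustrative; replace them with your organization's details).
openssl req -out server.csr -new -newkey rsa:2048 -nodes -sha256 \
    -keyout server.key \
    -subj "/C=US/ST=California/O=Example Org/CN=prism.example.com"

# Review the CSR subject before submitting it to the CA.
openssl req -in server.csr -noout -subject
```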

    Nutanix recommends including a DNS name for all CVMs in the certificate using the Subject Alternative Name (SAN) extension. This avoids SSL certificate errors when you access a CVM by direct DNS instead of the shared cluster IP. This example shows how to include a DNS name while generating an RSA 4096 private key:

    openssl req -out server.csr -new -newkey rsa:4096 -sha256 -nodes \
    -addext "subjectAltName = DNS:example.com" \
    -keyout server.key

    For a 3-node cluster, you can provide the DNS names for all three nodes in a single command. For example:

    openssl req -out server.csr -new -newkey rsa:4096 -sha256 -nodes \
    -addext "subjectAltName = DNS:example1.com,DNS:example2.com,DNS:example3.com" \
    -keyout server.key

    If you have added a SAN ( subjectAltName ) extension to your certificate, then every time you add or remove a node from the cluster, you must add the DNS name when you generate or sign a new certificate.
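Before submitting the CSR, you can confirm that the SAN entries were recorded. This is a sketch; the DNS names are illustrative, and the -addext option requires OpenSSL 1.1.1 or later:

```shell
# Generate a CSR with SAN entries, then confirm they were recorded
# (DNS names are illustrative; -addext needs OpenSSL 1.1.1+).
openssl req -out server.csr -new -newkey rsa:2048 -sha256 -nodes \
    -addext "subjectAltName = DNS:example1.com,DNS:example2.com" \
    -keyout server.key -subj "/CN=example1.com"

# The SAN extension should appear under "Requested Extensions".
openssl req -in server.csr -noout -text | grep -A1 "Subject Alternative Name"
```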

    Generating an EC DSA 256 and EC DSA 384 private key

    • Run the following OpenSSL command to generate an EC DSA 256 private key and the Certificate Signing Request (CSR).
      openssl ecparam -out dsakey.pem -name prime256v1 -genkey
      openssl req -out dsacert.csr -new -key dsakey.pem -nodes -sha256
    • Run the following OpenSSL command to generate an EC DSA 384 private key and the Certificate Signing Request (CSR).
      openssl ecparam -out dsakey.pem -name secp384r1 -genkey
      openssl req -out dsacert.csr -new -key dsakey.pem -nodes -sha384
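You can confirm the curve of the generated EC key before creating the CSR. This is a sketch using the dsakey.pem file name from the commands above:

```shell
# Generate an EC key and print its named curve; prime256v1 corresponds to
# EC DSA 256 and secp384r1 to EC DSA 384.
openssl ecparam -out dsakey.pem -name prime256v1 -genkey
openssl ec -in dsakey.pem -noout -text | grep "ASN1 OID"
```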
      
    To meet the high security standards of NIST SP800-131a compliance and the requirements of RFC 6460 for NSA Suite B, and to supply optimal performance for encryption, the certificate import process validates that the correct signature algorithm is used for a given key/cert pair. See Recommended Key Configurations to ensure a proper set of key types, sizes/curves, and signature algorithms. The CA must sign all public certificates with the proper type, size/curve, and signature algorithm for the import process to validate successfully.
    Note: There is no specific requirement for the subject name of the certificates (subject alternative names (SAN) or wildcard certificates are supported in Prism).
    You can use the cat command to concatenate a list of CA certificates into a chain file.
    $ cat signer.crt inter.crt root.crt > server.cert

    Order is essential. The total chain should begin with the certificate of the signer and end with the root CA certificate as the final entry.

  5. If the CA chain certificate provided by the certificate authority is not in a single file, then run the following command to concatenate the list of CA certificates into a chain file.
    cat signer.crt inter.crt root.crt > server.cert
    Note: The chain should start with the certificate of the signer and end with the root CA certificate.
  6. Browse and add the Private Key, Public Certificate, and CA Certificate/Chain.
  7. Click Import Files .

What to do next

Prism restarts and you must log in again to use the application.

Exporting an SSL Certificate for Third-party Backup Applications

Nutanix allows you to export an SSL certificate for Prism Element on a Nutanix cluster and use it with third-party backup applications.

Procedure

  1. Log on to a Controller VM in the cluster using SSH.
  2. Run the following command to obtain the virtual IP address of the cluster:
    nutanix@cvm$ ncli cluster info

    The current cluster configuration is displayed.

        Cluster Id           : 0001ab12-abcd-efgh-0123-012345678m89::123456
        Cluster Uuid         : 0001ab12-abcd-efgh-0123-012345678m89
        Cluster Name         : three
        Cluster Version      : 6.0
        Cluster Full Version : el7.3-release-fraser-6.0-a0b1c2345d6789ie123456fg789h1212i34jk5lm6
        External IP address  : 10.10.10.10
        Node Count           : 3
        Block Count          : 1
        . . . . .
    Note: The external IP address in the output is the virtual IP address of the cluster.
  3. Run the following command to enter into the Python prompt:
    nutanix@cvm$ python

    The Python prompt appears.

  4. Run the following command to import the SSL library.
    $ import ssl
  5. From the Python console, run the following command to print the SSL certificate.
    $ print ssl.get_server_certificate(('virtual_IP_address',9440), ssl_version=ssl.PROTOCOL_TLSv1_2)
    Example: Refer to the following example where virtual_IP_address value is replaced by 10.10.10.10.
    $ print ssl.get_server_certificate(('10.10.10.10', 9440), ssl_version=ssl.PROTOCOL_TLSv1_2)
    The SSL certificate is displayed on the console.
    -----BEGIN CERTIFICATE-----
    0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
    23456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123
    456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz012345
    6789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567
    89ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
    ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789AB
    CDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCD
    EFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEF
    GHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGH
    IJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJ
    KLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKL
    MNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMN
    OPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOP
    QRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQR
    STUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRST
    UVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUV
    WXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWX
    YZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZab
    cdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcd
    efghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef
    ghij
    -----END CERTIFICATE-----

Controlling Cluster Access

About this task

Nutanix supports the Cluster lockdown feature. This feature enables key-based SSH access to the Controller VM and AHV on the Host (only for nutanix/admin users).

Enabling cluster lockdown mode ensures that password authentication is disabled and only the keys you have provided can be used to access the cluster resources. Thus making the cluster more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.

To control key-based SSH access to the cluster, do the following:
Note: Use this procedure to lock down access to the Controller VM and hypervisor host. In addition, it is possible to lock down access to the hypervisor.

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.
    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.
    Figure. Cluster Lockdown Window
  2. To disable (or enable) remote login access, uncheck (check) the Enable Remote Login with Password box.
    Remote login access is enabled by default.
  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    Note: Prism supports the following key types.
    • RSA
    • ECDSA
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.
    There are no public keys available by default, but you can add any number of public keys.
  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.

Data-at-Rest Encryption

Nutanix provides an option to secure data while it is at rest using either self-encrypting drives or software-only encryption, with key-based access management (for software-only encryption, either the cluster's native key manager or an external KMS).

Encryption Methods

Nutanix provides you with the following options to secure your data.

  • Self Encrypting Drives (SED) Encryption - You can use a combination of SEDs and an external KMS to secure your data while it is at rest.
  • Software-only Encryption - Nutanix AOS uses the AES-256 encryption standard to encrypt your data. Once enabled, software-only data-at-rest encryption cannot be disabled, thus protecting against accidental data leaks due to human errors. Software-only encryption supports both Nutanix Native Key Manager (local and remote) and External KMS to secure your keys.

Note the following points regarding data-at-rest encryption.

  • Encryption is supported for AHV, ESXi, and Hyper-V.
    • For ESXi and Hyper-V, software-only encryption can be implemented at a cluster level or container level. For AHV, encryption can be implemented at the cluster level only.
  • Nutanix recommends using cluster-level encryption. Cluster-level encryption eliminates the administrative overhead of selecting individual containers for data storage.
  • Encryption cannot be disabled once it is enabled at a cluster level or container level.
  • Encryption can be implemented on an existing cluster with existing data. If encryption is enabled on an existing cluster (AHV, ESXi, or Hyper-V), the unencrypted data is transformed into an encrypted format in a low-priority background task that is designed not to interfere with other workloads running in the cluster.
  • Data can be encrypted using either self-encrypted drives (SEDs) or software-only encryption. You can change the encryption method from SEDs to software-only. You can perform the following configurations.
    • For ESXi and Hyper-V clusters, you can switch from SEDs and External Key Management (EKM) combination to software-only encryption and EKM combination. First, you must disable the encryption in the cluster where you want to change the encryption method. Then, select the cluster and enable encryption to transform the unencrypted data into an encrypted format in the background.
    • For AHV, background encryption is supported.
  • Once the task to encrypt a cluster begins, you cannot cancel the operation. Even if you stop and restart the cluster, the system resumes the operation.
  • In the case of mixed clusters with ESXi and AHV nodes, where the AHV nodes are used for storage only, the encryption policies treat the cluster as an ESXi cluster, so both cluster-level and container-level encryption are available.
  • You can use a combination of SED and non-SED drives in a cluster. After you encrypt a cluster using the software-only encryption, all the drives are considered as unencrypted drives. In case you switch from the SED encryption to the software-only encryption, you can add SED or non-SED drives to the cluster.
  • Data is not encrypted when it is replicated to another cluster. You must enable encryption for each cluster. Data is encrypted as a part of the write operation and decrypted as a part of the read operation. During the replication process, the system reads, decrypts, and then sends the data over to the other cluster. You can use a third-party network solution if there is a requirement to encrypt the data in transit to another cluster.
  • Software-only encryption does not impact the data efficiency features such as deduplication, compression, erasure coding, zero block suppression, and so on. The software encryption is the last data transformation performed. For example, during the write operation, compression is performed first, followed by encryption.
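The ordering matters because ciphertext is essentially incompressible. A quick way to observe the effect with generic tools (illustrative only; this is not how AOS stores data):

```shell
# 1 MiB of highly compressible data
head -c 1048576 /dev/zero > sample.raw

# Compress first, then encrypt: the size savings survive encryption
gzip -c sample.raw > sample.gz
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in sample.gz -out sample.gz.enc

# Encrypt first, then compress: gzip gains almost nothing on ciphertext
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in sample.raw -out sample.enc
gzip -c sample.enc > sample.enc.gz

# The compress-then-encrypt file is tiny; the encrypt-then-compress file is ~1 MiB
ls -l sample.gz.enc sample.enc.gz
```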

Key Management

Nutanix supports a Native Key Management Server, also called the Local Key Manager (LKM), which avoids a dependency on an External Key Manager (EKM). The cluster-localized Key Management Service requires a minimum of three nodes in a cluster and is supported only for software-only encryption, so 1-node and 2-node clusters can use either the Native KMS (remote) option or an EKM.

The following types of keys are used for encryption.

  • Data Encryption Key (DEK) - A symmetric key, such as AES-256, that is used to encrypt the data.
  • Key Encryption Key (KEK) - This key is used to encrypt or decrypt the DEK.
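The DEK/KEK relationship is a standard envelope-encryption pattern. It can be sketched with openssl (the file names and passphrase-style wrapping are illustrative stand-ins, not AOS commands):

```shell
# Generate a random 256-bit DEK and KEK (illustrative stand-ins)
openssl rand -hex 32 > dek.key
openssl rand -hex 32 > kek.key

# Wrap: encrypt the DEK under the KEK with AES-256
openssl enc -aes-256-cbc -pbkdf2 -in dek.key -out dek.key.enc -pass file:kek.key

# Unwrap: recover the DEK with the KEK; without kek.key the DEK is unrecoverable
openssl enc -d -aes-256-cbc -pbkdf2 -in dek.key.enc -pass file:kek.key
```

Because only the wrapped DEK needs to be re-encrypted when the KEK changes, the KEK can be rotated without re-encrypting any data.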

Note the following points regarding the key management.

  • Nutanix does not support the use of the Local Key Manager with a third party External Key Manager.
  • Dual encryption (both SED and software-only encryption) requires an EKM. For more information, see Configuring Dual Encryption.
  • You can switch from an EKM to LKM, and inversely. For more information, see Switching between Native Key Manager and External Key Manager.
  • Rekey of keys stored in the Native KMS is supported for the Leader Keys. For more information, see Changing Key Encryption Keys (SEDs) and Changing Key Encryption Keys (Software Only).
  • You must back up the keys stored in the Native KMS. For more information, see Backing up Keys.
  • You must back up the encryption keys whenever you create a new container or remove an existing container. Nutanix Cluster Check (NCC) checks the status of the backup and sends an alert if you do not take a backup at the time of creating or removing a container.

Data-at-Rest Encryption (SEDs)

For customers who require enhanced data security, Nutanix provides a data-at-rest security option using Self Encrypting Drives (SEDs) included in the Ultimate license.

Note: If you are running the AOS Pro License on G6 platforms and above, you can use SED encryption by installing an add-on license.

The following features are supported:

  • Data is encrypted on all drives at all times.
  • Data is inaccessible in the event of drive or node theft.
  • Data on a drive can be securely destroyed.
  • A key authorization method allows password rotation at arbitrary times.
  • Protection can be enabled or disabled at any time.
  • No performance penalty is incurred despite encrypting all data.
  • Re-key of the leader encryption key (MEK) at arbitrary times is supported.
Note: If an SED cluster is present, then while executing the data-at-rest encryption, you will get an option to either select data-at-rest encryption using SEDs or data-at-rest encryption using AOS.
Figure. SED and AOS Options

Note: This solution provides enhanced security for data on a drive, but it does not secure data in transit.

Data Encryption Model

To accomplish these goals, Nutanix implements a data security configuration that uses SEDs with keys maintained through a separate key management device. Nutanix uses open standards (TCG and KMIP protocols) and FIPS validated SED drives for interoperability and strong security.

Figure. Cluster Protection Overview

This configuration involves the following workflow:

  1. The security implementation begins by installing SEDs for all data drives in a cluster.

    The drives are FIPS 140-2 validated and use FIPS 140-2 validated cryptographic modules.

    Creating a new cluster that includes only SEDs is straightforward, but an existing cluster can be converted to support data-at-rest encryption by replacing the existing drives with SEDs (after migrating all the VMs and vDisks off the cluster while the drives are being replaced).

    Note: Contact Nutanix customer support for assistance before attempting to convert an existing cluster. A non-protected cluster can contain both SED and standard drives, but Nutanix does not support a mixed cluster when protection is enabled. All the disks in a protected cluster must be SED drives.
  2. Data on the drives is always encrypted but read or write access to that data is open. By default, the access to data on the drives is protected by the in-built manufacturer key. However, when data protection for the cluster is enabled, the Controller VM must provide the proper key to access data on a SED. The Controller VM communicates with the SEDs through a Trusted Computing Group (TCG) Security Subsystem Class (SSC) Enterprise protocol.

    A symmetric data encryption key (DEK) such as AES 256 is applied to all data being written to or read from the disk. The key is known only to the drive controller and never leaves the physical subsystem, so there is no way to access the data directly from the drive.

    Another key, known as a key encryption key (KEK), is used to encrypt/decrypt the DEK and authenticate to the drive. (Some vendors call this the authentication key or PIN.)

    Each drive has a separate KEK that is generated through the FIPS compliant random number generator present in the drive controller. The KEK is 32 bytes long to resist brute force attacks. The KEKs are sent to the key management server for secure storage and later retrieval; they are not stored locally on the node (even though they are generated locally).

    In addition to the above, the leader encryption key (MEK) is used to encrypt the KEKs.

    Each node maintains a set of certificates and keys in order to establish a secure connection with the external key management server.

  3. Keys are stored in a key management server that is outside the cluster, and the Controller VM communicates with the key management server using the Key Management Interoperability Protocol (KMIP) to upload and retrieve drive keys.

    Only one key management server device is required, but it is recommended that multiple devices are employed so the key management server is not a potential single point of failure. Configure the key manager server devices to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.

  4. When a node experiences a full power off and power on (and cluster protection is enabled), the Controller VM retrieves the drive keys from the key management server and uses them to unlock the drives.

    If the Controller VM cannot get the correct keys from the key management server, it cannot access data on the drives.

    If a drive is re-seated, it becomes locked.

    If a drive is stolen, the data is inaccessible without the KEK (which cannot be obtained from the drive). If a node is stolen, the key management server can revoke the node certificates to ensure they cannot be used to access data on any of the drives.

Preparing for Data-at-Rest Encryption (External KMS for SEDs and Software Only)

About this task

Caution: DO NOT HOST A KEY MANAGEMENT SERVER VM ON THE ENCRYPTED CLUSTER THAT IS USING IT!

Doing so could result in complete data loss if there is a problem with the VM while it is hosted in that cluster.

If you are using an external KMS for encryption using AOS, preparation steps outside the web console are required. The information in this section is applicable if you choose to use an external KMS for configuring encryption.

You must install the license of the external key manager for all nodes in the cluster. See Compatibility and Interoperability Matrix for a complete list of the supported key management servers. For instructions on how to configure a key management server, refer to the documentation from the appropriate vendor.

The system accesses the EKM under the following conditions:

  • Starting a cluster

  • Regenerating a key (key regeneration occurs automatically every year by default)

  • Adding or removing a node (only when Self Encrypting Drives is used for encryption)

  • Switching between Native to EKM or EKM to Native

  • Starting and restarting a service (only if software-based encryption is used)

  • Upgrading AOS (only if Software-based encryption is used)

  • NCC heartbeat check to verify that the EKM is alive

Procedure

  1. Configure a key management server.

    The key management server devices must be configured into the network so the cluster has access to those devices. For redundant protection, it is recommended that you employ at least two key management server devices, either in active-active cluster mode or stand-alone.

    Note: The key management server must support KMIP version 1.0 or later.
    • SafeNet

      Ensure that Security > High Security > Key Security > Disable Creation and Use of Global Keys is checked.

    • Vormetric

      Set the appliance to compatibility mode. Suite B mode causes the SSL handshake to fail.

  2. Generate a certificate signing request (CSR) for each node in the cluster.
    • The Common Name field of the CSR is populated automatically with unique_node_identifier.nutanix.com to identify the node associated with the certificate.
      Tip: After generating the certificate from Prism, (if required) you can update the custom common name (CN) setting by running the following command using nCLI.
      ncli data-at-rest-encryption-certificate update-csr-information domain-name=abcd.test.com

      In the above command example, replace "abcd.test.com" with the actual domain name.

    • A UID field is populated with a value of Nutanix . This can be useful when configuring a Nutanix group for access control within a key management server, since it is based on fields within the client certificates.
    Note: When doing client certificate authentication, some vendors expect the client username to be a field in the CSR. While the CN and UID are pre-generated, many of the user-populated fields can be used instead if desired. If a node-unique field such as CN is chosen, users must be created on a per-node basis for access control. If a cluster-unique field is chosen, customers must create a user for each cluster.
  3. Send the CSRs to a certificate authority (CA) and get them signed.
    • SafeNet

      The SafeNet KeySecure key management server includes a local CA option to generate signed certificates, or you can use other third-party vendors to create the signed certificates.

      To enable FIPS compliance, add user nutanix to the CA that signed the CSR. Under Security > High Security > FIPS Compliance click Set FIPS Compliant .

    Note: Some CAs strip the UID field when returning a signed certificate.
    To comply with FIPS, Nutanix does not support the creation of global keys.

    In the SafeNet KeySecure management console, go to Device > Key Server > Key Server > KMIP Properties > Authentication Settings .

    Then do the following:

    • Set the Username Field in Client Certificate option to UID (User ID) .
    • Set the Client Certificate Authentication option to Used for SSL session and username .

    If you do not perform these settings, the KMS creates global keys and fails to encrypt the clusters or containers using the software only method.

  4. Upload the signed SSL certificates (one for each node) and the certificate for the CA to the cluster. These certificates are used to authenticate with the key management server.
  5. Generate keys (KEKs) for the SED drives and upload those keys to the key management server.
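For reference, a CSR carrying the CN and UID fields described in step 2 can be reproduced with openssl (the subject values below are illustrative; Prism generates the actual CSRs for the nodes):

```shell
# Generate a private key and a CSR whose subject carries CN and UID
openssl req -new -newkey rsa:2048 -nodes -keyout node.key \
  -subj "/CN=node-1234.nutanix.com/UID=Nutanix/O=Example Org" \
  -out node.csr

# Inspect the subject of the resulting CSR
openssl req -in node.csr -noout -subject
```

Inspecting a Prism-generated CSR the same way lets you confirm which subject fields your KMS can use for access control before sending it to the CA.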

Configuring Data-at-Rest Encryption (SEDs)

Nutanix offers an option to use self-encrypting drives (SEDs) to store data in a cluster. When SEDs are used, there are several configuration steps that must be performed to support data-at-rest encryption in the cluster.

Before you begin

A separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key manager server devices be configured to work in clustered mode so they can be added to the cluster configuration as a single entity (see step 5) that is resilient to a single failure.

About this task

To configure cluster encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
    The Data at Rest Encryption dialog box appears. Initially, encryption is not configured, and a message to that effect appears.
    Figure. Data at Rest Encryption Screen (initial)

  2. Click the Create Configuration button.
    Clicking the Continue Configuration button, the configure it link, or the Edit Config button performs the same action: displaying the Data-at-Rest Encryption configuration page.
  3. Select the Encryption Type as Drive-based Encryption . This option is displayed only when SEDs are detected.
  4. In the Certificate Signing Request Information section, do the following:
    Figure. Certificate Signing Request Section

    1. Enter appropriate credentials for your organization in the Email , Organization , Organizational Unit , Country Code , City , and State fields and then click the Save CSR Info button.
      The entered information is saved and is used when creating a certificate signing request (CSR). To specify more than one Organization Unit name, enter a comma separated list.
      Note: You can update this information until an SSL certificate for a node is uploaded to the cluster, at which point the information cannot be changed (the fields become read only) without first deleting the uploaded certificates.
    2. Click the Download CSRs button, and then in the new screen click the Download CSRs for all nodes to download a file with CSRs for all the nodes or click a Download link to download a file with the CSR for that node.
      Figure. Download CSRs Screen

    3. Send the files with the CSRs to the desired certificate authority.
      The certificate authority creates the signed certificates and returns them to you. Store the returned SSL certificates and the CA certificate where you can retrieve them in step 6.
      • The certificates must be X.509 format. (DER, PKCS, and PFX formats are not supported.)
      • The certificate and the private key should be in separate files.
  5. In the Key Management Server section, do the following:
    Figure. Key Management Server Section

    1. Click the Add New Key Management Server button.
    2. In the Add a New Key Management Server screen, enter a name, IP address, and port number for the key management server in the appropriate fields.
      The port is where the key management server is configured to listen for the KMIP protocol. The default port number is 5696. For the complete list of required ports, see Port Reference.
      • If you have configured multiple key management servers in cluster mode, click the Add Address button to provide the addresses for each key management server device in the cluster.
      • If you have stand-alone key management servers, click the Save button. Repeat this step ( Add New Key Management Server button) for each key management server device to add.
        Note: If your key management servers are configured into a leader/follower (active/passive) relationship and the architecture is such that the follower cannot accept write requests, do not add the follower into this configuration. The system sends requests (read or write) to any configured key management server, so both read and write access is needed for key management servers added here.
        Note: To prevent potential configuration problems, always use the Add Address button for key management servers configured into cluster mode. Only a stand-alone key management server should be added as a new server.
      Figure. Add Key Management Server Screen

    3. To edit any settings, click the pencil icon for that entry in the key management server list to redisplay the add page and then click the Save button after making the change. To delete an entry, click the X icon.
  6. In the Add a New Certificate Authority section, enter a name for the CA, click the Upload CA Certificate button, and select the certificate for the CA used to sign your node certificates (see step 4c). Repeat this step for all CAs that were used in the signing process.
    Figure. Certificate Authority Section

  7. Go to the Key Management Server section (see step 5) and do the following:
    1. Click the Manage Certificates button for a key management server.
    2. In the Manage Signed Certificates screen, upload the node certificates either by clicking the Upload Files button to upload all the certificates in one step or by clicking the Upload link (not shown in the figure) for each node individually.
    3. Test that the certificates are correct either by clicking the Test all nodes button to test the certificates for all nodes in one step or by clicking the Test CS (or Re-Test CS ) link for each node individually. A status of Verified indicates the test was successful for that node.
    Note: Before removing a drive or node from an SED cluster, ensure that the testing is successful and the status is Verified . Otherwise, the drive or node will be locked.
    4. Repeat this step for each key management server.
    Figure. Upload Signed Certificates Screen

  8. When the configuration is complete, click the Protect button on the opening page to enable encryption protection for the cluster.
    A clear key icon appears on the page.
    Figure. Data-at-Rest Encryption Screen (unprotected)

    The key turns gold when cluster encryption is enabled.
    Note: If changes are made to the configuration after protection has been enabled, such as adding a new key management server, you must rekey the disks for the modification to take full effect (see Changing Key Encryption Keys (SEDs)).
    Figure. Data-at-Rest Encryption Screen (protected)

Enabling/Disabling Encryption (SEDs)

Data on a self encrypting drive (SED) is always encrypted, but enabling/disabling data-at-rest encryption for the cluster determines whether a separate (and secured) key is required to access that data.

About this task

To enable or disable data-at-rest encryption after it has been configured for the cluster (see Configuring Data-at-Rest Encryption (SEDs)), do the following:
Note: The key management server must be accessible to disable encryption.

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, do one of the following:
    • If cluster encryption is enabled currently, click the Unprotect button to disable it.
    • If cluster encryption is disabled currently, click the Protect button to enable it.
    Enabling cluster encryption enforces the use of secured keys to access data on the SEDs in the cluster; disabling cluster encryption means the data can be accessed without providing a key.

Changing Key Encryption Keys (SEDs)

The key encryption key (KEK) can be changed at any time. This can be useful as a periodic password rotation security precaution or when a key management server or node becomes compromised. If the key management server is compromised, only the KEK needs to be changed, because the KEK is independent of the drive encryption key (DEK). There is no need to re-encrypt any data, just to re-encrypt the DEK.

About this task

To change the KEKs for a cluster, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select Manage Keys and click the Rekey All Disks button under Hardware Encryption .
    Rekeying a cluster under heavy workloads may result in higher-than-normal IO latency, and some data may become temporarily unavailable. To continue with the rekey operation, click Confirm Rekey .
    This step resets the KEKs for all the self encrypting disks in the cluster.
    Note:
    • The Rekey All Disks button appears only when cluster protection is active.
    • If the cluster is already protected and a new key management server is added, you must click the Rekey All Disks button to use the new key management server for storing secrets.
    Figure. Cluster Encryption Screen

Destroying Data (SEDs)

Data on a self encrypting drive (SED) is always encrypted, and the data encryption key (DEK) used to read the encrypted data is known only to the drive controller. All data on the drive can effectively be destroyed (that is, become permanently unreadable) by having the controller change the DEK. This is known as a crypto-erase.
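The principle behind a crypto-erase can be demonstrated with ordinary file encryption (illustrative only; on a SED the DEK never leaves the drive controller):

```shell
# Encrypt data under the current DEK (a passphrase stands in for the DEK here)
echo "sensitive data" > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:old-dek -in plain.txt -out cipher.bin

# "Crypto-erase": the old DEK is discarded; any other key fails to recover the data
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:new-dek -in cipher.bin || \
  echo "ciphertext unreadable without the original DEK"
```

Because the ciphertext never changes, the erase is effectively instantaneous regardless of drive capacity.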

About this task

To crypto-erase a SED, do the following:

Procedure

  1. In the web console, go to the Hardware dashboard and select the Diagram tab.
  2. Select the target disk in the diagram (upper section of screen) and then click the Remove Disk button (at the bottom right of the following diagram).

    As part of the disk removal process, the DEK for that disk is automatically cycled on the drive controller. The previous DEK is lost and all new disk reads are indecipherable. The key encryption key (KEK) is unchanged, and the new DEK is protected using the current KEK.

    Note: When a node is removed, all SEDs in that node are crypto-erased automatically as part of the node removal process.
    Figure. Removing a Disk

Data-at-Rest Encryption (Software Only)

For customers who require enhanced data security, Nutanix provides a software-only encryption option for data-at-rest security (SEDs not required) included in the Ultimate license.
Note: On G6 platforms running the AOS Pro license, you can use software encryption by installing an add-on license.
Software encryption using a local key manager (LKM) supports the following features:
  • For AHV, the data can be encrypted on a cluster level. This is applicable to an empty cluster or a cluster with existing data.
  • For ESXi and Hyper-V, the data can be encrypted on a cluster or container level. The cluster or container can be empty or contain existing data. Consider the following points for container level encryption.
    • Once you enable container-level encryption, you cannot change the encryption type to cluster-level encryption later.
    • After the encryption is enabled, the administrator needs to enable encryption for every new container.
  • Data is encrypted at all times.
  • Data is inaccessible in the event of drive or node theft.
  • Data on a drive can be securely destroyed.
  • Re-key of the leader encryption key at arbitrary times is supported.
  • Cluster’s native KMS is supported.
Note: In case of mixed hypervisors, only the following combinations are supported.
  • ESXi and AHV
  • Hyper-V and AHV
Note: This solution provides enhanced security for data on a drive, but it does not secure data in transit.

Data Encryption Model

To accomplish the goals mentioned above, Nutanix implements a data security configuration that uses AOS functionality along with the cluster's native or an external key management server. Nutanix uses open standards (KMIP protocols) for interoperability and strong security.

Figure. Cluster Protection Overview

This configuration involves the following workflow:

  • For software encryption, data protection must be enabled for the cluster before any data is encrypted. Also, the Controller VM must provide the proper key to access the data.
  • A symmetric data encryption key (DEK) such as AES 256 is applied to all data being written to or read from the disk. The key is known only to AOS, so there is no way to access the data directly from the drive.
  • In case of an external KMS:

    Each node maintains a set of certificates and keys in order to establish a secure connection with the key management server.

    Only one key management server device is required, but it is recommended that multiple devices are employed so the key management server is not a potential single point of failure. Configure the key manager server devices to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.

Configuring Data-at-Rest Encryption (Software Only)

Nutanix offers a software-only option to perform data-at-rest encryption in a cluster or container.

Before you begin

  • Nutanix provides the option to choose the KMS type as the Native KMS (local), Native KMS (remote), or External KMS.
  • The cluster-localized Key Management Service (Native KMS (local)) requires a minimum of a three-node cluster. 1-node and 2-node clusters are not supported.
  • Software encryption using Native KMS is supported for remote office/branch office (ROBO) deployments using the Native KMS (remote) KMS type.
  • For external KMS, a separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key manager server devices be configured to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.
    Caution: DO NOT HOST A KEY MANAGEMENT SERVER VM ON THE ENCRYPTED CLUSTER THAT IS USING IT!

    Doing so could result in complete data loss if there is a problem with the VM while it is hosted in that cluster.

    Note: You must install the license of the external key manager for all nodes in the cluster. See Compatibility and Interoperability Matrix for a complete list of the supported key management servers. For instructions on how to configure a key management server, refer to the documentation from the appropriate vendor.
  • This feature requires an Ultimate license, or an add-on to the Pro license (for the latest generation of products). Ensure that you have procured the add-on license key to use data-at-rest encryption using AOS; contact the Nutanix Sales team to procure the license.
  • Caution: For security, you cannot disable software-only data-at-rest encryption once it is enabled.

About this task

To configure cluster or container encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
    The Data at Rest Encryption dialog box appears. Initially, encryption is not configured, and a message to that effect appears.
    Figure. Data at Rest Encryption Screen (initial)

  2. Click the Create Configuration button.
    Clicking the Continue Configuration button, the configure it link, or the Edit Config button does the same thing: each displays the Data-at-Rest Encryption configuration page.
  3. Select the Encryption Type as Encrypt the entire cluster or Encrypt storage containers. Then click Save Encryption Type.
    Caution: You can enable encryption for the entire cluster or for individual containers. However, if you enable encryption on a container and an encryption key issue occurs (such as loss of an encryption key), the following can result:
    • The entire cluster's data is affected, not just the encrypted container.
    • None of the user VMs in the cluster can access the data.
    The hardware option is displayed only when SEDs are detected. Otherwise, software-based encryption is used by default.
    Figure. Select encryption type

    Note: For ESXi and Hyper-V, data can be encrypted at the cluster or container level. The cluster or container can be empty or contain existing data. Consider the following points for container-level encryption:
    • After you enable container-level encryption, you cannot change the encryption type to cluster-level encryption later.
    • After encryption is enabled, the administrator must enable encryption for every new container.
    To enable encryption for every new storage container, do the following:
    1. In the web console, select Storage from the pull-down main menu (upper left of screen) and then select the Table and Storage Container tabs.
    2. To enable encryption, select the target storage container and then click the Update link.
      The Update Storage Container window appears.
    3. In the Advanced Settings area, select the Enable check box to enable encryption for the storage container you selected.
      Figure. Update storage container

    4. Click Save to complete.
  4. Select the Key Management Service.
    To manage the keys with the native KMS, select Native KMS (local) or Native KMS (remote) and click Save KMS type. If you select this option, skip to step 9 to complete the configuration.
    Note:
    • Cluster Localised Key Management Service (Native KMS (local)) requires a minimum of a 3-node cluster. 1-node and 2-node clusters are not supported.
    • For enhanced security of ROBO environments (typically 1-node or 2-node clusters), select Native KMS (remote) for software-based encryption of ROBO clusters managed by Prism Central.
      Note: This option is available only if the cluster is registered to Prism Central.
    For the external KMS type, select the External KMS option and click Save KMS type. Continue to step 5 for further configuration.
    Figure. Select KMS Type

    Note: You can switch between the KMS types at a later stage if the specific KMS prerequisites are met. For details, see Switching between Native Key Manager and External Key Manager.
  5. In the Certificate Signing Request Information section, do the following:
    Figure. Certificate Signing Request Section

    1. Enter the appropriate information for your organization in the Email, Organization, Organizational Unit, Country Code, City, and State fields and then click the Save CSR Info button.
      The entered information is saved and used when creating a certificate signing request (CSR). To specify more than one Organizational Unit name, enter a comma-separated list.
      Note: You can update this information until an SSL certificate for a node is uploaded to the cluster, at which point the information cannot be changed (the fields become read-only) without first deleting the uploaded certificates.
    2. Click the Download CSRs button, and then in the new screen click Download CSRs for all nodes to download a file with CSRs for all the nodes, or click a Download link to download a file with the CSR for that node.
      Figure. Download CSRs Screen

    3. Send the files with the CSRs to the desired certificate authority.
      The certificate authority creates the signed certificates and returns them to you. Store the returned SSL certificates and the CA certificate where you can retrieve them in step 8.
      • The certificates must be in X.509 format. (DER, PKCS, and PFX formats are not supported.)
      • The certificate and the private key must be in separate files.
  6. In the Key Management Server section, do the following:
    Figure. Key Management Server Section

    1. Click the Add New Key Management Server button.
    2. In the Add a New Key Management Server screen, enter a name, IP address, and port number for the key management server in the appropriate fields.
      The port is where the key management server is configured to listen for the KMIP protocol. The default port number is 5696. For the complete list of required ports, see Port Reference.
      • If you have configured multiple key management servers in cluster mode, click the Add Address button to provide the addresses for each key management server device in the cluster.
      • If you have stand-alone key management servers, click the Save button. Repeat this step (the Add New Key Management Server button) for each key management server device that you want to add.
        Note: If your key management servers are configured in a master/slave (active/passive) relationship in which the follower cannot accept write requests, do not add the follower to this configuration. The system sends requests (read or write) to any configured key management server, so both read and write access is needed for every key management server added here.
        Note: To prevent potential configuration problems, always use the Add Address button for key management servers configured in cluster mode. Only a stand-alone key management server should be added as a new server.
      Figure. Add Key Management Server Screen

    3. To edit any settings, click the pencil icon for that entry in the key management server list to redisplay the add page, and then click the Save button after making the change. To delete an entry, click the X icon.
  7. In the Add a New Certificate Authority section, enter a name for the CA, click the Upload CA Certificate button, and select the certificate for the CA used to sign your node certificates (see step 5.c). Repeat this step for all CAs that were used in the signing process.
    Figure. Certificate Authority Section

  8. Go to the Key Management Server section (see step 6) and do the following:
    1. Click the Manage Certificates button for a key management server.
    2. In the Manage Signed Certificates screen, upload the node certificates either by clicking the Upload Files button to upload all the certificates in one step or by clicking the Upload link (not shown in the figure) for each node individually.
    3. Test that the certificates are correct either by clicking the Test all nodes button to test the certificates for all nodes in one step or by clicking the Test CS (or Re-Test CS) link for each node individually. A status of Verified indicates that the test was successful for that node.
    4. Repeat this step for each key management server.
    Note: Before removing a drive or node from an SED cluster, ensure that the testing is successful and the status is Verified . Otherwise, the drive or node will be locked.
    Figure. Upload Signed Certificates Screen

  9. When the configuration is complete, click the Enable Encryption button.
    The Enable Encryption window is displayed.
    Figure. Data-at-Rest Encryption Screen (unprotected)

    Caution: To help ensure that your data is secure, you cannot disable software-only data-at-rest encryption once it is enabled. Nutanix recommends regularly backing up your data, encryption keys, and key management server.
  10. Enter ENCRYPT.
  11. Click the Encrypt button.
    The data-at-rest encryption is enabled. To view the status of the encrypted cluster or container, go to Data at Rest Encryption in the Settings menu.

    When you enable encryption, a low-priority background task runs to encrypt all the unencrypted data. This task is designed to use any idle CPU capacity to encrypt the unencrypted data within a reasonable time. If the system is busy with other workloads, the background task consumes fewer CPU cycles. Depending on the amount of data in the cluster, the background task can take 24 to 36 hours to complete.

    Note: If changes are made to the configuration after protection has been enabled, such as adding a new key management server, you must perform a rekey operation for the modification to take full effect. For an EKM, rekey to change the KEKs stored in the EKM; for the LKM, rekey to change the leader key used by the native key manager. See Changing Key Encryption Keys (Software Only) for details.
    Note: Once the task to encrypt a cluster begins, you cannot cancel the operation. Even if you stop and restart the cluster, the system resumes the operation.
    Figure. Data-at-Rest Encryption Screen (protected)
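The certificate requirements noted in the procedure above (X.509 format, certificate and private key in separate files) can be sanity-checked with openssl before upload. This is an illustrative sketch; the file names are examples, and the script generates a throwaway self-signed pair only when the files are absent, so that the checks have something to run against:

```shell
#!/bin/sh
# Example file names; substitute the certificate and key returned by your CA.
CERT=node1.crt
KEY=node1.key

# For demonstration only: create a throwaway self-signed pair if absent.
[ -f "$CERT" ] || openssl req -x509 -newkey rsa:2048 -keyout "$KEY" -out "$CERT" \
    -days 1 -nodes -subj "/CN=demo" 2>/dev/null

# Confirm the certificate parses as X.509 and show its subject and issuer.
openssl x509 -in "$CERT" -noout -subject -issuer

# Confirm the private key is a separate, valid file.
openssl pkey -in "$KEY" -noout

# Confirm the certificate and key belong together by comparing public keys.
cert_pub=$(openssl x509 -in "$CERT" -noout -pubkey)
key_pub=$(openssl pkey -in "$KEY" -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "certificate and key match"
else
  echo "certificate and key DO NOT match"
fi
```

A certificate in DER, PKCS, or PFX form fails the first `openssl x509` check, which is a quick way to catch an unsupported format before uploading.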

Switching between Native Key Manager and External Key Manager

After software encryption has been established, Nutanix supports switching the KMS type from the External Key Manager to the Native Key Manager, or from the Native Key Manager to an External Key Manager, without any downtime.

Note:
  • The Native KMS requires a minimum of a 3-node cluster.
  • For external KMS, a separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key manager server devices be configured to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.
  • It is recommended that you back up and save the encryption keys with identifiable names before and after changing the KMS type. For backing up keys, see Backing up Keys.
To change the KMS type, change the KMS selection by editing the encryption configuration. For details, see step 3 in Configuring Data-at-Rest Encryption (Software Only) section.
Figure. Select KMS type

After you change the KMS type and save the configuration, the encryption keys are re-generated on the selected KMS storage medium and data is re-encrypted with the new keys. The old keys are destroyed.
Note: This operation completes in a few minutes, depending on the number of encrypted objects and network speed.

Changing Key Encryption Keys (Software Only)

The key encryption key (KEK) can be changed at any time. Changing the KEK is useful as a periodic key-rotation security precaution or when a key management server or node becomes compromised. If the key management server is compromised, only the KEK needs to be changed, because the KEK is independent of the data encryption key (DEK). There is no need to re-encrypt any data; only the DEK is re-encrypted.
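The KEK/DEK relationship can be illustrated with a conceptual envelope-encryption sketch using openssl. This is not the AOS implementation; it only shows why a rekey re-wraps the small DEK file while the bulk encrypted data stays untouched:

```shell
#!/bin/sh
set -e
work=$(mktemp -d) && cd "$work"

# Generate a data encryption key (DEK) and a key encryption key (KEK).
openssl rand -hex 32 > dek.hex
openssl rand -hex 32 > kek1.hex

# Encrypt the data once with the DEK.
echo "secret data" > data.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.hex -in data.txt -out data.enc

# Wrap (encrypt) the DEK with the KEK; only this small file depends on the KEK.
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek1.hex -in dek.hex -out dek.wrapped

# Rekey: create a new KEK and re-wrap only the DEK. data.enc is not touched.
openssl rand -hex 32 > kek2.hex
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:kek1.hex -in dek.wrapped -out dek.tmp
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek2.hex -in dek.tmp -out dek.wrapped
rm dek.tmp kek1.hex

# The data still decrypts with the unchanged DEK.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:dek.hex -in data.enc
```

The rekey step rewrites only `dek.wrapped`, which is why rotating a KEK is fast regardless of how much data the cluster holds.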

About this task

To change the KEKs for a cluster, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select Manage Keys and click the Rekey button under Software Encryption.
    Note: The Rekey button appears only when cluster protection is active.
    Note: If the cluster is already protected and a new key management server is added, you must click the Rekey button to use the new key management server for storing secrets.
    Figure. Cluster Encryption Screen

    Note: The system automatically regenerates the leader key yearly.

Destroying Data (Software Only)

Data on the AOS cluster is always encrypted, and the data encryption key (DEK) used to read the encrypted data is known only to AOS. All data on the drives can effectively be destroyed (that is, made permanently unreadable) by deleting the container or destroying the cluster. This is known as a crypto-erase.

About this task

Note: To help ensure that your data is secure, you cannot disable software-only data-at-rest encryption once it is enabled. Nutanix recommends regularly backing up your data, encryption keys, and key management server.

To crypto-erase the container or cluster, do the following:

Procedure

  1. Delete the storage container or destroy the cluster.
    • For information on how to delete a storage container, see Modifying a Storage Container in the Prism Web Console Guide.
    • For information on how to destroy a cluster, see Destroying a Cluster in the Acropolis Advanced Administration Guide.
    Note:

    When you delete a storage container, the Curator scans and deletes the DEK and KEK keys automatically.

    When you destroy a cluster, then:

    • the Native Key Manager (local) destroys the master key shares and the encrypted DEKs/KEKs.
    • the Native Key Manager (remote) retains the root key on the PC if the cluster is still registered to a PC when it is destroyed. You must unregister a cluster from the PC and then destroy the cluster to delete the root key.
    • the External Key Manager deletes the encrypted DEKs. However, the KEKs remain on the EKM. You must use an external key manager UI to delete the KEKs.
  2. Delete the key backup files, if any.

Switching from SED-EKM to Software-LKM

This section describes the steps to switch from the SED and external KMS combination to the software-only encryption and local key manager (LKM) combination.

About this task

To switch from SED-EKM to Software-LKM, do the following.

Procedure

  1. Perform the steps for the software-only encryption with External KMS. For more information, see Configuring Data-at-Rest Encryption (Software Only).
    After the background task completes, all the data gets encrypted by the software. The time taken to complete the task depends on the amount of data and foreground I/O operations in the cluster.
  2. Disable the SED encryption. Ensure that all the disks are unprotected.
    For more information, see Enabling/Disabling Encryption (SEDs).
  3. Switch the key management server from the External KMS to Local Key Manager. For more information, see Switching between Native Key Manager and External Key Manager.

Configuring Dual Encryption

About this task

Dual encryption protects the data on the cluster using both SED and software-only encryption. An external key manager is required to store the keys for dual encryption; the Native KMS is not supported.

To configure dual encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select the check boxes to enable both Drive-based and Software-based encryption.
  3. Click Save Encryption Type.
    Figure. Dual Encryption

  4. Continue with the rest of the encryption configuration. See:
    • Configuring Data-at-Rest Encryption (Software Only)
    • Configuring Data-at-Rest Encryption (SEDs)

Backing up Keys

About this task

You can take a backup of encryption keys:

  • when you enable Software-only Encryption for the first time
  • after you regenerate the keys

Backing up encryption keys is critical protection for the very unlikely situation in which keys become corrupted.

You can download the key backup file for a single cluster from Prism Element (PE) or for all clusters from Prism Central (PC). To download the key backup file for all clusters, see Taking a Consolidated Backup of Keys (Prism Central).

To download the key backup file for a cluster, do the following:

Procedure

  1. Log on to the Prism Element web console.
  2. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  3. In the Cluster Encryption page, select Manage Keys .
  4. Enter and confirm the password.
  5. Click the Download Key Backup button.

    The backup file is saved in the default download location on your local machine.

    Note: Ensure you move the backup key file to a safe location.

Taking a Consolidated Backup of Keys (Prism Central)

If you are using the Native KMS option with software encryption for your clusters, you can take a consolidated backup of all the keys from Prism Central.

About this task

To take a consolidated backup of keys for software encryption-enabled clusters (Native KMS-only), do the following:

Procedure

  1. Log on to the Prism Central web console.
  2. Click the hamburger icon, then select Clusters > List view.
  3. Select a cluster, go to Actions, and then select Manage & Backup Keys.
  4. Download the backup keys:
    1. In Password, enter your password.
    2. In Confirm Password, reenter your password.
    3. To change the encryption key, select the Rekey Encryption Key (KEK) box.
    4. To download the backup key, click Backup Key.
    Note: Ensure that you move the backup key file to a safe location.

Importing Keys

You can import the encryption keys from a backup. If you backed up your keys to an external key manager (EKM), note the EKM-specific command in this topic.

About this task

Note: Nutanix recommends that you contact Nutanix Support for this operation. Extended cluster downtime might result if you perform this task incorrectly.

Procedure

  1. Log on to any Controller VM in the cluster with SSH.
  2. Retrieve the encryption keys stored on the cluster and verify that all the keys you want to retrieve are listed.
    In this example, the password is Nutanix.123, and date is the timestamp portion of the backup file name.
    mantle_recovery_util --backup_file_path=/home/nutanix/encryption_key_backup_date \
    --password=Nutanix.123 --list_key_ids=true 
  3. Import the keys into the cluster.
    mantle_recovery_util --backup_file_path=/home/nutanix/key_backup \
    --password=Nutanix.123 --interactive_mode 
  4. If you are using an external key manager such as IBM Security Key Lifecycle Manager, Gemalto Safenet, or Vormetric Data Security Manager, use the --store_kek_remotely option to import the keys into the cluster.
    In this example, date is the timestamp portion of the backup file name.
    mantle_recovery_util --backup_file_path path/encryption_key_backup_date \
     --password key_password --store_kek_remotely

Securing Traffic Through Network Segmentation

Network segmentation enhances security, resilience, and cluster performance by isolating a subset of traffic to its own network.

You can achieve traffic isolation in one or more of the following ways:

Isolating Backplane Traffic by using VLANs (Logical Segmentation)
You can separate management traffic from storage replication (or backplane) traffic by creating a separate network segment (LAN) for storage replication. For more information about the types of traffic seen on the management plane and the backplane, see Traffic Types In a Segmented Network.

To enable the CVMs in a cluster to communicate over these separated networks, the CVMs are multihomed. Multihoming is achieved by adding a virtual network interface card (vNIC) to the Controller VM and placing the new interface on the backplane network. Additionally, the hypervisor is assigned an interface on the backplane network.

The traffic associated with the CVM interfaces and host interfaces on the backplane network can be secured further by placing those interfaces on a separate VLAN.

In this type of segmentation, both network segments continue to use the same external bridge and therefore use the same set of physical uplinks. For physical separation, see Physically Isolating the Backplane Traffic on an AHV Cluster.

Isolating backplane traffic from management traffic requires minimal configuration through the Prism web console. No manual host (hypervisor) configuration steps are required.

For information about isolating backplane traffic, see Isolating the Backplane Traffic Logically on an Existing Cluster (VLAN-Based Segmentation Only).

Isolating Backplane Traffic Physically (Physical Segmentation)

You can physically isolate the backplane traffic (intra-cluster traffic) from the management traffic (Prism, SSH, SNMP) by confining the backplane traffic to a separate vNIC on the CVM and using a dedicated virtual network that has its own physical NICs. This type of segmentation therefore offers true physical separation of the backplane traffic from the management traffic.

You can use Prism to configure the vNIC on the CVM and configure the backplane traffic to communicate over the dedicated virtual network. However, you must first manually configure the virtual network on the hosts and associate it with the physical NICs that it requires for true traffic isolation.

For more information about physically isolating backplane traffic, see Physically Isolating the Backplane Traffic on an AHV Cluster.

Isolating service-specific traffic
You can also secure traffic associated with a service (for example, Nutanix Volumes) by confining its traffic to a separate vNIC on the CVM and using a dedicated virtual network that has its own physical NICs. This type of segmentation therefore offers true physical separation for service-specific traffic.

You can use Prism to create the vNIC on the CVM and configure the service to communicate over the dedicated virtual network. However, you must first manually configure the virtual network on the hosts and associate it with the physical NICs that it requires for true traffic isolation. You need one virtual network for each service you want to isolate. For a list of the services whose traffic you can isolate in the current release, see Cluster Services That Support Traffic Isolation.

For information about isolating service-specific traffic, see Isolating Service-Specific Traffic.

Isolating Stargate-to-Stargate traffic over RDMA
Some Nutanix platforms support remote direct memory access (RDMA) for Stargate-to-Stargate service communication. You can create a separate virtual network for RDMA-enabled network interface cards. If a node has RDMA-enabled NICs, Foundation passes the NICs through to the CVMs during imaging. The CVMs use only the first of the two RDMA-enabled NICs for Stargate-to-Stargate communications. The virtual NIC on the CVM is named rdma0. Foundation does not configure the RDMA LAN. After creating a cluster, you need to enable RDMA by creating an RDMA LAN from the Prism web console. For more information about RDMA support, see Remote Direct Memory Access in the NX Series Hardware Administration Guide .

For information about isolating backplane traffic on an RDMA cluster, see Isolating the Backplane Traffic on an Existing RDMA Cluster.

Traffic Types In a Segmented Network

The traffic entering and leaving a Nutanix cluster can be broadly classified into the following types:

Backplane traffic
Backplane traffic is intra-cluster traffic that is necessary for the cluster to function, and it comprises traffic between CVMs and traffic between CVMs and hosts for functions such as storage RF replication, host management, high availability, and so on. This traffic uses eth2 on the CVM. In AHV, VM live migration traffic is also backplane, and uses the AHV backplane interface, VLAN, and virtual switch when configured. For nodes that have RDMA-enabled NICs, the CVMs use a separate RDMA LAN for Stargate-to-Stargate communications.
Management traffic
Management traffic is administrative traffic, or traffic associated with Prism and SSH connections, remote logging, SNMP, and so on. The current implementation simplifies the definition of management traffic to be any traffic that is not on the backplane network, and therefore also includes communications between user VMs and CVMs. This traffic uses eth0 on the CVM.

Traffic on the management plane can be further isolated per service or feature. An example of this type of traffic is the traffic that the cluster receives from external iSCSI initiators (Nutanix Volumes iSCSI traffic). For a list of services supported in the current release, see Cluster Services That Support Traffic Isolation.
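On a Linux CVM, the interface-to-traffic mapping described above can be inspected with standard iproute2 commands. This is an illustrative sketch: the eth2 and service-specific ntnx interfaces exist only after segmentation is configured, and 192.168.10.5 is a placeholder for a peer CVM's backplane IP:

```shell
#!/bin/sh
# Show every interface with its state and addresses in brief form.
ip -br addr show

# Ask the kernel which interface would carry traffic to a given peer;
# on a segmented cluster, a backplane peer IP should resolve to eth2.
ip route get 192.168.10.5 2>/dev/null || echo "no route to peer (placeholder IP)"
```

Comparing the interface reported by `ip route get` against the expected segment (eth0 for management, eth2 for backplane) is a quick way to confirm that traffic is leaving on the intended network.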

Segmented and Unsegmented Networks

In the default unsegmented network in a Nutanix cluster (ESXi and AHV), the Controller VM has two virtual network interfaces—eth0 and eth1.

Interface eth0 is connected to the default external virtual switch, which is in turn connected to the external network through a bond or NIC team that contains the host physical uplinks.

Interface eth1 is connected to an internal network that enables the CVM to communicate with the hypervisor.

In an unsegmented network (see the figures Unsegmented Network - ESXi Cluster and Unsegmented Network - AHV Cluster), all external CVM traffic, whether backplane or management traffic, uses interface eth0. These interfaces are on the default VLAN on the default virtual switch.

Figure. Unsegmented Network - ESXi Cluster

In AHV, VM live migration traffic is also backplane traffic, and uses the AHV backplane interface, VLAN, and virtual switch when configured.

Figure. Unsegmented Network - AHV Cluster

If you further isolate service-specific traffic, additional vNICs are created on the CVM. Each service requiring isolation is assigned a dedicated virtual NIC on the CVM. The NICs are named ntnx0, ntnx1, and so on. Each service-specific NIC is placed on a configurable existing or new virtual network (vSwitch or bridge) and a VLAN and IP subnet are specified.

Network with Segmentation

In a segmented network, management traffic uses CVM interface eth0 and additional services can be isolated to different VLANs or virtual switches. In backplane segmentation, the backplane traffic uses interface eth2. The backplane network uses either the default VLAN or, optionally, a separate VLAN that you specify when segmenting the network. In ESXi, you must select a port group for the new vmkernel interface. In AHV, this internal interface is created automatically in the selected virtual switch. For physical separation of the backplane network, create this new port group on a separate virtual switch in ESXi, or select the desired virtual switch in the AHV GUI.

If you want to isolate service-specific traffic such as Volumes or Disaster Recovery as well as backplane traffic, then additional vNICs are needed on the CVM, but no new vmkernel adapters or internal interfaces are required. AOS creates additional vNICs on the CVM. Each service that requires isolation is assigned a dedicated vNIC on the CVM. The NICs are named ntnx0, ntnx1, and so on. Each service-specific NIC is placed on a configurable existing or new virtual network (vSwitch or bridge) and a VLAN and IP subnet are specified.

You can choose to perform backplane segmentation alone, with no other forms of segmentation. You can also choose to use one or more types of service-specific segmentation with or without backplane segmentation. In all of these cases, you can choose to segment any service to either the existing or a new virtual switch for further physical traffic isolation. The combination selected is driven by the security and networking requirements of the deployment. In most cases, the default configuration with no segmentation of any kind is recommended because of its simplicity and ease of deployment.

The following figure shows an implementation scenario where backplane and service-specific segmentation are configured with two vSwitches on ESXi hypervisors.

Figure. Backplane and Service Specific Segmentation Configured with two vSwitches on an ESXi Cluster

Here are the CVM to ESXi hypervisor connection details:

  • The eth0 vNIC on the CVM and vmk0 on the host carry management traffic and connect to the hypervisor through the existing PGm (port group) on vSwitch0.
  • The eth2 vNIC on the CVM and vmk2 on the host carry backplane traffic and connect to the hypervisor through a new user-created PGb on the existing vSwitch.
  • The ntnx0 vNIC on the CVM carries iSCSI traffic and connects to the hypervisor through PGi on vSwitch1. No new vmkernel adapter is required.
  • The ntnx1 vNIC on the CVM carries DR traffic and connects to the hypervisor through PGd on vSwitch2. Here as well, no new vmkernel adapter is required.

The following figure shows an implementation scenario where backplane and service-specific segmentation are configured with two vSwitches on AHV hypervisors.

Figure. Backplane and Service Specific Segmentation Configured with two vSwitches on an AHV Cluster

Here are the CVM to AHV hypervisor connection details:

  • The eth0 vNIC on the CVM carries management traffic and connects to the hypervisor through the existing vnet0.
  • Other vNICs, such as eth2, ntnx0, and ntnx1, connect to the hypervisor through the auto-created interfaces on either the existing or a new vSwitch.
Note: In the above figure, the interface name 'br0-bp' is read as 'br0-backplane'.

The following table describes the vNIC, port group (PG), VM kernel (vmk), virtual network (vnet), and virtual switch connections for the CVM and hypervisor in different implementation scenarios. The table captures information for both ESXi and AHV hypervisors:

Table 1. CVM vNIC connections for each implementation scenario

Backplane Segmentation with 1 vSwitch
  • eth0 (DR, iSCSI, and management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • eth2 (backplane traffic). ESXi: new vmk2 via PGb on vSwitch0; CVM vNIC via PGb on vSwitch0. AHV: auto-created interfaces on bridge br0.

Backplane Segmentation with 2 vSwitches
  • eth0 (management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • eth2 (backplane traffic). ESXi: new vmk2 via PGb on new vSwitch; CVM vNIC via PGb on new vSwitch. AHV: auto-created interfaces on new virtual switch.

Service-Specific Segmentation for Volumes with 1 vSwitch
  • eth0 (DR, backplane, and management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • ntnx0 (iSCSI (Volumes) traffic). ESXi: CVM vNIC via PGi on vSwitch0. AHV: auto-created interface on existing br0.

Service-Specific Segmentation for Volumes with 2 vSwitches
  • eth0 (DR, backplane, and management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • ntnx0 (iSCSI (Volumes) traffic). ESXi: CVM vNIC via PGi on new vSwitch. AHV: auto-created interface on new virtual switch.

Service-Specific Segmentation for DR with 1 vSwitch
  • eth0 (iSCSI, backplane, and management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • ntnx1 (DR traffic). ESXi: CVM vNIC via PGd on vSwitch0. AHV: auto-created interface on existing br0.

Service-Specific Segmentation for DR with 2 vSwitches
  • eth0 (iSCSI, backplane, and management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • ntnx1 (DR traffic). ESXi: CVM vNIC via PGd on new vSwitch. AHV: auto-created interface on new virtual switch.

Backplane and Service-Specific Segmentation with 1 vSwitch
  • eth0 (management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • eth2 (backplane traffic). ESXi: new vmk2 via PGb on vSwitch0; CVM vNIC via PGb on vSwitch0. AHV: auto-created interfaces on br0.
  • ntnx0 (iSCSI traffic). ESXi: CVM vNIC via PGi on vSwitch0. AHV: auto-created interface on br0.
  • ntnx1 (DR traffic). ESXi: CVM vNIC via PGd on vSwitch0. AHV: auto-created interface on br0.

Backplane and Service-Specific Segmentation with 2 vSwitches
  • eth0 (management traffic). ESXi: vmk0 via existing PGm on vSwitch0. AHV: existing vnet0.
  • eth2 (backplane traffic). ESXi: new vmk2 via PGb on new vSwitch; CVM vNIC via PGb on new vSwitch. AHV: auto-created interfaces on new virtual switch.
  • ntnx0 (iSCSI traffic). ESXi: CVM vNIC via PGi on vSwitch1; no new user-defined vmkernel adapter is required. AHV: auto-created interface on new virtual switch.
  • ntnx1 (DR traffic). ESXi: CVM vNIC via PGd on vSwitch2; no new user-defined vmkernel adapter is required. AHV: auto-created interface on new virtual switch.

Implementation Considerations

Supported Environment

Network segmentation is supported in the following environment:

  • The hypervisor must be one of the following:
    • For network segmentation by traffic type (separating backplane traffic from management traffic):
      • AHV
      • ESXi
      • Hyper-V
    • For service-specific traffic isolation:
      • AHV
      • ESXi
  • For logical network segmentation, AOS version must be 5.5 or later. For physical segmentation and service-specific traffic isolation, the AOS version must be 5.11 or later.
  • RDMA requirements:
    • Network segmentation is supported with RDMA for AHV and ESXi hypervisors only.
    • For more information about RDMA, see Remote Direct Memory Access in the NX Series Hardware Administration Guide .

Prerequisites

For Nutanix Volumes

Stargate does not monitor the health of a segmented network, so if physical network segmentation is configured, network failures or connectivity issues are not tolerated. To overcome this issue, configure redundancy in the network: use two or more uplinks in a fault-tolerant configuration, connected to two separate physical switches.

For Disaster Recovery

  • Ensure that the VLAN and subnet that you plan to use for the network segment are routable.
  • Make sure that you have a pool of IP addresses to specify when configuring segmentation. For each cluster, you need n+1 IP addresses, where n is the number of nodes in the cluster. The additional IP address is for the virtual IP address requirement.
  • Enable network segmentation for disaster recovery at both sites (local and remote) before configuring remote sites at those sites.
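The n + 1 IP address requirement above can be checked ahead of time when planning the pool. The following is a minimal illustrative sketch (not Nutanix tooling); `dr_ip_requirement` and `pool_is_sufficient` are hypothetical helper names:

```python
import ipaddress

def dr_ip_requirement(num_nodes: int) -> int:
    """IPs needed for DR network segmentation: one per node plus
    one for the cluster virtual IP (n + 1, per the prerequisites)."""
    return num_nodes + 1

def pool_is_sufficient(pool_start: str, pool_end: str, num_nodes: int) -> bool:
    """Check that a contiguous address pool covers the n + 1 requirement."""
    start = ipaddress.IPv4Address(pool_start)
    end = ipaddress.IPv4Address(pool_end)
    available = int(end) - int(start) + 1
    return available >= dr_ip_requirement(num_nodes)
```

For example, a 4-node cluster needs 5 addresses, so a pool from 10.10.50.10 through 10.10.50.14 is exactly sufficient.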

Limitations

For Nutanix Volumes

  • If network segmentation is enabled for Volumes, volume group attachments are not recovered during VM recovery.
  • Nutanix service VMs such as Objects worker nodes continue to communicate with the CVM eth0 interface when using Volumes for iSCSI traffic. Other external clients such as Files use the new service-specific CVM interface.

For Disaster Recovery

The system does not support configuring Leap DR together with DR service-specific traffic isolation.

Cluster Services That Support Traffic Isolation

You can isolate traffic associated with the following services to its own virtual network:

  • Management (The default network that cannot be moved from CVM eth0)

  • Backplane

  • RDMA

  • Service Specific Disaster Recovery

  • Service Specific Volumes

Configurations in Which Network Segmentation Is Not Supported

Network segmentation is not supported in the following configurations:

  • Clusters on which the CVMs have a manually created eth2 interface.
  • Clusters on which the eth2 interface on one or more CVMs have been assigned an IP address manually. During an upgrade to an AOS release that supports network segmentation, an eth2 interface is created on each CVM in the cluster. Even though the cluster does not use these interfaces until you configure network segmentation, you must not manually configure these interfaces in any way.
Caution:

Nutanix has deprecated support for manually configured multi-homed CVM network interfaces in AOS 5.15 and later. Such a manual configuration can lead to unexpected issues on these releases. If you have manually configured an eth2 interface on the CVM, see KB-9479 and Nutanix Field Advisory #78 for details on how to remove the eth2 interface.
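As a rough pre-check for the unsupported configuration described in the caution above, you can inspect `ip addr show eth2` output captured from a CVM. This is only an illustrative heuristic sketch; the authoritative removal steps are in KB-9479:

```python
import re

def eth2_has_manual_ip(ip_addr_output: str) -> bool:
    """Heuristic: report whether any IPv4 address is assigned to eth2,
    given the text of `ip addr show eth2` captured from a CVM.
    A match suggests the manually configured state the caution warns about."""
    return re.search(r"inet \d+\.\d+\.\d+\.\d+", ip_addr_output) is not None
```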

Configuring the Network on an AHV Host

These steps describe how to configure host networking for physical and service-specific network segmentation on an AHV host. You must perform these steps before you configure physical or service-specific traffic isolation. If you are configuring networking on an ESXi host, perform the equivalent steps by referring to the ESXi documentation; on ESXi, you create vSwitches and port groups to achieve the same results.

About this task

For information about the procedures to create, update and delete a virtual switch in Prism Element Web Console, see Configuring a Virtual Network for Guest VMs in the Prism Web Console Guide .

Note: The term unconfigured node in this procedure refers to a node that is not part of a cluster and is being prepared for cluster expansion.

To configure host networking for physical and service-specific network segmentation, do the following:

Note: If you are segmenting traffic on nodes that are already part of a cluster, perform the first step. If you are segmenting traffic on an unconfigured node that is not part of a cluster, perform the second step directly.

Procedure

  1. If you are segmenting traffic on nodes that are already part of a cluster, do the following:
    1. From the default virtual switch vs0, remove the uplinks that you want to add to the new virtual switch (which you create in the next step). You remove the uplinks by updating the default virtual switch.

      For information about updating the default virtual switch vs0 to remove the uplinks, see Creating or Updating a Virtual Switch in the Prism Web Console Guide .

    2. Create a virtual switch for the backplane traffic or service whose traffic you want to isolate.
      Add the uplinks to the new virtual switch.

      For information about creating a new virtual switch, see Creating or Updating a Virtual Switch in the Prism Web Console Guide .

  2. If you are segmenting traffic on an unconfigured node (new host) that is not part of a cluster, do the following:
    1. Create a bridge for the backplane traffic or service whose traffic you want to isolate by logging on to the new AHV host.
      ovs-vsctl add-br br1
    2. Log on to the host CVM and keep only eth0 and eth1 in the default bridge br0.
      manage_ovs --bridge_name br0 --interfaces eth0,eth1 --bond_name br0-up --bond_mode active-backup update_uplinks
    3. Log on to the host CVM and then add eth2 and eth3 to the uplink bond of br1.
      manage_ovs --bridge_name br1 --interfaces eth2,eth3 --bond_name br1-up --bond_mode active-backup update_uplinks
      Note: If this step is not done correctly, a network loop can be created that causes a network outage. Ensure that no other uplink interfaces exist on this bridge before adding the new interfaces, and always add interfaces into a bond.
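When preparing several unconfigured nodes, the commands in the steps above can be assembled programmatically. The following Python sketch only builds the command strings shown in the procedure (it does not run them); the bridge, interface, and bond names are the examples from the text, so adjust them for your hardware:

```python
def ahv_segmentation_commands(new_bridge="br1",
                              default_ifaces=("eth0", "eth1"),
                              backplane_ifaces=("eth2", "eth3"),
                              bond_mode="active-backup"):
    """Return, in order, the commands from steps 2.1 through 2.3 above."""
    return [
        # Step 2.1: create the bridge on the AHV host
        f"ovs-vsctl add-br {new_bridge}",
        # Step 2.2: keep only the default uplinks in br0 (run on the CVM)
        "manage_ovs --bridge_name br0 "
        f"--interfaces {','.join(default_ifaces)} "
        f"--bond_name br0-up --bond_mode {bond_mode} update_uplinks",
        # Step 2.3: bond the remaining uplinks into the new bridge
        f"manage_ovs --bridge_name {new_bridge} "
        f"--interfaces {','.join(backplane_ifaces)} "
        f"--bond_name {new_bridge}-up --bond_mode {bond_mode} update_uplinks",
    ]
```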

What to do next

Prism can configure a VLAN only on AHV hosts. Therefore, if the hypervisor is ESXi, in addition to configuring the VLAN on the physical switch, make sure to configure the VLAN on the port group.

If you are performing physical network segmentation, see Physically Isolating the Backplane Traffic on an Existing Cluster.

If you are performing service-specific traffic isolation, see Service-Specific Traffic Isolation.

Network Segmentation for Traffic Types (Backplane, Management, and RDMA)

You can segment the network on a Nutanix cluster in the following ways:

  • You can segment the network on an existing cluster by using the Prism web console.
  • You can segment the network when creating a cluster by using Nutanix Foundation 3.11.2 or later.

The following topics describe network segmentation procedures for existing clusters and changes during AOS upgrade and cluster expansion. For more information about segmenting the network when creating a cluster, see the Field Installation Guide.

Isolating the Backplane Traffic Logically on an Existing Cluster (VLAN-Based Segmentation Only)

You can segment the network on an existing cluster by using the Prism web console. You must configure a separate VLAN for the backplane network to achieve logical segmentation. The network segmentation process creates a separate network for backplane communications on the existing default virtual switch. The process then places the eth2 interfaces (which the process creates on the CVMs during upgrade) and the host interfaces on the newly created network. This method allows you to achieve logical segmentation of traffic over the selected VLAN. IP addresses from the specified subnet are assigned to each new interface, so you need two IP addresses per node. When you specify the VLAN ID, AHV places the newly created interfaces on the specified VLAN.

Before you begin

If your cluster has RDMA-enabled NICs, follow the procedure in Isolating the Backplane Traffic on an Existing RDMA Cluster.

  • For ESXi clusters, you must create and manage the port groups that are used for CVM and host backplane networking. Ensure that you create the port groups on the default virtual switch vs0 for the ESXi hosts and CVMs.

    Because backplane traffic segmentation is logical, it is based on the VLAN tagged on the port groups. Therefore, when creating the port groups, ensure that you tag the new port groups created for the ESXi hosts and CVMs with the appropriate VLAN ID. Consult your networking team to acquire the necessary VLANs for use with Nutanix nodes.

  • For new backplane networks, you must specify a non-routable subnet. The interfaces on the backplane network are automatically assigned IP addresses from this subnet, so reserve the entire subnet for the backplane network segmentation. See the Configuring Backplane IP Pool topic to create an IP pool for backplane interfaces.
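Because the process needs two IP addresses per node and resizing the backplane subnet later involves cluster downtime, it is worth sizing the subnet for growth up front. A minimal illustrative Python check (assuming IPv4; `backplane_subnet_ok` is a hypothetical helper, not part of AOS):

```python
import ipaddress

def backplane_subnet_ok(subnet: str, current_nodes: int,
                        planned_nodes: int = 0) -> bool:
    """The backplane needs two IPs per node (CVM eth2 plus the host
    interface). Size the subnet for planned growth, since enlarging
    it later requires cluster downtime."""
    net = ipaddress.IPv4Network(subnet)
    usable = net.num_addresses - 2   # exclude network and broadcast addresses
    return usable >= 2 * (current_nodes + planned_nodes)
```

For example, a /28 (14 usable addresses) exactly covers a 4-node cluster planned to grow to 7 nodes, while a /29 cannot even cover 4 nodes.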

About this task

You need separate VLANs for Management network and Backplane network. For example, configure VLAN 100 as Management network VLAN and VLAN 200 as Backplane network VLAN on the Ethernet links that connect the Nutanix nodes to the physical switch.
Note: Nutanix does not control these VLAN IDs. Consult your networking team to acquire VLANs for the Management and Backplane networks.

To segment the network on an existing AHV cluster for a backplane LAN, follow the procedure described in the Physically Isolating the Backplane Traffic on an AHV Cluster topic.

To segment the network on existing ESXi and Hyper-V clusters for a backplane LAN, do the following:

Note:

In this method, for AHV nodes, logical (VLAN-based) segmentation is done on the default bridge. The process creates the host backplane interface on the Backplane Network port group on ESXi, or the br0-backplane interface on the br0 bridge on AHV. The eth2 interface on the CVM is on the CVM Backplane Network by default.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
    The Network Configuration dialog box appears.
  2. In the Network Configuration > Internal Interfaces > Backplane LAN row, click Configure .
    The Create Interface dialog box appears.
  3. In the Create Interface dialog box, provide the necessary information.
    • In Subnet IP , specify a non-routable subnet.

      Ensure that the subnet has sufficient IP addresses. The segmentation process requires two IP addresses per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.

    • In Netmask , specify the netmask.
    • If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.

      Nutanix recommends that you use a VLAN. If you do not specify a VLAN ID, the default VLAN on the virtual switch is used.

  4. Click Verify and Save .
    The network segmentation process creates the backplane network if the network settings that you specified pass validation.

Isolating the Backplane Traffic on an Existing RDMA Cluster

Segment the network on an existing RDMA cluster by using the Prism web console.

About this task

The network segmentation process creates a separate network for RDMA communications on the existing default virtual switch and places the rdma0 interface (created on the CVMs during upgrade) and the host interfaces on the newly created network. From the specified subnet, IP addresses are assigned to each new interface. Two IP addresses are therefore required per node. If you specify the optional VLAN ID, the newly created interfaces are placed on the VLAN. A separate VLAN is highly recommended for the RDMA network to achieve true segmentation.

Before you begin

  • For new RDMA networks, you must specify a non-routable subnet. The interfaces on the RDMA network are automatically assigned IP addresses from this subnet, so reserve the entire subnet for the RDMA network alone.
  • If you plan to specify a VLAN for the RDMA network, make sure that the VLAN is configured on the physical switch ports to which the nodes are connected.
  • Configure the switch interface as a Trunk port.
  • Ensure that the cluster was configured to support RDMA during installation with Foundation.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
    The Network Configuration dialog box is displayed.
  2. Click the Internal Interfaces tab.
  3. Click Configure in the RDMA row.
    Ensure that you have configured the switch interface as a trunk port.
    The Create Interface dialog box is displayed.
    Figure. Create Interface Dialog Box

  4. In the Create Interface dialog box, do the following:
    1. In Subnet IP and Netmask , specify a non-routable subnet and netmask, respectively. Make sure that the subnet can accommodate cluster expansion in the future.
    2. In VLAN , specify a VLAN ID for the RDMA LAN.
      A VLAN ID is optional but highly recommended for true network segmentation and enhanced security.
    3. From the PFC list, select the priority flow control value configured on the physical switch port.
  5. Click Verify and Save .
  6. Click Close .

Physically Isolating the Backplane Traffic on an Existing Cluster

By using the Prism web console, you can configure the eth2 interface on a separate virtual switch if you wish to isolate the backplane traffic to a separate physical network.

If you do not configure a separate virtual switch, the backplane traffic uses another VLAN on the default virtual switch for VLAN-based traffic isolation.

A virtual switch is known as the following in different hypervisors.

Hypervisor Virtual Switch
AHV Virtual Switch
ESXi vSwitch
Hyper-V Hyper-V Virtual Switch

The network segmentation process creates a separate network for backplane communications on the new virtual switch. The segmentation process places the CVM eth2 interfaces and the host interfaces on the newly created network. Specify a subnet with a network mask and, optionally, a VLAN ID. IP addresses from the specified subnet or an IP pool are assigned to each new interface in the new network. You require a minimum of two IP addresses per node.

If you specify the optional VLAN ID, the newly created interfaces are placed on that VLAN.

Nutanix highly recommends a separate VLAN for the backplane network to achieve true segmentation.

Requirements and Limitations

  • Ensure that physical isolation of backplane traffic is supported by the AOS version deployed.
  • Ensure that you configure the network (port groups or bridges) on the hosts and associate the network with the required physical NICs before you enable physical isolation of the backplane traffic.

    For AHV, see Configuring the Network on an AHV Host. For ESXi and Hyper-V, see the VMware and Microsoft documentation, respectively.

  • Segmenting backplane traffic can involve up to two rolling reboots of the CVMs. The first rolling reboot is done to move the backplane interface (eth2) of the CVM to the selected port group, virtual switch or Hyper-V switch. This is done only for CVM(s) whose backplane interface is not already connected to the selected port group, virtual switch or Hyper-V switch. The second rolling reboot is done to migrate the cluster services to the newly configured backplane interface.
Physically Isolating the Backplane Traffic on an AHV Cluster

Before you begin

On the AHV hosts, do the following:

  1. From the default virtual switch vs0, remove the uplinks (physical NICs) that you want to add to a new virtual switch you create for the backplane traffic in the next step.
  2. Create a virtual switch for the backplane traffic.

    Add the uplinks to the new bond when you create the new virtual switch.

See Configuring the Network on an AHV Host for instructions about how to perform these tasks on a host.

Note: Before you perform the following procedure, ensure that the uplinks you added to the virtual switch are in the UP state.

About this task

Perform the following procedure to physically segment the backplane traffic on an AHV cluster.

Procedure

  1. Shut down all the guest VMs in the cluster from within the guest OS or use the Prism Element web console.
  2. Place all nodes of a cluster into the maintenance mode.
    1. Use SSH to log on to a Controller VM in the cluster.
    2. Determine the IP address of the node you want to put into the maintenance mode:
      nutanix@cvm$ acli host.list
      Note the value of Hypervisor IP for the node you want to put in the maintenance mode.
    3. Put the node into the maintenance mode:
      nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-IP-address [wait="{ true | false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]
      Note: Never put the Controller VM and AHV host into maintenance mode on single-node clusters. Shut down user VMs before proceeding with disruptive changes.

      Replace hypervisor-IP-address with either the IP address or host name of the AHV host that you want to put into the maintenance mode.

      The following are optional parameters for running the acli host.enter_maintenance_mode command:

      • wait
      • non_migratable_vm_action

      Do not continue if the host has failed to enter the maintenance mode.

    4. Verify if the host is in the maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals EnteredMaintenanceMode and schedulable equals False.

  3. Enable backplane network segmentation.
    1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
    2. On the Internal Interfaces tab, in the Backplane LAN row, click Configure .
    3. In the Backplane LAN dialog box, do the following:
      • In Subnet IP , specify a non-routable subnet that is different from the subnet used by the AHV host and CVMs.

        The AOS CVM default route uses the CVM eth0 interface, and there is no route on the backplane interface. Therefore, Nutanix recommends only using a non-routable subnet for the backplane network. To avoid split routing, do not use a routable subnet for the backplane network.

        Make sure that the backplane subnet has a sufficient number of IP addresses. Two IP addresses are required per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.

      • In Netmask , specify the network mask.
      • If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.

        Nutanix strongly recommends configuring a separate VLAN. If you do not specify a VLAN ID, AOS applies the untagged VLAN on the virtual switch.

      • In the Virtual Switch list, select the virtual switch you created for the backplane traffic.
    4. Click Verify and Save .
      If the network settings you specified pass validation, the backplane network is created and the CVMs perform a reboot in a rolling fashion (one at a time), after which the services use the new backplane network. The progress of this operation can be tracked on the Prism tasks page.
  4. Log on to a CVM in the cluster with SSH and stop Acropolis cluster-wide:
    nutanix@cvm$ allssh genesis stop acropolis 
  5. Restart Acropolis cluster-wide:
    nutanix@cvm$ cluster start 
  6. Remove all nodes from the maintenance mode.
    1. From any CVM in the cluster, run the following command to exit the AHV host from the maintenance mode:
      nutanix@cvm$ acli host.exit_maintenance_mode host-ip

      Replace host-ip with the IP address of the host.

      This command migrates (live migration) all the VMs that were previously running on the host back to the host.

    2. Verify if the host has exited the maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals kAcropolisNormal or AcropolisNormal and schedulable equals True.

  7. Power on the guest VMs from the Prism Element web console.
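The subnet guidance in step 3 (a non-routable subnet distinct from the one used by the AHV hosts and CVMs) can be sanity-checked before you open the dialog box. The following is an illustrative Python sketch, assuming IPv4 and treating private (RFC 1918) ranges as non-routable; `validate_backplane_subnet` is a hypothetical helper, not part of AOS:

```python
import ipaddress

def validate_backplane_subnet(backplane_cidr: str, management_cidr: str) -> list:
    """Return a list of problems with a proposed backplane subnet,
    following the guidance above: use a non-routable (private) subnet
    that does not overlap the management subnet of the hosts and CVMs."""
    problems = []
    bp = ipaddress.IPv4Network(backplane_cidr)
    mgmt = ipaddress.IPv4Network(management_cidr)
    if not bp.is_private:
        problems.append("backplane subnet is not a private (non-routable) range")
    if bp.overlaps(mgmt):
        problems.append("backplane subnet overlaps the management subnet")
    return problems
```

An empty list means the proposed subnet passes both checks.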
Physically Isolating the Backplane Traffic on an ESXi Cluster

Before you begin

On the ESXi hosts, do the following:

  1. Create a vSwitch for the backplane traffic.
  2. From vSwitch0, remove the uplinks (physical NICs) that you want to add to the vSwitch you created for the backplane traffic.
  3. On the backplane vSwitch, create one port group for the CVM and another for the host. Ensure that at least one uplink is present in the Active Adaptors list for each port group if you have overridden the failover order.

See the ESXi documentation for instructions about how to perform these tasks.

Note: Before you perform the following procedure, ensure that the uplinks you added to the vSwitch are in the UP state.

About this task

Perform the following procedure to physically segment the backplane traffic.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
  2. On the Internal Interfaces tab, in the Backplane LAN row, click Configure .
  3. In the Backplane LAN dialog box, do the following:
    1. In Subnet IP , specify a non-routable subnet.
      If you do not specify a secure non-routable subnet, the backplane network uses the routable subnet on the default gateway. AOS does not route packets from the backplane network. Therefore, Nutanix recommends using only a secure non-routable subnet for the backplane network. Do not use a routable subnet for this purpose.

      Make sure that the subnet has a sufficient number of IP addresses. Two IP addresses are required per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.

    2. In Netmask , specify the network mask.
    3. If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.
      Nutanix strongly recommends configuring a separate VLAN. If you do not specify a VLAN ID, AOS applies the default VLAN on the virtual switch.
    4. In the Host Port Group list, select the port group you created for the host.
    5. In the CVM Port Group list, select the port group you created for the CVM.
    Note:

    Nutanix clusters support both vSphere Standard Switches and vSphere Distributed Switches. However, you must configure only one type of virtual switch in a cluster: configure all the backplane and management traffic in a cluster on either vSphere Standard Switches or vSphere Distributed Switches. Do not mix Standard and Distributed vSwitches in a single cluster.

  4. Click Verify and Save .
    If the network settings you specified pass validation, the backplane network is created and the CVMs perform a reboot in a rolling fashion (one at a time), after which the services use the new backplane network. The progress of this operation can be tracked on the Prism tasks page.
Physically Isolating the Backplane Traffic on a Hyper-V Cluster

Before you begin

On the Hyper-V hosts, do the following:

  1. Create a Hyper-V Virtual Switch for the backplane traffic.
  2. From the default External Switch, remove the uplinks (physical NICs) that you want to add to the backplane Virtual Switch you created for the backplane traffic.
  3. On the backplane Virtual Switch, create a subnet and, optionally, assign a VLAN.

See the Hyper-V documentation on the Microsoft portal for instructions about how to perform these tasks.

Note: Before you perform the following procedure, ensure that the uplinks you added to the backplane Virtual Switch are in the UP state.

About this task

Perform the following procedure to physically segment the backplane traffic.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
  2. On the Internal Interfaces tab, in the Backplane LAN row, click Configure .
  3. In the Backplane LAN dialog box, do the following:
    1. In Subnet IP , specify a non-routable subnet.
      If you do not specify a secure non-routable subnet, the backplane network uses the routable subnet on the default gateway. AOS does not route packets from the backplane network. Therefore, Nutanix recommends using only a secure non-routable subnet for the backplane network. Do not use a routable subnet for this purpose.

      Make sure that the subnet has a sufficient number of IP addresses. Two IP addresses are required per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.

    2. In Netmask , specify the network mask.
    3. If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.
      Nutanix strongly recommends configuring a separate VLAN. If you do not specify a VLAN ID, AOS applies the default VLAN on the virtual switch.
    4. In the Bridge list, select the Hyper-V switch you created for the backplane traffic.
  4. Click Verify and Save .
    If the network settings you specified pass validation, the backplane network is created and the CVMs perform a reboot in a rolling fashion (one at a time), after which the services use the new backplane network. The progress of this operation can be tracked on the Prism tasks page.
    Note: Segmenting backplane traffic can involve up to two rolling reboots of the CVMs. The first rolling reboot moves the backplane interface (eth2) of the CVM to the selected port group or virtual switch. This reboot occurs only for CVMs whose backplane interface is not already connected to the selected port group or virtual switch. The second rolling reboot migrates the cluster services to the newly configured backplane interface.

Reconfiguring the Backplane Network

Backplane network reconfiguration is a CLI-driven procedure that you perform on any one of the CVMs in the cluster. The change is propagated to the remaining CVMs.

About this task

Caution: At the end of this procedure, the cluster stops and restarts, even if only the VLAN is changed, and therefore involves cluster downtime.

To reconfigure the cluster, do the following:

Procedure

  1. Log on to any CVM in the cluster using SSH.
  2. Reconfigure the backplane network.
    nutanix@cvm$ backplane_ip_reconfig [--backplane_vlan=vlan-id] \
    [--backplane_ip_pool=ip_pool_name]

    Replace vlan-id with the new VLAN ID, and ip_pool_name with the newly created backplane IP pool.

    See Configuring Backplane IP Pool to create a backplane IP pool.

    For example, reconfigure the backplane network to use VLAN ID 10 and newly created backplane IP pool.

    nutanix@cvm$ backplane_ip_reconfig --backplane_vlan=10 \
    --backplane_ip_pool=NewBackplanePool

    Output similar to the following is displayed:

    This operation will do a 'cluster stop', resulting in disruption of 
    cluster services. Do you still want to continue? (Type "yes" (without quotes) 
    to continue)
    Caution: During the reconfiguration process, you might receive an error message similar to the following.
    Failed to reach a node.
    You can safely ignore this error message; do not stop the script manually.
    Note: The backplane_ip_reconfig command is not supported on ESXi clusters with vSphere Distributed Switches. To reconfigure the backplane network on a vSphere Distributed Switch setup, disable the backplane network (see Disabling Network Segmentation on ESXi and Hyper-V Clusters) and enable it again with a different subnet or VLAN.
  3. Type yes to confirm that you want to reconfigure the backplane network.
    The reconfiguration procedure takes a few minutes and includes a cluster restart. If you type anything other than yes , network reconfiguration is aborted.
  4. After the process completes, verify that the backplane was reconfigured.
    1. Verify that the IP addresses of the eth2 interfaces on the CVM are set correctly.
      nutanix@cvm$ svmips -b
      Output similar to the following is displayed:
      172.30.25.1 172.30.25.3 172.30.25.5
    2. Verify that the IP addresses of the backplane interfaces of the hosts are set correctly.
      nutanix@cvm$ hostips -b
      Output similar to the following is displayed:
      172.30.25.2 172.30.25.4 172.30.25.6
    The svmips and hostips commands, when used with the -b option, display the IP addresses assigned to the interfaces on the backplane.
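When scripting this verification, the svmips -b and hostips -b output shown above can be checked automatically against the configured backplane subnet. A minimal illustrative sketch that parses the space-separated output; `verify_backplane_ips` is a hypothetical helper:

```python
import ipaddress

def verify_backplane_ips(svmips_b: str, hostips_b: str,
                         backplane_cidr: str) -> bool:
    """Given the space-separated output of `svmips -b` and `hostips -b`,
    confirm that the CVM and host counts match (one of each per node)
    and that every backplane address falls inside the backplane subnet."""
    net = ipaddress.IPv4Network(backplane_cidr)
    cvm_ips = svmips_b.split()
    host_ips = hostips_b.split()
    if len(cvm_ips) != len(host_ips):
        return False
    return all(ipaddress.IPv4Address(ip) in net for ip in cvm_ips + host_ips)
```

With the example output above and a backplane subnet of 172.30.25.0/24, the check passes.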

Disabling Network Segmentation on ESXi and Hyper-V Clusters

Disabling the backplane network is a CLI-driven procedure that you perform on any one of the CVMs in the cluster. The change is propagated to the remaining CVMs.

About this task

Procedure

  1. Log on to any CVM in the cluster using SSH.
  2. Disable the backplane network.
    • Use this CLI to disable network segmentation on an ESXi or Hyper-V cluster:
      nutanix@cvm$ network_segmentation --backplane_network --disable

      Output similar to the following appears:

      Operation type : Disable
      Network type : kBackplane
      Params : {}
      Please enter [Y/y] to confirm or any other key to cancel the operation

      Type Y/y to confirm that you want to reconfigure the backplane network.

      If you type Y/y, network segmentation is disabled and the cluster restarts in a rolling manner, one CVM at a time. If you type anything other than Y/y, network segmentation is not disabled.

      This method does not involve cluster downtime.

  3. Verify that network segmentation was successfully disabled. You can verify this in one of two ways:
    • Verify that the backplane is disabled.
      nutanix@cvm$ network_segment_status

      Output similar to the following is displayed:

      2017-11-23 06:18:23 INFO zookeeper_session.py:110 network_segment_status is attempting to connect to Zookeeper

      Network segmentation is disabled

    • Verify that the commands that show the backplane IP addresses of the CVMs and hosts list the management IP addresses: run the svmips and hostips commands once without the -b option and once with the -b option, and then compare the IP addresses shown in the output.
      nutanix@cvm$ svmips
      192.127.3.2 192.127.3.3 192.127.3.4
      nutanix@cvm$ svmips -b
      192.127.3.2 192.127.3.3 192.127.3.4
      nutanix@cvm$ hostips
      192.127.3.5 192.127.3.6 192.127.3.7
      nutanix@cvm$ hostips -b
      192.127.3.5 192.127.3.6 192.127.3.7

      In the example above, the output of the svmips and hostips commands with and without the -b option is the same, indicating that backplane network segmentation is disabled.
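The comparison above can also be scripted. The following Python sketch is purely illustrative (the real lists come from running svmips and hostips with and without -b on a CVM): segmentation is disabled when the management and backplane address lists match.

```python
def segmentation_disabled(mgmt_ips, backplane_ips):
    """Return True when the backplane interfaces carry the same
    addresses as the management interfaces, meaning no separate
    backplane network is configured."""
    return sorted(mgmt_ips) == sorted(backplane_ips)

# Example outputs of `svmips` and `svmips -b` from the text above.
svmips = "192.127.3.2 192.127.3.3 192.127.3.4".split()
svmips_b = "192.127.3.2 192.127.3.3 192.127.3.4".split()
print(segmentation_disabled(svmips, svmips_b))  # → True
```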

Disabling Network Segmentation on an AHV Cluster

About this task

You perform the backplane network reconfiguration procedure on any one of the CVMs in the cluster. The change propagates to the remaining CVMs.

Procedure

  1. Shut down all the guest VMs in the cluster from within the guest OS or use the Prism Element web console.
  2. Place all nodes of the cluster into maintenance mode.
    1. Use SSH to log on to a Controller VM in the cluster.
    2. Determine the IP address of the node that you want to put into maintenance mode:
      nutanix@cvm$ acli host.list
      Note the value of Hypervisor IP for the node that you want to put into maintenance mode.
    3. Put the node into maintenance mode:
      nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-IP-address [wait="{ true | false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]
      Note: Do not put the Controller VM and AHV host into maintenance mode on single-node clusters. Shut down user VMs before proceeding with disruptive changes.

      Replace hypervisor-IP-address with either the IP address or host name of the AHV host you want to shut down.

      The following are optional parameters for running the acli host.enter_maintenance_mode command:

      • wait
      • non_migratable_vm_action

      Do not continue if the host has failed to enter the maintenance mode.

    4. Verify that the host is in maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals EnteredMaintenanceMode and schedulable equals False .

  3. Disable backplane network segmentation from the Prism Web Console.
    1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration under Settings .
    2. In the Internal Interfaces tab, in the Backplane LAN row, click Disable .
      Figure. Disable Network Configuration

    3. Click Yes to disable Backplane LAN.

      This involves a rolling reboot of CVMs to migrate the cluster services back to the external interface.

  4. Log on to a CVM in the cluster with SSH and stop Acropolis cluster-wide:
    nutanix@cvm$ allssh genesis stop acropolis 
  5. Restart Acropolis cluster-wide:
    nutanix@cvm$ cluster start 
  6. Remove all nodes from the maintenance mode.
    1. From any CVM in the cluster, run the following command to exit the AHV host from the maintenance mode:
      nutanix@cvm$ acli host.exit_maintenance_mode host-ip

      Replace host-ip with the IP address of the host.

      This command migrates (live migration) all the VMs that were previously running on the host back to the host.

    2. Verify if the host has exited the maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals kAcropolisNormal or AcropolisNormal and schedulable equals True .

  7. Power on the guest VMs from the Prism Element web console.

Service-Specific Traffic Isolation

Isolating the traffic associated with a specific service is a two-step process. The process is as follows:

  • Configure the networks and uplinks on each host manually. Prism only creates the vNIC that the service requires, and it places that vNIC on the bridge or port group that you specify. Therefore, you must manually create the bridge or port group on each host and add the required physical NICs as uplinks to that bridge or port group.
  • Configure network segmentation for the service by using Prism. Create an extra vNIC for the service, specify any additional parameters that are required (for example, IP address pools), and specify the bridge or port group that you want to dedicate to the service.

Isolating Service-Specific Traffic

Before you begin

  • Ensure that you configure each host as described in Configuring the Network on an AHV Host.
  • Review Prerequisites.

About this task

To isolate a service to a separate virtual network, do the following:

Procedure

  1. Log on to the Prism web console and click the gear icon at the top-right corner of the page.
  2. In the left pane, click Network Configuration .
  3. In the details pane, on the Internal Interfaces tab, click Create New Interface .
    The Create New Interface dialog box is displayed.
  4. On the Interface Details tab, do the following:
    1. Specify a descriptive name for the network segment.
    2. (On AHV) Optionally, in VLAN ID , specify a VLAN ID.
      Make sure that the VLAN ID is configured on the physical switch.
    3. In Bridge (on AHV) or CVM Port Group (on ESXi), select the bridge or port group that you created for the network segment.
    4. To specify an IP address pool for the network segment, click Create New IP Pool , and then, in the IP Pool dialog box, do the following:
      • In Name , specify a name for the pool.
      • In Netmask , specify the network mask for the pool.
      • Click Add an IP Range , and then specify the start and end IP addresses in the IP Range dialog box that is displayed.
      • Use Add an IP Range to add as many IP address ranges as you need.
        Note: Add at least n+1 IP addresses in an IP range, where n is the number of nodes in the cluster.
      • Click Save .
      • Use Add an IP Pool to add more IP address pools. You can use only one IP address pool at any given time.
      • Select the IP address pool that you want to use, and then click Next .
        Note: You can also use an existing unused IP address pool.
  5. On the Feature Selection tab, do the following:
    You cannot enable network segmentation for multiple services at the same time. Complete the configuration for one service before you enable network segmentation for another service.
    1. Select the service whose traffic you want to isolate.
    2. Configure the settings for the selected service.
      The settings on this page depend on the service you select. For information about service-specific settings, see Service-Specific Settings and Configurations.
    3. Click Save .
  6. In the Create Interface dialog box, click Save .
    The CVMs are rebooted multiple times, one after another. This procedure might trigger more tasks on the cluster. For example, if you configure network segmentation for disaster recovery, the firewall rules are added on the CVM to allow traffic on the specified ports through the new CVM interface and updated when a new recovery cluster is added or an existing cluster is modified.

What to do next

See Service-Specific Settings and Configurations for any additional tasks that are required after you segment the network for a service.

Modifying Network Segmentation Configured for a Service

To modify network segmentation configured for a service, you must first disable network segmentation for that service and then create the network interface again for that service with the new IP address pool and VLAN.

About this task

For example, if the interface of the service you want to modify is ntnx0, after the reconfiguration, the same interface (ntnx0) is assigned to that service if that interface is not assigned to any other service. If ntnx0 is assigned to another service, a new interface (for example ntnx1) is created and assigned to that service.

Perform the following to reconfigure network segmentation configured for a service.

Procedure

  1. Disable the network segmentation configured for a service by following the instructions in Disabling Network Segmentation Configured for a Service.
  2. Create the network again by following the instructions in Isolating Service-Specific Traffic.

Disabling Network Segmentation Configured for a Service

To disable network segmentation configured for a service, you must disable the dedicated vNIC. Disabling network segmentation frees up the vNIC’s name, which is reused in a subsequent network segmentation configuration.

About this task

At the end of this procedure, the cluster performs a rolling restart. Disabling network segmentation might also disrupt the functioning of the associated service. To restore normal operations, you might have to perform other tasks immediately after the cluster has completed the rolling restart. For information about the follow-up tasks, see Service-Specific Settings and Configurations.

To disable the network segmentation configured for a service, do the following:

Procedure

  1. Log on to the Prism web console and click the gear icon at the top-right corner of the page.
  2. In the left pane, click Network Configuration .
  3. On the Internal Interfaces tab, for the interface that you want to disable, click Disable .
    Note: The defined IP address pool is available even after disabling the network segmentation.

Deleting a vNIC Configured for a Service

If you disable network segmentation for a service, the vNIC for that service is not deleted. AOS reuses the vNIC if you enable network segmentation again. However, you can manually delete a vNIC by logging into any CVM in the cluster with SSH.

Before you begin

Ensure that the following prerequisites are met before you delete the vNIC configured for a Service:
  • Disable the network segmentation configured for a service by following the instructions in Disabling Network Segmentation Configured for a Service.
  • Observe the limitation specified in the Limitation for vNIC Hot-Unplugging topic in the AHV Admin Guide .

About this task

Perform the following to delete a vNIC.

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Delete the vNIC.
    nutanix@cvm$ network_segmentation --service_network --interface="interface-name" --delete

    Replace interface-name with the name of the interface you want to delete. For example, ntnx0.

Service-Specific Settings and Configurations

The following sections describe the settings required by the services that support network segmentation.

Nutanix Volumes

Network segmentation for Volumes also requires you to migrate iSCSI client connections to the new segmented network. If you no longer require segmentation for Volumes traffic, you must also migrate connections back to eth0 after disabling the vNIC used for Volumes traffic.

You can create two different networks for Nutanix Volumes with different IP pools, VLANs, and data services IP addresses. For example, you can create two iSCSI networks for production and non-production traffic on the same Nutanix cluster.

Follow the instructions in Isolating Service-Specific Traffic again to create the second network for Volumes after you create the first network.

Table 1. Settings to be Specified When Configuring Traffic Isolation
Parameter or Setting Description
Virtual IP (Optional) Virtual IP address for the service. If specified, the IP address must be picked from the specified IP address pool. If not specified, an IP address from the specified IP address pool is selected for you.
Client Subnet The network (in CIDR notation) that hosts the iSCSI clients. Required if the vNIC created for the service on the CVM is not on the same network as the clients.
Gateway Gateway to the subnetwork that hosts the iSCSI clients. Required if you specify the client subnet.
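As the table notes, the client subnet and gateway are only required when the clients are not on the same network as the service vNIC. A quick way to check this condition is sketched below with Python's standard ipaddress module; the addresses and prefix length are hypothetical placeholders, not values from the product.

```python
import ipaddress

def needs_client_subnet(cvm_vnic_ip, client_ip, prefix_len):
    """Return True when the iSCSI client is outside the service vNIC's
    network, in which case Client Subnet and Gateway must be set."""
    net = ipaddress.ip_network(f"{cvm_vnic_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(client_ip) not in net

# Hypothetical addresses for illustration.
print(needs_client_subnet("172.16.10.141", "172.16.10.50", 24))  # → False
print(needs_client_subnet("172.16.10.141", "10.47.6.20", 24))    # → True
```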
Migrating iSCSI Connections to the Segmented Network

After you enable network segmentation for Volumes, you must manually migrate connections from existing iSCSI clients to the newly segmented network.

Before you begin

Make sure that the task for enabling network segmentation for the service succeeds.

About this task

Note: Even though support is available to run iSCSI traffic on both the segmented and management networks at the same time, Nutanix recommends that you move the iSCSI traffic for guest VMs to the segmented network to achieve true isolation.

To migrate iSCSI connections to the segmented network, do the following:

Procedure

  1. Log out from all the clients connected to iSCSI targets that are using CVM eth0 or the Data Service IP address.
  2. Optionally, remove all the discovery records for the Data Services IP address (DSIP) on eth0.
  3. If the clients are allowlisted by their IP address, remove the client IP address that is on the management network from the allowlist, and then add the client IP address on the new network to the allowlist.
    nutanix@cvm$ acli vg.detach_external vg_name initiator_network_id=old_vm_IP
    nutanix@cvm$ acli vg.attach_external vg_name initiator_network_id=new_vm_IP
    

    Replace vg_name with the name of the volume group and old_vm_IP and new_vm_IP with the old and new client IP addresses, respectively.

  4. Discover the virtual IP address specified for Volumes.
  5. Connect to the iSCSI targets from the client.
Migrating Existing iSCSI Connections to the Management Network (Controller VM eth0)

About this task

To migrate existing iSCSI connections to eth0, do the following:

Procedure

  1. Log out from all the clients connected to iSCSI targets using the CVM vNIC dedicated to Volumes.
  2. Remove all the discovery records for the DSIP on the new interface.
  3. Discover the DSIP for eth0.
  4. Connect the clients to the iSCSI targets.
Disaster Recovery with Protection Domains

The settings for configuring network segmentation for disaster recovery apply to all Asynchronous, NearSync, and Metro Availability replication schedules. You can use disaster recovery with Asynchronous, NearSync, and Metro Availability replications only if both the primary site and the recovery site are configured with network segmentation. Before enabling or disabling network segmentation on a cluster, disable all the disaster recovery replication schedules running on that cluster.

Note: Network segmentation does not support disaster recovery with Leap.
Table 1. Settings to be Specified When Configuring Traffic Isolation
Parameter or Setting Description
Virtual IP (Optional) Virtual IP address for the service. If specified, the IP address must be picked from the specified IP address pool. If not specified, an IP address from the specified IP address pool is selected for you.
Note: Virtual IP address is different from the external IP address and the data services IP address of the cluster.
Gateway Gateway to the subnetwork.
Remote Site Configuration

After configuring network segmentation for disaster recovery, configure remote sites at both locations. You also need to reconfigure remote sites if you disable network segmentation.

For information about configuring remote sites, see Remote Site Configuration in the Data Protection and Recovery with Prism Element Guide.

Segmenting a Stretched Layer 2 Network for Disaster Recovery

A stretched Layer 2 network configuration allows the source and remote metro clusters to be in the same broadcast domain and communicate without a gateway.

About this task

You can enable network segmentation for disaster recovery on a stretched Layer 2 network that does not have a gateway. A stretched Layer 2 network is usually configured across the physically remote clusters such as a metro availability cluster deployment. A stretched Layer 2 network allows the source and remote clusters to be configured in the same broadcast domain without the usual gateway.

See AOS Release Notes for minimum AOS version required to configure a stretched Layer 2 network.

To configure a network segment as a stretched L2 network, do the following.

Procedure

Run the following command:
nutanix@cvm$ network_segmentation --service_network --service_name=kDR --ip_pool=DR-ip-pool-name --service_vlan=DR-vlan-id --desc_name=Description --host_physical_network=portgroup/bridge --stretched_metro

Replace the following (see Isolating Service-Specific Traffic for details):

  • DR-ip-pool-name with the name of the IP Pool created for the DR service or any existing unused IP address pool.
  • DR-vlan-id with the VLAN ID being used for the DR service.
  • Description with a suitable description of this stretched L2 network segment.
  • portgroup/bridge with the details of Bridge or CVM Port Group used for the DR service.

For more information about the network_segmentation command, see the Command Reference guide.

Configuring Backplane IP Pool

This procedure shows how to create an IP pool for backplane interfaces using the new CLI.

About this task

Network Segmentation for backplane traffic previously required an entire subnet even if a cluster has a small number of nodes. This resulted in inefficient use of IP addresses. The backplane IP pool feature enables you to provide a small IP pool instead of an entire subnet.

You can create an IP address pool using the new network_segmentation ip_pool command. The named IP pool includes one or more IP ranges. For example, an IP address from 172.16.1.100 to 172.16.1.105 can be one IP range and 172.16.1.120 to 172.16.1.125 can be another IP range within the same named IP pool and same IP subnet.
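The pool described above can be sized programmatically. This short sketch (illustrative only, not part of the product CLI) counts the addresses covered by the example ranges:

```python
import ipaddress

def pool_size(ranges):
    """Count the addresses in a named IP pool made up of one or more
    inclusive (first, last) IP ranges."""
    total = 0
    for first, last in ranges:
        total += int(ipaddress.ip_address(last)) - int(ipaddress.ip_address(first)) + 1
    return total

# The two example ranges from the text, combined in one named pool:
ranges = [("172.16.1.100", "172.16.1.105"), ("172.16.1.120", "172.16.1.125")]
print(pool_size(ranges))  # → 12
```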

The Prism interface currently has no option to create an IP address pool for backplane segmentation, although it does allow creating small IP address pools for service-specific traffic such as Volumes and DR. You can use the new network_segmentation ip_pool CLI to create IP address pools for Backplane, Volumes, and DR, and also manage (edit, delete, and update) the IP address pools created for any of these services.

Procedure

  1. Log on to any CVM in the cluster using SSH.
  2. Create a new IP pool and define the IP ranges:
    nutanix@cvm$ network_segmentation --ip_pool_name=IP-Pool-name --ip_pool_netmask=netmask 
    --ip_ranges="[('First-IP-Address', 'Last-IP-Address'), ('First-IP-Address', 'Last-IP-Address')]" 
    ip_pool create

    Replace:

    • IP-Pool-name with a user-defined IP pool name
    • netmask with a network mask in dotted-decimal notation
    • First-IP-Address with the first IP Address in the range
    • Last-IP-Address with the last IP Address in the same range

    For example:

    nutanix@cvm$ network_segmentation --ip_pool_name=BackplanePool --ip_pool_netmask=255.255.255.0 
    --ip_ranges="[('172.16.1.100', '172.16.1.105'), ('172.16.1.120', '172.16.1.125')]" 
    ip_pool create
    
  3. Enable network segmentation for the backplane using the new IP pool:
    nutanix@cvm$ network_segmentation --backplane_network --ip_pool=BackplanePool --backplane_vlan=1234
    --host_virtual_switch=vs1

Enabling Backplane Network Segmentation on a Mixed Hypervisor Cluster

This procedure shows how to enable Backplane Network Segmentation on a mixed hypervisor cluster.

About this task

You can enable Backplane Network Segmentation on a mixed hypervisor cluster containing:

  • ESXi and AHV storage-only nodes.
  • Hyper-V and AHV storage-only nodes.

Procedure

  1. Log on to any CVM in the cluster using SSH.
  2. Enable network segmentation for backplane traffic:

    On a cluster containing ESXi and AHV storage-only nodes:

    nutanix@cvm$ network_segmentation --backplane_network 
    --ip_pool=IP-pool-name  
    --backplane_vlan=VLAN-ID  
    [--esx_host_physical_network=ESXi-host-portgroup-name ]
    [--esx_cvm_physical_network=ESXi-cvm-portgroup-name ]
    [--ahv_host_physical_network=AHV-network-name ]
    

    On a cluster containing Hyper-V and AHV storage-only nodes:

    nutanix@cvm$ network_segmentation --backplane_network 
    --ip_pool=IP-pool-name  
    --backplane_vlan=VLAN-ID 
    [--hyperv_host_physical_network=HyperV-host-network-name ] 
    [--ahv_host_physical_network=AHV-network-name ]
    

    In the above command, replace:

    • IP-Pool-name with a user-defined IP pool name

    • VLAN-ID with a backplane VLAN ID

    • ESXi-host-portgroup-name with the ESXi host network name

    • ESXi-cvm-portgroup-name with the ESXi CVM network name

    • AHV-network-name with the AHV storage-only node bridge name

    • HyperV-host-network-name with the Hyper-V switch name

    For example, to enable network segmentation on a mixed hypervisor cluster containing ESXi and AHV storage-only nodes:

    nutanix@cvm$ network_segmentation --backplane_network 
    --ip_pool=BackplanePool 
    --backplane_vlan=1234 
    --esx_host_physical_network=host-pg 
    --esx_cvm_physical_network=cvm-pg 
    --ahv_host_physical_network=br1

Updating Backplane Portgroup

You can update the backplane portgroups that are assigned to CVM and host nodes. Earlier, to change a portgroup assigned to a CVM and host, you had to disable network segmentation and re-enable it with the new portgroups.

This feature is only supported on a cluster running ESXi hypervisor.

Updating backplane portgroups helps you to:

  • Move from one vSphere Standard Switch (VSS) portgroup to another VSS portgroup within the same standard switch
  • Move from one VSS portgroup to another VSS portgroup in a different vSphere Standard Switch
  • Move from a VSS portgroup to a vSphere Distributed Switch (VDS) portgroup
  • Move from a VDS portgroup to a VSS portgroup
Note:

To rename existing VSS or VDS portgroups, you must perform the rename operation manually from the vCenter application or the ESXi CLI, and then run the update operation with the new portgroup name. This ensures that the configuration stored in the Nutanix internal database is up to date.

Limitations of Updating Backplane Portgroup

Consider the following limitations before updating backplane portgroups:

  • This feature is not supported on clusters running the AHV or Hyper-V hypervisors.
  • This feature does not support updating any other configuration, such as the VLAN ID or IP address.
Note: This feature does not perform any network validation on the new portgroups. Therefore, ensure that the portgroup settings are accurate before proceeding with the portgroup update operation. If the settings are not accurate, the CVM on that node may be unable to communicate with its peers, which results in a stuck rolling reboot.

Updating Backplane Portgroup

This procedure shows how to update backplane portgroups:

About this task

You can update the backplane portgroups that are assigned to CVM and host nodes.

Procedure

  1. Log on to any CVM in the cluster using SSH.
  2. Update the CVM and host portgroups:
    nutanix@cvm$ network_segmentation --backplane_network 
    --host_physical_network=new-host-portgroup-name 
    --cvm_physical_network=new-cvm-portgroup-name
    --update 
    

    In the above command replace:

    • new-host-portgroup-name with the new host portgroup name
    • new-cvm-portgroup-name with the new CVM portgroup name

    For example:

    nutanix@cvm$ network_segmentation --backplane_network 
    --host_physical_network=new-bp-host-pgroup 
    --cvm_physical_network=new-bp-cvm-pgroup 
    --update
    

    For creating a port group, see Creating Port Groups on the Distributed Switch in vSphere Administration Guide for Acropolis .

IP Address Customization for each CVM and Host

This procedure shows how to create custom IP addresses for each CVM and host for network segmentation.

About this task

The IP address customization feature enables you to manually allocate an IP address to each CVM and host. This helps in maintaining similarity between the external and segmented IP addresses. This feature is supported while configuring backplane segmentation, service-specific traffic isolation for Volumes, and service-specific traffic isolation for Disaster Recovery.

Procedure

  1. Create a JSON file with a mapping of IP addresses.

    You must manually define the mapping of the CVM external IP address to the new segmented IP address in a JSON file. The segmented IP addresses should belong to the same IP address pool that is created from the Prism UI or the CLI command before starting the Network Segmentation operation. You can create and save the JSON file in any CVM in the cluster.

    Here is the example of JSON file format:

    • JSON file format for Backplane Segmentation:
      {
        "svmips": {
          "cvm_external_ip1": "cvm_backplane_ip1",
          "cvm_external_ip2": "cvm_backplane_ip2",
          "cvm_external_ip3": "cvm_backplane_ip3"
        },
        "hostips": {
          "host_external_ip1": "host_backplane_ip1",
          "host_external_ip2": "host_backplane_ip2",
          "host_external_ip3": "host_backplane_ip3"
        }
      }
      For example:
      {
        "svmips": {
          "10.47.240.141": "172.16.10.141",
          "10.47.240.142": "172.16.10.142",
          "10.47.240.143": "172.16.10.143"
        },
        "hostips": {
          "10.47.240.137": "172.16.10.137",
          "10.47.240.138": "172.16.10.138",
          "10.47.240.139": "172.16.10.139"
        }
      }
      
    • JSON file format for Service Segmentation:
      {
        "svmips": {
          "cvm_external_ip1": "cvm_service_ip1",
          "cvm_external_ip2": "cvm_service_ip2",
          "cvm_external_ip3": "cvm_service_ip3"
        }
      }
      For example:
      {
        "svmips": {
          "10.47.240.141": "10.47.6.141",
          "10.47.240.142": "10.47.6.142",
          "10.47.240.143": "10.47.6.143"
        }
      }
      
  2. Log on to the CVM in the cluster where the JSON file exists using SSH.
  3. Enable service-specific traffic isolation for Volumes using the JSON file:
    nutanix@CVM:~$ network_segmentation --service_network 
    --ip_pool=pool1 --desc_name="Volumes Seg 1" 
    --service_name=kVolumes 
    --host_physical_network=dv-volumes-network-1 
    --service_vlan=151 
    --ip_map_filepath=/home/nutanix/ip_map.json
    
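Before running the command, it can help to sanity-check the mapping file: every segmented address should be unique and should fall inside the pool's subnet. The following sketch is a hypothetical helper (not part of the product), assuming the backplane-format file shown above:

```python
import ipaddress
import json
import os
import tempfile

def validate_ip_map(path, pool_cidr):
    """Check a backplane-format IP-map JSON file: segmented addresses
    must be unique and must belong to the pool's subnet. Returns the
    number of mapped addresses."""
    with open(path) as f:
        ip_map = json.load(f)
    net = ipaddress.ip_network(pool_cidr)
    segmented = [ip for section in ("svmips", "hostips")
                 for ip in ip_map.get(section, {}).values()]
    assert len(segmented) == len(set(segmented)), "duplicate segmented IP"
    for ip in segmented:
        assert ipaddress.ip_address(ip) in net, f"{ip} outside {net}"
    return len(segmented)

# Example: write the sample mapping from the text and validate it.
sample = {
    "svmips": {"10.47.240.141": "172.16.10.141",
               "10.47.240.142": "172.16.10.142",
               "10.47.240.143": "172.16.10.143"},
    "hostips": {"10.47.240.137": "172.16.10.137",
                "10.47.240.138": "172.16.10.138",
                "10.47.240.139": "172.16.10.139"}}
path = os.path.join(tempfile.gettempdir(), "ip_map.json")
with open(path, "w") as f:
    json.dump(sample, f)
print(validate_ip_map(path, "172.16.10.0/24"))  # → 6
```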

Enabling Physical Backplane Segmentation on Hyper-V Using CLI

This procedure shows how to enable physical backplane segmentation on a cluster containing Hyper-V node using CLI.

About this task

Physical backplane segmentation support is now available on clusters containing Hyper-V nodes. Earlier, this support was available only on AHV and ESXi nodes.

Procedure

  1. Log on to any CVM in the cluster using SSH.
  2. Enable backplane segmentation on a Hyper-V node:
    nutanix@CVM:~$ network_segmentation --backplane_network
    --ip_pool=IP-Pool-name 
    --backplane_vlan=VLAN-ID 
    --host_physical_network=hyperv_host_physical_network

    In the above command replace:

    • IP-Pool-name with a user-defined IP pool name

    • VLAN-ID with a backplane VLAN ID

    • hyperv_host_physical_network with the Hyper-V switch name

    For example:

    nutanix@CVM:~$ network_segmentation --backplane_network 
    --ip_pool=BackplanePool 
    --backplane_vlan=1234 
    --host_physical_network=BackplaneSwitch

Network Segmentation during Cluster Expansion

When expanding a cluster:

  • If backplane network segmentation is enabled, Prism allocates two IP addresses for every new node from the backplane IP pool.
  • If service-specific traffic isolation is enabled, Prism allocates one IP address for every new node from the respective (Volumes or DR) IP pool.
  • If not enough IP addresses are available in the specified network, the Prism Element web console displays a failure message on the Tasks page. To add more IP ranges to the IP pool, see Configuring Backplane IP Pool.
  • If you cannot add more IPs to the IP pool, then reconfigure that specific network segmentation. For more information about how to reconfigure the network, see Reconfiguring the Backplane Network.
  • The network settings on the physical switch to which the new nodes are connected must be identical to the other nodes in the cluster. New nodes communicate with current nodes using the same VLAN ID for segmented networks. Otherwise, the expand cluster task will fail in the network validation stage.
  • After fulfilling the earlier points, you can add nodes to the cluster. For instructions about how to add nodes to your Nutanix cluster, see Expanding a Cluster in the Prism Web Console Guide .
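Based on the allocation rules above, the free pool addresses needed for an expansion can be estimated. This is a sketch under the stated assumptions only (two backplane IPs per new node, one IP per isolated service per new node); the service names are illustrative labels, not product identifiers.

```python
def ips_needed(new_nodes, backplane=True, services=()):
    """Estimate pool addresses consumed when adding `new_nodes` nodes:
    two per node for the backplane pool, plus one per node for each
    service-specific (e.g. Volumes or DR) pool."""
    need = {}
    if backplane:
        need["backplane"] = 2 * new_nodes
    for svc in services:
        need[svc] = new_nodes
    return need

print(ips_needed(3, services=("volumes", "dr")))
# → {'backplane': 6, 'volumes': 3, 'dr': 3}
```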

Network Segmentation–Related Changes During an AOS Upgrade

When you upgrade from an AOS version which does not support network segmentation to an AOS version that does, the eth2 interface (used to segregate backplane traffic) is automatically created on each CVM. However, the network remains unsegmented, and the cluster services on the CVM continue to use eth0 until you configure network segmentation.

The vNICs ntnx0, ntnx1, and so on, are not created during an upgrade to a release that supports service-specific traffic isolation. They are created when you configure traffic isolation for a service.

Note:

Do not delete the eth2 interface that is created on the Controller VMs, even if you are not using the network segmentation feature.

Firewall Requirements

Ports and Protocols describes detailed port information (like protocol, service description, source, destination, and associated service) for Nutanix products and services. It includes port and protocol information for 1-click upgrades and LCM updates.

Log management

This chapter describes how to configure cluster-wide setting for log-forwarding and documenting the log fingerprint.

Log Forwarding

The Nutanix Controller VM provides a method for log integrity by using a cluster-wide setting to forward all logs to a central log host. Due to the appliance form factor of the Controller VM, system and audit logs do not support local log retention periods, as a significant increase in log traffic could be used to orchestrate a distributed denial-of-service (DDoS) attack.

Nutanix recommends deploying a central log host in the management enclave to adhere to any compliance or internal policy requirement for log retention. In case of any system compromise, a central log host serves as a defense mechanism to preserve log integrity.

Note: The audit daemon in the Controller VM uses the audisp plugin by default to ship all audit logs to the rsyslog daemon (stored in /home/log/messages ). Searching for audispd on the central log host returns the entire content of the audit logs from the Controller VM. The audit daemon is configured with a rules engine that adheres to the auditing requirements of the Operating System Security Requirements Guide (OS SRG) and is embedded as part of the Controller VM STIG.

Use the nCLI to enable forwarding of the system, audit, AIDE, and SCMA logs of all the Controller VMs in a cluster at the required log level. For more information, see Send Logs to Remote Syslog Server in the Acropolis Advanced Administration Guide .

Documenting the Log Fingerprint

For forensic analysis, non-repudiation is established by verifying the fingerprint of the public key for the log file entry.

Procedure

  1. Log on to the CVM.
  2. Run the following command to document the fingerprint for each public key assigned to an individual admin.
    nutanix@cvm$ ssh-keygen -lf /<location of>/id_rsa.pub

    The fingerprint is then compared to the SSH daemon log entries and forwarded to the central log host ( /home/log/secure in the Controller VM).

    Note: After including the SSH public key in Prism and verifying connectivity, disable password authentication for all the Controller VMs and AHV hosts. From the Prism main menu, select Cluster Lockdown from the gear icon drop-down list, and then clear the Enable Remote Login with password check box.
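The fingerprint that ssh-keygen -lf prints is the base64-encoded SHA-256 digest of the raw public-key blob. The sketch below reproduces that computation in Python for a dummy ed25519 key built in place (not a real admin key), which can be handy when cross-checking fingerprints found in forwarded logs:

```python
import base64
import hashlib
import struct

def ssh_fingerprint(pubkey_line):
    """SHA256 fingerprint of an OpenSSH public-key line, in the same
    form that `ssh-keygen -lf` reports (SHA256:<base64, no padding>)."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Build a dummy ssh-ed25519 key blob (wire format: length-prefixed
# algorithm name followed by the 32-byte public key).
def _s(b):
    return struct.pack(">I", len(b)) + b

blob = _s(b"ssh-ed25519") + _s(b"\x00" * 32)
line = "ssh-ed25519 " + base64.b64encode(blob).decode() + " admin@example"
print(ssh_fingerprint(line))
```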

Security Management Using Prism Central (PC)

Prism Central provides several mechanisms and features to enforce security of your multi-cluster environment.

If you enable Identity and Access Management (IAM), see Security Management Using Identity and Access Management (Prism Central).

Configuring Authentication

Caution: Prism Central does not support the SSLv2 and SSLv3 ciphers. Therefore, you must disable the SSLv2 and SSLv3 options in a browser before accessing Prism Central. This avoids SSL fallback and access-denial situations. However, you must enable the TLS protocol in the browser.

Prism Central supports these user authentication options:

  • SAML authentication. Users can authenticate through a supported identity provider when SAML support is enabled for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between two parties: an identity provider (IDP) and Prism Central as the service provider.

    If you do not enable Nutanix Identity and Access Management (IAM) on Prism Central, ADFS is the only supported IDP for Single Sign-on. If you enable IAM, additional IDPs are available. For more information, see Security Management Using Identity and Access Management (Prism Central) and Updating ADFS When Using SAML Authentication.

  • Local user authentication. Users can authenticate if they have a local Prism Central account. For more information, see Managing Local User Accounts .
  • Active Directory authentication. Users can authenticate using their Active Directory (or OpenLDAP) credentials when Active Directory support is enabled for Prism Central.

Adding An Authentication Directory (Prism Central)

Before you begin

Caution: Prism Central does not allow the use of the (not secure) SSLv2 and SSLv3 ciphers. To eliminate the possibility of an SSL Fallback situation and denied access to Prism Central, disable (uncheck) SSLv2 and SSLv3 in any browser used for access. However, TLS must be enabled (checked).

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.

    The Authentication Configuration window appears.

    Figure. Authentication Configuration Window

  2. To add an authentication directory, click the New Directory button.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Directory Type : Select one of the following from the pull-down list.
      • Active Directory : Active Directory (AD) is a directory service implemented by Microsoft for Windows domain networks.
        Note:
        • Users with the "User must change password at next logon" attribute enabled cannot authenticate to Prism Central. Ensure that users with this attribute first log in to a domain workstation and change their password before accessing Prism Central. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open in the firewall).
        • Use of the "Protected Users" group is currently unsupported for Prism authentication. For more details on the "Protected Users" group, see “Guidance about how to configure protected accounts” on Microsoft documentation website.
        • An Active Directory user name or group name containing spaces is not supported for Prism Central authentication.
        • Microsoft AD is LDAP v2 and LDAP v3 compliant.
        • The Microsoft AD servers supported are Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.
      • OpenLDAP : OpenLDAP is a free, open source directory service, which uses the Lightweight Directory Access Protocol (LDAP), developed by the OpenLDAP project.
        Note: Prism Central uses a service account to query OpenLDAP directories for user information and does not currently support certificate-based authentication with the OpenLDAP directory.
    2. Name : Enter a directory name.

      This is a name you choose to identify this entry; it need not be the name of an actual directory.

    3. Domain : Enter the domain name.

      Enter the domain name in DNS format, for example, nutanix.com .

    4. Directory URL : Enter the URL address to the directory.

      The URL format for an LDAP entry is ldap://host:ldap_port_num . The host value is either the IP address or the fully qualified domain name. (In some environments, a simple domain name is sufficient.) The default LDAP port number is 389. Nutanix also supports LDAPS (port 636) and LDAP/S Global Catalog (ports 3268 and 3269). The following example configurations are appropriate for each port option:

      Note: LDAPS support does not require custom certificates or certificate trust import.
      • Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
        ldap://ad_server.mycompany.com:389
      • Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires that all Active Directory Domain Controllers have properly installed SSL certificates.
        ldaps://ad_server.mycompany.com:636
      • Port 3268 (LDAP - GC). Use this port number (in the following URL form) when the configuration is multiple domain, single forest, and not using SSL.
        ldap://ad_server.mycompany.com:3268
      • Port 3269 (LDAPS - GC). Use this port number (in the following URL form) when the configuration is multiple domain, single forest, and using SSL.
        ldaps://ad_server.mycompany.com:3269
        Note:
        • When constructing your LDAP/S URL to use a Global Catalog server, ensure that the Domain Controller IP address or name being used is a Global Catalog server within the domain being configured. If it is not, queries over ports 3268/3269 may fail.
        • Cross-forest trust between multiple AD forests is not supported.

      For the complete list of required ports, see Port Reference.
    5. [OpenLDAP only] Configure the following additional fields:
      Note:

      The values for the following fields depend on your OpenLDAP configuration.

      • User Object Class : Enter the value that uniquely identifies the object class of a user.
      • User Search Base : Enter the base domain name in which the users are configured.
      • Username Attribute : Enter the attribute to uniquely identify a user.
      • Group Object Class : Enter the value that uniquely identifies the object class of a group.
      • Group Search Base : Enter the base domain name in which the groups are configured.
      • Group Member Attribute : Enter the attribute that identifies users in a group.
      • Group Member Attribute Value : Enter the attribute that identifies the users provided as value for Group Member Attribute .

      Here are some of the possible options for the fields:

      • User Object Class: user | person | inetOrgPerson | organizationalPerson | posixAccount
      • User Search Base: ou=<organizational unit>, dc=<domain>
      • Username Attribute: uid
      • Group Object Class: posixGroup | groupOfNames
      • Group Search Base: ou=<organizational unit>, dc=<domain>
      • Group Member Attribute: member | memberUid
      • Group Member Attribute Value: uid
    6. Search Type : Select how to search your directory when authenticating. Choose Non Recursive if you experience slow directory logon performance; for this option, ensure that users listed in Role Mapping are listed flatly in the group (that is, not nested). Otherwise, choose the default Recursive option.
    7. Service Account Username : Depending on the directory type you selected in step 2.a, enter the service account user name in one of the following formats:
      • For Active Directory , enter the service account user name in the user_name@domain.com format.
      • For OpenLDAP , enter the service account user name in the following Distinguished Name (DN) format:

        cn=username, dc=company, dc=com

        A service account is created to run only a particular service or application with the credentials specified for the account. Depending on the requirements of the service or application, the administrator can limit access to the service account.

        A service account resides under Managed Service Accounts in the Active Directory or OpenLDAP server. An application or service uses the service account to interact with the operating system. Enter your Active Directory or OpenLDAP service account credentials in this (username) field and the following (password) field.

        Note: Be sure to update the service account credentials here whenever the service account password changes or when a different service account is used.
    8. Service Account Password : Enter the service account password.
    9. When all the fields are correct, click the Save button (lower right).

      This saves the configuration and redisplays the Authentication Configuration dialog box. The configured directory now appears in the Directory List tab.

    10. Repeat this step for each authentication directory you want to add.
    Note:
    • No permissions are granted to the directory users by default. To grant permissions to the directory users, you must specify roles for the users in that directory (see Configuring Role Mapping).
    • The service account for both Active Directory and OpenLDAP must have full read permission on the directory service.
    Figure. Directory List Fields

  3. To edit a directory entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete a directory entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
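The directory URL rules in step 2.d can be summarized in a small sketch. This is an illustrative helper only, not a Nutanix tool; the host name is the same placeholder used in the examples above.

```shell
# Sketch: pick the directory URL form from the topology choices described
# in step 2.d (SSL or not, single domain or multiple domains in one forest).
ldap_url() {  # usage: ldap_url <host> <ssl: yes|no> <multi_domain: yes|no>
    host=$1; ssl=$2; multi=$3
    if [ "$ssl" = yes ] && [ "$multi" = yes ]; then
        echo "ldaps://$host:3269"     # LDAPS Global Catalog
    elif [ "$ssl" = yes ]; then
        echo "ldaps://$host:636"      # LDAPS, single domain
    elif [ "$multi" = yes ]; then
        echo "ldap://$host:3268"      # LDAP Global Catalog
    else
        echo "ldap://$host:389"       # plain LDAP, single domain
    fi
}

ldap_url ad_server.mycompany.com no no     # -> ldap://ad_server.mycompany.com:389
ldap_url ad_server.mycompany.com yes yes   # -> ldaps://ad_server.mycompany.com:3269
```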

Adding a SAML-based Identity Provider

Before you begin

  • An identity provider (typically a server or other computer) is the system that provides authentication through a SAML request. There are various implementations that can provide authentication services in line with the SAML standard.
  • You can specify other tested standard-compliant IDPs in addition to ADFS. See the Prism Central release notes topic Identity and Access Management Software Support for specific support requirements and also Security Management Using Identity and Access Management (Prism Central).

    IAM allows only one identity provider at a time, so if you already configured one, the + New IDP link does not appear.

  • You must configure the identity provider to return the NameID attribute in SAML response. Prism Central uses the NameID attribute for role mapping.

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. To add a SAML-based identity provider, click the + New IDP link.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Configuration name : Enter a name for the identity provider. This name appears in the logon authentication screen.
    2. Group Attribute Name (Optional) : Optionally, enter the group attribute name such as groups . Ensure that this name matches the group attribute name provided in the IDP configuration.
    3. Group Attribute Delimiter (Optional) : Optionally, enter the delimiter to use when multiple groups are specified for the group attribute.
    4. Import Metadata : Click this option to upload a metadata file that contains the identity provider information.

      Identity providers typically provide an XML file on their website that includes metadata about that identity provider, which you can download from that site and then upload to Prism Central. Click + Import Metadata to open a search window on your local system and then select the target XML file that you downloaded previously. Click the Save button to save the configuration.

      Figure. Identity Provider Fields (metadata configuration)

    This step completes configuring an identity provider in Prism Central, but you must also configure the callback URL for Prism Central on the identity provider. To configure the callback URL, click the Download Metadata link just below the Identity Providers table to download an XML file that describes Prism Central and then upload this metadata file to the identity provider.
  3. To edit an identity provider entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete an identity provider entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.

Enabling and Configuring Client Authentication

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. Click the Client tab, then do the following steps.
    1. Select the Configure Client Chain Certificate check box.

      The Client Chain Certificate is a list of certificates that includes all intermediate CA and root-CA certificates.

    2. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note:
      • Client and CAC authentication support only RSA 2048-bit certificates.
      • Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
    3. To enable client authentication, click Enable Client Authentication .
    4. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication again to disable client authentication.
      • Click Remove to delete the current certificate. (This also disables client authentication.)
      • To enable OCSP or CRL based certificate revocation checking, see Certificate Revocation Checking.

    Client authentication allows you to access Prism securely by exchanging a digital certificate. Prism validates that the certificate is signed by your organization’s trusted signing certificate.

    Client authentication ensures that the Nutanix cluster gets a valid certificate from the user. Normally, a one-way authentication process occurs where the server provides a certificate so the user can verify the authenticity of the server (see Installing an SSL Certificate). When client authentication is enabled, this becomes a two-way authentication where the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console either by installing the certificate on the local machine or by providing it through a smart card reader.
    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
  3. To specify a service account that the Prism Central web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box, and then do the following in the indicated fields:
    1. Directory : Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username : Enter the user name in the user name@domain.com format that you want the web console to use to log in to the Active Directory.
    3. Service Password : Enter the password for the service user name.
    4. Click Enable CAC Authentication .
      Note: For federal customers only.
      Note: The Prism Central console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:

    • Validates that the certificate has been signed by your organization’s trusted signing certificate.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of an account within the Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Central supports both certificate authentication and basic authentication, so that users can log in to Prism Central with a certificate while the REST API continues to use basic authentication (the REST API cannot use CAC certificates). If a certificate is present during Prism Central login, certificate authentication is used; if it is not present, basic authentication is enforced.
    Note: Nutanix Prism does not support OpenLDAP as directory service for CAC.
    If you map a Prism Central role to a CAC user and not to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page. The web console login page is not displayed.
    Note: If you have logged on to Prism Central by using CAC authentication, to successfully log out of Prism Central, close the browser after you click Log Out .
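Before uploading, you can run a quick local sanity check against the two certificate requirements noted earlier (PEM encoding, RSA 2048 bit) using standard openssl commands. This is a sketch: the self-signed certificate generated here is only a stand-in for your real chain file.

```shell
# Sketch: verify a client chain certificate is PEM encoded and RSA 2048 bit.
# A throwaway self-signed certificate stands in for your real chain file.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -keyout "$tmp/key.pem" -out "$tmp/chain.pem" 2>/dev/null

# PEM encoded? The file should contain a BEGIN CERTIFICATE header.
grep -q "BEGIN CERTIFICATE" "$tmp/chain.pem" && echo "PEM: ok"

# RSA 2048 bit? Inspect the public key size in the certificate text.
openssl x509 -in "$tmp/chain.pem" -noout -text | grep -q "(2048 bit)" \
    && echo "key: 2048 bit"
```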

Certificate Revocation Checking

Enabling Certificate Revocation Checking using Online Certificate Status Protocol (nCLI)

About this task

OCSP is the recommended method for checking certificate revocation in client authentication. You can enable certificate revocation checking using the OCSP method through the command line interface (nCLI).

To enable certificate revocation checking using OCSP for client authentication, do the following.

Procedure

  1. Set the OCSP responder URL.
    ncli authconfig set-certificate-revocation set-ocsp-responder=<ocsp url>

    <ocsp url> indicates the location of the OCSP responder.
  2. Verify that OCSP checking is enabled.
    ncli authconfig get-client-authentication-config

    The expected output if certificate revocation checking is enabled successfully is as follows.

    Auth Config Status: true
    File Name: ca.cert.pem
    OCSP Responder URI: http://<ocsp-responder-url>

Enabling Certificate Revocation Checking using Certificate Revocation Lists (nCLI)

About this task

Note: OCSP is the recommended method for checking certificate revocation in client authentication.

You can use the CRL certificate revocation checking method if required, as described in this section.

To enable certificate revocation checking using CRL for client authentication, do the following.

Procedure

Specify all the CRLs that are required for certificate validation.
ncli authconfig set-certificate-revocation set-crl-uri=<uri 1>,<uri 2> set-crl-refresh-interval=<refresh interval in seconds> set-crl-expiration-interval=<expiration interval in seconds>
  • The above command resets any previous OCSP or CRL configurations.
  • The URIs must be percent-encoded and comma separated.
  • The CRLs are updated periodically as specified by the crl-refresh-interval value. This interval is common for the entire list of CRL distribution points. The default value for this is 86400 seconds (1 day).
  • The periodically updated CRLs are cached in memory for the duration specified by the set-crl-expiration-interval value and expire after that duration if a particular CRL distribution point is not reachable. This duration is configured for the entire list of CRL distribution points. The default value is 604800 seconds (7 days).
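Because the URIs must be percent-encoded, the following sketch shows one way to prepare a distribution-point URI before passing it to ncli. The example URI is a placeholder, and python3 is assumed to be available on the workstation.

```shell
# Sketch: percent-encode a CRL distribution point URI for set-crl-uri.
# The example URI is a placeholder for a real distribution point.
encode() {
    python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}

uri=$(encode "http://crl.example.com/root.crl")
echo "$uri"   # http%3A%2F%2Fcrl.example.com%2Froot.crl

# The encoded values are then supplied comma separated, as in:
#   ncli authconfig set-certificate-revocation set-crl-uri=<uri 1>,<uri 2> ...
```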

User Management

Managing Local User Accounts

About this task

The Prism Central admin user is created automatically, but you can add more (locally defined) users as needed. To add, update, or delete a user account, do the following:

Note:
  • To add user accounts through Active Directory, see Configuring Authentication. If you enable the Prism Self Service feature, an Active Directory is assigned as part of that process.
  • Changing the Prism Central admin user password does not impact registration (re-registering clusters is not required).

Procedure

  • Click the gear icon in the main menu and then select Local User Management in the Settings page.

    The Local User Management dialog box appears.

    Figure. User Management Window

  • To add a user account, click the New User button and do the following in the displayed fields:
    1. Username : Enter a user name.
    2. First Name : Enter a first name.
    3. Last Name : Enter a last name.
    4. Email : Enter a valid user email address.
    5. Password : Enter a password (maximum of 255 characters).
      Note: A second field to verify the password is not included, so be sure to enter the password correctly in this field.
    6. Language : Select the language setting for the user.

      English is selected by default. You have an option to select Simplified Chinese or Japanese . If you select either of these, the cluster locale is updated for the new user. For example, if you select Simplified Chinese , the user interface is displayed in Simplified Chinese when the new user logs in.

    7. Roles : Assign a role to this user.

      There are three options:

      • Checking the User Admin box allows the user to view information, perform any administrative task, and create or modify user accounts.
      • Checking the Prism Central Admin (formerly "Cluster Admin") box allows the user to view information and perform any administrative task, but it does not provide permission to manage (create or modify) other user accounts.
      • Leaving both boxes unchecked allows the user to view information, but it does not provide permission to perform any administrative tasks or manage other user accounts.
    8. When all the fields are correct, click the Save button (lower right).

      This saves the configuration and redisplays the dialog box with the new user appearing in the list.

    Figure. Create User Window

  • To modify a user account, click the pencil icon for that user and update one or more of the values as desired in the Update User window.
    Figure. Update User Window

  • To disable login access for a user account, click the Yes value in the Enabled field for that user; to enable the account, click the No value.

    A Yes value means the login is enabled; a No value means it is disabled. A user account is enabled (login access activated) by default.

  • To delete a user account, click the X icon for that user.
    A window prompt appears to verify the action; click the OK button. The user account is removed and the user no longer appears in the list.

Updating My Account

About this task

To update your account credentials (that is, credentials for the user you are currently logged in as), do the following:

Procedure

  1. To update your password, select Change Password from the user icon pull-down list of the main menu.
    The Change Password dialog box appears. Do the following in the indicated fields:
    1. Current Password : Enter the current password.
    2. New Password : Enter a new password.
    3. Confirm Password : Re-enter the new password.
    4. When the fields are correct, click the Save button (lower right). This saves the new password and closes the window.
    Note: Password complexity requirements might appear above the fields; if they do, your new password must comply with these rules.
    Figure. Change Password Window

  2. To update other details of your account, select Update Profile from the user icon pull-down list.
    The Update Profile dialog box appears. Do the following in the indicated fields for any parameters you want to change:
    1. First Name : Enter a different first name.
    2. Last Name : Enter a different last name.
    3. Email Address : Enter a different valid user email address.
    4. Language : Select a different language for your account from the pull-down list.
    5. API Key : Enter a new API key.
      Note: Your keys can be managed from the API Keys page on the Nutanix support portal (see Licensing). Your connection is secure without the optional public key (following field); the public key option is provided in case your default public key expires.
    6. Public Key : Click the Choose File button to upload a new public key file.
    7. When all the fields are correct, click the Save button (lower right). This saves the changes and closes the window.
    Figure. Update Profile Window

Resetting Password (CLI)

This procedure describes how to reset a local user's password in the Prism Element or Prism Central web console.

About this task

To reset the password using nCLI, do the following:

Note:

Only a user with admin privileges can reset a password for other users.

Procedure

  1. Access the CVM via SSH.
  2. Log in with the admin credentials.
  3. Use the ncli user reset-password command and specify the username and password of the user whose password is to be reset:
    nutanix@cvm$ ncli user reset-password user-name=xxxxx password=yyyyy
    
    • Replace user-name=xxxxx with the name of the user whose password is to be reset.

    • Replace password=yyyyy with the new password.

What to do next

You can relaunch the Prism Element or the Prism Central web console and verify the new password setting.

Deleting a Directory User Account

About this task

To delete a directory-authenticated user, do the following:

Procedure

  1. Click the hamburger icon and go to Administration > Projects .
    The Project page appears. This page lists all existing projects.
  2. Select the project that the user is associated with and go to Actions > Update Projects .
    The Edit Projects page appears.
  3. Go to the Users, Groups, Roles tab.
  4. Click the X icon to delete the user.
    Figure. Edit Project Window

  5. Click Save .

    Prism deletes the user account and also removes the user from any associated projects.

    Repeat the same steps if the user is associated with multiple projects.

Controlling User Access (RBAC)

Prism Central supports role-based access control (RBAC) that you can configure to provide customized access permissions for users based on their assigned roles. The roles dashboard allows you to view information about all defined roles and the users and groups assigned to those roles.

  • Prism Central includes a set of predefined roles (see Built-in Role Management).
  • You can also define additional custom roles (see Custom Role Management).
  • Configuring authentication confers default user permissions that vary depending on the type of authentication (full permissions from a directory service or no permissions from an identity provider). You can configure role maps to customize these user permissions (see Configuring Role Mapping).
  • You can refine access permissions even further by assigning roles to individual users or groups that apply to a specified set of entities (see Assigning a Role).
    Note: Entities are treated as separate instances. For example, to grant a user or a group permission to manage clusters and images, an administrator must add both of these entities to the list of assignments.
  • With RBAC, user roles do not depend on project membership. You can use RBAC and log in to Prism Central even without a project membership.
Note: Defining custom roles and assigning roles are supported on AHV only.

Built-in Role Management

The following built-in roles are defined by default. You can see a more detailed list of permissions for any of the built-in roles through the details view for that role (see Displaying Role Permissions). The Project Admin, Developer, Consumer, and Operator roles are available when assigning roles in a project.

Role Privileges
Super Admin Full administrator privileges
Prism Admin Full administrator privileges except for creating or modifying the user accounts
Prism Viewer View-only privileges
Self-Service Admin Manages all cloud-oriented resources and services
Note: This is the only cloud administration role available.
Project Admin Manages cloud objects (roles, VMs, Apps, Marketplace) belonging to a project
Note: You can specify a role for a user when you assign a user to a project, so individual users or groups can have different roles in the same project.
Developer Develops, troubleshoots, and tests applications in a project
Consumer Accesses the applications and blueprints in a project
Operator Accesses the applications in a project
VPC Admin Manages VPCs and related entities across a Nutanix deployment, agnostic of the physical network and infrastructure
Note: Previously, the Super Admin role was called User Admin , the Prism Admin role was called Prism Central Admin and Cluster Admin , and the Prism Viewer was called Viewer .

Custom Role Management

If the built-in roles are not sufficient for your needs, you can create one or more custom roles (AHV only).

Creating a Custom Role

About this task

To create a custom role, do the following:

Procedure

  1. Go to the roles dashboard (select Administration > Roles in the pull-down menu) and click the Create Role button.

    The Roles page appears. See Custom Role Permissions for a list of the permissions available for each custom role option.

  2. In the Roles page, do the following in the indicated fields:
    1. Role Name : Enter a name for the new role.
    2. Description (optional): Enter a description of the role.
      Note: All entity types are listed by default, but you can display just a subset by entering a string in the Filter Entities search field.
      Figure. Filter Entities

    3. Select an entity you want to add to this role and provide desired access permissions from the available options. The access permissions vary depending on the selected entity.

      For example, for the VM entity, click the radio button for the desired VM permissions:

      • No Access
      • View Access
      • Basic Access
      • Edit Access
      • Set Custom Permissions

      If you select Set Custom Permissions , click the Change link to display the Custom VM Permissions window, check all the permissions you want to enable, and then click the Save button. Optionally, check the Allow VM Creation box to allow this role to create VMs.

      Figure. Custom VM Permissions Window

  3. Click Save to create the role. The page closes and the new role appears in the Roles view list.
Modifying a Custom Role

About this task

Perform the following procedure to modify or delete a custom role.

Procedure

  1. Go to the roles dashboard and select (check the box for) the desired role from the list.
  2. Do one of the following:
    • To modify the role, select Update Role from the Actions pull-down list. The Roles page for that role appears. Update the field values as desired and then click Save . See Creating a Custom Role for field descriptions.
    • To delete the role, select Delete from the Action pull-down list. A confirmation message is displayed. Click OK to delete and remove the role from the list.
Custom Role Permissions

A selection of permission options are available when creating a custom role.

The following table lists the permissions you can grant when creating or modifying a custom role. When you select an option for an entity, the permissions listed for that option are granted. If you select Set custom permissions , a complete list of available permissions for that entity appears. Select the desired permissions from that list.

Entity Option Permissions
App (application) No Access (none)
Basic Access Abort App Runlog, Access Console VM, Action Run App, Clone VM, Create AWS VM, Create Image, Create VM, Delete AWS VM, Delete VM, Download App Runlog, Update AWS VM, Update VM, View App, View AWS VM, View VM
Set Custom Permissions (select from list) Abort App Runlog, Access Console VM, Action Run App, Clone VM, Create App, Create AWS VM, Create Image, Create VM, Delete App, Delete AWS VM, Delete VM, Download App Runlog, Update App, Update AWS VM, Update VM, View App, View AWS VM, View VM
VM Recovery Point No Access (none)
View Only View VM Recovery Point
Full Access Delete VM Recovery Point, Restore VM Recovery Point, Snapshot VM, Update VM Recovery Point, View VM Recovery Point, Allow VM Recovery Point creation
Set Custom Permissions (Change) Abort App Runlog, Access Console VM, Action Run App, Clone VM, Create App, Create AWS VM, Create Image, Create VM, Delete App, Delete AWS VM, Delete VM, Download App Runlog, Update App, Update AWS VM, Update VM, View App, View AWS VM, View VM
Note:

You can assign permissions for the VM Recovery Point entity to users or user groups in the following two ways.

  • Manually assign permission for each VM where the recovery point is created.
  • Assign permission using Categories in the Role Assignment workflow.
Tip: When a recovery point is created, it is associated with the same category as the VM.
VM No Access (none)
View Access Access Console VM, View VM
Basic Access Access Console VM, Update VM Power State, View VM
Edit Access Access Console VM, Update VM, View Subnet, View VM
Full Access Access Console VM, Clone VM, Create VM, Delete VM, Export VM, Update VM, Update VM Boot Config, Update VM CPU, Update VM Categories, Update VM Description, Update VM Disk List, Update VM GPU List, Update VM Memory, Update VM NIC List, Update VM Owner, Update VM Power State, Update VM Project, View Cluster, View Subnet, View VM.
Set Custom Permissions (select from list) Access Console VM, Clone VM, Create VM, Delete VM, Update VM, Update VM Boot Config, Update VM CPU, Update VM Categories, Update VM Disk List, Update VM GPU List, Update VM Memory, Update VM NIC List, Update VM Owner, Update VM Power State, Update VM Project, View Cluster, View Subnet, View VM.

Granular permissions (applicable only if IAM is enabled; see Granular Role-Based Access Control (RBAC) for details):

Allow VM Power Off, Allow VM Power On, Allow VM Reboot, Allow VM Reset, Expand VM Disk Size, Mount VM CDROM, Unmount VM CDROM, Update VM Memory Overcommit, Update VM NGT Config, Update VM Power State Mechanism

Allow VM creation (additional option) (n/a)
Blueprint No Access (none)
View Access View Account, View AWS AZ, View AWS Elastic IP, View AWS Image, View AWS Key Pair, View AWS Machine Type, View AWS Region, View AWS Role, View AWS Security Group, View AWS Subnet, View AWS Volume Type, View AWS VPC, View Blueprint, View Cluster, View Image, View Project, View Subnet
Basic Access Access Console VM, Clone VM, Create App, Create Image, Create VM, Delete VM, Launch Blueprint, Update VM, View Account, View App, View AWS AZ, View AWS Elastic IP, View AWS Image, View AWS Key Pair, View AWS Machine Type, View AWS Region, View AWS Role, View AWS Security Group, View AWS Subnet, View AWS Volume Type, View AWS VPC, View Blueprint, View Cluster, View Image, View Project, View Subnet, View VM
Full Access Access Console VM, Clone Blueprint, Clone VM, Create App, Create Blueprint, Create Image, Create VM, Delete Blueprint, Delete VM, Download Blueprint, Export Blueprint, Import Blueprint, Launch Blueprint, Render Blueprint, Update Blueprint, Update VM, Upload Blueprint, View Account, View App, View AWS AZ, View AWS Elastic IP, View AWS Image, View AWS Key Pair, View AWS Machine Type, View AWS Region, View AWS Role, View AWS Security Group, View AWS Subnet, View AWS Volume Type, View AWS VPC, View Blueprint, View Cluster, View Image, View Project, View Subnet, View VM
Set Custom Permissions (select from list) Access Console VM, Clone VM, Create App, Create Blueprint, Create Image, Create VM, Delete Blueprint, Delete VM, Download Blueprint, Export Blueprint, Import Blueprint, Launch Blueprint, Render Blueprint, Update Blueprint, Update VM, Upload Blueprint, View Account, View App, View AWS AZ, View AWS Elastic IP, View AWS Image, View AWS Key Pair, View AWS Machine Type, View AWS Region, View AWS Role, View AWS Security Group, View AWS Subnet, View AWS Volume Type, View AWS VPC, View Blueprint, View Cluster, View Image, View Project, View Subnet, View VM
Marketplace Item No Access (none)
View marketplace and published blueprints View Marketplace Item
View marketplace and publish new blueprints Update Marketplace Item, View Marketplace Item
Full Access Config Marketplace Item, Create Marketplace Item, Delete Marketplace Item, Render Marketplace Item, Update Marketplace Item, View Marketplace Item
Set Custom Permissions (select from list) Config Marketplace Item, Create Marketplace Item, Delete Marketplace Item, Render Marketplace Item, Update Marketplace Item, View Marketplace Item
Report No Access (none)
View Only Notify Report Instance, View Common Report Config, View Report Config, View Report Instance
Full Access Create Common Report Config, Create Report Config, Create Report Instance, Delete Common Report Config, Delete Report Config, Delete Report Instance, Notify Report Instance, Run Report Config, Share Report Config, Share Report Instance, Update Common Report Config, Update Report Config, View Common Report Config, View Report Config, View Report Instance, View User, View User Group
Cluster No Access (none)
View Access View Cluster
Update Access Update Cluster
Full Access Update Cluster, View Cluster
Subnet No Access (none)
View Access View Subnet, View Virtual Switch
Image No Access (none)
View Only View Image
Set Custom Permissions (select from list) Copy Image Remote, Create Image, Delete Image, Migrate Image, Update Image, View Image
OVA No Access (none)
View Access View OVA
Full Access View OVA, Create OVA, Update OVA, and Delete OVA
Set Custom Permissions (Change) View OVA, Create OVA, Update OVA, and Delete OVA
Object Store No Access (none)
View Access View Object Store
Full Access View Object Store, Create Object Store, Update Object Store, and Delete Object Store
Set Custom Permissions (Change) View Object Store, Create Object Store, Update Object Store, and Delete Object Store
Analysis Session No Access (none)
View Only View Analysis Session
Full Access Create Analysis Session, Delete Analysis Session, Share Analysis Session, Update Analysis Session, View Analysis Session, View User and View User Group
Dashboard No Access (none)
View Only View Dashboard
Full Access Create Dashboard, Delete Dashboard, Share Dashboard, Update Dashboard, View Dashboard, View User and View User Group
Capacity Scenario No Access (none)
View Only View Capacity Scenario
Full Access Create Whatif, Delete Whatif, Share Whatif, Update Whatif, View Whatif, View User and View User Group

The following table describes the permissions.

Note: By default, assigning certain permissions to a user role might implicitly assign more permissions to that role. However, the implicitly assigned permissions will not be displayed in the details page for that role. These permissions are displayed only if you manually assign them to that role.
Permission Description Assigned Implicitly By
Create App Allows creating an application.
Delete App Allows deleting an application.
View App Allows viewing an application.
Action Run App Allows running an action on an application.
Download App Runlog Allows downloading an application runlog.
Abort App Runlog Allows aborting an application runlog.
Access Console VM Allows access to the console of a virtual machine.
Create VM Allows creating a virtual machine.
View VM Allows viewing a virtual machine.
Clone VM Allows cloning a virtual machine.
Delete VM Allows deleting a virtual machine.
Export VM Allows exporting a virtual machine.
Snapshot VM Allows taking a snapshot of a virtual machine.
View VM Recovery Point Allows viewing a VM recovery point.
Update VM Recovery Point Allows updating a VM recovery point.
Delete VM Recovery Point Allows deleting a VM recovery point.
Restore VM Recovery Point Allows restoring a VM recovery point.
Update VM Allows updating a virtual machine.
Update VM Boot Config Allows updating a virtual machine's boot configuration. Update VM
Update VM CPU Allows updating a virtual machine's CPU configuration. Update VM
Update VM Categories Allows updating a virtual machine's categories. Update VM
Update VM Description Allows updating a virtual machine's description. Update VM
Update VM GPU List Allows updating a virtual machine's GPUs. Update VM
Update VM NIC List Allows updating a virtual machine's NICs. Update VM
Update VM Owner Allows updating a virtual machine's owner. Update VM
Update VM Project Allows updating a virtual machine's project. Update VM
Update VM NGT Config Allows updating a virtual machine's Nutanix Guest Tools configuration. Update VM
Update VM Power State Allows updating a virtual machine's power state. Update VM
Update VM Disk List Allows updating a virtual machine's disks. Update VM
Update VM Memory Allows updating a virtual machine's memory configuration. Update VM
Update VM Power State Mechanism Allows updating a virtual machine's power state mechanism. Update VM or Update VM Power State
Allow VM Power Off Allows power-off and shutdown operations on a virtual machine. Update VM or Update VM Power State
Allow VM Power On Allows the power-on operation on a virtual machine. Update VM or Update VM Power State
Allow VM Reboot Allows the reboot operation on a virtual machine. Update VM or Update VM Power State
Expand VM Disk Size Allows expanding a virtual machine's disk size. Update VM or Update VM Disk List
Mount VM CDROM Allows mounting an ISO on a virtual machine's CD-ROM. Update VM or Update VM Disk List
Unmount VM CDROM Allows unmounting an ISO from a virtual machine's CD-ROM. Update VM or Update VM Disk List
Update VM Memory Overcommit Allows updating a virtual machine's memory overcommit configuration. Update VM or Update VM Memory
Allow VM Reset Allows the reset (hard reboot) operation on a virtual machine. Update VM, Update VM Power State, or Allow VM Reboot
View Cluster Allows viewing a cluster.
Update Cluster Allows updating a cluster.
Create Image Allows creating an image.
View Image Allows viewing an image.
Copy Image Remote Allows copying an image from the local Prism Central to a remote Prism Central.
Delete Image Allows deleting an image.
Migrate Image Allows migrating an image from Prism Element to Prism Central.
Update Image Allows updating an image.
Create Image Placement Policy Allows creating an image placement policy.
View Image Placement Policy Allows viewing an image placement policy.
Delete Image Placement Policy Allows deleting an image placement policy.
Update Image Placement Policy Allows updating an image placement policy.
Create AWS VM Allows creating an AWS virtual machine.
View AWS VM Allows viewing an AWS virtual machine.
Update AWS VM Allows updating an AWS virtual machine.
Delete AWS VM Allows deleting an AWS virtual machine.
View AWS AZ Allows viewing AWS Availability Zones.
View AWS Elastic IP Allows viewing an AWS Elastic IP.
View AWS Image Allows viewing an AWS image.
View AWS Key Pair Allows viewing AWS key pairs.
View AWS Machine Type Allows viewing AWS machine types.
View AWS Region Allows viewing AWS regions.
View AWS Role Allows viewing AWS roles.
View AWS Security Group Allows viewing an AWS security group.
View AWS Subnet Allows viewing an AWS subnet.
View AWS Volume Type Allows viewing AWS volume types.
View AWS VPC Allows viewing an AWS VPC.
Create Subnet Allows creating a subnet.
View Subnet Allows viewing a subnet.
Update Subnet Allows updating a subnet.
Delete Subnet Allows deleting a subnet.
Create Blueprint Allows creating the blueprint of an application.
View Blueprint Allows viewing the blueprint of an application.
Launch Blueprint Allows launching the blueprint of an application.
Clone Blueprint Allows cloning the blueprint of an application.
Delete Blueprint Allows deleting the blueprint of an application.
Download Blueprint Allows downloading the blueprint of an application.
Export Blueprint Allows exporting the blueprint of an application.
Import Blueprint Allows importing the blueprint of an application.
Render Blueprint Allows rendering the blueprint of an application.
Update Blueprint Allows updating the blueprint of an application.
Upload Blueprint Allows uploading the blueprint of an application.
Create OVA Allows creating an OVA.
View OVA Allows viewing an OVA.
Update OVA Allows updating an OVA.
Delete OVA Allows deleting an OVA.
Create Marketplace Item Allows creating a marketplace item.
View Marketplace Item Allows viewing a marketplace item.
Update Marketplace Item Allows updating a marketplace item.
Config Marketplace Item Allows configuring a marketplace item.
Render Marketplace Item Allows rendering a marketplace item.
Delete Marketplace Item Allows deleting a marketplace item.
Create Report Config Allows creating a report configuration.
View Report Config Allows viewing a report configuration.
Run Report Config Allows running a report configuration.
Share Report Config Allows sharing a report configuration.
Update Report Config Allows updating a report configuration.
Delete Report Config Allows deleting a report configuration.
Create Common Report Config Allows creating a common report configuration.
View Common Report Config Allows viewing a common report configuration.
Update Common Report Config Allows updating a common report configuration.
Delete Common Report Config Allows deleting a common report configuration.
Create Report Instance Allows creating a report instance.
View Report Instance Allows viewing a report instance.
Notify Report Instance Allows notifying a report instance.
Share Report Instance Allows sharing a report instance.
Delete Report Instance Allows deleting a report instance.
View Account Allows viewing an account.
View Project Allows viewing a project.
View User Allows viewing a user.
View User Group Allows viewing a user group.
View Name Category Allows viewing a category's name.
View Value Category Allows viewing a category's value.
View Virtual Switch Allows viewing a virtual switch.
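
The implicit-assignment behavior described in the note above can be modeled as a transitive expansion over an "implied by" mapping. The sketch below is illustrative only; the IMPLIED_BY table is a small hypothetical subset of the permissions listed above, not the complete Prism Central model.

```python
# Illustrative sketch: models how broad permissions implicitly grant finer
# ones, per the "Assigned Implicitly By" column above. This mapping is a
# hypothetical subset for demonstration, not the full Prism Central model.
IMPLIED_BY = {
    "Update VM Power State": {"Update VM"},
    "Update VM Disk List": {"Update VM"},
    "Allow VM Power On": {"Update VM", "Update VM Power State"},
    "Allow VM Reboot": {"Update VM", "Update VM Power State"},
    "Allow VM Reset": {"Update VM", "Update VM Power State", "Allow VM Reboot"},
    "Expand VM Disk Size": {"Update VM", "Update VM Disk List"},
}

def effective_permissions(granted):
    """Expand a set of explicitly granted permissions with implied ones."""
    effective = set(granted)
    changed = True
    while changed:  # repeat until no new permission is implied
        changed = False
        for perm, grantors in IMPLIED_BY.items():
            if perm not in effective and effective & grantors:
                effective.add(perm)
                changed = True
    return effective
```

For example, a role granted only Update VM would, under this model, also carry the finer power-state and disk permissions, matching the note that such implied permissions are not shown on the role's details page.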
Granting Restore Permission to a Project User

About this task

By default, only a self-service admin or a cluster admin can view and restore recovery points. However, a self-service admin or cluster admin can grant a project user permission to restore a VM from a recovery point.

To grant restore permission to a project user, do the following:

Procedure

  1. Log on to Prism Central with cluster admin or self service admin credentials.
  2. Go to the roles dashboard (select Administration > Roles in the pull-down menu) and do one of the following:
    • Click the Create Role button.
    • Select an existing role of a project user and then select Duplicate from the Actions drop-down menu. To modify the duplicate role, select Update Role from the Actions pull-down list.
  3. The Roles page for that role appears. In the Roles page, do the following in the indicated fields:
    1. Role Name : Enter a name for the new role.
    2. Description (optional): Enter a description of the role.
    3. Expand VM Recovery Point and do one of the following:
      • Select Full Access and then select Allow VM recovery point creation .
      • Click Change next to Set Custom Permissions to customize the permissions. Enable Restore VM Recovery Point permission. This permission also grants the permission to view the VM created from the restore process.
    4. Click Save to add the role. The page closes and the new role appears in the Roles view list.
  4. In the Roles view, select the newly created role and click Manage Assignment to assign the user to this role.
  5. In the Add New dialog, do the following:
    • Under Select Users or User Groups or OUs , enter the target user name. The search box displays the matched records. Select the required listing from the records.
    • Under Entities , select VM Recovery Point , select Individual Entry from the drop-down list, and then select All VM Recovery Points.
    • Click Save to finish.

Configuring Role Mapping

About this task

After user authentication is configured (see Configuring Authentication), users and authorized directories are not assigned permissions by default. You must explicitly assign the required permissions to users, authorized directories, or organizational units through role mapping.

You can refine the authentication process by assigning a role with associated permissions to users, groups, and organizational units. This procedure maps users to the predefined roles in Prism Central, such as User Admin, Cluster Admin, and Viewer. To assign roles, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Role Mapping from the Settings page.

    The Role Mapping window appears.

    Figure. Role Mapping Window

  2. To create a role mapping, click the New Mapping button.

    The Create Role Mapping window appears. Enter the required information in the following fields.

    Figure. Create Role Mapping Window

  3. Directory or Provider : Select the target directory or identity provider from the pull-down list.
    Only directories and identity providers previously configured in the authentication settings are available. If the desired directory or provider does not appear in the list, add that directory or provider, and then return to this procedure.
  4. Type : Select the desired LDAP entity type from the pull-down list.
    This field appears only if you have selected a directory from the Directory or Provider pull-down list. The following entity types are available:
    • User : A named user. For example, dev_user_1.
    • Group : A group of users. For example, dev_grp1, dev_grp2, sr_dev_1, and staff_dev_1.
    • OU : An organizational unit with one or more users, groups, or other organizational units. For example, all_dev consists of user dev_user_1 and groups dev_grp1, dev_grp2, sr_dev_1, and staff_dev_1.
  5. Role : Select a user role from the pull-down list.
    You can choose one of the following roles:
    • Viewer : Grants users view-only access to information; they cannot perform any administrative tasks.
    • Cluster Admin (Formerly Prism Central Admin): Allows users to view and perform all administrative tasks except creating or modifying user accounts.
    • User Admin : Allows users to view information, perform administrative tasks, and to create and modify user accounts.
  6. Values : Enter the entity names. The entity names are assigned with the respective roles that you have selected.
    The entity names are case sensitive. To provide more than one entity name, separate the names with commas (,) without any spaces between them.

    LDAP-based authentication

    • For AD

      In the Values field, enter the actual LDAP names of the organizational units (applies to all users and groups in those OUs), groups (applies to all users in those groups), or users (applies to each named user).

      For example, entering sr_dev_1,staff_dev_1 in the Values field when the LDAP type is Group and the role is Cluster Admin, implies that all users in the sr_dev_1 and staff_dev_1 groups are assigned the administrative role for the cluster.

      Do not include the domain name in the value. For example, enter all_dev , and not all_dev@<domain_name> . However, when users log in, they must include the domain name along with the username.

      User : Enter the sAMAccountName or userPrincipalName in the values field.

      Group : Enter common name (cn) or name.

      OU : Enter name.

    • For OpenLDAP

      User : Use the username attribute (that was configured while adding the directory) value.

      Group : Use the group name attribute (cn) value.

      OU : Use the OU attribute (ou) value.

    SAML-based authentication:

    You must configure the NameID attribute in the identity provider. You can enter the NameID returned in the SAML response in the Values field.

    For SAML, only the User type is supported. Other types, such as Group and OU, are not supported.

    If you enable Identity and Access Management, see Security Management Using Identity and Access Management (Prism Central).

  7. Click Save .

    The role mapping configurations are saved, and the new role is listed in the Role Mapping window.

    You can create a role map for each authorized directory. You can also create multiple role maps that apply to a single directory. When there are multiple maps for a directory, the most specific rule for a user applies.

    For example, adding a Group map set to Cluster Admin and a User map set to Viewer for a few specific users in that group means all users in the group have administrator permission except those few specific users who have only viewing permission.

  8. To edit a role map entry, click the pencil icon for that entry.
    After clicking the pencil icon, the Edit Role Mapping window appears which is similar to the Create Role Mapping window. Edit the required information in the required fields and click the Save button to update the changes.
  9. To delete a role map entry, click the X icon for that entry and click the OK button to confirm the role map entry deletion.
    The role map entry is removed from the list.
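
The "most specific rule applies" behavior described above can be sketched as a small resolution function. This is an illustrative model only, not Nutanix code; the entity-type names and the user record shape are assumptions made for the example.

```python
# Illustrative sketch of role-map resolution: when multiple maps in one
# directory match a user, the most specific entity type wins
# (User > Group > OU). Data shapes here are hypothetical.
SPECIFICITY = {"USER": 3, "GROUP": 2, "OU": 1}

def resolve_role(user, mappings):
    """user: dict with 'name', 'groups', and optional 'ou'.
    mappings: list of (entity_type, entity_names, role) tuples."""
    best = None
    for entity_type, names, role in mappings:
        matched = (
            (entity_type == "USER" and user["name"] in names)
            or (entity_type == "GROUP" and set(names) & set(user["groups"]))
            or (entity_type == "OU" and user.get("ou") in names)
        )
        if matched and (best is None or SPECIFICITY[entity_type] > best[0]):
            best = (SPECIFICITY[entity_type], role)
    return best[1] if best else None

# Mirrors the example in the text: a Group map set to Cluster Admin plus a
# User map set to Viewer means the named user gets only Viewer.
mappings = [
    ("GROUP", ["sr_dev_1"], "Cluster Admin"),
    ("USER", ["dev_user_1"], "Viewer"),
]
```

Under this model, every member of sr_dev_1 resolves to Cluster Admin except dev_user_1, whose more specific User map resolves to Viewer.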

Granular Role-Based Access Control (RBAC)

Granular Role-Based Access Control (RBAC) allows you to assign fine-grained VM operation permissions to users based on your specific requirements. This feature enables you to create custom roles with finer permission entities like "Allow VM Power On" or "Allow VM Power Off" as compared to the broader permissions categories like "Update VM".

The procedure to configure Granular RBAC to users is similar to the procedure outlined in Creating a Custom Role. For the complete list of available permissions, see Custom Role Permissions.

Note:
  • Granular RBAC is supported for VMs running on an AHV cluster.
  • Ensure that IAM is enabled (see Enabling IAM for details).
  • You must be running Prism Central version pc.2021.9 or later and AOS version 6.0.1 or later.

Cluster Role-Based Access Control (RBAC)

The Cluster role-based access control (RBAC) feature enables a super-admin user to grant the Prism Admin and Prism Viewer roles access to one or more clusters registered with Prism Central. A user with the Prism Admin or Prism Viewer role can view and act on entities such as VMs, hosts, containers, VM recovery points, and recovery plans from the allowed clusters.

Cluster RBAC is currently supported on an on-prem Prism Central instance hosted in a Prism Element cluster running AHV or ESXi. After you enable the Micro Services Infrastructure feature on Prism Central, the Cluster RBAC feature is then automatically enabled.

This feature supports clusters that are hosted on AHV and VMware ESXi.

Note: Prism Central supports assigning up to 15 clusters to any user or user group.
Configuring Cluster RBAC

About this task

To configure Cluster RBAC in Prism Central for users or user groups, do the following.

Procedure

  1. Log on to Prism Central as an admin user or any user with super admin access.
  2. Configure active directory settings.
    Note: You can skip this step if an active directory is already configured.
    Go to Prism Central Settings > Authentication , click + New Directory and add your preferred active directory.
  3. Click the hamburger menu and go to Administration > Roles .
    The page displays system defined and custom roles.
  4. Select Prism Admin or Prism Viewer role, then click Actions > Manage Assignment .
  5. Click Add New to add a new user or user groups or OU (IDP users or user groups) to this role.
    Figure. Role Assignment

    You will add users or user groups and assign clusters to the new role in the upcoming steps.

  6. In the Select Users or User Groups or OUs field, do the following:
    1. Select the configured AD or IDP from the drop-down.
      The drop-down list displays the available types of users and user groups, such as local users, AD-based users or user groups, and SAML-based users or user groups. Select Organizational Units or OUs for AD or for directories that use a SAML-based IDP for authentication.
      Figure. User, User Group or OU Selection

    2. Search and add the users or groups in the Search User field.

      Typing a few letters in the search field displays a list of users from which you can select; you can add multiple user names in this field.

  7. In the Select Clusters field, you can provide cluster access to AD users or User Groups using the Individual entity option (one or more registered clusters) or ALL Clusters option.
    Figure. Select Clusters

  8. Click Save .
    AD or IDP users or User Groups can log on and access Prism Central as a Prism Admin or Prism Viewer, and view or act on the entities like VM, host, and container from the configured clusters.

Cluster RBAC for Volume Group

The Cluster role-based access control (RBAC) for Volume Group feature enables a super-admin user to grant the Prism Admin and Prism Viewer roles access to one or more clusters registered with Prism Central. A user with the Prism Admin role can view and update entities such as volume groups, virtual disks, and storage containers from the allowed clusters. A user with the Prism Viewer role can only view these entities.

Cluster RBAC is currently supported on an on-prem Prism Central instance hosted in a Prism Element cluster running AHV. After you enable the Micro Services Infrastructure feature on Prism Central, the Cluster RBAC feature is then automatically enabled.

Cluster RBAC for Volume Group feature is supported on AHV and ESXi clusters.

Note: Prism Central supports the Cluster RBAC for Volume Group feature starting with the pc.2022.6 release.
Table 1. List of Permissions for Prism Admin and Prism Viewer Roles
Role Privileges
Prism Admin Full administrator privileges except for creating or modifying the user accounts
Prism Viewer View-only privileges
Configuring Cluster RBAC for Volume Group

About this task

To configure Cluster RBAC for Volume Group in Prism Central for users or user groups, do the following.

Procedure

  1. Log on to Prism Central as an admin user or any user with super admin access.
  2. Configure Active Directory settings.
    Note: You can skip this step if an Active Directory is already configured.
    Go to Prism Central Settings > Authentication , click + New Directory and add your preferred Active Directory.
  3. Click the hamburger menu and go to Administration > Roles .
    The page displays system defined and custom roles.
  4. Select Prism Admin or Prism Viewer role, then click Actions > Manage Assignment .
    Figure. Prism Central Roles

    For illustration purposes, the Prism Admin role is selected in this step.
  5. Click Add New to add a new user or user groups to this role.
    Figure. Role Assignment

    You will add users or user groups and assign clusters to the Prism Admin or Prism Viewer role in the upcoming steps.

  6. In the Select Users or Groups field, do the following:
    1. Select the configured active directory (AD) from the drop-down.
    2. Search and add the users or user groups.
    To search for a user or user group, start typing a few letters and the system automatically suggests matching names.
  7. In the Select Clusters field, you can provide cluster access to AD users or User Groups using the Individual entity option (one or more registered clusters) or ALL Clusters option.
    Figure. Select Clusters

  8. Click Save .
    AD users or User Groups can log on and access Prism Central as a Prism Admin or Prism Viewer. They can view or act on the available entities in the configured clusters such as volume groups, virtual disks, and storage containers.

Assigning a Role

About this task

In addition to configuring basic role maps (see Configuring Role Mapping), you can configure more precise role assignments (AHV only). To assign a role to selected users or groups that applies just to a specified set of entities, do the following:

Procedure

  1. Log on to Prism Central as "admin" user or any user with "super admin" access.
  2. Configure Active Directory settings.
    Note: You can skip this step if an active directory is already configured.
    Go to Prism Central Settings > Authentication , click + New Directory and add your preferred active directory.
  3. Click the hamburger menu and go to Administration > Roles .
    The page displays system defined and custom roles.
  4. Select the desired role in the roles dashboard, then click Actions > Manage Assignment .
  5. Click Add New to add Active Directory based users or user groups, or IDP users or user groups (or OUs) to this role.
    Figure. Role Assignment

    You are adding users or user groups and assigning entities to the new role in the next steps.

  6. In the Select Users or User Groups or OUs field, do the following:
    1. Select the configured AD or IDP from the drop-down.
      The drop-down list displays the available types of users and user groups, such as local users, AD-based users or user groups, and SAML-based users or user groups. Select Organizational Units or OUs for AD or for directories that use a SAML-based IDP for authentication.
      Figure. User, User Group or OU Selection

    2. Search and add the users or groups in the Search User field.

      Typing a few letters in the search field displays a list of users from which you can select; you can add multiple user names in this field.

  7. In the Select Entities field, you can provide access to various entities. The list of available entities depends on the role selected in Step 4.

This table lists the available entities for each role:

Table 1. Available Entities for a Role
Role Entities
Consumer AHV VM, Image, Image Placement Policy, OVA, Subnets: VLAN
Developer AHV VM, Cluster, Image, Image Placement Policy, OVA, Subnets:VLAN
Operator AHV VM, Subnets:VLAN
Prism Admin Individual entity (one or more clusters), All Clusters
Prism Viewer Individual entity (one or more clusters), All Clusters
Custom role (User defined role) Individual entity, In Category (only AHV VMs)

This table shows the description of each entity:

Table 2. Description of Entities
Entity Description
AHV VM Allows you to manage VMs including create and edit permission
Image Allows you to access and manage image details
Image Placement Policy Allows you to access and manage image placement policy details
OVA Allows you to view and manage OVA details
Subnets: VLAN Allows you to view subnet details
Cluster Allows you to view and manage details of assigned clusters (AHV and ESXi clusters)
All Clusters Allows you to view and manage details of all clusters
VM Recovery Points Allows you to perform recovery operations with recovery points.
Recovery Plan (Single PC only)

Allows you to view, validate, and test recovery plans. Also allows you to clean up VMs created after recovery plan test.
Individual entity Allows you to view and manage individual entities such as AHV VM, Clusters, and Subnets:VLAN
  8. Repeat Step 5 and Step 6 for any combination of users/entities you want to define.
    Note: To allow users to create certain entities like a VM, you may also need to grant them access to related entities like clusters, networks, and images that the VM requires.
  9. Click Save .
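
The note above means that VM creation typically depends on access to related entities. A pre-assignment check could be sketched as follows; this is purely illustrative, and the prerequisite set is a hypothetical example, not an official Nutanix entitlement matrix.

```python
# Illustrative only: checks whether a role assignment includes the related
# entities a user typically needs before creating a VM. The prerequisite
# set below is a hypothetical example, not an official Nutanix matrix.
VM_CREATE_PREREQS = {"Cluster", "Subnets: VLAN", "Image"}

def missing_vm_prereqs(assigned_entities):
    """Return the related entities still missing for VM creation, sorted."""
    return sorted(VM_CREATE_PREREQS - set(assigned_entities))
```

For instance, a role assigned only "AHV VM" and "Cluster" would still be missing the image and subnet entities under this model.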

Displaying Role Permissions

About this task

Do the following to display the privileges associated with a role.

Procedure

  1. Go to the roles dashboard and select the desired role from the list.

    For example, if you click the Consumer role, the details page for that role appears, and you can view all the privileges associated with the Consumer role.

    Figure. Role Summary Tab

  2. Click the Users tab to display the users that are assigned this role.
    Figure. Role Users Tab

  3. Click the User Groups tab to display the groups that are assigned this role.
  4. Click the Role Assignment tab to display the user/entity pairs assigned this role (see Assigning a Role).

Installing an SSL Certificate

About this task

Prism Central supports SSL certificate-based authentication for console access. To install a self-signed or custom SSL certificate, do the following:
Important: Ensure that SSL certificates are not password protected.
Note: Nutanix recommends that you replace the default self-signed certificate with a CA signed certificate.

Procedure

  1. Click the gear icon in the main menu and then select SSL Certificate in the Settings page.
  2. To replace (or install) a certificate, click the Replace Certificate button.
  3. To create a new self-signed certificate, click the Regenerate Self Signed Certificate option and then click the Apply button.

    A dialog box appears to verify the action; click the OK button. This generates and applies a new RSA 2048-bit self-signed certificate for Prism Central.

    Figure. SSL Certificate Window: Regenerate

  4. To apply a custom certificate that you provide, do the following:
    1. Click the Import Key and Certificate option and then click the Next button.
      Figure. SSL Certificate Window: Import
    2. Do the following in the indicated fields, and then click the Import Files button.
      Note:
      • All the three imported files for the custom certificate must be PEM encoded.
      • Ensure that the private key does not have any extra data (or custom attributes) before the beginning (-----BEGIN PRIVATE KEY-----) or after the end (-----END PRIVATE KEY-----) of the private key block.
      • See Recommended Key Configurations to ensure proper set of key types, sizes/curves, and signature algorithms.
      • Private Key Type : Select the appropriate type for the signed certificate from the pull-down list (RSA 4096 bit, RSA 2048 bit, EC DSA 256 bit, or EC DSA 384 bit).
      • Private Key : Click the Browse button and select the private key associated with the certificate to be imported.
      • Public Certificate : Click the Browse button and select the signed public portion of the server certificate corresponding to the private key.
      • CA Certificate/Chain : Click the Browse button and select the certificate or chain of the signing authority for the public certificate.
      Figure. Importing Certificate
      To meet the high security standards of NIST SP800-131a compliance and the requirements of RFC 6460 for NSA Suite B, and to provide optimal encryption performance, the certificate import process validates that the correct signature algorithm is used for a given key/certificate pair. See Recommended Key Configurations to ensure a proper set of key types, sizes/curves, and signature algorithms. The CA must sign all public certificates with the proper type, size/curve, and signature algorithm for the import process to validate successfully.
      Note: There is no specific requirement for the subject name of the certificates (subject alternative names (SAN) or wildcard certificates are supported in Prism).
      You can use the cat command to concatenate a list of CA certificates into a chain file.
      $ cat signer.crt inter.crt root.crt > server.cert
      Order is essential. The chain must begin with the signer's certificate and end with the root CA certificate as the final entry.
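As a quick sanity check before importing, you can use openssl to confirm that the private key and public certificate are a matching pair and to inspect the order of certificates in the chain file. This is a generic openssl sketch, not a Nutanix-specific procedure; the file names are illustrative.

```shell
# Confirm the private key matches the public certificate:
# both commands must print the same public-key digest (RSA key assumed).
openssl x509 -in server.crt -noout -pubkey | openssl sha256
openssl rsa -in server.key -pubout 2>/dev/null | openssl sha256

# List the subject of each certificate in the chain file, in order.
# The signer's certificate should appear first and the root CA last.
openssl crl2pkcs7 -nocrl -certfile server.cert | openssl pkcs7 -print_certs -noout
```

If the two digests differ, the key and certificate do not belong together and the import will fail validation.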

Results

After generating or uploading the new certificate, the interface gateway restarts. If the certificate and credentials are valid, the interface gateway uses the new certificate immediately, which means your browser session (and all other open browser sessions) becomes invalid until you reload the page and accept the new certificate. If anything is wrong with the certificate (such as a corrupted file or the wrong certificate type), the new certificate is discarded and the system reverts to the original default certificate provided by Nutanix.

Note: The system holds only one custom SSL certificate. If a new certificate is uploaded, it replaces the existing certificate. The previous certificate is discarded.

Controlling Remote (SSH) Access

About this task

Nutanix supports key-based SSH access to Prism Central. Enabling key-based SSH access disables password authentication so that only the keys you have provided can be used to access Prism Central (for the nutanix/admin users only), making Prism Central more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.

To control key-based SSH access to Prism Central, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.

    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.

    Figure. Cluster Lockdown Window

  2. To disable (or enable) remote login access, uncheck (check) the Enable Remote Login with Password box.

    Remote login access is enabled by default.

  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.

    There are no public keys available by default, but you can add any number of public keys.

  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.
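A key pair for this lockdown workflow can be generated on any workstation with OpenSSH; the steps above then accept the public half. This is a minimal sketch, and the file name and comment are illustrative.

```shell
# Generate an ed25519 key pair with no passphrase (file name is illustrative).
ssh-keygen -t ed25519 -f ./pc_access_key -C "pc-lockdown-key" -N ""

# Contents of the .pub file are what you paste into the Key field.
cat ./pc_access_key.pub
```

After adding the public key, you would connect with the matching private key, for example `ssh -i ./pc_access_key nutanix@<prism-central-ip>`.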

Security Policies using Flow

Nutanix Flow includes a policy-driven security framework that inspects traffic within the data center. For more information, see the Flow Microsegmentation Guide.

Data-in-Transit Encryption

Data-in-Transit Encryption allows you to encrypt service level traffic between the cluster nodes. Data-in-Transit Encryption, along with Data-at-Rest Encryption, protects the entire life cycle of data and is an essential countermeasure for unauthorized access of critical data.

To enable Data-in-Transit Encryption, see Enabling Data-in-Transit Encryption.
Note:
  • Data-in-Transit Encryption can have an impact on I/O latency and CPU performance.
  • Intra-cluster traffic encryption is supported only for the Stargate service.
  • RDMA traffic encryption is not supported.
  • When a Controller VM goes down, the traffic from guest VM to remote Controller VM is not encrypted.
  • Traffic between guest VMs connected to Volume Groups is not encrypted when the target disk is on a remote Controller VM.

Enabling Data-in-Transit Encryption

About this task

Data-in-Transit Encryption allows you to encrypt service level traffic between the cluster nodes. To enable Data-in-Transit Encryption, do the following.

Before you begin

  1. Ensure that the cluster is running AOS version 6.1.1 and Prism Central version pc.2022.4.
  2. Ensure that you allow port 2009, which is used for Data-in-Transit Encryption.
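Before enabling the feature, you can confirm from another host that port 2009 is not blocked by a firewall. This sketch uses the bash /dev/tcp pseudo-device so it needs no extra tools; the CVM IP address is a placeholder for an address from your own cluster.

```shell
# Placeholder CVM IP; substitute an address from your cluster.
CVM_IP=10.0.0.10

if timeout 3 bash -c "exec 3<>/dev/tcp/${CVM_IP}/2009"; then
  echo "port 2009 reachable"
else
  echo "port 2009 blocked or host unreachable"
fi
```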

Procedure

  1. Log on to Prism Central and click the gear icon.
  2. Go to Hardware > Clusters > Actions and select Enable Data-In-Transit Encryption .
    The following confirmation dialog box is displayed.
    Figure. Enable Data-in-Transit Encryption

  3. Click Enable to confirm.

What to do next

You can disable Data-in-Transit Encryption after you have enabled it. To disable Data-in-Transit Encryption, see Disabling Data-in-Transit Encryption.

Disabling Data-in-Transit Encryption

About this task

You can disable Data-in-Transit Encryption after you have enabled it. To disable Data-in-Transit Encryption, do the following.

Procedure

  1. Log on to Prism Central and click the gear icon.
  2. Go to Hardware > Clusters > Actions and select Disable Data-In-Transit Encryption .
    The following confirmation dialog box is displayed.
    Figure. Disable Data-in-Transit Encryption

  3. Click Disable to confirm.

Password Retry Lockout

For enhanced security, Prism Central and Prism Element lock out the default 'admin' account for 15 minutes after five unsuccessful login attempts. Once the account is locked out, the following message is displayed on the login screen.

Account locked due to too many failed attempts

You can try entering the password again after the 15-minute lockout period, or contact Nutanix Support if you have forgotten your password.

Security Management Using Identity and Access Management (Prism Central)

Enabled and administered from Prism Central, Identity and Access Management (IAM) is an authentication and authorization feature that uses attribute-based access control (ABAC). It is disabled by default. This section describes Prism Central IAM prerequisites, enablement, and SAML-based standard-compliant identity provider (IDP) configuration.

After you enable the Microservices Infrastructure (CMSP) on Prism Central, IAM is automatically enabled. You can configure a wider selection of identity providers, including Security Assertion Markup Language (SAML) based identity providers. The Prism Central web console presents an updated sign-on/authentication page.

The enable process migrates existing directory, identity provider, and user configurations, including Common Access Card (CAC) client authentication configurations. After enabling IAM, if you want to enable a client to authenticate by using certificates, you must also enable CAC authentication. For more information, see Identity and Access Management Prerequisites and Considerations. Also, see the Identity and Access Management Software Support topic in the Prism Central Release Notes for specific support requirements.

The work flows for creating authentication configurations and providing user and role access described in Configuring Authentication are the same whether IAM is enabled or not.

IAM Features

Highly Scalable Architecture

Based on the Kubernetes open source platform, IAM uses independent pods for authentication (AuthN), authorization (AuthZ), and IAM data storage and replication.

  • Each pod automatically scales independently of Prism Central when required. No user intervention or control is required.
  • When new features or functions are available, you can update IAM pods independently of Prism Central updates through Life Cycle Manager (LCM).
  • IAM uses a rolling upgrade method to help ensure zero downtime.
Secure by Design
  • Mutual TLS authentication (mTLS) secures IAM component communication.
  • The Micro Services infrastructure (CMSP) on Prism Central provisions certificates for mTLS.
More SAML Identity Providers (IDP)

Without enabling CMSP/IAM on Prism Central, Active Directory Federation Services (ADFS) is the only supported IDP for Single Sign-on. After you enable CMSP/IAM, IAM supports more IDPs. Nutanix has tested the following IDPs with SAML IDP authentication configured for Prism Central.

  • Active Directory Federation Services (ADFS)
  • Azure Active Directory Federation Services (Azure ADFS)
    Note: Azure AD is not supported.
  • Okta
  • PingOne
  • Shibboleth
  • Keycloak

Users can log on from the Prism Central web console only. IDP-initiated authentication work flows are not supported. That is, logging on or signing on from an IDP web page or site is not supported.

Updated Authentication Page

After enabling IAM, the Prism Central login page is updated depending on your configuration. For example, if you have configured local user account and Active Directory authentication, this default page appears for directory users as follows. To log in as a local user, click the Log In with your Nutanix Local Account link.

Figure. Sample Default Prism Central IAM Logon Page, Active Directory and Local User Authentication

In another example, if you have configured SAML authentication instances named Shibboleth and AD2, Prism Central displays this page.

Figure. Sample Prism Central IAM Logon Page, Active Directory, Identity Provider, and Local User Authentication

Note: After upgrading to pc.2022.9, if a Security Assertion Markup Language (SAML) IDP is configured, you must download the Prism Central metadata and reconfigure the SAML IDP to recognize Prism Central as the service provider. See Updating ADFS When Using SAML Authentication to create the required rules for ADFS.

Identity and Access Management Prerequisites and Considerations

Make sure that you meet the following requirements before you enable the microservices infrastructure, which enables IAM.

IAM Prerequisites

For specific minimum software support and requirements for IAM, see the Prism Central release notes.

For microservices infrastructure requirements, see Enabling Microservices Infrastructure in the Prism Central Guide .

Prism Central
  • The Microservices Infrastructure and IAM are supported on clusters running AHV or ESXi only. For ESXi clusters, you may need to enter your vCenter credentials (user name and password) and a network for deployment.
  • The host cluster must be registered with this Prism Central instance.
  • During installation or upgrade, ensure that you allocate a Virtual IP address (VIP) for Prism Central. For information about how to set the VIP for the Prism Central VM, see Installing Prism Central (1-Click Method) in the Acropolis Upgrade Guide . Once set, do not change this address.
  • Ensure that you have created a fully qualified domain name (FQDN) for Prism Central. Once the Prism Central FQDN is set, do not change it. For more information about how to set the FQDN in the Cluster Details window, see Managing Prism Central in the Prism Central Guide .
  • When microservices infrastructure is enabled on a Prism Central scale-out three-node deployment, reconfiguring the IP address and gateway of the Prism Central VMs is not supported.
  • Ensure connectivity between Prism Central and its managed Prism Element clusters.
  • Enable Microservices Infrastructure on Prism Central (CMSP) first to enable and use IAM. For more information, see Enabling Microservices Infrastructure in the Prism Central Guide .
  • IAM supports small or large single PC VM deployments. However, you cannot expand the single VM deployment to a scale-out three-node deployment once CMSP has been enabled.
  • IAM supports scale-out three-node PC VM deployments. Reverting this deployment to a single PC VM deployment is not supported.
  • Make sure Prism Central is managing at least one Prism Element cluster. For more information about how to register a cluster, see Register (Unregister) Cluster with Prism Central in the Prism Central Guide .
  • You cannot unregister the Prism Element cluster that is hosting the Prism Central deployment where you have enabled CMSP and IAM. You can unregister other clusters being managed by this Prism Central deployment.
Prism Element Clusters

Ensure that you have configured the following cluster settings. For more information, see Modifying Cluster Details in Prism Web Console Guide .

  • Virtual IP address (VIP). Once set, do not change this address
  • iSCSI data services IP address (DSIP). Once set, do not change this address
  • NTP server
  • Name server

IAM Considerations

Existing Authentication and Authorization Migrated After Enabling IAM
  • When you enable IAM by enabling CMSP, IAM migrates existing authentication and authorization configurations, including Common Access Card client authentication configurations.
Upgrading Prism Central After Enabling IAM
  • After you upgrade Prism Central, if CMSP (and therefore IAM) was previously enabled, both services remain enabled by default. Contact Nutanix Support for any custom requirements.
Note: After upgrading to pc.2022.9, if a Security Assertion Markup Language (SAML) IDP is configured, you must download the Prism Central metadata and reconfigure the SAML IDP to recognize Prism Central as the service provider. See Updating ADFS When Using SAML Authentication to create the required rules for ADFS.
User Session Lifetime
  • Each session has a maximum lifetime of 8 hours.
  • The session idle timeout is 15 minutes. After 15 minutes of inactivity, a user or client is logged out and must re-authenticate.
Client Authentication and Common Access Card (CAC) Support
  • IAM supports deployments where CAC authentication and client authentication are enabled on Prism Central. After enabling IAM, if you want to enable a client to authenticate by using certificates, you must also enable CAC authentication.
  • Ensure that port 9441 is open in your firewall if you are using CAC client authentication.
Hypervisor Support
  • You can deploy IAM on an on-premises Prism Central (PC) deployment hosted on an AOS cluster running AHV or ESXi. Clusters running other hypervisors are not supported.

Enabling IAM

Before you begin

  • IAM on Prism Central is disabled by default. When you enable the Microservices Infrastructure on Prism Central, IAM is automatically enabled.
  • See Enabling Microservices Infrastructure in the Prism Central Guide .
  • See Identity and Access Management Prerequisites and Considerations and also the Identity and Access Management Software Support topic in the Prism Central release notes for specific support requirements.

Procedure

  1. Enable Microservices Infrastructure on Prism Central as described in Enabling Microservices Infrastructure in the Prism Central Guide .
  2. To view task status:
    1. Open a web browser and log in to the Prism Central web console.
    2. Go to the Activity > Tasks dashboard and find the IAM Migration & Bootstrap task.
    The task takes up to 60 minutes to complete. Part of the task is migrating existing authentication configurations.
  3. After the enablement tasks are completed, including the IAM Migration & Bootstrap task, log out of Prism Central. Wait at least 15 minutes before logging on to Prism Central.

    The Prism Central web console shows a new login page as shown below. This confirms that IAM is enabled.

    Note:

    Depending on your existing authentication configuration, the login page might look different.

    Also, you can go to Settings > Prism Central Management page to verify if Prism Central on Microservices Infrastructure (CMSP) is enabled. CMSP and IAM enablement happen together.

    Figure. Sample Prism Central IAM Logon Page

What to do next

Configure authentication and access. If you are implementing SAML authentication with Active Directory Federated Services (ADFS), see Updating ADFS When Using SAML Authentication.

Configuring Authentication

Caution: Prism Central does not support the SSLv2 and SSLv3 ciphers. Therefore, you must disable the SSLv2 and SSLv3 options in your browser before accessing Prism Central; disabling them avoids SSL fallback and access-denial situations. Ensure that the TLS protocol is enabled in the browser.

Prism Central supports user authentication with these authentication options:

  • SAML authentication. Users can authenticate through a supported identity provider when SAML support is enabled for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between two parties: an identity provider (IDP) and Prism Central as the service provider.

    With IAM, in addition to ADFS, other IDPs are available. For more information, see Security Management Using Identity and Access Management (Prism Central) and Updating ADFS When Using SAML Authentication.

  • Local user authentication. Users can authenticate if they have a local Prism Central account. For more information, see Managing Local User Accounts .
  • Active Directory authentication. Users can authenticate using their Active Directory (or OpenLDAP) credentials when Active Directory support is enabled for Prism Central.

Enabling and Configuring Client Authentication/CAC

Before you begin

  • To enable a client to authenticate by using certificates, you must also enable CAC authentication.
  • Ensure that port 9441 is open in your firewall if you are using CAC client authentication. After enabling CAC client authentication, your CAC logon redirects the browser to use port 9441.

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. Click the Client tab, then do the following steps.
    1. Select the Configure Client Chain Certificate check box.
    2. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note: Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
    3. To enable client authentication, click Enable Client Authentication .
    4. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication to disable client authentication.
      • Click Remove to delete the current certificate. (This deletion also disables client authentication.)
      • To enable OCSP or CRL-based certificate revocation checking, see Certificate Revocation Checking.

    Client authentication allows you to securely access Prism by exchanging a digital certificate. Prism validates that the certificate is signed by the trusted signing certificate of your organization.

    Client authentication ensures that the Nutanix cluster gets a valid certificate from the user. Normally, a one-way authentication process occurs where the server provides a certificate so the user can verify the authenticity of the server. When client authentication is enabled, this process becomes a two-way authentication where the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console either by installing the certificate on the local machine, or by providing it through a smart card reader.

    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
  3. To specify a service account that the Prism Central web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box. Then do the following in the indicated fields:
    1. Directory : Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username : Enter the user name, in the username@domain.com format, that you want the web console to use to log on to the Active Directory.
    3. Service Password : Enter the password for the service user name.
    4. Click Enable CAC Authentication .
      Note: For federal customers only.
      Note: The Prism Central console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:

    • Validates that the certificate has been signed by the trusted signing certificate of your organization.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of an account within the Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Central supports both certificate authentication and basic authentication, to handle both Prism Central login using a certificate and REST API access using basic authentication. REST API clients cannot use CAC certificates. With this behavior, if a certificate is presented during Prism Central login, certificate authentication is used; if it is not, basic authentication is enforced.
    If you map a Prism Central role to a CAC user and not to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page. The web console login page is not displayed.
    Note: If you have logged on to Prism Central by using CAC authentication, to successfully log out of Prism Central, close the browser after you click Log Out .

Updating ADFS When Using SAML Authentication

With Nutanix IAM, to maintain compatibility with new and existing IDP/SAML authentication configurations, update your Active Directory Federated Services (ADFS) configuration, specifically the Prism Central Relying Party Trust settings. In these configurations, SAML is the open standard used for exchanging authentication and authorization data between ADFS as the identity provider (IDP) and Prism Central as the service provider. See the Microsoft Active Directory Federation Services documentation for details.

About this task

In your ADFS Server configuration, update the Prism Central Relying Party Trust settings by creating claim rules to send the selected LDAP attribute as the SAML NameID in email address format. For example, map the User Principal Name to NameID in the SAML assertion claims.

As an example, this topic uses UPN as the LDAP attribute to map. You could also map the email address attribute to NameID. See the Microsoft Active Directory Federation Services documentation for details about creating a claims aware Relying Party Trust and claims rules.

Procedure

  1. In the Relying Party Trust for Prism Central, configure a claims issuance policy with two rules.
    1. One rule based on the Send LDAP Attributes as Claims template.
    2. One rule based on the Transform an Incoming Claim template.
  2. For the rule using the Send LDAP Attributes as Claims template, select the LDAP Attribute as User-Principal-Name and set Outgoing Claim Type to UPN .
    For User group configuration using the Send LDAP Attributes as Claims template, select the LDAP Attribute as Token-Groups - Unqualified-Names and set Outgoing Claim Type to Group .
  3. For the rule using the Transform an Incoming Claim template:
    1. Set Incoming claim type to UPN .
    2. Set the Outgoing claim type to Name ID .
    3. Set the Outgoing name ID format to Email .
    4. Select Pass through all claim values .

Adding a SAML-based Identity Provider

Before you begin

  • An identity provider (typically a server or other computer) is the system that provides authentication through a SAML request. There are various implementations that can provide authentication services in line with the SAML standard.
  • You can specify other tested standard-compliant IDPs in addition to ADFS. See the Prism Central release notes topic Identity and Access Management Software Support for specific support requirements and also Security Management Using Identity and Access Management (Prism Central).

    IAM allows only one identity provider at a time, so if you already configured one, the + New IDP link does not appear.

  • You must configure the identity provider to return the NameID attribute in SAML response. Prism Central uses the NameID attribute for role mapping.

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. To add a SAML-based identity provider, click the + New IDP link.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Configuration name : Enter a name for the identity provider. This name appears in the logon authentication screen.
    2. Group Attribute Name (Optional) : Optionally, enter the group attribute name such as groups . Ensure that this name matches the group attribute name provided in the IDP configuration.
    3. Group Attribute Delimiter (Optional) : Optionally, enter the delimiter to use when multiple groups are specified for the Group attribute.
    4. Import Metadata : Click this option to upload a metadata file that contains the identity provider information.

      Identity providers typically provide an XML file on their website that includes metadata about that identity provider, which you can download from that site and then upload to Prism Central. Click + Import Metadata to open a search window on your local system and then select the target XML file that you downloaded previously. Click the Save button to save the configuration.

      Figure. Identity Provider Fields (metadata configuration)

    This step completes configuring an identity provider in Prism Central, but you must also configure the callback URL for Prism Central on the identity provider. To configure the callback URL, click the Download Metadata link just below the Identity Providers table to download an XML file that describes Prism Central and then upload this metadata file to the identity provider.
  3. To edit an identity provider entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete an identity provider entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
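Before importing an IDP metadata file (step 2 above), you can quickly confirm that it identifies the expected identity provider. This sketch simply greps the entityID attribute from the metadata XML; the file name is illustrative.

```shell
# Print the entityID declared in the downloaded IDP metadata file.
grep -o 'entityID="[^"]*"' idp-metadata.xml
```

The value printed should match the issuer URL that your IDP administrator expects Prism Central to trust.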

Restoring Identity and Access Management Configuration Settings

Prism Central regularly backs up the Identity and Access Management (IAM) database, typically every 15 minutes. This procedure describes how to restore a specific time-stamped IAM backup instance.

About this task

You can restore authentication and authorization configuration settings available from the IAM database. For example, use this procedure to restore your authentication and authorization configuration to a previous state. You can choose an available time-stamped backup instance when you run the shell script in this procedure, and your authentication and authorization configuration is restored to the settings in the point-in-time backup.

Procedure

  1. Log in to the Prism Central VM through an SSH session as the nutanix user.
  2. Run the restore shell script restore_iamv2.sh
    nutanix@pcvm$ sh /home/nutanix/cluster/bin/restore_iamv2.sh
    The script displays a numbered list of available backups, including the backup file time-stamp.
    Enter the Backup No. from the backup list (default is 1):
  3. Select a backup by number to start the restore process.
    The script displays a series of messages indicating restore progress, similar to:
    You Selected the Backup No 1
    Stopping the IAM services
    Waiting to stop all the IAM services and to start the restore process
    Restore Process Started
    Restore Process Completed
    ...
    Restarting the IAM services
    IAM Services Restarted Successfully

    After the script runs successfully, the command shell prompt returns and your IAM configuration is restored.

  4. To validate that your settings have been restored, log on to the Prism Central web console and go to Settings > Authentication and check the settings.

Accessing a List of Open Source Software Running on a Cluster

As an admin user, you can access a text file that lists all of the open source software running on a cluster.

About this task

Perform the following procedure to access a list of the open source software running on a cluster.

Procedure

  1. Log on to any Controller VM in the cluster as the admin user by using SSH.
  2. Access the text file by using the following command.
    less /usr/local/nutanix/license/blackduck_version_license.txt
Beam User Guide

Beam Hosted

Last updated: 2022-10-31

Beam Overview

Beam is a Cost Governance SaaS product offering from Nutanix that helps cloud-focused organizations gain visibility into cloud spend across multiple cloud environments.

The following are a few of the key functionalities that help you with cost management:

  • Deep visibility and rich analytics detailing cloud consumption patterns.
  • One-click cost optimization across your cloud environments.
  • Proactive identification of idle and underutilized resources, with specific recommendations to resize infrastructure services and ensure optimal cloud consumption.

Beam provides the following capabilities:

  • Visibility into cloud consumption : Provides businesses with deep visibility into their multi-cloud consumption at an aggregate and granular level. Beam also automatically identifies cost anomalies to ensure cloud operators can immediately identify when spending deviations happen.
  • Optimization of Cloud Consumption : Provides cloud operators with a one-click feature and optimization recommendations to easily right-size cloud resources. Beam uses machine intelligence to continuously suggest optimal purchase plans for reserved instances that drive deep cost savings.
  • Control over Cloud Consumption : Helps you set policies that continuously maintain high levels of cloud cost efficiency by automating various cost-saving actions. You can also create budgets for various teams or projects, track spending against allocated budgets, and get alerts when spending exceeds a budget.

Cost Management for Nutanix On-Prem

The Beam application enables you to take control of your cloud spend on Nutanix on-premises resources. The application supports Nutanix enterprise cloud deployments, extending cost visibility into the multi-cloud cost governance portfolio.

Beam provides cost visibility into the on-premises clusters and resources. The costs are modeled to reflect the true cost of owning an on-prem infrastructure including Nutanix hardware, software, facilities, administration, and so on. The holistic cost visibility helps you to understand and control the overall cost associated with the Nutanix private cloud environment.

Nutanix cost management supports the following features.

  • True cost of running on-premises infrastructure with the help of a Total Cost of Ownership (TCO) model.
  • Metering and cost visibility of Nutanix clusters and resources on a daily and monthly granularity.
  • Financial governance through chargeback reports and budget alerts.

The application obtains the purchase and consumption data from Nutanix sales databases, your MyNutanix Portal account, and Pulse.

Note: The historical spend reported in Beam is delayed by at least 24 hours because cost metering is updated only once every 24 hours.

Multicloud Views

Apart from providing cost visibility and optimization recommendations for individual clouds, the application also provides cost visibility and optimization recommendations for entities that span multiple clouds (Nutanix, AWS, Azure, and GCP). You can configure two types of multicloud entities: Financial and Scope.

Financial is a hierarchical structure comprising business units and cost centers used for Chargeback. Chargeback allows you to group cloud resources across multiple clouds (Nutanix, AWS, Azure, and GCP) within cost centers and business units. By design, a cloud resource cannot be part of more than one cost center or business unit. For more information, see Chargeback.

Scope is a custom resource group with resources across multiple clouds (Nutanix, AWS, Azure, and GCP). Scopes are useful in providing visibility to logical resource groups like applications, project teams, cross-functional initiatives, and so on. Hence, by design, a cloud resource can be part of multiple Scopes. For more information, see Scopes.

User Interface Layout

This section provides information on the layout and navigation of the Beam console.

The Beam console provides a graphical user interface to manage Beam settings and features.

View Selector

When you log on to Beam for the first time, the View Selector pop-up appears. The View Selector pop-up allows you to search for an account by typing the account name in the search box, or to navigate to and select a cloud account, business unit or cost center, or custom scope.

The Dashboard is the welcome page that appears after you log in and select a view in Beam.

Figure. View Selector

For the subsequent logins, the last visited page for the last selected view (account, business unit or cost center, and scope) appears. You can click View in the top-right corner to open the View Selector pop-up and select a different view.

Table 1. Available Views
View Description
All Clouds Select this view to get total cost visibility across the AWS, Azure, GCP, and Nutanix accounts to which you have access.
Nutanix Allows you to select Nutanix accounts.
Financial Allows you to select the business units or cost centers that you created for chargeback.
Scopes Allows you to select the custom scopes you created.

Beam Console

The Beam Console screens are divided into different sections, as described in the following image and table.

Figure. Beam Console Overview
Table 2. Beam Console Layout
Menu Description
Hamburger icon Displays a menu of features on the left. When the main menu is displayed, the hamburger icon changes to an X button. Click the X button to hide the menu.
Alerts The Alert option allows you to see system-generated alert notifications. Click the bell icon from the main menu bar to view alerts. Click See More to view the complete list of notifications.
User Menu The user drop-down menu has the following options.
  • Profile - Displays your account information, timezone and email preferences, and Product Access (read/write) for Beam.
  • What's New - Displays the recent product updates.
  • Terms of Service
  • Privacy Policy
  • Responsible Disclosure
  • Log out - Click to log out of the Beam console.
Help Menu You can click the question mark icon in the top-right corner of the Console to view the following:
  • Read Documentation - Redirects you to the Beam documentation.
  • Guided Tours - Opens the list of walkthrough videos that helps you to navigate through various tasks and workflows in Beam.
Widgets Section The widgets section displays the informational and actionable widgets corresponding to the feature that you select from the main menu. For instance, the Dashboard page displays widgets like Spend Overview , Spend Analysis , Spend Overview - TCO Cost Heads , and Spend Overview - Virtual Machines .
View option Click View in the top-right corner to open the View Selector pop-up.

Main Menu

Clicking the Hamburger icon displays a menu of features on the left. When the main menu is displayed, the Hamburger icon changes to an X button. Click the X button to hide the menu.

Figure. Main Menu

The following table describes each feature in the main menu.

Table 3. Main Menu - Available Features
Feature Description
Dashboard Displays the Dashboard page (see Dashboard (Nutanix)).
Analyze Displays the Analyze page (see Cost Analysis (Nutanix)).

Analyze allows you to track cost consumption across all your cloud resources at both the aggregate and granular level. You can drill down into costs further based on your services, accounts, or application workloads.

Chargeback Displays the Chargeback page (see Chargeback).

Enables financial control of your cloud spend by providing the ability to allocate cloud resources to departments based on the definitions you create. It provides a consolidated view across all your cloud accounts in a single pane for Finance views.

Budget Displays the Budget page (see Budget).

A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.

Reports Displays the Reports page (see Cost Reports).

Beam generates reports to track cost consumption across all your cloud accounts at both aggregate and granular levels, like a functional unit, workloads, and applications. The reports are generated automatically and are available to view, download, or share from the Beam console.

Configure menu The Configure menu allows you to do the following operations.
  • Add and manage Nutanix accounts in Beam. For more information, see Onboarding Nutanix Account.
  • Manage Beam users
  • Chargeback
  • Scopes
  • Nutanix Cost Configuration

Administration and User Management

Beam provides the following administrative controls.

  • Add and manage user access to various cloud accounts in Beam.
  • Two types of access can be granted to a user.
    • Admin Access - The user gets read and write access to all cloud accounts added in your Beam account.
    • User Access - The administrator can grant Read Only or Read & Write permissions on selected cloud accounts to a user, thus allowing the administrator to exercise the principle of least privilege.

Adding a Beam User

You can add and manage users using the Beam console.

About this task

To add a user in Beam, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > User Management . Click Add User .
  2. In the Details page, enter the user details and change the timezone as desired. Then, click Next to go to the Permissions page.
    In the Permissions page, you can choose Admin Access or User Access .
  3. Click Admin Access if you want to give administrator rights to the user. See User Roles for details on types of users. Then, click Save to complete the task.
  4. Click User Access if you want to grant Read Only or Read & Write permissions on selected cloud accounts to the user.
    1. Select the required access ( No Access , Read Only , or Read & Write ) across clouds.
      The No Access option is selected by default.
    2. Select the AWS Payer Account and Linked Accounts, Azure Billing Accounts and Subscriptions, Nutanix accounts, and GCP Billing Accounts and Projects for which you want to provide access. Then, click Save to complete the task.
    Figure. User Management - Granting Permissions

User Roles

This section helps you to understand all the different roles in Beam and when you would use each.

Figure. Beam User Roles
Owner Role

When a user creates a Beam Subscription , a tenant is created and that user becomes the owner of the tenant. An owner is shown as Admin (Owner) in the User Management page. The owner role has the following attributes.

  • Create other users with or without administrative privileges.
  • Access billing subscription information.
  • Access to licensing information. For more information on licensing for Nutanix on-prem accounts, see Licensing.
  • Access to all features.
  • View the admin menu.
  • Perform all the read or write operations.
  • Create or edit budgets.
  • View, create, edit, or delete cost centers.
  • Restrict a user's access to a limited number of cloud accounts.
Note:
  • My Nutanix Account Administrators are considered Owners in Beam.

    You can't create an owner role in Beam. However, you can create as many as three Beam owners from My Nutanix. Only My Nutanix Account Administrators have the necessary permissions to create Beam owners. For more information on My Nutanix user management, see Cloud Services Administration Guide .

  • Only My Nutanix Account Administrators or Beam Owners are able to manage billing and renewals of Beam subscription. For more information on managing billing and renewals, see Cloud Services Administration Guide .
Administrator Role

An administrator is shown as Admin in the User Management page. The administrator role has the following attributes.

  • Access to all features.
  • View the admin menu.
  • Add other users.
  • Perform all the read or write operations.
  • Create or edit budgets.
  • View, create, edit, or delete cost centers.
  • Restrict a user's access to a limited number of cloud accounts.
User Role

A user role is created by granting read-only or read & write permission for selected accounts.

  • An account having a user role cannot create other users.
  • An account having a user role with read-only permission can view the cost data.
  • Read-only or read & write permission can be granted on the selected Nutanix accounts.
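The role hierarchy above can be summarized with a small sketch. The role names mirror the ones Beam documents, but the permission model below is a simplified illustration for clarity, not Beam's actual access-control implementation.

```python
# Simplified illustration of the Beam role hierarchy described above
# (hypothetical model, not Beam's actual access-control code).

ROLE_CAPABILITIES = {
    "owner": {"manage_billing", "manage_users", "read", "write"},
    "admin": {"manage_users", "read", "write"},
    "user_read_write": {"read", "write"},   # scoped to selected cloud accounts
    "user_read_only": {"read"},             # scoped to selected cloud accounts
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_CAPABILITIES.get(role, set())

assert can("owner", "manage_billing")        # only owners manage billing and renewals
assert not can("admin", "manage_billing")
assert can("admin", "manage_users")          # administrators can add other users
assert not can("user_read_only", "write")    # read-only users can only view cost data
```

The key distinction this captures is that the administrator role has every owner capability except billing and subscription management, while user roles are further scoped to the specific accounts an administrator selects.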

ADFS Integration

Active Directory Federation Services (ADFS) is a Single Sign-On (SSO) solution that you can use for implementing single sign-on in Beam. To integrate ADFS with Beam, log on to your MyNutanix account, and perform the integration. See SAML Authentication for details. To log on to your Beam account using ADFS, see Logging into Xi Cloud Services as an SAML User .

User Management

You can configure the following user group types in ADFS.
  • Administrator user - Users added to this group have administrator access.
  • Beam user - Users added to this group can perform operations based on the access policy assigned by the administrator user.
Note:
  • If you are a part of the Beam user group and logging into Beam through ADFS for the first time, you will not have access to any cloud account. Contact your account administrator to get access to an account.
  • If you are a part of the administrator user group and logging into Beam through ADFS, you cannot add and delete users using the Beam user management. You can perform these actions in ADFS.
  • If you are logging into Beam through ADFS, you cannot change roles (administrator to user or user to administrator). You can perform these actions in ADFS.

Support

If you log on to Beam through ADFS, you get access to Nutanix Support only if your MyNutanix account was used to add your Beam user account. If your Beam user account was not added using your MyNutanix account, you do not get access to support.

Starting Beam Free Trial

Beam is a multi-cloud cost governance SaaS product that provides a free, full-featured, 14-day trial period. During the free trial period, you can configure your Nutanix On-premises and public cloud accounts (AWS, Azure, and GCP) in Beam to evaluate the features. It takes about 15 minutes to configure your cloud accounts in Beam.

About this task

The following section describes the procedure to start a free trial.
  • If you have an existing MyNutanix account, perform Step 1 .
  • If you do not have an existing MyNutanix account, perform Step 2 .

Procedure

  1. If you have an existing MyNutanix account, do the following.
    1. Log in to your MyNutanix account.
    2. In the Dashboard, scroll down to find the Beam application. Then, click Launch to open the Beam application and start your free trial.
      Figure. MyNutanix Dashboard - Launching Xi-Beam
  2. If you do not have an existing MyNutanix account, do the following.
    1. Open the Beam webpage.
    2. Click Start Free Trial and fill in the form that appears. Then, click Submit .
      Figure. Free Trial - Form
      Your MyNutanix account is created, and you receive a verification email that contains a link for logging in to the Beam application.

What to do next

You can add your Nutanix, AWS, Azure, and GCP accounts in Beam.

Onboarding Nutanix Account

You can add your Nutanix account to the Beam application. The application allows you to track your cloud expenses, optimize your cloud costs, and gain visibility into your spend.

Before you begin

Ensure that you meet the following prerequisites:
  • License ID of a Nutanix asset. Refer to Nutanix Support Portal.
  • Beam administrator access.

About this task

To add your Nutanix account in Beam, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Nutanix Accounts .
  2. Enter your License Key or Asset Serial Number .
    Figure. Add Nutanix Account Page
    Note: This step can fail with an error message in the following scenarios.
    • The serial number or license key is invalid.
    • The serial number is valid but does not belong to your Nutanix account.
  3. Once validated, Beam fetches the account name automatically. Optionally, update the account name.
  4. Click Save to complete.
    Note: You can set up only one Nutanix account for each Beam account.

What to do next

After you add the Nutanix account, do the following for the cluster on-boarding.
  • You must enable Pulse 4K for the clusters for which you want cost visibility. You can enable Pulse from the Prism console. The NCC version must be 3.6.4 or higher. To enable Pulse for your cluster, see Pulse Health Monitoring and Support.
    Note:
    • Beam for Nutanix on-premises only works when Pulse is enabled.
    • There are known data inconsistencies for clusters with AOS versions 5.10.10, 5.10.11, and 5.15. Upgrade AOS to 5.15.1 or higher for cluster costing and resource-level metering to work in Beam.
    • Nutanix recommends upgrading NCC to 3.10 or higher to get accurate cost visibility for snapshots (Protection Domain). There are also known data inconsistencies with NCC version 4.3.0.1, so upgrade NCC to 4.4.0 or higher. For more information about upgrading NCC, see the Nutanix Cluster Check (NCC) Guide.
  • To view the cluster names and VM names, you must enable additional support information. To enable additional support information, see Configuring Pulse.
  • Beam supports cost visibility for Objects, which requires Prism Element 5.11.2 or higher and Prism Central 5.17.1 or higher.
Note: The cost of a VM does not depend on the utilization of resources (for example, CPU, RAM, or storage); it depends on the resources allocated to the VM.

Beam Gov(US) Cost Governance

Beam Gov(US) is an isolated Beam instance running in the AWS GovCloud (US) region. It caters to the specific regulatory compliance requirements of US government agencies at the federal, state, and local levels, government contractors, educational institutions, and other US customers who run sensitive workloads in AWS GovCloud and Azure Government Cloud and want to bring cost governance to their cloud infrastructure to gain visibility into, optimize, and control their spend. Beam Gov(US) provides a single-pane consolidated view for Cost Governance.

The onboarding process for Beam Gov(US) users differs from the onboarding process for commercial account users.
  • A separate Beam instance is deployed in the AWS GovCloud (US-West) region with a separate login URL for Beam Gov(US) users.
  • Beam Gov(US) supports the cost governance for AWS, Azure, GCP, and Nutanix On-premises in the GovCloud.
  • Beam Gov(US) also supports commercial cloud accounts along with the GovCloud.
Note: Beam Gov(US) Cost Governance is an early access feature. For more information, contact Nutanix Support .

Beam Gov(US) Sign up

The Beam Gov(US) sign-up includes two steps.
  1. Beam Gov(US) Sign-up Request Approval - The sign-up request approval process involves the following steps.
    • Sign-up Request

      Contact the sales team expressing an interest in using Beam Gov(US).

    • Customer Verification

      The request is forwarded for verification to the Nutanix GovCloud Security team. As part of the verification, you receive a form through DocuSign that you must fill in and return.

    • Beam Account Activation

      Once the request passes verification, Customer Service initiates the account creation process, and the primary user is notified of the login details through an email. The email contains a verification link; after verifying the email address, the user can log in to Beam Gov(US).

  2. Account Registration Completion - In the Beam Gov(US) login page, create a password and select a timezone. Then log in to the Beam console. You can enable or disable MFA from the Profile page in the Beam console. For more information on enabling or disabling MFA, see User Management.
Note: After you complete the sign-up steps, perform the Nutanix account onboarding. There are no changes in the onboarding steps for Beam Gov(US). For more information, see Onboarding Nutanix Account.

User Management

You can add and manage users for GovCloud using the Beam console.

You can perform the following operations in Beam Gov(US).
  • Add a user

    To go to the User Management page, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > User Management . To add a user, see Adding a Beam User.

    In the case of AWS user management, the user may have access to the commercial account but no access to the GovCloud account linked to the commercial account. To give AWS GovCloud account access to a user, you must select the GovCloud account linked to the commercial account.

    Figure. User Management - GovCloud Account
  • Resetting Password and MFA Management
  • Configuring Single Sign-On

Resetting Password and MFA Management

You can reset your password and enable or disable MFA from the Profile page in the Beam console.

About this task

To go to the Profile page, click the user drop-down menu on the top-right corner and select Profile .

To change your password, click the Change Password link.

Figure. Profile Page - Change Password Link

To enable MFA, do the following.

Procedure

  1. In the Profile page, click the Enable MFA link.

    The Enable Multi-factor Authentication page appears.

  2. Do one of the following.
    • Scan the QR code by using the virtual MFA application (Google or Microsoft Authenticator).
    • Click the secret code link to get the secret code. Enter the secret code in your virtual MFA application.
      Figure. Enable Multi-factor Authentication Page

      The virtual MFA application starts generating codes.

  3. Enter the codes in Code 1 and Code 2 boxes. Then click Set MFA to enable MFA.

    To disable MFA, click the Disable MFA link in the Profile page.

    If your MFA device is lost or inaccessible, click the Login via OTP (for lost MFA)? link. Beam sends an OTP to your registered email ID. Enter the OTP to log in to Beam.

    Figure. Beam Login Page - Lost MFA Link

Configuring Single Sign-On

The following section describes how to configure the single sign-on feature for Beam Gov(US).

About this task

To configure the single sign-on feature, do the following.

Procedure

  1. Log on to the Beam Console.
  2. Click the user drop-down menu on the top-right corner and select Single Sign On .
  3. In the Single Sign On page, in the Application Id box, enter the application ID.
    Note: To get the Application Id, contact Nutanix Support .

    A success message appears, indicating that single sign-on is configured successfully.

  4. Log out from the Beam Console to go to the login page.
  5. In the Login page, click Login with Single Sign On .
  6. In the Email box, enter your email ID.
    If there is a conflict when logging in with your email address, you can click Try with Application ID to log in with the application ID.
  7. Click Login .

    You are redirected to your organization’s single sign-on application page.

    Enter your credentials and log in to Beam Gov(US).

Getting Started With Configurations

After you onboard your Nutanix accounts in Beam, the following information becomes available for you to consume:
  • Dashboard - Get a graphical view of your overall spend, spend analysis, spend overview - TCO cost heads, and spend overview - virtual machines.
  • Analyze - Provides deep visibility into your projected and current spend and allows you to track spend across Nutanix resources at both an aggregate and granular level. You can drill down into costs further based on your Clusters, Services, Service Types, Cost Centers, and Tags (Prism Categories are treated as Tags).
The following are the configurations to perform so that you can begin controlling your Nutanix On-premises consumption:
  • Verify and update Nutanix cost - The Product Portfolio tab allows you to view all the Nutanix products purchased to date, their capacity, and their assumed market prices. Beam calculates spend data based on the assumed market prices of Nutanix products. You can manually verify and reconfigure the system-assumed cost of your purchased Nutanix products or licenses.
  • Configure the Total Cost of Ownership (TCO) model - The Cluster tab provides a list of all the clusters along with the monthly cost for each cluster. The Total Cost of Ownership (TCO) of a cluster is treated as its cost across Beam. This model covers all the cost considerations of running Nutanix Enterprise Cloud in your datacenter. You can view the details of different cost heads for a specific cluster and replace the industry-standard TCO assumptions with the actual TCO inputs for your cluster.
  • Select the Resource Metering Model - The Resource Metering tab allows you to select the resource metering model for individual clusters. Resource metering provides a granular view of the cost of running a resource within a Nutanix cluster. The granular resource level costing helps in understanding the cost incurred to run a Nutanix resource that can be used for Chargeback. Resource metering is done by allocating the amortized cost of the cluster proportionally to the resources that are running in the cluster. You can select between Actual Virtual Capacity and Target Virtual Capacity models.
  • Create Business Units and Cost Centers for Chargeback – You can define a business unit by combining a group of cost centers. Chargeback is built on the business unit and cost center configuration construct that you define for the resources across all your Nutanix accounts.
  • Create Budget Alerts - A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.

Nutanix Cost Configuration

Beam allows you to define configuration settings for your cloud cost data. Cost Configuration is a set of rules that allows you to update cost inputs manually and to select cost presentation options, allocation model options, and reporting rules that define how the cost of cloud resources is reported in Beam.

Cost metering in Beam is a three-step process that provides a detailed view of the existing spend on resources based on the purchased products and deployed clusters.
  1. Cluster Costing - The cost of purchased Nutanix, third-party, or OEM hardware and software is added to the Total Cost of Ownership (TCO) model to calculate the monthly amortized cost of the cluster. For more details about the TCO configuration of a cluster, see the Cluster - TCO Configuration section of this user guide.
  2. Resource Metering - The monthly amortized cost of the cluster is split at a daily granularity across the virtual resources used within the cluster, based on their usage. For more details about resource metering and the resource metering model, see the Resource Metering section of this user guide.
  3. Service Costing - The blended resource cost (for resources such as guest VMs, Snapshots, Files, and Objects) is calculated by including the cost of their usage and overheads such as the software cost and the hidden shared platform cost. For example, for a Nutanix VM, the cost of vRAM, vStorage, and vCPU is included along with the amortized cost of management licenses (such as Prism, Flow, and Calm) and the platform (Prism Central).
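To make the proportional split in steps 2 and 3 concrete, the sketch below divides a cluster's monthly amortized cost across VMs according to their allocated capacity. It is illustrative only: the function name, cost figures, and capacity units are hypothetical and do not reflect Beam's actual implementation, though it mirrors the documented behavior that cost follows allocated resources rather than utilization.

```python
# Illustrative sketch of proportional resource metering (hypothetical,
# not Beam's actual code). A cluster's monthly amortized TCO is split at a
# daily granularity across VMs in proportion to their allocated virtual
# capacity, since VM cost follows allocation, not utilization.

def daily_vm_costs(monthly_cluster_tco, days_in_month, vm_allocations):
    """vm_allocations maps VM name -> allocated capacity units
    (e.g. a weighted combination of vCPU, vRAM, and vStorage)."""
    daily_cluster_cost = monthly_cluster_tco / days_in_month
    total_allocated = sum(vm_allocations.values())
    return {
        vm: daily_cluster_cost * units / total_allocated
        for vm, units in vm_allocations.items()
    }

costs = daily_vm_costs(
    monthly_cluster_tco=9000.0,   # assumed amortized monthly cluster cost (USD)
    days_in_month=30,
    vm_allocations={"vm-a": 4, "vm-b": 8, "vm-c": 12},  # hypothetical units
)
# costs == {"vm-a": 50.0, "vm-b": 100.0, "vm-c": 150.0} per day
```

A service-costing layer would then add per-resource overheads (license and shared-platform amortization) on top of these blended figures.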

Beam supports only Pulse-enabled clusters. Any cluster without Pulse enabled is considered not onboarded in Beam and is not shown in the Cluster tab. If Pulse data is unavailable for a particular day, the cluster cost incurred for that day is reported as Unmetered cluster cost.

You can configure the cost for Nutanix Product Portfolio, Cluster, and Resources.

To go to the Cost Configuration page, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Nutanix Cost Configuration .

In the Nutanix Cost Configuration page, you can view the following tabs.
  • Product Portfolio - The Product Portfolio tab is a comprehensive repository of all the Nutanix products purchased to date, their capacity, and assumed market prices.
  • Cluster - The Cluster tab provides a list of all the clusters and the monthly cost for each cluster. The TCO model is the cost allocation basis for the clusters. You can edit the TCO details for each cluster.
  • Resource Metering - The Resource Metering tab provides a list of clusters along with the resource metering model selected for each cluster. The resource metering model allocates the amortized cost of the cluster proportionally to the resources that are running in the cluster. The proportional allocation can be based on the actual virtual capacity or target virtual capacity. You can configure the allocation type at a cluster level.

Nutanix Cost - Verify and Update

The Product Portfolio tab is a comprehensive repository of all the Nutanix products purchased to date, their capacity, and assumed market prices.

The Product Portfolio tab allows an administrator to do the following.
  • Filter a product by the name or view the active or expired products.
  • View the list of products that Beam considers for the cost calculations. Beam considers only the products with active terms for amortized cost calculations.
  • View the assumed costs for each product or license purchase.

In the top-right corner of the Product Portfolio tab, select the depreciation period from the Depreciation Schedule drop-down list and click Apply . Hardware costs and any perpetual licenses are amortized based on the depreciation period you select, which affects the amortized cost calculations within the product.
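As a rough illustration of how the selected depreciation period changes the amortized cost, the straight-line sketch below uses a hypothetical purchase price; Beam's actual amortization model may differ.

```python
# Straight-line amortization sketch (hypothetical figures; Beam's actual
# model may differ). A shorter depreciation schedule yields a higher
# monthly amortized cost for the same purchase.

def monthly_amortized_cost(purchase_price, depreciation_years):
    return purchase_price / (depreciation_years * 12)

for years in (3, 4, 5):
    cost = monthly_amortized_cost(60000.0, years)  # assumed $60,000 hardware purchase
    print(f"{years}-year schedule: ${cost:,.2f}/month")
# 3-year: $1,666.67; 4-year: $1,250.00; 5-year: $1,000.00 per month
```

This is why changing the Depreciation Schedule value affects every amortized cost shown for hardware and perpetual licenses in the portfolio.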

You can filter the products according to the product type (software or hardware) and product status (active or expired).

Figure. Product Portfolio Page

Configuring Nutanix Products Cost

You can manually reconfigure the system-assumed cost of your purchased Nutanix products or licenses.

About this task

Beam calculates spend data based on the assumed market prices of Nutanix products. The spend data shown in Beam may not reflect the actual price that you paid as a Nutanix customer.
You can configure the cost in any of the following ways.
  • Inline cost editing : This option allows you to edit the cost of individual products. You can filter a subset of products using the search and individually overwrite the price with the desired price you want to use in the calculation.
    Note: If you reconfigure the price for a line item more than once, Beam uses the latest update for all calculations.
  • Upload and Import : This option allows you to import the cost of individual Nutanix products with a Nutanix cost configuration file.

You can download the Nutanix cost configuration file (in XLS format) of the product portfolio to edit the price and terms of individual products in an offline mode. You can upload and import the cost configuration file in Beam after you update the file with the new cost and term inputs.

To configure the cost of your purchased Nutanix products, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Nutanix Cost Configuration .
    The Product Portfolio tab appears.
  2. In the Product Portfolio tab, click Configure Cost .
    The Cost Configurations page appears.

    You can select Inline cost editing or Upload and Import .

  3. If you select Inline cost editing , do the following.
    • In the Cost column, enter the new cost value for the products you want. Then click Save to save your changes.
  4. If you select Upload and Import , do the following.
    1. Click Download File to download the Nutanix cost configuration (XLS) file.
      Figure. Nutanix Cost Configuration - Upload and Import
      Follow the on-screen instructions and Nutanix cost configuration guidelines to update the cost and terms in the NX_Cost_Configuration sheet of the cost configuration (XLS) file.
      Note: You can only edit the highlighted cells in the New Cost (in USD) column in the NX_Cost_Configuration sheet of the cost configuration (XLS) file.
    2. In the Cost Configurations page, click Upload and Import to upload the updated cost configuration (XLS) file in Beam.

Cluster - TCO Configuration

The Cluster tab provides a list of all the clusters along with the monthly cost for each cluster.

This section allows you to configure the TCO of a cluster. The TCO model includes default industry-standard assumptions that allow you to include the cost of non-Nutanix platforms, providing the true cost of running the Nutanix Enterprise Cloud in your datacenter.

You can click Edit TCO against a cluster to view the details of different cost heads for a specific cluster and edit the industry-specific TCO assumptions to the actual TCO inputs for your cluster.
Figure. TCO Configuration

Total Cost of Ownership

Beam uses a total cost of ownership (TCO) model that covers all the cost considerations of running Nutanix Enterprise Cloud in your datacenter.

TCO is a configurable financial model that allows you to analyze the direct and indirect costs of owning, operating, and maintaining the Nutanix Enterprise Cloud. To simplify the accounting of the different cost heads contributing to the total cost of running your Nutanix Enterprise Cloud, TCO provides a costing model that includes default costs and assumed (industry standard) costs. Using TCO, you can view all the assumed costs and the logic using which the cost is calculated. You can also change the assumed cost to the actual cost incurred by you.

TCO Model

The TCO model is based on the following cost data points.
  • Purchase data for Nutanix software and hardware cost associated with your Nutanix account.
  • Assumed or configured cost of the third-party software and hardware that are running as a part of your Nutanix cloud environment.
  • Assumed or configured cost (based on the number of nodes) of the datacenter infrastructure, facilities, and services required to run the Nutanix enterprise cloud.

You can click Edit TCO against a cluster to view the details of different cost heads for a specific cluster and edit the industry-specific TCO assumptions to the actual TCO inputs for your cluster.

You can add custom costs for any custom third-party software, telecom service, facilities cost, and services cost.

TCO Model - Nutanix

Table 1. Cost Heads
Cost Heads Description Amortized
Hardware
  • Nutanix Hardware Cost - Represents the cost of the Nutanix hardware appliances (NX series) that you have purchased. This cost data is pulled directly from your Nutanix account. However, you can click Edit to edit the cost of individual products.
  • Third Party Hardware Cost - Represents the cost of hardware purchased from third-party server vendors or OEM appliances. You can click Configure to enter the Average Cost per Third party server node per month .
    The third-party hardware cost is based on the following assumptions.
    • Average Price per Third-Party Server is assumed as $12,000 with a default support period of 5 years. You can edit this assumption based on your requirement.
    • The start date for third-party server is aligned with the purchase date of the Acropolis software.
  • Custom RAM Cost Breakup - Represents the cost break-up of physical memory (in GiB) in the overall server hardware cost. You can click Configure to enter the cost per memory capacity per month.
  • To edit the custom RAM cost, click Edit Custom RAM Cost and enter the following details.
    • Cost Description
    • Memory Capacity
    • Cost per GiB per Month
Cost of Nutanix hardware amortized for the depreciation period you select in the Product Portfolio tab.
Software
  • Nutanix Software Cost - Represents the cost of the Nutanix software licenses (perpetual or term-based) that you have purchased. This cost data is pulled directly from your Nutanix account. However, you can click Edit to edit the Cost and duration of individual licenses.
    Note:
    • The perpetual licenses are amortized based on the depreciation period input in the Product Portfolio tab.
    • Currently, Beam supports the following software licenses in the TCO model - Acropolis, Prism, Calm, Flow, Files, Objects, Acropolis software bundles for VDI and ROBO, Nutanix Cloud Infrastructure (NCI), Nutanix Cloud Manager (NCM), and Nutanix Unified Storage (NUS) tiers.
  • Third Party Software Cost - Represents the cost of third-party software license that you have purchased.

    The default vSphere cost is based on the following formula. The vSphere cost is included only for the clusters with ESX as the hypervisor.

    vSphere Cost = Number of Nodes * Number of Processors per Node * vSphere Licence Cost per Processor

    The number of processors per node is assumed to be 2, and the number of nodes is the number of nodes that constitute the cluster.
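    As a sketch, the vSphere cost formula above can be expressed in code. The per-processor license price used here is an illustrative placeholder, not a published VMware or Nutanix figure.

    ```python
    def vsphere_cost(num_nodes, license_cost_per_processor, processors_per_node=2):
        """Default vSphere cost for an ESX cluster:
        nodes * processors per node * per-processor license cost."""
        return num_nodes * processors_per_node * license_cost_per_processor

    # Illustrative: a 4-node cluster with an assumed $3,500 per-processor license
    # 4 nodes * 2 processors * $3,500 = $28,000
    print(vsphere_cost(4, 3500))
    ```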

    To add a new third-party software cost, click Add Custom Third Party Software Cost and enter the following details.
    • Cost Description
    • Cost per License
    • Cost Type
    • Term
    • Start Date
    • Number of Licences
Cost of Nutanix software amortized for the depreciation period you select in the Product Portfolio tab for a one-time or perpetual license. The cost amortization for term software licenses is based on the term for which the license is purchased.
Facilities
  • The data center type On Premises represents the cost of facilities required to run Nutanix private cloud in an on-premises datacenter.
    Note: The default assumption is that Nutanix is running in an on-premises datacenter.

    You can click Configure to change the Power and Cooling Cost and Data Center Space/Infrastructure Cost .

    The PUE and Cost per KWh within the Power and Cooling cost are based on the industry-standard average and the US standard power rate, respectively. You can change the values based on your actual incurred cost and your country standards. Similarly, you can update the industry-standard assumed cost for Average Rack Units per Rack and Cost per Rack per Month for DC Infrastructure/Space within Data Center Space/Infrastructure Cost .

  • The data center type Co-Location represents the cost of facilities required to run Nutanix private cloud on co-located datacenter spaces, including the assumed Custom Facilities Cost . You can click Configure to change the Co-Location Cost and add the Average Rack Units per Rack and Co-Location Cost per Rack per Month .
  • To add custom third-party facilities cost, click Add Custom Third Party Facilities Cost and enter the Cost Description and Monthly Cost Per Node .

Note: If you change the datacenter type from On Premises to Co-Location, or conversely, the cost model is reset to the default state of the datacenter type that you select.
Amortized cost of facilities spread over the months.
Telecom

Represents the cost of the Telecom services and Ethernet switches that you have purchased.

Ethernet Switch/Top of Rack Switch Cost - Represents the cost of the Ethernet switch or the top of rack switch used in your datacenter rack.

This calculation is based on the following formula.

Ethernet Cost/TOR = Number of Nodes * Number of Ports per Node * (Average Cost / Number of Ports per Ethernet Switch)

You can click Configure to change the assumed cost to the actual cost of Average Cost per Ethernet/TOR Switch (5 Year Support) and Number of Ports per Ethernet Switch . The change is reflected in the values for the Average Cost per Port and the Number of Nodes .

The Ethernet/Top of rack switch cost is based on the following assumptions.
  • Number of ports used per node is two.
  • Ethernet switch cost gets added based on the purchase date of the new Nutanix hardware, OEM, or third-party hardware.
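The Ethernet/TOR formula above can be sketched in code; the switch price and port count below are illustrative values, not assumptions published by Nutanix.

```python
def ethernet_tor_cost(num_nodes, avg_switch_cost, ports_per_switch, ports_per_node=2):
    """Ethernet/TOR cost = nodes * ports per node * (switch cost / ports per switch)."""
    cost_per_port = avg_switch_cost / ports_per_switch
    return num_nodes * ports_per_node * cost_per_port

# Illustrative: 8 nodes, a $9,600 switch with 48 ports
# cost per port = 9600 / 48 = $200; 8 nodes * 2 ports * $200 = $3,200
print(ethernet_tor_cost(8, 9600, 48))
```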
To add a custom third-party telecom services cost, click Add Custom Third Party Telecom Cost and enter the following details.
  • Cost Description
  • Start Date
  • Cost Type
  • Cost to be Added
  • End Date
The Cost Type can be one of the following.
  • One-time - Telecom cost spread over the start and end time.
  • Recurring - Telecom cost spread over a recurring period.
Services

Represents the cost of the custom services that you have purchased. Beam does not assume the cost of custom services; you must enter the cost based on your actual incurred cost.

To add a custom third-party services cost, click Add Custom Third Party Services and enter the following details.
  • Cost Description
  • Start Date
  • Cost Type
  • Cost to be Added
  • End Date
The Cost Type can be one of the following.
  • One-time - Services cost spread over the start and end time.
  • Recurring - Services cost spread over a recurring period.
People
Represents the cost of the administration of Nutanix nodes that you have purchased. You can click Configure to change the assumed cost to the actual cost that you have incurred on the following administration cost parameters.
  • Nutanix Administration Outsourced Percentage (Assumed value is five percent)
  • External Admin - Annual Fully Burdened FTE Rate (Assumed cost is USD 80000)

  • Internal Admin - Annual Fully Burdened FTE Rate (Assumed cost is USD 150000 based on US benchmark for IT admin salary)
  • Number of Nutanix Nodes Managed per FTE (Assumed value is 100 based on general observation)

Amortized cost of administrative staff spread over the months.

Resource Metering

The Resource Metering tab allows you to select the resource metering model for individual clusters.

Resource metering provides a granular view of the cost of running a resource within a Nutanix cluster. The granular resource-level costing helps in understanding the cost incurred to run a Nutanix resource that can be used for Showback or Chargeback.

Resource metering is based on the sunk cost allocation from the cluster level to the resource level (based on the specifications and allocation hours). You can view the list of resources in a cluster and the cost associated with each of the resources at a daily or monthly granularity.

Note: The cost of a resource depends on the provisioned capacity of CPU and RAM, and actual utilization of storage.

Resource metering is done by allocating the amortized cost of the cluster proportionally to the resources that are running in the cluster.

The following are the available types of resource metering model.
  • Actual Virtual Capacity - The amortized cluster cost is split to the resources based on the actual provisioned virtual capacity (vCPU, vRAM, and vStorage).
  • Target Virtual Capacity - The amortized cluster cost is split to the resources based on the target provisioned virtual capacity based on the overcommit ratio (vCPU : pCPU, vRAM : pRAM, and vStorage : pStorage).
For more information, see Resource Metering Model.

You can change the Resource Metering Model from the Clusters table. In the Resource Metering Model column, click the drop-down list and select the metering model you want for the cluster. Then click Save to apply the selected metering model to the cluster.

Figure. Selecting Resource Metering Model

You can also edit the overcommit ratio for the Target Virtual Capacity model. Click Edit against the cluster for which you want to change the overcommit ratio. In the Target Virtual Capacity - Overcommit Ratio page, enter the new ratio and click Done . Then click Save to save the changes.

Resource Metering Model

The Resource Metering Model allocates the cluster cost to the Nutanix resources within the cluster. There are two types of Resource Metering Model - Actual Virtual Capacity and Target Virtual Capacity .

Actual Virtual Capacity

The Actual Virtual Capacity model allows you to split the cluster cost to the resources based on the actual provisioned virtual capacity (vCPU, vRAM, and vStorage).

The cost of a cluster gets divided between the cluster raw CPU cores and the raw total cluster flash drive capacity in tebibytes (TiBs) based on the Nutanix Acropolis capacity-based licensing ratio. The cost of the cluster raw CPU cores gets fully allocated to n vCPUs in the cluster based on vCPU-hours ( n is the number of vCPUs). The cost of the cluster RAM (in GiB) gets fully allocated to n vRAMs in the cluster based on vRAM-hours ( n is the number of vRAMs). The cost of the raw total cluster flash drive capacity in tebibytes (TiBs) gets fully allocated to m vDisk in the cluster based on vDisk-hours ( m is the vDisk capacity).

This allocation logic helps you to get the cost of 1 vCPU-hour, 1 GiB of vRAM-hour, and 1 GiB of vDisk-hour. The cost of a resource is derived by using two variables: specification (vCPU, vRAM, storage allocation) and allocated hours.
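The proportional split described above can be sketched as follows. This is a simplified model of allocating one cost pool by unit-hours, with illustrative figures; it is not Beam's actual implementation.

```python
def allocate_by_hours(pool_cost, resource_hours):
    """Split a cost pool across resources in proportion to their consumed unit-hours."""
    total_hours = sum(resource_hours.values())
    return {name: pool_cost * hours / total_hours
            for name, hours in resource_hours.items()}

# Illustrative: a $600 vCPU cost pool split across three VMs by vCPU-hours
# (vCPU count * 720 hours in a 30-day month)
vcpu_hours = {"vm-a": 4 * 720, "vm-b": 2 * 720, "vm-c": 6 * 720}
print(allocate_by_hours(600, vcpu_hours))
```

The same allocation applies per pool: vRAM-hours against the RAM cost pool, and vDisk-hours against the flash capacity cost pool.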

Target Virtual Capacity

The Target Virtual Capacity model allocates the cluster cost to the resources based on the target provisioned virtual capacity based on the overcommit ratio (vCPU : pCPU, vRAM : pRAM, and vStorage : pStorage).

The cost of a cluster gets divided between the cluster CPU cores, RAM (in GiB), and the flash drive capacity in tebibytes (TiBs) based on the overcommit ratio. The vRAM : pRAM ratio is assumed to be 1:1 and is not configurable.

The cost of the cluster CPU cores gets fully allocated to expected vCPUs in the cluster based on vCPU-hours (expected vCPU = actual pCPU * expected vCPU overcommit). The cost of the cluster RAM gets fully allocated to expected vRAMs in the cluster based on vRAM-hours (expected vRAM = actual pRAM * expected vRAM overcommit). The cost of the cluster flash drive capacity in tebibytes (TiBs) gets fully allocated to expected vStorage in the cluster based on vDisk-hours (expected vStorage = actual pStorage * expected vStorage overcommit).
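The expected-capacity calculation above can be sketched in code; the core counts, overcommit ratio, and pool cost below are illustrative placeholders.

```python
def target_virtual_capacity(physical, overcommit):
    """Expected virtual capacity = actual physical capacity * overcommit ratio."""
    return physical * overcommit

# Illustrative: 64 pCPU cores with a 4:1 vCPU:pCPU overcommit ratio
expected_vcpus = target_virtual_capacity(64, 4)  # 256 expected vCPUs

# An assumed $5,120 monthly CPU cost pool spread over expected vCPU-hours
# (720 hours in a 30-day month) yields the per-vCPU-hour rate
cost_per_vcpu_hour = 5120 / (expected_vcpus * 720)
print(expected_vcpus, round(cost_per_vcpu_hour, 4))
```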

The difference between planned vCore, vRAM, or vStorage versus actual vCore, vRAM, or vStorage is shown as Unprovisioned Virtual Capacity or Overprovisioned Virtual Capacity in the VM Name column of the VM details table in the Resource Metering tab.
Note:
  • Unprovisioned virtual capacity - When the planned virtual capacity is greater than actual virtual capacity.
  • Overprovisioned virtual capacity - When the planned virtual capacity is less than actual virtual capacity.

In the overprovisioned virtual capacity scenario, the sum of all the resource costs is higher than the cluster amortized cost. You can view this information in the VM details table in the Resource Metering tab.

Outage Scenarios

This section provides a list of the outage scenarios that can occur and the method by which Beam visualizes Resource Metering.

  • Cluster node outage - Beam calculates the Resource Metering based on the best or last available cluster configuration. The best available data is either the current day cluster configuration or the last known complete cluster configuration.
  • Cluster outage - Beam ignores the cluster configuration for the current date and takes the latest or last available cluster configuration for the Resource Metering.
  • Pulse disablement - Beam calculates the number of days from the current day until the last available data point. If the difference is more than 15 days, Beam considers the cluster to be Pulse disabled for the current month.
  • VM outage - Beam ignores the resource configuration for the current date and takes the latest or last available resource configuration for the Resource Metering.
  • VM deletion during pulse outage - Beam keeps using the last known available resource configuration until the pulse outage crosses a threshold of 15 days. After the pulse outage crosses the threshold, Beam considers the resource to be deleted.
  • Pulse outage - Beam ignores the cluster configuration for the current date and takes the latest or last available cluster or resource configuration for the Resource Metering.
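The 15-day Pulse threshold used in the scenarios above can be sketched as a simple date comparison; the function name and dates are illustrative, not part of the Beam API.

```python
from datetime import date

def pulse_disabled(last_data_point, today, threshold_days=15):
    """Treat a cluster as Pulse-disabled when the gap since the last
    available data point exceeds the threshold (15 days, per the list above)."""
    return (today - last_data_point).days > threshold_days

# Illustrative checks: a 19-day gap crosses the threshold, a 10-day gap does not
print(pulse_disabled(date(2022, 7, 1), date(2022, 7, 20)))
print(pulse_disabled(date(2022, 7, 10), date(2022, 7, 20)))
```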

Currency Configuration

Configure currency to view the cloud spend in your preferred currency.

The currency configuration reduces the ambiguity in analyzing cloud spend from different cloud providers, who often provide billing data in different currencies. For example, GCP might report cloud consumption in INR while AWS reports cloud consumption in USD. Consolidating the billing data from these different sources without considering the currency type results in ambiguous reports and dashboards. To simplify multicloud cost governance, Beam allows you to configure a single currency to calculate spend data across all views. You can also configure the corresponding conversion rates for the configured currency or use dynamic rates that Beam provides.

Source currency

Source currency is the currency in which your cloud service provider reports billing. When you onboard multiple cloud billing accounts from different cloud providers or regions, you can see a consolidated list of all the source currencies in the respective billing data on the Currency Configuration page. For example, if you onboard a GCP billing account reported in INR and an AWS payer account reported in USD, you see INR and USD in the source currency list.

Target currency

Target currency is your organization's preferred functional currency for viewing spend analysis across Beam. All the spend data shown in Beam is converted to this currency. Beam uses the target currency to provide a unified cost view when you onboard multiple cloud accounts with different source currencies. Furthermore, this configuration gives you the flexibility to set a single currency for the following views.
  • Multicloud views such as All-Clouds , Financial (Business Unit or Cost Center), or Scopes
  • Cloud overviews such as GCP Overview , AWS Overview , or Azure Overview

Currency conversion

When you select target currency, you must also select the conversion rates applicable for the selected currency. Beam supports the following currency conversion types.
  • Dynamic Currency Conversion : Beam uses the third-party API from https://exchangeratesapi.io/ to ingest the conversion rates for the corresponding source currencies.
    Note: Beam uses the conversion rate for the first of every month to calculate spend for the entire month. If the conversion rate for the first of every month is unavailable, Beam uses the last fetched monthly rates to calculate spend for that month.
  • Custom Defined Currency Conversion : Beam uses the user-defined exchange rate to calculate historic and projected spend data across all days/months/years.
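The month-first rate selection with fallback described in the note above can be sketched as follows; the rates and month keys are illustrative, and this is a simplified model rather than Beam's implementation.

```python
def monthly_rate(rates_by_month, month):
    """Return the conversion rate for the first of the given month; if that rate
    is unavailable, fall back to the most recent earlier month's rate."""
    if month in rates_by_month:
        return rates_by_month[month]
    earlier = [m for m in rates_by_month if m < month]  # "YYYY-MM" keys sort lexicographically
    return rates_by_month[max(earlier)] if earlier else None

# Illustrative USD->INR rates keyed by month
rates = {"2022-05": 77.1, "2022-06": 78.0}
print(monthly_rate(rates, "2022-06"))  # uses June's rate
print(monthly_rate(rates, "2022-07"))  # July missing: falls back to June's rate
```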

Configuring Currency and Conversion Rates

You can use this configuration to select the target currency and the corresponding conversion rates applicable for spend analysis across Beam.

About this task

After you configure target currency, you see the following across Beam.
  • Spend data in the target currency for all cloud overview and multicloud views.
  • A currency toggle to select between source currency and target currency for individual cloud accounts.
For more information on general guidelines and considerations, see Currency Configuration Considerations .

To configure the currency, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu and then go to Configure > Currency .
    The Currency Configuration page appears.
    Figure. Configure Currency
  2. Click View Source Currencies to view the list of source currencies corresponding to the cloud resource billing data.
  3. Under Target Currency , select the preferred currency in the drop-down list.
    For example, if you select AMD Armenian Dram , costs across Beam are reported in AMD.
  4. In the Currency Conversion area, do one of the following to configure the conversion rates applicable for the selected currency.
    • To automatically fetch the currency exchange rates, click Dynamic Currency Conversion . Additionally, click View to see the ingested exchange rates for the last 12 months.
      Note: Currency exchange rates are updated by Nutanix on a monthly basis. Currency exchange rates are provided as estimates and for informational purposes only. The rates and conversion estimates should not be relied on for invoiced billing conversions by cloud service providers.
    • To manually define the currency exchange rates, click Custom Defined Currency Conversion and then click Customize .

      In the text box, enter the exchange rates as shown in the following figure.

  5. Click Save to apply the currency configuration.
    The configured currency applies to all the available views in Beam.

Currency Configuration Considerations

Limitations and guidelines to consider when using the currency configuration.

Dashboard

  • The Reserved Instances widget on the Dashboard always reports in the source currency across all AWS and Azure views.
  • By default, Beam provides the spend data in the target currency.
  • The currency toggle is available when you select individual cloud accounts in the view selector.
  • The currency toggle is not available when you select cloud overview or multicloud views in the view selector.
  • The currency toggle is not available when the configured target currency is the same as the source currency.

Analyze

  • By default, Beam provides the spend data in the target currency.
  • The currency toggle is available when you select individual cloud accounts in the view selector.
  • The currency toggle is not available when you select cloud overview or multicloud views in the view selector.
  • The currency toggle is not available when the configured target currency is the same as the source currency.
  • The reports in the Analyze page are in the currency that is selected at the time of report generation.
    • If you select source currency in the toggle, and then click download, share, or schedule, the reports are in the source currency.
    • If you select target currency in the toggle, and then click download, share, or schedule, the reports are in the configured target currency.
      Note: If you modify the target currency after scheduling the reports, Beam always uses the target currency that is configured at the time of generating the scheduled reports. For example, if the target currency is INR when you schedule the reports and you later modify the target currency to GBP, Beam generates the scheduled reports in GBP.
    • If the currency toggle is not available, the reports you download, share, or schedule are in the configured target currency.

Chargeback

  • By default, Beam provides the spend data in the target currency. The currency toggle is not available in Chargeback .
  • The system report (Global Chargeback Report) that you can download, share, or schedule is in the target currency that is configured at the time of report generation.

Budget

  • By default, Beam provides the spend data in the target currency. The currency toggle is not available in Budget .
  • Beam shows the allocated cost of the budget in the target currency.

Reports

  • System reports are in the target currency that is configured at the time of report generation. For example, if the target currency is INR and you modify it to GBP after 4 weeks, the weekly system reports for weeks 1 through 4 are in INR. The system report for week 5 is in GBP because the target currency at the time of report generation is GBP.
    Note: The following reports are in source currency.
    • Daily reports: New EC2 RI Recommendation report and EC2 RI Utilization Report
    • Monthly reports: Expired RI Report , Expiring RI Summary Report , and On Demand vs Reserved Hours Cost
  • The currency in the Custom reports is based on the configuration in the create or edit page. The reports are in either source currency or target currency as configured while creating the reports.

Save

  • By default, Beam provides the spend data in the target currency.
  • The currency toggle is available when you select individual cloud accounts in the view selector.
  • The currency toggle is not available when you select cloud overview or multicloud views in the view selector.
  • The currency toggle is not available when the configured target currency is the same as the source currency.
  • The system report (Cost Optimization Detailed Report) that you can download, share, or schedule in the Save page is in the target currency that is configured at the time of report generation.
  • The drill-down reports that you can download, share, or schedule in any of the tabs on the Save page use the selection in the currency toggle.
    • If you select source currency in the toggle, and then click download, share, or schedule, the reports are in the source currency.
    • If you select target currency in the toggle, and then click download, share, or schedule, the reports are in the configured target currency.
    • If the currency toggle is not available, the reports you download, share, or schedule are in the configured target currency.
    Figure. Save

Purchase

Beam uses the source currency provided in the billing data for assessing reserved instances and displays the data in the source currency. Beam does not use the target currency configured in the currency configuration page for assessing reserved instances.

Cloud Account Onboarding

  • When you onboard your first cloud account, Beam configures the source currency as the default target currency with Dynamic Currency Conversion rates.
  • When you onboard a cloud account with a new currency and the currency conversion is Custom Defined Currency Conversion , Beam adds the latest dynamic rate as a custom rate for the new currency.

Dashboard (Nutanix)

The dashboard for Nutanix displays informational widgets for ease of cost governance.

To view the Nutanix dashboard, log on to the Beam console and select any of the connected Nutanix accounts.

Nutanix dashboard displays the following widgets.
Note: Beam provides a toggle to view spend data in your preferred currency. You can select the currency toggle at the top right to switch between target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations .

Spend Overview

This widget shows the month-to-date amortized cost of the Nutanix software you have purchased. This gives you a high-level view of the ongoing costs of running your Nutanix on-premises private cloud. In a multicloud context, it helps you get a sense of the run-rate costs of using the private cloud, similar to an ongoing spend view for a public cloud. By default, this widget displays the spend of the top five clusters or services in your Nutanix account.

Figure. Dashboard - Spend Overview

Spend Analysis

The Spend Analysis widget displays a bar graph of the historical spend for the selected period, along with the current and projected spend for recent months.

By default, this view displays the monthly spend data for the clusters. You can change the default view to display daily projections. To view detailed information, click View Spend Trend , which redirects you to the Analyze page.
Figure. Dashboard - Spend Analysis

Spend Overview - TCO Cost Heads

This widget shows the total cost incurred for all the onboarded clusters. You can view the cost breakups for all the cost heads (for example, software, telecom, hardware, and so on). You can hover over the chart to view the contribution of each cost head (in percentage) to the total spend.

To view detailed information, click View Details , which redirects you to the Resource Metering tab in the Analyze page. Resource metering provides a granular view of the cost of running a resource within a Nutanix cluster. The granular resource-level costing helps in understanding the cost incurred to run a Nutanix resource that can be used for Showback or Chargeback.

Figure. Dashboard - TCO Cost Heads

Spend Overview - Virtual Machines

This widget shows the cost incurred (month to date) for all the VMs and VM-Snapshots that are running in your clusters.

You can click View All , which redirects you to the Compute view for detailed information that includes the total spend overview, the cost for all the sub-services, VM IDs, and the cost based on categories applied to the Nutanix resources.

Figure. Dashboard - Virtual Machines Spend

Cost Analysis (Nutanix)

Beam provides a detailed view of the existing spend on Nutanix resources based on the deployed clusters and purchased products.

The Analyze page allows you to track spend across Nutanix resources both at an aggregate and granular level. You can drill down the cost further based on your clusters, VMs, and Tags (Prism Categories are considered as Tags).

Figure. Analyze

Analyze views help you with the following.

  • Graphical and tabular view of spend data. The top portion of the Analyze page displays a graphical view of your data. The bottom portion of the Analyze page displays a tabular view of your spend data.
    Note:
    • By default, each view displays values in a line chart form. You can change the default view to a pie chart or bar graph.
    • The spend data for the current day can be delayed by 24 hours.
    • Beam displays the date and time it last updated the spend data at the top-right corner of the Analyze view. Beam updates spend data every 6 hours. However, Beam skips updating if the data ingested from the cloud providers is unchanged since the last update.
  • Deep visibility into your historical, current, and projected spend based on the selected time period. You can select either Day or Month at the top-right of the chart.
  • Customize the view of your spend data according to the selected filter options. To create a filter, you can select the required options under Filters and click Apply .
  • Schedule, share, and download customized reports under each view.
  • Cumulative spend analysis. To visualize the cumulative spend data, you can turn on the Cumulative toggle.
  • Spend data in your preferred currency . You can select the currency toggle at the top right to switch between target and source currency. For more information on general guidelines and considerations, see Currency Configuration Considerations .

The Analyze page contains the following sub-tabs.

  • Current Spend
  • Projected Spend
  • Resource Metering
  • Compute
  • Storage
Current Spend
The current spend view provides the current cluster cost and its breakdown based on the usage and resource data.
Table 1. Current Spend
  • Overview: Displays the total cluster spend.
  • Clusters: Displays the total cost of each cluster in your Nutanix account.
  • Service Types: Displays the total cost for all the service types, such as compute, storage, and no service, in your Nutanix account.
  • Services: Displays the total cost for all the services in your Nutanix account.
  • Cost Centers: Displays the total spend split by Beam cost centers. This view includes the unallocated costs that are not tagged in your Nutanix account.
  • Tags: Displays the current spend based on the category applied to your Nutanix resources. Click the Tags drop-down list to select a category.
    Note: The chart also displays all the values that are tagged to the selected tag key.
Note: Both No Service and No Service Type include the spend of unprovisioned capacity and unmetered cluster cost.
Projected Spend
The projected spend view provides an insight into the actual and projected cost for the selected period.
Note: The projected cost is based on the amortized cost of the clusters and does not consider the actual utilization of the cluster.
How Spend Projection Works

Beam calculates the monthly amortized costs of clusters on the basis of Total Cost of Ownership (TCO). This monthly amortized cost data is also used to project future months' cost data. When hardware or software is scaled up or down, or when TCO inputs are updated, Beam recalculates the monthly amortized cluster costs and updates the forecast accordingly.
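
The amortization and projection logic described above can be sketched as follows. This is a minimal illustration of straight-line TCO amortization, not Beam's actual model; the function names and the linear formula are assumptions.

```python
def monthly_amortized_cost(tco_total: float, amortization_months: int) -> float:
    """Spread a cluster's Total Cost of Ownership (TCO) evenly across
    its amortization period to get a monthly amortized cost."""
    return tco_total / amortization_months

def projected_month_cost(monthly_cost: float, spend_to_date: float,
                         day_of_month: int, days_in_month: int) -> float:
    """Project a month's total cost: actual spend so far plus the
    amortized cost attributed to the remaining days of the month."""
    remaining_days = days_in_month - day_of_month
    return spend_to_date + monthly_cost * (remaining_days / days_in_month)

# Example: a cluster with a 36,000 TCO amortized over 36 months costs
# 1,000 per month. Halfway through a 30-day month with 600 spent so far,
# the projection is 600 + 1000 * 0.5 = 1100.
```

When TCO inputs change, recomputing the monthly cost and rerunning the projection with the new value mirrors the forecast update that Beam describes.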

Table 2. Projected Spend
  • Overview: Displays a line chart of the projected and actual spend for the selected period.
  • Clusters: Displays a line chart of the projected and actual spend for all the clusters in your Nutanix account.
Resource Metering
The Resource Metering view provides a granular view of the cost of running resources such as VMs, VM-Snapshots, Files, and Objects within a Nutanix cluster. This granular costing helps in understanding the cost incurred to run a Nutanix resource, which can be used for showback or chargeback.

The Resource Metering tab displays the list of clusters and the cost associated with each cluster.

  • Cluster Details - Shows the list of all clusters (excluding Prism Central Logical clusters) along with the latest or last known configuration metadata and cumulative daily amortized cost for the selected period.
    Figure. Cluster Details Widget

    Click View Details to view the individual cluster data for the selected month. At the top-right corner, you can click the share , schedule , and download icons to share, schedule, or download a report with all details of the respective cluster.

    Figure. List of resources in a Cluster

You can view the following widgets.

Total Cost of Ownership - Shows the total cost incurred for the cluster. You can view the cost breakups for all the cost heads (for example, software, telecom, hardware, service, people, and facilities). You can hover over the chart to view the contribution of each cost head (in percentage) to the total spend.

Cluster Details - Shows detailed information about the cluster, for example, the resource metering model, the name of the hypervisor, the user VM count, and the used VM snapshot, file, and object storage.

List of VMs, VM-Snapshots, and Files & Objects - By default, the list of VMs running within a cluster is displayed. You can click the drop-down list in the right corner to view the cost of storage consumed by VM-Snapshots and by Files & Objects. Depending on the resource type selected, you can view the metered cost for VMs, VM-Snapshots, or Files and Objects based on daily usage and price per unit.

Table 3. List of Resources and Entities

VM
  • VM Name: The name of the Nutanix VM.
  • VM ID: The unique identifier of the Nutanix VM.
  • vCPU-hrs: The provisioned vCPU count multiplied by the number of hours the VM has been powered on in the selected month.
  • Average price per vCPU-hr: The average daily cost per vCPU-hr for the selected month to date.
  • vRAM-hrs: The provisioned vRAM multiplied by the number of hours the VM has been powered on in the selected month.
  • Average price per vRAM-hr: The average daily cost per vRAM-hr for the selected month to date.
  • vStorage-GiB: The average daily utilization of storage per VM.
  • Cumulative price per vStorage-GiB: The total cost of storage utilization per VM for the selected month to date.
  • Cost (Month-to-date): The cumulative cost of the VM per day for the selected time period.
  • Avg Cost per Hour: The average cost of the VM per hour for the selected month to date.

VM-Snapshots
  • PD Name: The name of the snapshot protection domain.
  • vStorage-GiB: The average daily utilization of storage per protection domain.
  • Cumulative price per vStorage-GiB: The total cost of storage utilization per protection domain for the selected month to date.
  • Cost (Month-to-date): The cumulative cost of the protection domain per day for the selected time period.

Files & Objects
  • Service Name: The name of the Files service or Objects service.
  • vStorage-GiB: The average daily utilization of storage per Files service and Objects service.
  • Cumulative price per vStorage-GiB: The total cost of storage utilization per Files service and Objects service for the selected month to date.
  • Cost (Month-to-date): The cumulative cost of the Files service and Objects service per day for the selected time period.
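
Based on the entity descriptions above, a VM's month-to-date metered cost combines compute-hour charges and storage charges. The following sketch infers that combination; the function name and parameterization are hypothetical, not a documented Beam API.

```python
def vm_month_to_date_cost(vcpu_hrs: float, price_per_vcpu_hr: float,
                          vram_hrs: float, price_per_vram_hr: float,
                          vstorage_cost: float) -> float:
    """Month-to-date metered cost of a VM.

    vCPU-hrs and vRAM-hrs are the provisioned capacity multiplied by
    the number of powered-on hours; vstorage_cost is the cumulative
    storage cost for the month to date."""
    return (vcpu_hrs * price_per_vcpu_hr
            + vram_hrs * price_per_vram_hr
            + vstorage_cost)

# Example: a 4-vCPU VM powered on for 100 hours accrues 400 vCPU-hrs.
```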
Unprovisioned Virtual Capacity
When the planned virtual capacity (such as vCore, vRAM, or vStorage) is greater than the actual virtual capacity.
Overprovisioned Virtual Capacity
When the planned virtual capacity (such as vCore, vRAM, or vStorage) is less than the actual virtual capacity.

In the overprovisioned scenario, the cluster cost obtained by summing all the resource costs is higher than the amortized cluster cost. You can view this information in the VM details table.
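
The two capacity states above differ only in the direction of the planned-versus-actual comparison, which can be sketched as follows (the function name and return labels are illustrative assumptions):

```python
def classify_virtual_capacity(planned: float, actual: float) -> str:
    """Compare planned vs. actual virtual capacity
    (vCore, vRAM, or vStorage) for a cluster."""
    if planned > actual:
        return "unprovisioned"    # planned capacity exceeds actual
    if planned < actual:
        return "overprovisioned"  # actual capacity exceeds planned
    return "fully provisioned"
```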

Unmetered Cluster Cost
Unmetered Cluster Cost tracks the days when Pulse was not available and shows the cluster cost incurred for those days. Resource metering is not available when Pulse is disabled. This line item appears for each cluster, corresponding to the days in the month when Pulse was disabled or unavailable.
Figure. Overprovisioned/Unprovisioned Virtual Capacity and Unmetered Cluster Cost

Compute
The Compute view provides the cost specific to all the Compute Services and Sub Services that are running in your Nutanix clusters.
Table 4. Compute
  • Overview: Displays a line chart of the total spend for all the compute services within the cluster for the selected period. You can hover over the stacked bar to view the total spend for a specific day.
  • Clusters: Displays a line chart of the total cost of each cluster in your Nutanix account.
  • Services: Displays a line chart of the total cost for all the compute services within your Nutanix cluster.
  • Sub Services: Displays a line chart of the total cost for all the sub-services within your Nutanix cluster. Beam provides cost visibility into the Nutanix Virtual Machine service, which has the VM and VM-Snapshot sub-services.
  • Cost Centers: Displays the total spend split by Beam cost centers. This view includes the unallocated costs that are not tagged in your Nutanix account.
  • Resources: Displays all the VM IDs (VM sub-service) and protection domains (VM-Snapshot sub-service) along with their associated cost.
  • Tags: Displays a line chart of the total cost based on the categories applied to your Nutanix resources. You can click the tag drop-down list to select a category.
    Note: The chart also displays all the values that are tagged to the selected tag key.
Storage
The Storage view provides the cost specific to all the Nutanix services used for storage that are running within your clusters.
Table 5. Storage
  • Overview: Displays a line chart of the total spend on all the services used for storage within the cluster for the selected period.
  • Clusters: Displays a line chart of the total cost of each cluster in your Nutanix account.
  • Services: Displays a line chart of the total cost for all the storage services within your Nutanix cluster.
  • Cost Centers: Displays the total spend split by Beam cost centers. This view includes the unallocated costs that are not tagged in your Nutanix account.

Beam Service Categorization - Nutanix

The cost details for the following services, service types, and sub-services are shown in Beam.
Table 1. Service categorization

Services
  • Nutanix Virtual Machine: Includes all the user VMs along with their snapshots in the cluster.
  • Nutanix Files: Includes all the Files services in the cluster.
  • Nutanix Objects: Includes all the Objects services in the cluster.
  • Nutanix End Users: Includes all the user VMs hosting VDIs in the cluster running an Acropolis VDI software package.
  • Nutanix Edge: Includes all the user VMs hosting ROBO in the cluster running an Acropolis ROBO software package.

Service Types
  • Compute: Includes all the compute services running in the Nutanix cluster.
  • Storage: Includes all the storage services running in the Nutanix cluster.
  • No Service: Includes both the Unmetered Cluster Cost and Unprovisioned Virtual Capacity. For more details, see the section Resource Metering in the topic Cost Analysis (Nutanix).

Sub-Services
  • Nutanix Virtual Machine: Includes all the user VMs in the cluster.
  • Nutanix VM-Snapshot (Protection Domain): VMs are replicated through snapshots, which consume virtual storage capacity. The VM-Snapshot sub-service provides cost visibility of snapshots within a cluster at a protection domain granularity. A protection domain is a defined group of VM snapshots to be backed up locally on a cluster or replicated on the same schedule to one or more remote sites.
    Note: Beam recommends upgrading NCC to 3.10 to get accurate cost visibility for snapshots (protection domain).
  • Nutanix Edge Virtual Machine: Includes all the user VMs hosting ROBO in the cluster running an Acropolis ROBO software package.
  • Nutanix Edge Snapshot: Includes all the user VM-Snapshots hosting ROBO in the cluster, at a protection domain granularity, running an Acropolis ROBO software package.
  • Nutanix End Users Virtual Machine: Includes all the user VMs hosting VDIs in the cluster running an Acropolis VDI software package.
  • Nutanix End Users Snapshot: Includes all the user VM-Snapshots hosting VDIs in the cluster, at a protection domain granularity, running an Acropolis VDI software package.
Beam associates Prism categories (Tags) with protection domains when there is a one-VM-to-one-PD mapping, since PDs by themselves do not support tagging through Prism Categories. This helps in the following ways:
  • It allows Beam to filter costs by using tags that are attached to the VMs and PDs.
  • In a cost center definition, when a Prism category is selected, both the VMs and PDs are allocated to the defined cost center.
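
The one-VM-to-one-PD tag inheritance rule above can be sketched as follows; the data shapes and function name are illustrative assumptions, not Beam internals.

```python
def inherit_pd_tags(pd_to_vms: dict, vm_tags: dict) -> dict:
    """Copy a VM's Prism-category tags onto its protection domain (PD),
    but only when the PD contains exactly one VM (1:1 mapping).

    pd_to_vms maps PD name -> list of VM names in that PD;
    vm_tags maps VM name -> {tag key: tag value}."""
    pd_tags = {}
    for pd, vms in pd_to_vms.items():
        if len(vms) == 1:
            pd_tags[pd] = vm_tags.get(vms[0], {})
        else:
            pd_tags[pd] = {}  # PDs holding multiple VMs stay untagged
    return pd_tags
```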

Cost Reports

Beam generates a report to track cost consumption across all your Nutanix accounts. The report contains detailed spend information for clusters, including spend at a VM level within each cluster.

The Cost Report by Cluster report is generated automatically and is available for download from the Beam console. This report contains month-to-date detailed spend information about the VMs within each cluster.
Note: The currency used in the Cost Report by Cluster report is based on the target currency configured at the time of report generation.

Report Options

Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
  • In the System tab, you can view the report. Clicking View All displays a table that contains the clusters and their associated costs. The cost column reflects the target currency in which the reports are generated. You can share, schedule, and download the report for each cluster.
    Figure. Nutanix Cost Report - Download
  • In the Scheduled tab, you can view all the scheduled reports. You can edit, disable, or delete any scheduled report.

Scheduling Reports

The Beam application generates reports to track cost consumption for your Nutanix cloud accounts at both aggregate and granular levels, such as functional units, workloads, and applications.

About this task

The application allows you to schedule a report at a desired time.

To schedule a report, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
    The System tab appears.
  2. Click View All in the top-right corner of the Cost Report by Cluster to display a table that contains the clusters and their associated costs.
  3. Click Schedule (at the right corner of the row) for the cluster whose report you want to schedule.
    The Schedule Report Sharing window appears.
  4. In the Report Name field, enter the name for the report you want to share.
  5. Hover over Daily , Weekly , or Monthly to set the delivery schedule.
  6. In the Repeat Days field, click the days on which you need the report to be shared.
  7. In the At and Timezone drop-down lists, select the time (Hour, Minute, and AM/PM) and timezone.
  8. In the Recipients box, type the email address of the recipient with whom you want to share the report.
  9. Press Enter or Space to add the recipient email address. You can also add multiple email addresses separated by commas or semicolons.
  10. If you want to send a copy of the report to yourself, select the Send a copy to myself checkbox and click Schedule .
    All the scheduled reports are available under Reports > Scheduled . You can edit, disable, or delete any scheduled report.

Sharing Reports

The Beam application generates reports to track cost consumption for your Nutanix cloud accounts at both aggregate and granular levels, such as functional units, workloads, and applications.

About this task

Beam allows you to share the report with stakeholders over email.

To share the report, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
    The System tab appears.
  2. Click View All in the top-right corner of the Cost Report by Cluster to display a table that contains the clusters and their associated costs.
  3. Click Share (at the right corner of the row) for the cluster whose report you want to share.
    The Share Report window appears.
  4. In the Recipients box, type the email address of the recipient with whom you want to share the report.
  5. Press Enter or Space to add the recipient email address. You can also add multiple email addresses separated by commas or semicolons.
  6. In the Report Name field, enter the name for the report you want to share.
  7. (Optional) In the Message field, enter any additional message that you want to share with the recipient along with the report.
  8. If you want to send a copy of the report to yourself, select the Send a copy to myself checkbox. Then click Share to complete.

Downloading Reports

The Beam application generates reports to track cost consumption for your Nutanix cloud accounts at both aggregate and granular levels, such as functional units, workloads, and applications.

About this task

The application allows you to download the report for offline consumption.

To download a report, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
    The System tab appears.
  2. Click View All in the top-right corner of the Cost Report by Cluster to display a table that contains the clusters and their associated costs.
  3. Click Download (at the right corner of the row) for the cluster whose report you want to download.
    The report is downloaded to your local system.

Chargeback

The Chargeback feature enables financial control of your cloud spend by providing the ability to allocate cloud resources to departments based on definitions. It provides a consolidated finance view across all your cloud accounts in a single pane.

Note the following points when creating a chargeback for your Nutanix on-premises environment.

  • The parent account is equivalent to the Nutanix account.
  • The child account is equivalent to clusters.
  • The tags are equivalent to the category. For more information on Prism categories, see Category Management.
Note:
  • Chargeback is built on the business unit and cost center configuration construct that you define for the resources across all your cloud accounts.
  • You cannot build chargeback on the custom scope. You define a scope using accounts and resources and can add a resource across different scopes.
  • You can view the spend data in the configured target currency . For more information on general guidelines and considerations, see Currency Configuration Considerations .

Business Units

A business unit is defined as a collection of cost centers. You can use business units to define hierarchies between different departments in your organization. It is not necessary to define a business unit to view chargeback; you can also define chargeback based only on cost centers.

Cost Center

A cost center is a collection of resources within a single cloud account or multiple cloud accounts. You can assign resources to a cost center based on tags. You can allocate either a complete account or individual resources within an account to a cost center.

Note:
  • If a cloud account is assigned to a cost center with the tag definition as All Tags , then you cannot share this account with another cost center.
  • An account, cluster, or tag that is used in a cost center definition cannot be reused. This prevents double-counting of the cost of resources.
  • The resources that do not belong to any of the cost centers are grouped under Unallocated Resources . You can manually allocate any unallocated resources into a cost center.

Unallocated Resources

Unallocated resources are the resources that do not belong to any of the cost centers based on definitions.

Unallocated resources include the following.

  • Accounts not allocated to any cost center (including all the resources within the account)
  • Resources within an account that are not allocated to any cost center
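
The grouping above amounts to a simple partition of resources, which can be sketched as follows; modeling cost center definitions as plain sets of resource IDs is an illustrative simplification of Beam's account-, cluster-, and tag-based definitions.

```python
def split_allocated(resource_ids: list, cost_center_defs: dict):
    """Partition resources into allocated and unallocated groups.

    cost_center_defs maps cost-center name -> set of resource IDs
    covered by that cost center's definition."""
    allocated = {r for ids in cost_center_defs.values() for r in ids}
    unallocated = [r for r in resource_ids if r not in allocated]
    return allocated, unallocated
```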

Multicloud Configurations

You can define a cost center based on accounts, clusters, and Prism categories. You can add resources belonging to a single account or multiple accounts when defining the cost center. Beam also allows you to extend the definition of your cost centers to public clouds; that is, you can add AWS and Azure resources.

Note: Beam associates Prism categories (Tags) with protection domains when there is a one-VM-to-one-PD mapping, since PDs by themselves do not support tagging through Prism Categories.

The following image describes an example of a multicloud configuration. The cost center consists of Nutanix, AWS, and Azure resources.

Figure. Multicloud Configuration

Adding a Business Unit

Before you begin

You can create a business unit only if you have an Admin role in Beam.

About this task

You can define a business unit by combining a group of cost centers. Chargeback is built on the business unit and cost center configuration construct that you define for the resources across all your cloud accounts. You can select the owners and viewers for the business unit. Both owners and viewers have read-only access to the business unit.

To add a business unit, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
  2. In the Business Unit Configuration page, click the Create list and select Business Unit .
    The Create Business Unit page appears.
  3. In the Name box, enter a name for the business unit.
  4. In the Owners list, click to select owners for the business unit you are creating.
    Note: The business unit owner is financially accountable for the business unit. You can select multiple owners for the business unit.
  5. In the Viewers list, click to select viewers for the business unit you are creating.
  6. In the Cost Centers list, select the cost centers that you want to map to your business unit.
  7. Click Save Business Unit to complete.
    The business unit you just created appears in the Business Unit Configuration page. You can use the business unit to build chargeback.

Editing a Business Unit

You can edit (or delete) an existing business unit.

Before you begin

You can edit or delete a business unit only if you have an Admin role in Beam.

About this task

You can define a business unit by combining a group of cost centers. Chargeback is built on the business unit and cost center configuration construct that you define for the resources across all your cloud accounts.

To edit a business unit, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
    You can view the list of business units in the Business Unit configuration page. You can use the drop-down to filter the list by the business unit.
  2. Click Edit against the business unit that you want to edit.
    The Edit Business Unit page appears.
  3. Click Save Business Unit after you edit the fields according to your requirement.
    Once edited, it takes 24 to 48 hours for the cost data to be updated on the Budget page.

Adding a Cost Center

Before you begin

You can create a cost center only if you have an Admin role in Beam.

About this task

A cost center is a department to which you can allocate cloud accounts and resources based on the definition. You can define a cost center by selecting resources by accounts and tags across different clouds. You can select the owners and viewers for the cost center. Both owners and viewers have read-only access to the cost center.

Note the following points when creating a cost center for your Nutanix cloud.

  • The parent account is equivalent to the Nutanix account.
  • The sub account is equivalent to a cluster.
  • The tags are equivalent to the category. For more information on Prism categories, see Category Management.
Note: Configuring Prism Categories and mapping them to specific VMs is required to create tag-based Cost Centers. Without Prism Categories, Cost Centers can be created only by mapping an entire Cluster.

To add a cost center, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
  2. In the Business Unit configuration page, click the Create list and select Cost Center .
    The Create Cost Center page appears.
  3. In the Name box, enter a name for the cost center.
  4. In the Owners list, click to select owners for the cost center you are creating.
    Note: The cost center owner is financially accountable for the cost center. You can select multiple owners for the cost center.
  5. In the Viewers list, click to select viewers for the cost center you are creating.
  6. Click Define Cost Center to open the Define Cost Center page.
    You define the cost center by selecting the accounts and tags across different clouds.
  7. In the Define Cost Center page, do the following.
    1. In the Cloud list, select Nutanix .
    2. In the Parent Account list, select the parent account.
    3. In the Sub Accounts list, select the sub accounts. You can select multiple sub accounts.
    4. In the Tag Pair area, select the key and value pairs to further refine the definition of your cost center. You can click the plus icon to add more key and value pairs.
      Note:
      • The resources that are not tagged to any cost center are considered untagged. To assign the untagged resources to the cost center, select and assign the value UNTAGGED to the respective key set.
      • For the selected resource in your Nutanix account, you can tag multiple values to a single tag key set.
    5. Click Save Filter to save the filter. You can click Add Filter to add more filters.
    6. Click Save Definition to save your cost center definition and close the Define Cost Center page.
  8. Click Save Cost Center to complete.
    It may take 24 to 48 hours for the cost data related to the newly added cost center to be displayed in the Budget tab.

Editing a Cost Center

Beam allows you to edit (or delete) an existing cost center.

Before you begin

You can edit a cost center only if you have an Admin role in Beam.

About this task

A cost center is a department to which you can allocate cloud accounts and resources based on the definition. You can define a cost center by selecting resources by accounts and tags across different clouds.

To edit a cost center for multicloud accounts, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
    You can view the list of cost centers in the Business Unit configuration page. You can use the drop-down list in the top-right corner to filter the list by the cost center.
  2. Click Edit against the cost center that you want to edit.
    The Edit Cost Center page appears.
  3. Click Save Cost Center after you edit the fields according to your requirement.

Allocating Unallocated Resources To Cost Center

Before you begin

You can allocate an unallocated resource only if you have an Admin role in Beam.

About this task

Unallocated resources are the resources that do not belong to any of the cost centers based on definitions.

To allocate an unallocated resource for chargeback, do the following.

Procedure

  1. In the Beam console, select Finance from the View Selector pop-up and go to the Chargeback page using the Hamburger icon in the top-left corner.
  2. In the top-right corner of the page, click the Unallocated button.
  3. In the Unallocated Cost table, click the expand icon against the account or subscription to view the list of services.
  4. Click View Details against the service item to view the list of resources.
  5. Click the Allocate button against the resource item that you want to allocate to a cost center.
    The Select a cost center pop-up window appears.
  6. In the Cost Center Name list, select the cost center for the resource.
  7. In the Percentage Split box, enter the resource cost percentage you want to allocate to the cost center you selected.
    If you want to split the cost of the resource between two or more cost centers, click Add a split to specify the cost centers and the percentage of cost split between the cost centers.
  8. Click Allocate Resource to complete.
    The resource you just allocated appears in Allocated Cost table.
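
The percentage split described in step 7 can be sketched as follows. The function name is hypothetical, and the rule that splits must total 100 percent is an assumption about how Beam validates the entries, not documented behavior.

```python
def allocate_resource_cost(resource_cost: float, splits: dict) -> dict:
    """Split a resource's cost across cost centers by percentage.

    splits maps cost-center name -> percentage of the cost;
    the percentages are assumed to total 100."""
    if round(sum(splits.values()), 6) != 100:
        raise ValueError("percentage splits must total 100")
    return {cc: resource_cost * pct / 100 for cc, pct in splits.items()}

# Example: splitting a 200.00 resource 75/25 between two cost centers
# yields 150.00 and 50.00.
```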

Chargeback Views

Chargeback View (Administrator)

An administrator can do the following.

  • Assign unallocated resources to a cost center
  • Create a business unit and cost center

You can select the Allocated and Unallocated options in the top-right corner of the page to view the cost details for allocated and unallocated resources.

In the Unallocated Cost table, you can use the Allocate button to allocate resources to a cost center. To allocate the unallocated resources to cost centers, see Allocating Unallocated Resources To Cost Center.

In the Allocated Cost table, an administrator can browse through all the business units and cost centers (created by the administrator) to view detailed information.

Figure. Chargeback - Global View
Table 1. Chargeback Views (Administrator)
  • Spend Overview: Displays a pie graph of the cost breakup summary for the allocated and unallocated resources.
  • Spend Analysis - Unallocated Cost: Displays a bar graph of the historical spend analysis for the unallocated resources, with actual and projected spend for the last three months.
  • Spend Analysis - Allocated Cost: Displays a bar graph of the historical spend analysis for the allocated resources, with actual and projected spend for the last three months.
  • Top Spend: Displays the services and accounts that are consumed the most in your business unit or cost center. You can use the drop-down list in the right corner to select Top services or Top accounts .
  • Unallocated Cost: Displays the list of unallocated resources along with the cloud type and associated accounts or subscriptions. You can also view the total cost incurred for each resource, and allocate unallocated resources to a cost center. For more information, see Allocating Unallocated Resources To Cost Center.
  • Allocated Cost: Displays detailed information for the allocated resources, including the following.
    • Business unit or cost center
    • Owner of the business unit or cost center
    • Total cost of the allocated resources
    • Definition of the business unit or cost center (for example, a business unit constituted of four cost centers)
    In the Actions column, you can click the Edit and Delete buttons to edit the resource details or delete the resource. You can use the drop-down list in the top-right corner of the Allocated Cost table to filter the resources by business unit, cost center, or both. You can also click the share and download icons in the top-right corner to share or download the resource details.

Chargeback View (Owners and Viewers)

The owners and viewers can view the business units and cost centers for which they have access.

You can use the View Selector (public cloud) and View Selector (Nutanix) to select the business unit or cost center.

Note: Owners and viewers have only view access to the business units and cost centers unless they are an administrator. Owners help in identifying the financial owner for the business unit or cost center.

The following table describes the widgets available for business unit and cost center views.

Table 2. Chargeback Views (Owners and Viewers)
  • Spend Overview: Displays the total spend cost (month to date) and the projected spend cost for the business unit or cost center.
  • Spend Analysis - Allocated Cost: Displays a bar graph of the historical spend analysis for the allocated resources, with actual and projected spend for the last three months.
  • Top Spend: Displays the services and accounts that are consumed the most in your business unit or cost center. You can use the drop-down list in the right corner to select Top services or Top accounts .
  • Allocated Cost: Displays detailed information for the allocated resources (within a cost center), including the following.
    • Cloud type ( Nutanix , AWS , Azure , or GCP )
    • Name of the account, subscription, or cluster
    • Account, subscription, or cluster ID
    • Total cost incurred for the account, subscription, or cluster
    You can use the expand icon to view details about services within the account or subscription.
    Note: If Pulse was not available for a Nutanix cluster for a given number of days in a month, the Unmetered Cluster Cost line item appears, showing the cluster cost incurred for those days. For more information, see Cost Analysis (Nutanix).
    You can click View Details against each service to view the resource details. You can also click the share and download icons in the top-right corner to share or download the detailed report for the cost center.

Budget

The Budget feature in Beam extends the budgeting capability for a business unit, cost center, or scope. A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined. You can also create custom budgets.

Beam monitors and tracks your consumption continuously, and you can track any threshold breaches against the budgets that you have configured using the Budget page in the Beam console. Organization-level budgets can be tracked quarterly or yearly for the selected business unit, cost center, or custom budget.

The Budget page allows you to view the budget cards based on the budgets that you have configured for your cloud accounts.

The budget card for a business unit or cost center displays the Budgeted amount , Current Spend , and Estimated spend . You can also edit the budget details or delete an existing budget using the Budget page.

You can define and track a budget based on the following resource groups.

  • Business Unit/Cost Centre based Budget – You can select multiple business units and cost centers to create a resource group. You can only select the business units and cost centers for which you have access.
  • Scope based Budget - You can create only one budget for each scope.

You can add a threshold for your budget alerts. When the budget reaches the threshold you specified, Beam sends a notification to the email addresses you enter when creating the budget alerts.

Creating a Budget Goal

Beam allows you to create Budgets based on the resource groups that you have defined.

About this task

A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.

To add a global Budget, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Budget .
  2. Click Create a Budget .
  3. In the Select Type page, select Custom Scopes based Budget or Business Unit/Cost Centre based Budget . Then click Next to go to the Define Resource Group page.
    Note: The custom budget option is not available for the Nutanix cloud. You can only create a business unit or cost center-based budget for the Nutanix cloud.
  4. Select the business units and cost centers or Scopes (if you selected Custom Scopes based budget ) from the resource drop-down menu to define the scope for the budget and click Next to go to the Allocate Budget page.
    Note: You can select the business unit and cost center from the drop-down menu only if you have access to the resource group item.
  5. In the Budget Name box, enter a name for the budget.
  6. In the Financial Year list, select the financial year.
  7. In the Allocation Type area, select Automatic Allocation or Manual Allocation .
    Note: Selecting Automatic Allocation allows Beam to allocate budget based on your spend. Beam uses the last 40 days of data to project the budget for the current and next month.
  8. If you select Manual Allocation , do the following.
    1. In the Set Annual Budget field, enter the Annual Budget for the selected business center and cost center.
    2. To distribute the annual budget equally for all the quarters, click Distribute Equally .
      Alternatively, you can set the budget for each quarter manually. Similarly, you can either distribute the quarterly budget manually for each month or click Distribute Equally to distribute the quarterly budget equally for all the months. The monthly and quarterly budgets must add up to the annual budget that you have entered.
  9. Click Next .
    The Add Alerts to your Budget page appears.

    You can add alert thresholds as percentages of the total budget value. When the spend reaches a threshold value you entered, Beam sends an email notification to the addresses you enter.

    You can add alerts for the following periods.

    • Monthly Budget Alerts
    • Quarterly Budget Alerts
    • Yearly Budget Alerts
  10. Click Create against the period for which you want to create a budget alert.
    The Budget Alert page appears.
  11. In the Threshold box, enter the threshold value in percentage. Then click Save .
  12. In the Alert Notification box, enter the email addresses to which you want to send the alerts. Then click Save to create the budget.
    Note: Beam sends the budget alert notifications to the owners of the business unit, cost centers, or Scopes by default.
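The manual-allocation and alert-threshold rules in this procedure can be sketched in a few lines of code. The following Python sketch is illustrative only; the function names and the equal-split rounding rule are assumptions, not Beam's implementation:

```python
def distribute_equally(total: float, parts: int) -> list[float]:
    """Split a budget into equal parts that sum exactly back to the total."""
    base = round(total / parts, 2)
    amounts = [base] * parts
    amounts[-1] = round(total - base * (parts - 1), 2)  # last part absorbs rounding
    return amounts

def breached_thresholds(spend: float, budget: float, thresholds: list[float]) -> list[float]:
    """Return the alert thresholds (percentages) that the current spend has crossed."""
    pct = spend / budget * 100
    return [t for t in thresholds if pct >= t]

annual = 120000.0
quarters = distribute_equally(annual, 4)     # four equal quarterly budgets
months = distribute_equally(quarters[0], 3)  # three equal monthly budgets per quarter
assert sum(quarters) == annual               # quarters must add up to the annual budget

# With a 30,000 quarterly budget and 26,000 spent, the 50% and 80% alerts fire.
print(breached_thresholds(spend=26000.0, budget=30000.0, thresholds=[50, 80, 100]))
```

The same sum-check applies when you enter quarterly or monthly amounts by hand: Beam requires them to add up to the annual budget you set.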

Budget Details

You can view your budget details from the Budget page in Beam. To view budget details, click the View Details option on any of the budget cards. The Year Breakup and Cost Breakup for the budget are displayed.

To download or share the cost breakup report, click the download or share icon.

Expired Budgets

In the top-right corner, you can click the Expired button to view the expired budgets.

You can do the following in the expired budgets view.

  • Click the View Details option in any of the expired budget cards to view the details.
  • Click the Renew option to renew the latest expired budget for the current financial year.
    Note: The Renew option is available only for the latest expired budget. If a budget for a resource group has already been created for the current financial year, the Renew option for the expired budget of the same resource group is not available.

Editing Budget Alerts

You can edit the budget alerts for the cost centers and business units in the Budget page. In the Budget page, you can view the budget cards for your cost centers and business units.

About this task

A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.

To edit the budget alerts for your cost centers, business units, or scopes, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Budget to open the Budget page.
  2. Click Edit against the business unit, cost center, or scope to edit the budget alerts.
    The Edit the Budget page appears.
  3. Click the Budget Alert tab.
    You can view the details of the alerts in the Categories area.
    You can view the following icons in the Action column.
    • Disable Alert or Enable Alert icon - Click to disable or enable the alert.
    • Edit Alert icon - Click to edit the alert.
    • Remove Alert icon - Click to remove the alert.
  4. Click the Edit Alert icon to edit the alert.
    The Budget Alert page appears.

    You can edit the period and threshold values.

  5. Click Save to save your changes.
  6. In the Alert Notifications box, enter the email addresses to which you want to send the alerts.
  7. Click Update to update the budget alert.

Scopes

A scope is a logical group of cloud resources that provides you with a custom view of those resources. You can define scopes using cloud, accounts, and tags. The administrator can assign read-only access to a user. The user gets read-only access to the resources within the scope, not to the cloud accounts that constitute the scope. After you create a scope, click View in the top-right corner to open the User Interface Layout and select the scope.

Creating a Scope

A scope is a logical group of your cloud resources that provides you with a custom view of your cloud resources.

About this task

This section describes the procedure to create a scope.
Note: Only a Beam administrator can create a scope.
Note the following points when creating a scope for your Nutanix cloud resources.
  • The parent account is equivalent to the Nutanix account.
  • The child account is equivalent to a cluster.
  • The tags are equivalent to the category. For more information on Prism categories, see Category Management.

To create a scope, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Scopes .
  2. Click Create a Scope to open the Scope Creation page.
    In the Scope Creation page, you enter a scope name and add viewers from the available list. Then you define the scope based on cloud, account, and tags.
  3. In the Name box, enter a name for the scope.
  4. In the Viewers list, select the viewers for the scope you are creating.
  5. Click Define Scope to select the cloud, account, and tags.
  6. In the Define Scope page, do the following.
    1. In the Cloud list, select Nutanix .
    2. In the Parent Account list, select the parent account for which you want to create a scope.
    3. In the Sub Accounts list, select the sub accounts from the list available.
      The Sub Accounts list displays all the sub accounts within the parent account you selected.
    4. In the Tag Pair area, select the key and value pairs.
      You can click the plus icon to add multiple key and value pairs.
    5. Click Save Filter to save your filter.
      You can click Add Filter to create more filters.
    6. Click Save Resource Group to save your scope definition and close the Define Scope page.
  7. In the Scope Creation page, click Save Scope to create the scope.
    You can select and view the scope you just created using the User Interface Layout.

Licensing

This section provides information about the supported licenses for your Beam accounts, license entitlement for Nutanix on-premises, and subscription entitlement for public clouds.

The supported licenses are as follows. If you have any of the following licenses, you can convert your Beam account to a paid state.
  • NCM Ultimate
  • NCM Pro
  • Prism Ultimate
The Licensing page allows you to:
  • View information about license entitlement (required for accessing cost management features for Nutanix On-premises).
    • Status can be In Trial , License Applied , Trial Expired , or License Expired .
  • Activate your license ( Apply License or Validate License ) to convert from In Trial or Expired to the License Applied status.
  • View information about Beam public cloud subscription (required for accessing cost management features for Public Cloud).
    • Status can be In Trial , Subscription Active , Trial Expired , or Subscription Expired .
  • Purchase public cloud subscriptions using the Purchase Online Now option.
Note: Only the Beam account owner can view the details on the Licensing page.

To access the Licensing page, click Hamburger Icon > Configure > Licensing .

Figure. Licensing Page Click to enlarge

License Entitlement for Nutanix - Status

Refer to the following table to know about the various statuses available for your Beam account.

Status Description
In Trial A banner gets displayed notifying you about the trial period and suggesting you purchase a license.
Trial Expired or License Expired The Expired status appears after your trial period ends or your applied license expires. You cannot use the Beam features and must purchase a license.
License Applied Indicates that your license is applied. You can also view the expiry date for the license.

License Entitlement for Nutanix - Actions

One of the following two cases may occur when Beam checks for a valid license:

System detects a valid license
Beam checks whether a supported license is available for the Beam account that you are logged in to.
  • If a license is detected, the License is Found status is displayed along with the Apply License option to apply the license manually. After you perform this action, the status changes to License Applied along with the expiry date.
  • You need to perform the Apply License action only the first time after you buy a license and log on to Beam. Beam periodically checks for any new licenses that you purchased and then applies the license or updates the expiry date accordingly.
    • Scenario 1: Beam detects L1 (expiry date as 31 December 2021) on 1 January 2021 and you click Apply License to activate the license. On 31 December 2021, Beam detects L2 (expiry date as 31 December 2022) and automatically applies the license.
    • Scenario 2: Suppose you buy multiple licenses with different expiry dates. For example, L1 and L2 with expiry dates as 21 June and 21 December. Beam will show the license with the farthest expiry date (21 December) once the data about the new purchases get discovered through periodic checks.
System does not find a valid license
Suppose you have one of the supported licenses but Beam is unable to identify or recognize it and shows the License Not Found status. The Validate License option allows you to manually enter and validate your license key.
The validation may fail for the following reasons:
  • You entered an invalid license key.
  • You entered a valid license key that belongs to another MyNutanix account. Validation fails because a Beam account can be linked to only one MyNutanix account. Contact Nutanix Support for further help.

Subscription Entitlement for Public Clouds - Status

For public clouds (AWS and Azure), one of the following statuses is displayed on the Licensing page:

Status Description
In Trial A banner gets displayed to notify you about the trial period and an option to purchase a subscription. To purchase a subscription, click Purchase Online Now , which redirects you to the Nutanix Billing Center.
Subscription Active This status indicates that you have an active subscription. The expiration date of the active subscription is also displayed.
Trial Expired or Subscription Expired The Expired status appears after your trial period ends or your active subscription expires. You cannot use the Beam features and must purchase or renew the subscription.

Feature Violations

You must have a valid license and subscription to use the Beam features for your Nutanix on-premises and public clouds. If you do not purchase a license or subscription, a banner gets displayed at the top of the Beam page indicating that there are feature violations.

Feature violations occur in the following cases:
  • You have an active license but you are using the public cloud features in Beam without an active subscription.
  • You have active public cloud subscriptions but you are using the Nutanix features in Beam without a license.

Beam API

Beam provides API services that allow you to programmatically retrieve data from the Beam platform and perform various configurations.

The following are a few scenarios for using the API services:
  • You can programmatically retrieve Analyze data for your cloud accounts using the API. You can select the resource group to query. For example, you can raise a query for a business unit or a cost center. For more information, see the Analysis v2 or Analyze sections in the Beam API Reference Guide.
  • You can use APIs for remote onboarding of your cloud accounts without the need to log in to the Beam GUI. For more information, see dev.beam.nutanix.com.
  • You can perform CRUD operations on business units, cost centers, and scopes using APIs. For more information, see the Cost Governance - Multicloud section in the Beam API Reference Guide .
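The scenarios above follow the usual authenticated-HTTPS pattern. The sketch below is a hedged illustration only: the base URL, endpoint path, query parameter, and bearer-token header are hypothetical placeholders, not Beam's documented API; consult the Beam API Reference Guide at dev.beam.nutanix.com for the real contract.

```python
import json
import urllib.request

BEAM_API = "https://beam.nutanix.com/api/v1"  # hypothetical base URL
API_TOKEN = "YOUR_API_TOKEN"                  # placeholder credential

def build_cost_request(resource_group_id: str) -> urllib.request.Request:
    """Build an authenticated GET request for one resource group's spend data.

    The path and parameter name are illustrative assumptions.
    """
    return urllib.request.Request(
        f"{BEAM_API}/analyze/cost?resource_group={resource_group_id}",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Accept": "application/json",
        },
    )

def get_cost_analysis(resource_group_id: str) -> dict:
    """Send the request and decode the JSON response body."""
    with urllib.request.urlopen(build_cost_request(resource_group_id)) as resp:
        return json.load(resp)
```

For example, `get_cost_analysis("bu-123")` would query the spend for a business unit whose identifier is `bu-123` (a made-up ID for illustration).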

Cost Management for Public Cloud

Beam allows you to track your cloud expenses, optimize your cloud cost, and achieve visibility into your spend. Beam centralizes all your cloud expenses and multiple accounts into a single place and simplifies cloud cost management.

The supported public clouds are AWS , Azure , and GCP .

AWS

Support for AWS allows you to track your AWS cloud expenses, optimize your cloud cost, and achieve visibility into your spend. Beam centralizes all your AWS cloud expenses and multiple AWS accounts into a single place and simplifies cloud cost management.

For consolidated billing, you must sign up in the AWS Billing and Cost Management console and designate your account as a payer account. You can link other accounts (linked accounts) to the payer account for consolidated billing. In AWS consolidated billing, the payer account is charged for all the linked accounts every month. For more information, see the AWS Billing and Cost Management User Guide on the AWS documentation website.

Beam helps you to optimize AWS services like EC2, RDS, EBS, EIP, Route 53, WorkSpaces, DynamoDB, ELB, Redshift, and ElastiCache.

Note: The historical spend reported in Beam is delayed by at least 48 hours. The cloud providers take around 24 to 48 hours to update the complete billing data.

Azure

Support for Azure enables you to take control of your cloud spend on the Azure cloud. Beam helps you to optimize the Azure services like Managed Disks, Azure SQL DB, VM, Public IP, PostgreSQL Server, MySQL Server, Load Balancer, Application Gateways, SQL DataWarehouse, and Redis Cache.

Beam now supports the following two Azure billing account types for cost governance.
  • Enterprise Agreement (EA)
  • Microsoft Customer Agreement (MCA)
Note: The historical spend reported in Beam is delayed by at least 48 hours. The cloud providers take around 24 to 48 hours to update the complete billing data.

Google Cloud Platform

Support for Google Cloud Platform (GCP) allows you to track your GCP resources and provides insights into your cost spend. Beam centralizes all your GCP cloud expenses and multiple GCP accounts into a single pane and simplifies cloud cost management.

Beam helps you analyze your GCP services like Compute Engine, Cloud Storage, Kubernetes Engine, and BigQuery.

Note: The historical spend reported in Beam is delayed by at least 48 hours. The cloud providers take around 24 to 48 hours to update the complete billing data.

Multicloud Views

Apart from providing cost visibility and optimization recommendations for individual clouds, Beam also provides cost visibility and optimization recommendations for entities with multiple clouds (Nutanix, AWS, Azure, and GCP). You can configure two types of multicloud entities - Financial and Scope.

Financial is a hierarchical structure comprising business units and cost centers used for Chargeback. Chargeback allows you to group cloud resources across multiple clouds (Nutanix, AWS, Azure, and GCP) within cost centers and business units. By design, a cloud resource cannot be part of more than one cost center or business unit. For more information, see Chargeback.

Scope is a custom resource group with resources across multiple clouds (Nutanix, AWS, Azure, and GCP). Scopes are useful in providing visibility to logical resource groups like applications, project teams, cross-functional initiatives, and more. Hence, by design, a cloud resource can be part of multiple Scopes. For more information, see Scopes.
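The two membership rules above (at most one cost center per resource, any number of scopes per resource) can be captured in a tiny data-model sketch. The class and field names are illustrative, not Beam's implementation:

```python
class ResourceGroups:
    """Track cost-center and scope membership for cloud resources.

    Invariant: a resource belongs to at most one cost center,
    but may belong to any number of scopes.
    """

    def __init__(self):
        self.cost_center = {}  # resource_id -> cost center name
        self.scopes = {}       # resource_id -> set of scope names

    def assign_cost_center(self, resource_id: str, cc: str) -> None:
        current = self.cost_center.get(resource_id)
        if current is not None and current != cc:
            raise ValueError(f"{resource_id} is already in cost center {current}")
        self.cost_center[resource_id] = cc

    def add_to_scope(self, resource_id: str, scope: str) -> None:
        self.scopes.setdefault(resource_id, set()).add(scope)

rg = ResourceGroups()
rg.assign_cost_center("vm-1", "Engineering")
rg.add_to_scope("vm-1", "Project-A")
rg.add_to_scope("vm-1", "Compliance")  # multiple scopes are allowed by design
```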

User Interface Layout

This section provides information on the layout and navigation of the Beam console.

The Beam console provides a graphical user interface to manage Beam settings and features.

View Selector

When you log on to Beam for the first time, the View Selector pop-up appears. The View Selector pop-up allows you to search for an account by typing the account name in the search box, or to navigate to and select a cloud account, business unit or cost center, or custom scope.

The Dashboard is the welcome page that appears after you log on and select a view in Beam.

Figure. View Selector Click to enlarge

For the subsequent logins, the last visited page for the last selected view (account, business unit or cost center, and scope) appears. You can click View in the top-right corner to open the View Selector pop-up and select a different view.

Table 1. Available Views
View Description
All Clouds Select this view to get total cost visibility across the AWS, Azure, GCP, and Nutanix accounts to which you have access.
Cloud Accounts (AWS, Azure, GCP, and Nutanix) Allows you to select AWS, Azure, GCP, and Nutanix accounts.
Financial Allows you to select business units or cost centers you created for the chargeback.
Scopes Allows you to select the custom scopes you created.

Beam Console

The Beam Console screens are divided into different sections, as described in the following image and table.

Figure. Beam Console Overview Click to enlarge

Table 2. Beam Console Layout
Menu Description
Hamburger icon Displays a menu of features on the left. When the main menu is displayed, the hamburger icon changes to an X button. Click the X button to hide the menu.
Alerts The Alert option allows you to see system-generated alert notifications. Click the bell icon on the main menu bar to view alerts. Click See More to view the complete list of notifications.
User Menu The user drop-down menu has the following options.
  • Profile - Displays your account information, timezone and email preferences, and Product Access (read/write) for Beam.
  • What's New - Displays the recent product updates.
  • Terms of Service
  • Privacy Policy
  • Responsible Disclosure
  • Log out - Click to log out of the Beam console.
Help Menu You can click the question mark icon in the top-right corner of the console to view the following:
  • Read Documentation - Redirects you to the Beam documentation.
  • Guided Tours - Opens the list of walkthrough videos that helps you to navigate through various tasks and workflows in Beam.
Widgets Section The widgets section displays the informational and actionable widgets corresponding to the feature that you have selected from the main menu. For instance, the Dashboard page displays widgets like Spend Overview, Spend Analysis, Save, and Cloud Efficiency.
View option Click View in the top-right corner to open the View Selector pop-up.

Main Menu

Clicking the Hamburger icon displays a menu of features on the left. When the main menu is displayed, the Hamburger icon changes to an X button. Click the X button to hide the menu.

Figure. Main Menu Click to enlarge

The following table describes each feature in the main menu.

Table 3. Main Menu - Available Features
Feature Description
Dashboard Displays the Dashboard page (see Dashboard).
Analyze Displays the Analyze page (see Analyze - Cost Analysis).

Analyze allows you to track cost consumption across all your cloud resources at both the aggregate and granular level. You can drill down cost further based on your services, accounts, or application workload.

Chargeback Displays the Chargeback page (see Chargeback).

Enables financial control of your cloud spend by providing the ability to allocate cloud resources to departments based on definitions. It provides a consolidated view across all your cloud accounts in a single pane for finance views.

Budget Displays the Budget page (see Budget).

A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.

Reports Displays the Reports page (see Cost Reports).

Beam generates reports to track cost consumption across all your cloud accounts at both aggregate and granular levels, like a functional unit, workloads, and applications. The reports are generated automatically and are available to view, download, or share from the Beam console.

Save Displays the Save page (see Save - Cost Saving).

Helps you save money on cloud spend by identifying unused and underutilized resources. Also, you get specific recommendations for optimal consumption.

Purchase Displays the Purchase page (see Purchase - Recommendations).

Provides insights on overall RI usage statistics, recommended RI purchases, managing existing RI portfolio, and RI coverage.

Playbooks Displays the Playbooks page (see Playbooks).

Allows you to automate actions on your public cloud environment for better cloud management. It helps you to improve your operational efficiency by reducing manual intervention.

Configure menu The Configure menu allows you to do the following operations.
  • Add and manage cloud accounts in Beam.
  • Manage Beam users
  • Integrate third-party applications
    • Slack
    • ServiceNow Integration with Nutanix Beam
  • Cost Policy
  • Chargeback
  • Scopes
  • AWS Cost Configuration
  • Budget Alerts
  • AWS Invoice

Administration and User Management

Beam provides the following administrative controls.

  • Add and manage user access to various cloud accounts in Beam.
  • Two types of access can be granted to a user.
    • Admin Access - The user gets read and write access to all cloud accounts added in your Beam account.
    • User Access - The administrator can grant Read Only or Read & Write permissions on selected cloud accounts to a user, thus allowing the administrator to exercise the principle of least privilege.

Adding a Beam User

You can add and manage users using the Beam console.

About this task

To add a user in Beam, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > User Management . Click Add User .
  2. In the Details page, enter the user details and change the timezone as desired. Then, click Next to go to the Permissions page.
    In the Permissions page, you can choose Admin Access or User Access .
  3. Click Admin Access if you want to give administrator rights to the user. See User Roles for details on types of users. Then, click Save to complete the task.
  4. Click User Access if you want to grant Read Only or Read & Write permissions on selected cloud accounts to the user.
    1. Select the required access ( No Access , Read Only , and Read & Write ) across clouds.
      The No Access option is selected by default.
    2. Select the AWS Payer Account and Linked Accounts, Azure Billing Accounts and Subscriptions, Nutanix accounts, and GCP Billing Accounts and Projects for which you want to provide access. Then, click Save to complete the task.
    Figure. User Management - Granting Permissions Click to enlarge

User Roles

This section helps you to understand all the different roles in Beam and when you would use each.

Figure. Beam User Roles Click to enlarge User Roles
Owner Role

When a user creates a Beam subscription, a tenant is created and the user becomes the owner of that tenant. An owner is shown as Admin (Owner) in the User Management page. The owner role has the following attributes.

  • Create other users with or without administrative privileges.
  • Access billing subscription information.
  • Access to licensing information. For more information on licensing for AWS, Azure, and GCP accounts, see Licensing.
  • Access to all features.
  • View the admin menu.
  • Perform all the read or write operations.
  • Create or edit budgets.
  • View, create, edit, or delete cost centers.
  • Restrict a user's access to a limited number of cloud accounts.
Note:
  • My Nutanix Account Administrators are considered as Owner in Beam.

    You can't create an owner role in Beam. However, you can create as many as three Beam owners from My Nutanix. Only My Nutanix Account Administrators have the necessary permissions to create Beam owners. For more information on My Nutanix user management, see Cloud Services Administration Guide .

  • Only My Nutanix Account Administrators or Beam Owners are able to manage billing and renewals of Beam subscription. For more information on managing billing and renewals, see Cloud Services Administration Guide.
Administrator Role

An administrator is shown as Admin in the User Management page. The administrator role has the following attributes.

  • Access to all features.
  • View the admin menu.
  • Add other users.
  • Perform all the read or write operations.
  • Create or edit budgets.
  • View, create, edit, or delete cost centers.
  • Restrict a user's access to a limited number of cloud accounts.
User Role

A user role is created by granting read-only or read & write permission for selected accounts.

  • An account having a user role cannot create other users.
  • An account having a user role with read-only permission can view the cost data.
  • Read-only or read & write permission can be granted on the selected Nutanix accounts.

ADFS Integration

Active Directory Federation Services (ADFS) is a Single Sign-On (SSO) solution that you can use for implementing single sign-on in Beam. To integrate ADFS with Beam, log on to your MyNutanix account, and perform the integration. See SAML Authentication for details. To log on to your Beam account using ADFS, see Logging into Xi Cloud Services as an SAML User .

User Management

You can configure the following user group types in ADFS.
  • Administrator user - Users added to this group have administrator access.
  • Beam user - Users added to this group can perform operations based on the access policy assigned by the administrator user.
Note:
  • If you are a part of the Beam user group and logging into Beam through ADFS for the first time, you will not have access to any cloud account. Contact your account administrator to get access to an account.
  • If you are a part of the administrator user group and logging into Beam through ADFS, you cannot add and delete users using the Beam user management. You can perform these actions in ADFS.
  • If you are logging into Beam through ADFS, you cannot change roles (administrator to user or user to administrator). You can perform these actions in ADFS.

Support

If you log on to Beam through ADFS, you can get access to Nutanix Support only if your MyNutanix account was used to add your Beam user account. However, if your Beam user account was not added using your MyNutanix account, you do not get access to support.

Starting Beam Free Trial

Beam is a multi-cloud cost governance SaaS product that provides a free, full-featured, 14-day trial period. During the free trial period, you can configure your Nutanix On-premises and public cloud accounts (AWS, Azure, and GCP) in Beam to evaluate the features. It takes about 15 minutes to configure your cloud accounts in Beam.

About this task

The following section describes the procedure to start a free trial.
  • If you have access to the MyNutanix account, perform Step 1 .
  • If you do not have access to the MyNutanix account, perform Step 2 .

Procedure

  1. If you have access to the MyNutanix account, do the following.
    1. Log in to your MyNutanix account.
    2. In the Dashboard, scroll down to find the Beam application. Then, click Launch to open the Beam application and start your free trial.
      Figure. MyNutanix Dashboard - Launching Xi-Beam Click to enlarge
  2. If you do not have an existing MyNutanix account, do the following.
    1. Open the Beam webpage.
    2. Click Start Free Trial and fill in the form that appears. Then, click Submit .
      Figure. Free Trial - Form Click to enlarge
      Your MyNutanix account gets created. Also, you will receive a verification email that contains a link for logging into the Beam application.

What to do next

You can add your Nutanix, AWS, Azure, and GCP accounts in Beam.

Cloud Account Configuration (Onboarding)

The following sections describe the onboarding steps for AWS, Azure, and GCP accounts.

Prerequisites - AWS, Azure, and GCP

Refer to this section to know about the access and permissions required to configure your cloud accounts in Beam.

AWS Account Onboarding Requirements

Ensure that you have the following access before starting with the account onboarding steps:
  • AWS access: Account Owner or administrator access to the AWS payer account.
  • AWS access: IAM user with access to AWS S3, AWS CloudFormation Template, AWS Billing & Cost Management console, and permission to create a role.
  • Beam administrator access.

Azure Accounts Onboarding Requirements

Enterprise Agreement
Ensure that you have the following access before starting with the EA billing account onboarding steps:
  • Enrollment administrator access.
  • (Optional) To verify that you have Enterprise Administrator access, decode your access key at https://jwt.io/. The configuration works if the decoded token has ReportView set to Enterprise (or Indirect Enterprise), as in the following example.
    {
      "EnrollmentNumber": "XXXXXXXX",
      "Id": "027d019e-5674-4771-a09d-1XXXXXXX840",
      "ReportView": "Enterprise", (or Indirect Enterprise)
      "PartnerId": "",
      "DepartmentId": "",
      "AccountId": "",
      "iss": "ea.microsoftazure.com",
      "aud": "client.ea.microsoftazure.com",
      "exp": 1554264804,
      "nbf": 1538540004
    }
    
  • Beam administrator access.
Note:
  • Only Enrollment Access Keys are accepted in Beam.
  • Access key activation may take up to 30 minutes.
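The jwt.io check above can also be scripted. The following Python sketch base64url-decodes the payload segment of a token and reads the ReportView claim; the sample token is fabricated for illustration, and no signature verification is performed (this only inspects the claims, exactly as jwt.io does).

```python
import base64
import json

def report_view(token: str) -> str:
    """Return the ReportView claim from an enrollment access key (a JWT).

    Decodes only the payload segment; the signature is not verified.
    """
    payload_b64 = token.split(".")[1]
    # Restore the padding that base64url encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("ReportView", "")

# Fabricated example token: the header and signature segments are dummies,
# because only the payload matters for this check.
payload = base64.urlsafe_b64encode(
    json.dumps({"ReportView": "Enterprise", "iss": "ea.microsoftazure.com"}).encode()
).rstrip(b"=").decode()
token = "eyJhbGciOiJIUzI1NiJ9." + payload + ".dummy-signature"

print(report_view(token) in ("Enterprise", "Indirect Enterprise"))  # prints True
```

If the result is False for your real token, the key does not carry Enterprise Administrator access and the EA onboarding will not work.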
Microsoft Customer Agreement
Ensure that you have the following access before starting with the MCA billing account onboarding steps:
  • You must have a Billing Account Owner role to add an MCA billing account.
  • You must provide Billing account reader access to the application.
  • Beam administrator access.

GCP Account Onboarding Requirements

Ensure that you have the following access before starting with the account onboarding steps:

  • Permissions and Roles: Beam requires access to specific APIs and a service account. A GCP service account is an authorized identity to enable authentication between Beam and GCP. A set of predefined and primitive roles grant the service account the permissions it requires to complete specific actions on the resources in your GCP project/billing account or organization. For more information, see IAM Roles and Required Permissions.
    Note: Your GCP projects do not have to be part of an organization for Beam to fetch the billing data.
  • Beam administrator access.

GCP Onboarding - Overview

To enable Beam to retrieve data on your Google Cloud Platform (GCP) resources and provide insight into your cloud spend, you must onboard your GCP billing accounts to Beam. You can add multiple GCP billing accounts.

Terminology Reference

The following table introduces the principal GCP terms and entities:

Table 1. Terminology Reference
Term Description
Project A project organizes all your Google Cloud resources. A project consists of a set of users; a set of APIs; and billing, authentication, and monitoring settings for those APIs. For example, all of your Cloud Storage buckets and objects, along with user permissions for accessing them, reside in a project. You can create single or multiple projects and use them to organize your Google Cloud resources, including your Cloud Storage data, into logical groups.
Billing Account

A cloud-level resource managed in the GCP console.

It tracks all of the costs (charges and usage credits) incurred by your Google Cloud usage. You can link one or more projects to a billing account. Project usage gets charged to the linked Cloud Billing account.

For more information, see the Overview of Cloud Billing Concepts section in the Google Cloud Documentation .
BigQuery Table

Contains individual records organized in rows. Every table is defined by a schema that describes the column names, data types, and other information.

You must provision the BigQuery table in one of the projects. The table stores the billing data for all the projects within the billing account. Beam pulls the billing data from the BigQuery table and provides insight into your GCP cloud spend.

Service Account A special kind of account used by an application (not a person) to make authorized API calls.
  • Beam creates a Service account on the fly while performing the onboarding steps for the first GCP billing account. For the subsequent billing accounts, Beam auto-populates the previously created service account information on the onboarding page.
  • You need to provide certain permissions to the Service account so that Beam can pull the billing data and provide cost insights. To achieve this, you need to create a few IAM roles in the GCP console and attach them to the Service account. For more information, see IAM Roles for GCP Billing Exports.
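To make the BigQuery table's role concrete, the sketch below builds the fully qualified name that GCP's standard billing export gives the table (gcp_billing_export_v1_ followed by the billing account ID with dashes replaced by underscores) and a sample cost-per-project query. The project, dataset, and billing account IDs are placeholders, and the naming convention is GCP's documented default for the export, not anything Beam-specific.

```python
def export_table_name(project_id: str, dataset_id: str, billing_account_id: str) -> str:
    """Fully qualified name of the standard usage cost export table.

    GCP names the table gcp_billing_export_v1_<ACCOUNT_ID>, with the
    dashes in the billing account ID replaced by underscores.
    """
    suffix = billing_account_id.replace("-", "_")
    return f"{project_id}.{dataset_id}.gcp_billing_export_v1_{suffix}"

# Placeholder identifiers for illustration.
table = export_table_name("my-billing-project", "billing_dataset", "01ABCD-2EFGH3-45IJKL")
print(table)  # my-billing-project.billing_dataset.gcp_billing_export_v1_01ABCD_2EFGH3_45IJKL

# A cost-per-project query of the kind a tool like Beam might run
# against this table:
query = f"""
SELECT project.id, SUM(cost) AS total_cost
FROM `{table}`
GROUP BY project.id
ORDER BY total_cost DESC
"""
```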

Getting Started With Onboarding

Perform the following steps to onboard your first GCP billing account in Beam:
  1. Create a Service account from the Beam console

    The Service account for your enterprise gets created in Nutanix's own secure GCP Project. You need to create IAM roles with the required permissions and attach them to the Service account so that Beam can fetch the billing data and provide cost insights.

  2. Create a BigQuery dataset if not already created (GCP Console). Then perform the Export cloud billing data to BigQuery operation.
  3. Create and attach IAM Roles and Required Permissions for users with and without access to GCP organizations.
  4. Enter the GCP billing account details in the Beam console to complete the onboarding process.
Warning: If you have access to a billing project with no access to its billing account, you cannot onboard such billing accounts in Beam.

Adding GCP Billing Account

This section describes the procedure to add your first GCP billing account in Beam. Beam fetches the billing data of all projects associated with the given billing account and provides visibility into your spend.

About this task

Warning: If you have access to a billing project with no access to its billing account, you cannot onboard such billing accounts in Beam.

To add a GCP billing account in Beam, do the following:

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > GCP Accounts .
  2. Click Add a GCP Billing Account .
    The GCP Account Onboarding page appears.

    If you are onboarding your first billing account, an option to create a service account is visible. For the subsequent billing accounts, Beam auto-populates the previously created service account ID on the Configure Permissions page.

  3. Click Create a Service Account .
    Beam creates a service account on the fly. The service account for your enterprise is created in Nutanix's own secure GCP project. Provide the required permissions to this account so that Beam can retrieve the billing data and provide cost insights.
    Note: The Service account created when onboarding the first GCP billing account gets used for the subsequent GCP billing accounts. When onboarding the subsequent GCP billing accounts, the service account field gets auto-populated in the Configure Permissions page.
    Figure. GCP - Service Account Details
  4. Before you start adding the billing source information in the Input Billing Source page, do the following:
    1. Create a BigQuery dataset if not previously created (GCP Console). Then perform the Export cloud billing data to BigQuery operation.
      Note: Beam supports only the Standard usage cost option.
    2. Create the required IAM roles and attach them to the service account in the GCP Console. For more information, see IAM Roles and Required Permissions.
    After you have created the IAM roles and attached them to the Service account, go to the Configure Permissions page in the Beam console and click Next .
    Figure. GCP Onboarding - Adding Billing Account Details
  5. Enter the Billing Account Name , Billing Account ID , Project ID , and Dataset ID .
    Ensure that you fetch the correct billing account details from the GCP console. For more information, see Fetching the Billing Account Details .
  6. When all the field entries are correct, click Validate and Save to complete the onboarding process.
    Your first GCP billing account is successfully configured in Beam.
Adding Multiple GCP Billing Accounts

About this task

The service account created when onboarding the first GCP billing account is used for the subsequent GCP billing accounts. The service account field is auto-populated when onboarding the next GCP billing accounts in Beam.

Perform the following steps to onboard your subsequent GCP billing accounts in Beam:

Procedure

  1. Create and attach the following IAM roles to the service account in the GCP Console:
    • beam-bill-ingestion-role (Mandatory) : Attach this role to the billing project in which the BigQuery table is provisioned, or to the billing account, according to your business requirements. You do not need to create this role again because you created it at the organization level when onboarding your first GCP billing account.
    • beam-project-metadata-role (Optional) : Creating and attaching this IAM role is optional. You do not need to create and attach this role to the service account for the subsequent GCP billing accounts if you already did so when onboarding your first GCP billing account. This role is created and attached at the organization or folder level, so it applies to all billing accounts within the organization or folder.
  2. Enter the GCP billing account details in the Beam console to complete the onboarding process.

IAM Roles and Required Permissions

Use one of the following options to provide the required IAM roles and permissions:

  • Users with access to GCP Organizations
  • Users without access to GCP Organizations
Users with access to GCP Organizations

Beam needs certain permissions to fetch the billing data and provide cost insights. For users with access to GCP organizations, you need to create two IAM roles and attach them to the service account created by Beam. This section describes those IAM roles and their required permissions.

The required IAM roles are as follows:
(Mandatory) beam-bill-ingestion-role
Allows Beam to ingest bills from the BigQuery table and read Projects associated with a given billing account.
Role created at: Organization level
Role attached at: Billing project level and billing account level
Role JSON (use the JSON to programmatically create the role):
{
    "description": "Allows Beam to ingest bills from BigQuery and read Projects associated with a given billing account",
    "includedPermissions": [
      "resourcemanager.projects.get",
      "bigquery.datasets.get",
      "bigquery.jobs.create",
      "bigquery.tables.get",
      "bigquery.tables.getData",
      "bigquery.readsessions.create",
      "bigquery.readsessions.getData",
      "billing.resourceAssociations.list"
    ],
    "name": "beam-bill-ingestion-role-v1",
    "stage": "GA",
    "title": "Nutanix Beam Bill Ingestion Role (v1)"
}
(Optional) beam-project-metadata-role
Allows Beam to read project names, folder hierarchy, and folder names.
This role enables Beam to get the details of a project. For example, the billing account to which the project is associated, the name of the project, and so on. If you do not provide this permission, Beam will only show the Project ID in its console.
Note: Provide this permission for a specific project or the parent folder if you want to view the labels of the projects in Beam.
Role created at: Organization level
Role attached at: Folder or organization level
Role JSON (use the JSON to programmatically create the role):
{
    "title": "Nutanix Beam Project and Folders metadata read",
    "name": "beam-project-metadata-role",
    "description": "Allows Beam to read Project names, Folder hierarchy and Folder name",
    "includedPermissions": [
      "resourcemanager.projects.get",
      "resourcemanager.folders.get"
    ],
    "stage": "GA"
}
Note: The IAM role name provided in this topic is a recommendation. You can use a different role name if you want.

To create the roles in the GCP Console, see Creating IAM Roles.
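As an alternative to the console steps, the role JSON above can be fed to the gcloud CLI. The following Python sketch writes both role definitions to files and prints the corresponding gcloud iam roles create commands; ORG_ID is a placeholder for your numeric organization ID, and the exact gcloud invocation should be verified against the gcloud reference before use.

```python
import json

# Role definitions copied from the tables above; "name" is omitted because
# gcloud derives it from the role ID argument.
ROLES = {
    "beam_bill_ingestion_role": {
        "title": "Nutanix Beam Bill Ingestion Role (v1)",
        "description": "Allows Beam to ingest bills from BigQuery and read "
                       "Projects associated with a given billing account",
        "stage": "GA",
        "includedPermissions": [
            "resourcemanager.projects.get",
            "bigquery.datasets.get",
            "bigquery.jobs.create",
            "bigquery.tables.get",
            "bigquery.tables.getData",
            "bigquery.readsessions.create",
            "bigquery.readsessions.getData",
            "billing.resourceAssociations.list",
        ],
    },
    "beam_project_metadata_role": {
        "title": "Nutanix Beam Project and Folders metadata read",
        "description": "Allows Beam to read Project names, Folder hierarchy "
                       "and Folder name",
        "stage": "GA",
        "includedPermissions": [
            "resourcemanager.projects.get",
            "resourcemanager.folders.get",
        ],
    },
}

for role_id, definition in ROLES.items():
    path = f"{role_id}.json"
    with open(path, "w") as fh:
        json.dump(definition, fh, indent=2)
    # ORG_ID is a placeholder for your numeric organization ID.
    print(f"gcloud iam roles create {role_id} --organization=ORG_ID --file={path}")
```

For users without access to GCP organizations, the same approach applies at the project level with --project=PROJECT_ID instead of --organization=ORG_ID.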

GCP Permissions Required to Create and Attach IAM Roles

The following table lists the GCP permissions required to create and attach the IAM roles that Beam needs.

Task GCP Role
Create and attach roles at Organization level
  • Organization Role Administrator role (roles/iam.organizationRoleAdmin) : GCP inbuilt role to create roles at the organization level.

    This role provides access to administer all custom roles in the organization and its associated projects.

  • resourcemanager.organizations.setIamPolicy : Additional permission required to attach roles.
Create and attach roles at Project level
  • Role Administrator role (roles/iam.roleAdmin) : GCP inbuilt role to create roles at the project level.

    This role provides access to all custom roles in the project.

  • resourcemanager.projects.setIamPolicy : Additional permission required to attach roles.
Create and attach roles at Folder level
  • Folder Admin (roles/resourcemanager.folderAdmin) : GCP inbuilt role to create roles at the folder level.

    This role provides all available permissions for working with folders.

  • resourcemanager.folders.setIamPolicy : Additional permission required to attach roles.
Attach roles at Billing account level roles/billing.admin : GCP Inbuilt Role to attach roles at the Billing account level.
Creating IAM Roles

This section describes the procedure to create the required IAM roles in the GCP console for the user with access to GCP organizations.

Before you begin

  • Refer to Users with access to GCP Organizations to understand the IAM roles and their required permissions.
  • You require specific GCP permissions to create and attach the IAM roles required by Beam. For more information, see the GCP Permissions Required to Create and Attach IAM Roles section in Users with access to GCP Organizations.

About this task

To create the required IAM role, do the following:

Procedure

  1. In the GCP Console, go to the Roles page.
  2. In the drop-down menu at the top of the page, select the organization in which you want to create the IAM roles.
    IAM Role Entity
    (Mandatory) beam-bill-ingestion-role Organization level. For example, nixbeam.com .
    (Optional) beam-project-metadata-role Organization level. For example, nixbeam.com .
    Note: The IAM role name provided in the table is a recommendation. You can use a different role name if you want.
  3. Click Create Role .
  4. In the Create Role page, enter the following information:
    Option Description
    Title

    Enter the role name you are creating.

    Role name: beam-bill-ingestion-role or beam-project-metadata-role .
    Description

    Enter a description for the role.

    Example: Allows Beam to ingest bills from the BigQuery table and read Projects associated with a given billing account for beam-bill-ingestion-role .
    ID Enter the role ID in the following format: beam_bill_ingestion_role or beam_project_metadata_role .
    Role launch stage Select General Availability .
  5. Click ADD PERMISSIONS .

    The Add Permissions window appears.

  6. In the Filter table search box, search for the permission and then select the checkbox.

    Refer to Users with access to GCP Organizations to view the required permissions for the role that you are creating.

    Repeat this step to search and select the remaining permissions.

    Figure. GCP IAM Role - Adding Permissions
  7. Click ADD after you select all the required permissions.
    The added permissions appear in the assigned permissions area.
  8. Click CREATE to complete.

What to do next

Attach the IAM roles to the Service account.
Attaching IAM Roles to the Service Account

This section describes the procedure to attach the two IAM roles you created to the service account, for users with access to GCP organizations.

Before you begin

You require specific GCP permissions to attach the IAM roles to the service account. For more information, see the GCP Permissions Required to Create and Attach IAM Roles section in Users with access to GCP Organizations.

About this task

To attach the IAM roles to the service account, do the following:

Procedure

  1. To attach beam-bill-ingestion-role to the service account, do the following:
    Note: This role is attached to the billing project in which the BigQuery Table is provisioned and to the billing account.
    1. To attach this role to the billing project, do the following:
      1. In the GCP Console, go to the IAM page.
      2. In the drop-down menu at the top of the page, select the project where you have provisioned the BigQuery Table.
      3. Click ADD at the top of the page. Then, go to Step 1.c.
        Figure. Billing Project - Adding Members
    2. To attach this role to the billing account, do the following:
      1. In the GCP Console, go to the Billing page.

        A list of billing accounts appears.

      2. Click on your billing account.
      3. In the left pane, click Account management .
      4. Click ADD MEMBER . Then, go to Step 1.c.
        Figure. Billing Account - Adding Members
    3. In the New members box, enter your service account name.
      You can also locate the service account name in the Beam GCP account configuration console.
    4. In the Select a role list, search and select beam-bill-ingestion-role .
    5. Click SAVE to complete.
  2. (Optional) To attach beam-project-metadata-role to the Service account, do the following:
    Note: You can attach this role to the folder or organization according to your business requirements.
    1. In the GCP Console, go to the IAM page.
    2. In the drop-down menu at the top of the page, select the organization. For example, nixbeam.com.
    3. Click ADD at the top of the page.
      Figure. Organization - Adding Members
    4. In the New members box, enter your service account name.
      You can also locate the service account name in the Beam GCP account configuration console.
    5. In the Select a role list, search and select beam-project-metadata-role .
    6. Click SAVE to complete.
Users without access to GCP Organizations

Beam needs certain permissions to fetch the billing data and provide cost insights. For users without access to GCP organizations, you need to create one IAM role, use one GCP inbuilt role, and attach both to the service account created by Beam. This section describes those roles and their required permissions.

Note:
  • You need permissions to attach the roles at the billing account level.
  • Ensure that you create and attach the role only in the project that contains the Billing export BigQuery dataset.
The required IAM roles are as follows:
(Mandatory) beam-bill-ingestion-role
Allows Beam to ingest bills from the BigQuery table.
Role created at: Project level
Role attached at: Project level
Role JSON (use the JSON to programmatically create the role):
{
    "description": "Allows Beam to ingest bills from BigQuery",
    "includedPermissions": [
      "bigquery.datasets.get",
      "bigquery.jobs.create",
      "bigquery.tables.get",
      "bigquery.tables.getData",
      "bigquery.readsessions.create",
      "bigquery.readsessions.getData"
    ],
    "name": "beam-bill-ingestion-role-v1",
    "stage": "GA",
    "title": "Nutanix Beam Bill Ingestion Role (v1)"
}
Note: The IAM role name provided in this topic is a recommendation. You can use a different role name if you want.
(Mandatory) Billing Account Viewer
The Billing Account Viewer is a GCP inbuilt role that you attach at the billing account level. It allows Beam to read the projects associated with a billing account, but it does not confer the right to link or unlink projects, to otherwise manage the properties of the billing account, or to read payments-related information.
For more details on how to attach the Billing Account Viewer role, see Attaching IAM Roles to the Service Account.

To create the roles in the GCP Console, see Creating IAM Roles.

GCP Permissions Required to Create and Attach IAM Roles

The following table lists the GCP permissions required to create and attach the IAM roles that Beam needs.

Task GCP Role
Create and attach roles at Project level
  • Role Administrator role (roles/iam.roleAdmin) : GCP inbuilt role to create roles at the project level.

    This role provides access to all custom roles in the project.

  • resourcemanager.projects.setIamPolicy : Additional permission required to attach roles.
Attach roles at Billing account level roles/billing.admin : GCP Inbuilt Role to attach roles at the Billing account level.
Creating IAM Roles

This section describes the procedure to create the required IAM role in the GCP console for the user without access to GCP organizations.

Before you begin

  • Refer to Users without access to GCP Organizations to understand the IAM roles and their required permissions.
  • You require specific GCP permissions to create and attach the IAM roles required by Beam. For more information, see the GCP Permissions Required to Create and Attach IAM Roles section in Users without access to GCP Organizations.

About this task

To create the required IAM role, do the following:

Procedure

  1. In the GCP Console, go to the Roles page.
  2. In the drop-down menu at the top of the page, select the project in which you want to create the IAM roles.
    IAM Role Entity
    (Mandatory) beam-bill-ingestion-role Project level
    Note: The IAM role name provided in the table is a recommendation. You can use a different role name if you want.
  3. Click Create Role .
  4. In the Create Role page, enter the following information:
    Option Description
    Title

    Enter the role name you are creating.

    Role name: beam-bill-ingestion-role .
    Description

    Enter a description for the role.

    Example: Allows Beam to ingest bills from the BigQuery table .
    ID Enter the role ID in the following format: beam_bill_ingestion_role .
    Role launch stage Select General Availability .
  5. Click ADD PERMISSIONS .

    The Add Permissions window appears.

  6. In the Filter table search box, search for the permission and then select the checkbox.

    Refer to Users without access to GCP Organizations to view the required permissions for the role that you are creating.

    Repeat this step to search and select the remaining permissions.

    Figure. GCP IAM Role - Adding Permissions
  7. Click ADD after you select all the required permissions.
    The added permissions appear in the assigned permissions area.
  8. Click CREATE to complete.

What to do next

Attach the IAM roles to the Service account.
Attaching IAM Roles to the Service Account

This section describes the procedure to attach the two roles (the custom beam-bill-ingestion-role and the inbuilt Billing Account Viewer role) to the service account, for users without access to GCP organizations.

Before you begin

You require specific GCP permissions to attach the IAM roles to the service account. For more information, see the GCP Permissions Required to Create and Attach IAM Roles section in Users without access to GCP Organizations.

About this task

To attach the IAM roles to the service account, do the following:

Procedure

  1. To attach beam-bill-ingestion-role to the service account, do the following:
    Note: This role is attached to the billing project in which the BigQuery Table is provisioned.
    1. In the GCP Console, go to the IAM page.
    2. In the drop-down menu at the top of the page, select the project where you have provisioned the BigQuery Table.
    3. Click ADD at the top of the page.
      Figure. Billing Project - Adding Members
    4. In the New members box, enter your service account name.
      You can also locate the service account name in the Beam GCP account configuration console.
    5. In the Select a role list, search and select beam-bill-ingestion-role .
    6. Click SAVE to complete.
  2. To attach Billing Account Viewer role to the service account, do the following:
    Note: This role has to be attached at the billing account level.
    1. In the GCP Console, go to the Billing page.
      A list of billing accounts appears.
    2. Click on your billing account.
    3. In the left pane, navigate to Account management .
    4. Click SHOW INFO PANEL at the top-right corner of the page.
    5. Click the ADD PRINCIPAL button.
      Figure. Billing Account - Adding Principals
    6. In the New Principals box, enter your service account name.
      You can also locate the service account name in the Beam GCP account configuration console.
      Figure. Billing Account - Attaching Role
    7. From the Role drop-down list, search and select Billing Account Viewer.
    8. Click SAVE to complete.
    Note: Beam cannot read the names associated with the projects, so it shows a soft warning on the GCP accounts configuration page.

Fetching the Billing Account Details

This section describes the procedure to fetch the billing account details ( Billing Account ID , Project ID , and Dataset ID ) that you need to add in the Beam console to complete the onboarding process.

About this task

To fetch the billing account details, do the following:

Procedure

  1. To fetch the Billing Account ID , do the following:
    1. In the GCP Console, go to the Billing page.
      A list of billing accounts appears.
    2. Click on the Billing account that you are onboarding in Beam.
    3. In the left pane, click Account management .

      You can view the Billing account ID at the top of the page.

      Figure. GCP Console - Billing Account ID
  2. To fetch the Project ID and Dataset ID , do the following:
    1. In the Billing Account page, in the left pane, click Billing export .
    2. In the Billing export page, enable Standard usage cost and then note the Project name and Dataset name values.
      Note: Beam does not support the Detailed usage cost table.
      Figure. GCP Console - Project and Dataset IDs

GCP Billing Accounts - Management

Refer to this section to understand the actions you can take on the GCP billing accounts in Beam.

You can do the following:
  • View the list of projects linked to the billing account in the Projects tab.
  • View the billing project (project where your cloud billing data gets exported) in the Billing Sources tab.
  • Add a billing source: If you change the billing project in the GCP Console, you must add the new billing project in the Billing Sources tab.
  • Delete a billing account: Beam provides an option to delete a billing account. The billing data and related analytics information are removed from Beam.

Click the Hamburger icon in the top-left corner to display the main menu, click Configure > GCP Accounts , and then click Manage Account against the Billing account that you want to manage.

Figure. Beam Console - Managing Billing Accounts
Adding a Billing Source

When you change the source of your BigQuery export in the Billing Account > Billing export (GCP Console) according to your business requirements, you must add the new billing source in the Beam console.

About this task

To add a billing source in the Beam console, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > GCP Accounts .
  2. Click Manage Account against the Billing account in which you want to add a billing source.
  3. Click the Billing Sources tab.
    Figure. Billing Sources
  4. In the top-right corner of the page, click the Actions drop-down list. Then, click Add Billing Source .

    The Add Billing Source pop-up appears.

    Figure. Add Billing Source
  5. Enter the Project ID and Dataset ID to start fetching the billing data for the new billing source.

    For more information on how to get the Project ID and Dataset ID, see Fetching the Billing Account Details .

  6. When all the field entries are correct, click Validate and Save to complete adding the billing source.
    Note:
    • If you change the project used to export your billing data, Beam keeps the billing data already fetched from the previous billing projects and switches to the new billing project that you added.
    • You cannot delete the previous billing projects. Beam needs the historical billing data to provide cost insights accurately.
    • Only one project can be active at a given time. In the Billing Sources tab, in the Last Data Point column, you can view the current billing project (project with the latest date) along with the previous billing projects.
Deleting a Billing Account

Refer to this section to delete a Billing account from Beam.

About this task

To delete a Billing account in Beam, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > GCP Accounts .
  2. Click Manage Account against the Billing account that you want to delete.
  3. In the top-right corner of the page, click the Actions drop-down list. Then, click Delete Billing Account .

    The Deleting GCP Billing Account window appears.

  4. Enter the Billing Account ID and click Delete .
    Warning: You cannot undo the delete action. The Billing account and all the associated projects are deleted for all users in Beam. The historical spend data and historical reports for the Billing account are also erased.

Adding AWS Account

This section describes the procedure to add your AWS payer account in Beam.

About this task

Adding an AWS account in Beam involves configuring the account, reviewing the configuration, and executing the CloudFormation Template.

To add an AWS payer account, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > AWS Accounts .
  2. Click Add Payer Account .
  3. In the Account Configuration page, perform the following steps.
    1. In the AWS Account Name box, enter a name for your AWS account.
    2. In the AWS Account ID box, enter your AWS account ID.
      Click AWS Management Console (Support Center) to view your AWS Account ID.
    3. If you already have a CUR, do the following.

      1. In Report Name , enter the name for the CUR.

      2. In S3 Bucket Name , enter the name of your CUR bucket.

      3. In Report Prefix , enter the prefix of the CUR in AWS.

      You can click the Where can I find this? hyperlink to go to the AWS Cost and Usage Reports page in the AWS console. For detailed instructions on getting the report name, S3 bucket name, and report prefix of your existing CUR, see Getting Cost and Usage Report (CUR) Details (AWS Console).

      To get the report prefix path for your CUR, see Getting Report Path Prefix (AWS Console).

      If you do not have a CUR, create a CUR and enter the information in the Report Name , S3 Bucket Name , and Report Prefix fields. To create a CUR, see Creating Cost and Usage Report (AWS Console).
      Figure. AWS Payer Account Onboarding - Account Configuration

    4. Select the access type you want to grant to Beam.
      The available options are as follows.
      • Spend Analysis and Optimize Recommendations : This option is selected by default. You cannot clear this option. Beam requires read permissions for spend analysis and optimize recommendations.
      • Click to Fix and Playbooks : Selecting this access type allows you to act on the savings insights, optimize costs from the Beam console, create and invoke the Lambda function for Playbooks, and perform Eliminate actions.

      For more information, see Required Permissions.

      Optionally, select Enable one-click policy updates to support upcoming features. This option allows Beam to prepare the permission policy in case additional permissions are required for a feature. After you select this option, you must update the policy for the account from the AWS Accounts page.
      Note: Beam adds audits to the savings policy frequently. You can take advantage of the new audits by enabling the one-click policy updates option for AWS.
    5. Click Next to go to the Review Configuration page.
  4. In the Review Configuration page, perform the following steps.
    1. Review the information you entered in the Account Configuration page.
      Note: If you want to see the cost allocation tags in Beam, activate the cost allocation tags in AWS. To activate the cost allocation tags, see Activating the AWS-Generated Cost Allocation Tags.
    2. Click Execute CloudFormation Template .
      The following success message is displayed.

      Successfully fetched the CloudFormation Template.

    3. If your organization's compliance requirements disallow directly executing the CloudFormation Template in the AWS console, click the Download icon to download the template and have it reviewed and approved internally.
      Upload the approved template when performing the steps to execute the template in the AWS console. For detailed instructions, see Executing Beam CloudFormation Template (AWS Console).
    4. Click the hyperlink Click here to execute CloudFormation Template if you have the permission to directly execute the CloudFormation Template in the AWS console. This redirects you to the AWS console.
      For detailed instructions, see Executing Beam CloudFormation Template (AWS Console).
    5. Click Verify and Save .
      Note: It takes 24 to 48 hours for the data to appear in Beam.
      Your AWS payer account is successfully configured in Beam.

What to do next

Once you configure your AWS payer account, Beam detects all the associated linked accounts. It is recommended to generate and execute the CFT for each linked account to better optimize your cloud spend. To configure linked accounts in Beam, see Configuring Linked Accounts.

Configuring Linked Accounts

Beam detects the AWS linked accounts after you add the payer account.

About this task

To view the list of AWS linked accounts, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > AWS Accounts . You can view the payer accounts added in Beam. Click Manage to view the list of linked accounts detected for a payer account.

To configure the linked accounts in Beam, do the following.

Procedure

  1. Click Configure against the linked account you want to configure.
  2. In the Account Configuration page, perform the following steps.
    1. Select the access type you want to grant to Beam.
      The available options are as follows.
      • Spend Analysis and Optimize Recommendations : This option is selected by default. You cannot clear this option. Beam requires read permissions for spend analysis and optimize recommendations.
      • Click to Fix and Playbooks : Selecting this access type allows you to act on savings insights, optimize costs from the Beam console, create and invoke Lambda functions for Playbooks, and perform Eliminate actions.

      For more information, see Required Permissions.

      Optionally, select Enable one-click policy updates to support upcoming features . This option allows Beam to prepare the permission policy in case additional permissions are required for a feature. After you select this option, you must update the policy for the account from the AWS Accounts page.
    2. Click Next to go to the Review Configuration page.
  3. In the Review Configuration page, perform the following steps.
    1. Review the information you entered in the Account Configuration page.
    2. Click Execute CloudFormation Template .
      The following success message is displayed.

      Successfully fetched the CloudFormation Template.

    3. If your organization's compliance requirements disallow directly executing the CloudFormation Template in the AWS console, click the Download icon to download the template and have it reviewed and approved internally.
      Upload the approved template when performing the steps to execute the template in the AWS console. For detailed instructions, see Executing Beam CloudFormation Template (AWS Console).
      Note: Make sure that you are logged on to the same linked account for which you created the CloudFormation template.
    4. Click the hyperlink Click here to execute CloudFormation Template if you have the permission to directly execute the CloudFormation Template in the AWS console. This redirects you to the AWS console.
      For detailed instructions, see Executing Beam CloudFormation Template (AWS Console).
      Note: Make sure that you are logged on to the same linked account for which you created the CloudFormation template.
    5. Click Verify and Save .
    Beam starts providing cost-saving recommendations for the linked account you just configured. If you have just configured your payer account, it takes 24 to 48 hours for the cost data to be displayed in Beam.
    Repeat these steps for all the linked accounts for which you want to see cost recommendations. The best practice is to configure all the linked accounts to better optimize your cloud spend.
Removing Linked Accounts

This section describes the steps to unconfigure linked accounts.

About this task

To view the list of AWS linked accounts, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > AWS Accounts . You can view the payer accounts added in Beam. Click Manage to view the list of linked accounts detected for a payer account.

To unconfigure the linked accounts from the payer account, do the following.

Procedure

  1. Click Edit against the linked account you want to unconfigure.
    The Account Configuration page appears.
  2. In the top-right corner of the page, click Remove ARN .
    By removing this IAM Role, you will limit the permissions that Beam has for this linked account.

    A confirmation message appears. Click Delete to complete the workflow.

    Figure. AWS Linked Account - Unconfigure

Verifying AWS Account Configuration

After your AWS account configuration in Beam is complete, you can verify if the configuration is correct.

About this task

To test if your AWS account is accessible, do the following.

Procedure

  1. Log on to your AWS Management Console.
  2. In the drop-down menu, click My Billing Dashboard .
    Figure. AWS Management Console - My Billing Dashboard
  3. In the sidebar menu, click Cost & Usage Reports .
    Figure. AWS Management Console - AWS Cost and Usage Reports
    If a CUR exists on this page, or you are able to create one, your AWS access is correct.
    Note: Even if a CUR already exists on the AWS Cost and Usage Reports page, you can still create an additional CUR.

Creating Cost and Usage Report (AWS Console)

To create an AWS Cost and Usage Report (CUR), go to the Reports page of the AWS Billing and Cost Management Console .

About this task

The AWS Cost and Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. Beam requires the cost and usage details of the AWS resources to provide insights and cost-saving recommendations.

To create an AWS Cost and Usage Report, do the following.

Procedure

  1. Log on to your AWS Management Console.
  2. In the drop-down menu, click My Billing Dashboard .
    Figure. AWS Management Console - CUR Report
  3. In the sidebar menu, click Cost & Usage Reports . Then click Create report .
    Figure. AWS Management Console - Create CUR Report
    The Report content page appears.
  4. In the Report name box, enter the name of the CUR and select the Include resource IDs check box. Then click Next .
    Figure. AWS Management Console - Report content page
    The Delivery options page appears.
  5. Click Configure to configure an S3 bucket.
    Note: Make sure to select time granularity as Hourly and compression type as GZIP .
    Figure. AWS Management Console - CUR (Configuring S3 bucket)
    The Configure S3 Bucket page appears. You can select an existing S3 bucket or create an S3 bucket.
  6. In the Configure S3 Bucket page, do one of the following.
    • In the Select existing bucket area, in the S3 bucket name list, select an existing S3 bucket. Then click Next .

      Or

    • In the Create a bucket area, enter an S3 bucket name and select the region where you want to create the S3 bucket. Then click Next .
    Figure. AWS Management Console - Configure S3 Bucket page
    The Verify policy page appears.
  7. Select the I have confirmed that policy is correct check box. Then click Save .
    Figure. AWS Management Console - Verify Policy page
    Your S3 bucket is configured.
  8. In the Report path prefix box, enter a report path prefix.
    This step is optional.
  9. Ensure that you set Time granularity to Hourly , Report versioning to Create new report version , and Compression Type to GZIP . Then click Next .
  10. Review the details and click Review and Complete to create the CUR.
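The console settings above map onto a CUR report definition that can also be created programmatically. The following is a minimal sketch, assuming boto3's `cur` client; the report name, bucket, and prefix shown are placeholders, not values from this guide.

```python
def build_cur_definition(report_name, bucket, prefix, region="us-east-1"):
    """Build a report definition matching the settings Beam requires:
    hourly granularity, GZIP compression, resource IDs included, and
    a new report version created on each delivery."""
    return {
        "ReportName": report_name,
        "TimeUnit": "HOURLY",                       # time granularity: Hourly
        "Format": "textORcsv",
        "Compression": "GZIP",                      # compression type: GZIP
        "AdditionalSchemaElements": ["RESOURCES"],  # "Include resource IDs"
        "S3Bucket": bucket,
        "S3Prefix": prefix,
        "S3Region": region,
        "ReportVersioning": "CREATE_NEW_REPORT",    # "Create new report version"
    }

# Placeholder names, for illustration only.
definition = build_cur_definition("beam-cur", "my-cur-bucket", "beam/")

# To create the report, this could be passed to the CUR API, e.g.:
#   import boto3
#   boto3.client("cur", region_name="us-east-1").put_report_definition(
#       ReportDefinition=definition)
print(definition["TimeUnit"], definition["Compression"])
```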

Getting Cost and Usage Report (CUR) Details (AWS Console)

About this task

The AWS Cost and Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. Beam requires the cost and usage details of the AWS resources to provide insights and cost-saving recommendations. You can get the CUR details using the AWS Billing Dashboard.

To get the details of your existing CUR, do the following.

Procedure

  1. Log on to your AWS Management Console. Ensure that you have sufficient AWS Billing Dashboard permissions and S3 access.
  2. In the drop-down menu, click My Billing Dashboard .
    Figure. AWS Management Console - CUR Details
  3. In the sidebar menu, click Cost & Usage Reports .
  4. In the AWS Cost and Usage Reports table, find your CUR report name and S3 bucket.
  5. Copy the CUR report name and S3 bucket name you want to configure in Beam from the table.
    Figure. AWS Cost and Usage Reports table - Copy Required Details
  6. To extract the report path prefix, select the required CUR report name check box and click Edit .
    Figure. AWS Cost and Usage Reports table - Open Report
    The Report content page appears.
  7. Ensure that the time granularity selected is Hourly and Include resource IDs is checked. If not, create a new CUR with these settings.
    Figure. Report content page - Check the Report content values
  8. Scroll down the Report content page and locate Report path prefix . Copy the text and paste it into the Beam console.
    Note: Make sure that the compression type is GZIP .
    Figure. Report content page - Paste in your prefix
  9. If the Report path prefix field is empty, then leave the Report Prefix field in the Beam console empty as well.
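Steps 4-8 above amount to finding the one CUR whose settings Beam can consume and copying out its name, bucket, and prefix. A sketch of the same check in code follows; the input shape mirrors what boto3's `cur` client returns from `describe_report_definitions()` (an assumption to verify against your SDK version), and the sample data is illustrative.

```python
def beam_compatible_reports(report_definitions):
    """Return (report name, S3 bucket, report path prefix) for each CUR
    with hourly granularity and resource IDs included."""
    return [
        (d["ReportName"], d["S3Bucket"], d.get("S3Prefix", ""))
        for d in report_definitions
        if d.get("TimeUnit") == "HOURLY"
        and "RESOURCES" in d.get("AdditionalSchemaElements", [])
    ]

# Illustrative sample; in practice this list would come from
#   boto3.client("cur", region_name="us-east-1").describe_report_definitions()
sample = [
    {"ReportName": "beam-cur", "TimeUnit": "HOURLY",
     "AdditionalSchemaElements": ["RESOURCES"],
     "S3Bucket": "my-cur-bucket", "S3Prefix": "beam/"},
    {"ReportName": "daily-cur", "TimeUnit": "DAILY",
     "AdditionalSchemaElements": [], "S3Bucket": "other-bucket"},
]
print(beam_compatible_reports(sample))  # only "beam-cur" qualifies
```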

Executing Beam CloudFormation Template (AWS Console)

About this task

A CloudFormation template allows you to provision and manage your AWS resources effortlessly. You can create templates for the service or application architectures you want and have AWS CloudFormation use those templates for quick, reliable provisioning of those services or applications.

To execute the CloudFormation template, do the following.

Procedure

  1. Log on to your AWS account and go to the Create stack page.
  2. In the Prepare template area, click Template is ready .
  3. In the Specify template area, select the relevant option based on the template's location.
    • Click Amazon S3 URL and enter the URL if you opted to directly execute the CloudFormation template. Then, click Next to proceed.

      Or,

    • Click Upload a template file if you opted to download the template from Beam for internal review, or if you chose to configure the payer account using the API. Then, click Choose File to upload the template file that you downloaded. Click Next to proceed.
    Figure. AWS Management Console - Select Template
  4. In the Specify stack details page, in the Stack name box, enter a stack name if you want to. Then click Next .
    Note: Do NOT modify any other information on this page.
    Figure. AWS Management Console - Specify Details
  5. In the Configure stack options page, scroll down to the bottom of the page and click Next .
    Figure. AWS Management Console - Configure stack options
  6. In the Review page, scroll down to the bottom of the page and select I acknowledge that AWS CloudFormation might create IAM resources with custom names check box. Then click Create stack .
    Figure. AWS Management Console - Review
  7. In the Events tab, wait for the message CREATE_COMPLETE .
    Figure. AWS Management Console - Events tab
  8. Click Verify and Save on the Beam console.
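The same stack creation can be expressed as a CloudFormation API call. A minimal sketch, assuming boto3; the stack name and template URL are placeholders. Note that the IAM acknowledgment check box in step 6 corresponds to passing the CAPABILITY_NAMED_IAM capability.

```python
def build_create_stack_request(stack_name, template_url):
    """Build the arguments for a create_stack call equivalent to the
    console steps above."""
    return {
        "StackName": stack_name,
        # Use TemplateBody=... instead if you uploaded a reviewed template file.
        "TemplateURL": template_url,
        # Equivalent of "I acknowledge that AWS CloudFormation might create
        # IAM resources with custom names".
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
    }

# Placeholder values, for illustration only.
request = build_create_stack_request(
    "beam-onboarding", "https://s3.amazonaws.com/example/template.json")

# The call itself, and the equivalent of waiting for CREATE_COMPLETE:
#   import boto3
#   cfn = boto3.client("cloudformation")
#   cfn.create_stack(**request)
#   cfn.get_waiter("stack_create_complete").wait(StackName=request["StackName"])
print(request["Capabilities"])
```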

Activating the AWS-Generated Cost Allocation Tags

The cost allocation tag is a label that AWS assigns to an AWS resource. You can use the cost allocation tags to track your AWS costs on a detailed level. After you activate the cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report. This makes it easier for you to categorize and track your AWS costs.

About this task

To activate the AWS-generated tags, do the following.

Procedure

  1. Log on to your AWS Management Console and go to the Billing and Cost Management Dashboard (https://console.aws.amazon.com/billing/home#/).
  2. In the sidebar menu, click Cost allocation tags .
  3. In the AWS-Generated Cost Allocation Tags area, click Activate .
    Note: It can take up to 24 hours for the tags to activate.
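Tag activation can also be driven through the Cost Explorer API rather than the console. This is a hedged sketch: it assumes boto3's `ce` client and its `update_cost_allocation_tags_status` operation are available in your SDK version, and uses `aws:createdBy` (the AWS-generated tag key) as an example.

```python
def activation_request(tag_keys):
    """Build the status-update payload that marks each tag Active."""
    return {
        "CostAllocationTagsStatus": [
            {"TagKey": key, "Status": "Active"} for key in tag_keys
        ]
    }

# "aws:createdBy" is the AWS-generated cost allocation tag key.
payload = activation_request(["aws:createdBy"])

# The call itself would be:
#   import boto3
#   boto3.client("ce").update_cost_allocation_tags_status(**payload)
print(payload)
```

As in the console workflow, activation can take up to 24 hours to take effect.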

Getting Report Path Prefix (AWS Console)

While performing the AWS account onboarding in Beam, you need to enter the report path prefix that is appended to the name of your CUR. This section describes the steps to get the report path prefix for your CUR from the AWS console.

About this task

To get the report path prefix for your CUR, perform the following steps in the AWS console.

Procedure

  1. Log on to your AWS Management Console. Ensure that you have sufficient AWS Billing Dashboard permissions and S3 access.
  2. In the drop-down menu, click My Billing Dashboard .
    Figure. AWS Management Console - CUR Details
  3. In the sidebar menu, click Cost & Usage Reports .
    The AWS Cost and Usage Reports page appears. You can view a table that contains all the reports created in the AWS account.
  4. To get the report path prefix, select the required CUR report name check box and click Edit .
    Figure. AWS Cost and Usage Reports table - Open Report
    The Report content page appears.
  5. Scroll down the Report content page and locate Report path prefix . Copy the text and paste it into the Beam console.
    Note: If the Report path prefix field is empty, then leave the Report Prefix field in the Beam console empty as well.
    Figure. Report content page - Paste in your prefix

Required Permissions

This section describes the read and read & write policies required by Beam.

Payer Account

  • Spend Analysis and Optimize Recommendations (read permissions)
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "autoscaling:Describe*",
            "cloudfront:Get*",
            "cloudfront:List*",
            "opsworks:Describe*",
            "opsworks:Get*",
            "cloudtrail:DescribeTrails",
            "cloudtrail:GetTrailStatus",
            "cloudwatch:Describe*",
            "cloudwatch:Get*",
            "cloudwatch:List*",
            "dynamodb:DescribeTable",
            "dynamodb:ListTables",
            "ec2:Describe*",
            "ec2:GetReservedInstancesExchangeQuote",
            "elasticache:Describe*",
            "elasticbeanstalk:Check*",
            "elasticbeanstalk:Describe*",
            "elasticbeanstalk:List*",
            "elasticbeanstalk:RequestEnvironmentInfo",
            "elasticbeanstalk:RetrieveEnvironmentInfo",
            "elasticloadbalancing:Describe*",
            "elasticmapreduce:Describe*",
            "elasticmapreduce:List*",
            "redshift:Describe*",
            "elasticache:Describe*",
            "iam:List*",
            "iam:CreatePolicy",
            "iam:AttachRolePolicy",
            "iam:CreatePolicyVersion",
            "iam:DeletePolicyVersion",
            "iam:Get*",
            "workspaces:Describe*",
            "route53:Get*",
            "route53:List*",
            "rds:Describe*",
            "rds:ListTagsForResource",
            "s3:List*",
            "sqs:GetQueueAttributes",
            "sqs:ListQueues"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": [
            "arn:aws:s3:::/*",
            "arn:aws:s3:::"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": [
            "arn:aws:s3:::bucket-xyz/*",
            "arn:aws:s3:::bucket-xyz"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "organizations:ListAccountsForParent",
            "organizations:ListAccounts"
          ],
          "Resource": "*"
        }
      ]
    }
  • Click to Fix (read & write permissions)
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Stmt1415015011805",
          "Action": [
            "ec2:Describe*",
            "ec2:CreateSnapshot",
            "ec2:DeleteSnapshot",
            "ec2:DeleteVolume",
            "ec2:CreateTags",
            "ec2:ReleaseAddress",
            "ec2:RegisterImage",
            "ec2:CreateImage",
            "ec2:DeregisterImage",
            "ec2:DisassociateAddress",
            "ec2:TerminateInstances",
            "ec2:CreateImage",
            "ec2:ModifySnapshotAttribute",
            "ec2:ModifyInstanceAttribute",
            "ec2:GetReservedInstancesExchangeQuote"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": [
            "arn:aws:s3:::/*",
            "arn:aws:s3:::"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": [
            "arn:aws:s3:::bucket-xyz/*",
            "arn:aws:s3:::bucket-xyz"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "organizations:ListAccountsForParent",
            "organizations:ListAccounts"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015042420",
          "Action": [
            "elasticloadbalancing:DeleteLoadBalancer",
            "elasticloadbalancing:Describe*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015086161",
          "Action": [
            "cloudtrail:DescribeTrails",
            "cloudtrail:GetTrailStatus"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015300839",
          "Action": [
            "rds:Describe*",
            "rds:List*",
            "rds:CreateDBSnapshot",
            "rds:DeleteDBSnapshot",
            "rds:DeleteDBInstance"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015300851",
          "Action": [
            "iam:CreatePolicy",
            "iam:AttachRolePolicy",
            "iam:CreatePolicyVersion",
            "iam:DeletePolicyVersion",
            "iam:List*",
            "iam:Get*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Action": [
            "cloudtrail:ListTags",
            "cloudtrail:GetEventSelectors",
            "events:*",
            "sns:Set*",
            "sns:Get*",
            "sns:GetTopicAttributes",
            "sns:DeleteTopic",
            "sns:Create*",
            "sns:Unsubscribe",
            "sns:ConfirmSubscription",
            "sns:SetTopicAttributes",
            "sns:Subscribe",
            "sns:List*"
          ],
          "Resource": "*",
          "Effect": "Allow",
          "Sid": "Stmt1534849103503"
        },
        {
          "Sid": "Stmt1415015300878",
          "Action": [
            "autoscaling:Describe*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611947000",
          "Effect": "Allow",
          "Action": [
            "redshift:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948000",
          "Effect": "Allow",
          "Action": [
            "cloudwatch:GetMetricStatistics",
            "dynamodb:DeleteTable"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948010",
          "Effect": "Allow",
          "Action": [
            "opsworks:Describe*",
            "opsworks:Get*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948020",
          "Effect": "Allow",
          "Action": [
            "route53:Get*",
            "route53:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948021",
          "Effect": "Allow",
          "Action": [
            "elasticache:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948022",
          "Effect": "Allow",
          "Action": [
            "elasticmapreduce:Describe*",
            "elasticmapreduce:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948023",
          "Effect": "Allow",
          "Action": [
            "dynamodb:Describe*",
            "dynamodb:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948025",
          "Effect": "Allow",
          "Action": [
            "elasticmapreduce:Describe*",
            "elasticmapreduce:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948024",
          "Effect": "Allow",
          "Action": [
            "elasticbeanstalk:Check*",
            "elasticbeanstalk:Describe*",
            "elasticbeanstalk:List*",
            "elasticbeanstalk:RequestEnvironmentInfo",
            "elasticbeanstalk:RetrieveEnvironmentInfo"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948028",
          "Effect": "Allow",
          "Action": [
            "cloudfront:Get*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948029",
          "Effect": "Allow",
          "Action": [
            "elasticache:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948030",
          "Effect": "Allow",
          "Action": [
            "config:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948035",
          "Effect": "Allow",
          "Action": [
            "cloudwatch:Describe*",
            "cloudwatch:Get*",
            "cloudwatch:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948036",
          "Effect": "Allow",
          "Action": [
            "cloudfront:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948076",
          "Effect": "Allow",
          "Action": [
            "kinesis:ListStreams"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948086",
          "Effect": "Allow",
          "Action": [
            "glacier:ListVaults"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948089",
          "Effect": "Allow",
          "Action": [
            "sqs:ListQueues",
            "sqs:GetQueueAttributes"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948088",
          "Effect": "Allow",
          "Action": [
            "workspaces:Describe*",
            "elasticache:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948092",
          "Effect": "Allow",
          "Action": [
            "lambda:CreateFunction",
            "lambda:TagResource",
            "lambda:InvokeFunction",
            "lambda:GetFunction",
            "lambda:ListAliases",
            "lambda:UpdateFunctionConfiguration",
            "lambda:GetFunctionConfiguration",
            "lambda:UntagResource",
            "lambda:UpdateAlias",
            "lambda:UpdateFunctionCode",
            "lambda:GetFunctionConcurrency",
            "lambda:ListTags",
            "lambda:DeleteAlias",
            "lambda:DeleteFunction",
            "lambda:GetAlias",
            "lambda:CreateAlias"
          ],
          "Resource": "arn:aws:lambda:*:*:function:*"
        },
        {
          "Sid": "Stmt1445611948093",
          "Effect": "Allow",
          "Action": [
            "lambda:ListFunctions",
            "iam:PassRole"
          ],
          "Resource": "*"
        }
      ]
    }
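When a policy like the ones above goes through an internal review before approval, a reviewer typically wants to see which actions go beyond read-only access. The following sketch flags them; the read-only verb prefixes are a heuristic chosen here, not an official AWS classification, and the embedded policy is a small illustrative excerpt.

```python
import json

# Heuristic: verbs with these prefixes are treated as read-only.
READ_PREFIXES = ("Describe", "Get", "List", "Check", "Retrieve", "Request")

def write_actions(policy):
    """Return every action whose verb does not start with a read-only prefix.
    Assumes each statement's Action is a list of "service:Verb" strings."""
    flagged = []
    for stmt in policy["Statement"]:
        for action in stmt.get("Action", []):
            service, _, verb = action.partition(":")
            if not verb.startswith(READ_PREFIXES):
                flagged.append(action)
    return flagged

# Illustrative excerpt mixing read and write actions.
policy = json.loads("""
{"Version": "2012-10-17",
 "Statement": [{"Effect": "Allow",
                "Action": ["ec2:Describe*", "ec2:TerminateInstances",
                           "iam:CreatePolicy", "s3:List*"],
                "Resource": "*"}]}
""")
print(write_actions(policy))  # the two non-read actions
```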

Linked Account

  • Optimize Recommendations (read permissions)
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "autoscaling:Describe*",
            "cloudfront:Get*",
            "cloudfront:List*",
            "opsworks:Describe*",
            "opsworks:Get*",
            "cloudtrail:DescribeTrails",
            "cloudtrail:GetTrailStatus",
            "cloudwatch:Describe*",
            "cloudwatch:Get*",
            "cloudwatch:List*",
            "dynamodb:DescribeTable",
            "dynamodb:ListTables",
            "ec2:Describe*",
            "ec2:GetReservedInstancesExchangeQuote",
            "elasticache:Describe*",
            "elasticbeanstalk:Check*",
            "elasticbeanstalk:Describe*",
            "elasticbeanstalk:List*",
            "elasticbeanstalk:RequestEnvironmentInfo",
            "elasticbeanstalk:RetrieveEnvironmentInfo",
            "elasticloadbalancing:Describe*",
            "elasticmapreduce:Describe*",
            "elasticmapreduce:List*",
            "redshift:Describe*",
            "elasticache:Describe*",
            "iam:List*",
            "iam:Get*",
            "workspaces:Describe*",
            "route53:Get*",
            "route53:List*",
            "rds:Describe*",
            "rds:ListTagsForResource",
            "s3:List*",
            "sqs:GetQueueAttributes",
            "sqs:ListQueues"
          ],
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }
  • Click to Fix (read & write permissions)
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Stmt1415015011805",
          "Action": [
            "ec2:Describe*",
            "ec2:CreateSnapshot",
            "ec2:DeleteSnapshot",
            "ec2:DeleteVolume",
            "ec2:CreateTags",
            "ec2:ReleaseAddress",
            "ec2:RegisterImage",
            "ec2:CreateImage",
            "ec2:DeregisterImage",
            "ec2:DisassociateAddress",
            "ec2:TerminateInstances",
            "ec2:CreateImage",
            "ec2:ModifySnapshotAttribute",
            "ec2:ModifyInstanceAttribute",
            "ec2:GetReservedInstancesExchangeQuote"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015042420",
          "Action": [
            "elasticloadbalancing:DeleteLoadBalancer",
            "elasticloadbalancing:Describe*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015086161",
          "Action": [
            "cloudtrail:DescribeTrails",
            "cloudtrail:GetTrailStatus"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015300839",
          "Action": [
            "rds:Describe*",
            "rds:List*",
            "rds:CreateDBSnapshot",
            "rds:DeleteDBSnapshot",
            "rds:DeleteDBInstance"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015300851",
          "Action": [
            "iam:List*",
            "iam:Get*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1415015300878",
          "Action": [
            "autoscaling:Describe*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611947000",
          "Effect": "Allow",
          "Action": [
            "redshift:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948000",
          "Effect": "Allow",
          "Action": [
            "cloudwatch:GetMetricStatistics",
            "dynamodb:DeleteTable"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948010",
          "Effect": "Allow",
          "Action": [
            "opsworks:Describe*",
            "opsworks:Get*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948020",
          "Effect": "Allow",
          "Action": [
            "route53:Get*",
            "route53:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948021",
          "Effect": "Allow",
          "Action": [
            "elasticache:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948022",
          "Effect": "Allow",
          "Action": [
            "elasticmapreduce:Describe*",
            "elasticmapreduce:List*"
          ],
          "Resource": "*"
        },
        {
          "Action": [
            "cloudtrail:ListTags",
            "cloudtrail:GetEventSelectors",
            "events:*",
            "sns:Set*",
            "sns:Get*",
            "sns:GetTopicAttributes",
            "sns:DeleteTopic",
            "sns:Create*",
            "sns:Unsubscribe",
            "sns:ConfirmSubscription",
            "sns:SetTopicAttributes",
            "sns:Subscribe",
            "sns:List*"
          ],
          "Resource": "*",
          "Effect": "Allow",
          "Sid": "Stmt1534849103503"
        },
        {
          "Sid": "Stmt1445611948023",
          "Effect": "Allow",
          "Action": [
            "dynamodb:Describe*",
            "dynamodb:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948025",
          "Effect": "Allow",
          "Action": [
            "elasticmapreduce:Describe*",
            "elasticmapreduce:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948024",
          "Effect": "Allow",
          "Action": [
            "elasticbeanstalk:Check*",
            "elasticbeanstalk:Describe*",
            "elasticbeanstalk:List*",
            "elasticbeanstalk:RequestEnvironmentInfo",
            "elasticbeanstalk:RetrieveEnvironmentInfo"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948028",
          "Effect": "Allow",
          "Action": [
            "cloudfront:Get*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948029",
          "Effect": "Allow",
          "Action": [
            "elasticache:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948030",
          "Effect": "Allow",
          "Action": [
            "config:Describe*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948035",
          "Effect": "Allow",
          "Action": [
            "cloudwatch:Describe*",
            "cloudwatch:Get*",
            "cloudwatch:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948036",
          "Effect": "Allow",
          "Action": [
            "cloudfront:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948076",
          "Effect": "Allow",
          "Action": [
            "kinesis:ListStreams"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948086",
          "Effect": "Allow",
          "Action": [
            "glacier:ListVaults"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948089",
          "Effect": "Allow",
          "Action": [
            "sqs:ListQueues",
            "sqs:GetQueueAttributes"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948088",
          "Effect": "Allow",
          "Action": [
            "workspaces:Describe*",
            "elasticache:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948091",
          "Effect": "Allow",
          "Action": [
            "s3:List*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Stmt1445611948092",
          "Effect": "Allow",
          "Action": [
            "lambda:CreateFunction",
            "lambda:TagResource",
            "lambda:InvokeFunction",
            "lambda:GetFunction",
            "lambda:ListAliases",
            "lambda:UpdateFunctionConfiguration",
            "lambda:GetFunctionConfiguration",
            "lambda:UntagResource",
            "lambda:UpdateAlias",
            "lambda:UpdateFunctionCode",
            "lambda:GetFunctionConcurrency",
            "lambda:ListTags",
            "lambda:DeleteAlias",
            "lambda:DeleteFunction",
            "lambda:GetAlias",
            "lambda:CreateAlias"
          ],
          "Resource": "arn:aws:lambda:*:*:function:*"
        },
        {
          "Sid": "Stmt1445611948093",
          "Effect": "Allow",
          "Action": [
            "lambda:ListFunctions",
            "iam:PassRole"
          ],
          "Resource": "*"
        }
      ]
    }
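
A long policy document like the one above is easy to break when hand-editing (a dropped field or a duplicated `Sid`). As a quick sanity check before uploading it to IAM, you can validate the statement list programmatically. The sketch below uses a small hypothetical subset of the policy, not the full document:

```python
import json

# Hypothetical two-statement subset of the policy above, for illustration only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "Stmt1445611948022", "Effect": "Allow",
         "Action": ["elasticmapreduce:Describe*", "elasticmapreduce:List*"],
         "Resource": "*"},
        {"Sid": "Stmt1445611948091", "Effect": "Allow",
         "Action": ["s3:List*"], "Resource": "*"},
    ],
}

def check_policy(doc):
    """Verify each statement has the required fields and that Sids are unique."""
    sids = []
    for stmt in doc["Statement"]:
        for field in ("Effect", "Action", "Resource"):
            assert field in stmt, f"statement missing {field}"
        if "Sid" in stmt:
            sids.append(stmt["Sid"])
    assert len(sids) == len(set(sids)), "duplicate Sid found"
    return len(doc["Statement"])

# Round-trip through json to confirm the document is also valid JSON.
print(check_policy(json.loads(json.dumps(policy))))
```

This catches structural mistakes only; it does not evaluate whether the granted actions are appropriate.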

Azure Onboarding - Overview

The Azure account is a globally unique entity that gives you access to Azure services and your Azure subscriptions. You can create multiple subscriptions in your Azure account to create separation, for example, for billing or management purposes. You can configure your Azure account in Beam to enable cost visibility and optimization capabilities for your Azure cloud resources. Once you configure the Azure account, Beam fetches your Azure cost data and provides insights on cost and optimization. For the detailed procedures for onboarding the following two Azure billing account types, see:

  • Adding Azure Enterprise Agreement Account
  • Adding Azure Microsoft Customer Agreement (MCA) Account

Adding Azure Enterprise Agreement Account

This section describes the procedure to add your first Azure enterprise agreement account in Beam. Beam fetches the billing data of all projects associated with the given billing account and provides visibility into your spend.

Before you begin

Before attempting to add an Azure Enterprise Agreement account in Beam, make sure you satisfy the following requirements.
  • You must have enterprise administrator privileges.
  • You must have Owner access to the individual Azure subscriptions.

About this task

To add your Azure account, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
  2. Click Connect Azure Account .
    Figure. Connect Azure Account

    The Connect Azure Enrollment page appears.
  3. From the types of the Azure contract, choose Enterprise Agreement (EA) and click Next .
  4. Enter Enrollment Information .
    Enter a name for your enrollment, Enrollment Number , and Enrollment Access Key and click Next .
    • To know how to get your enrollment number, click How do I get enrollment number? and follow the instructions. Alternatively, see Capturing Enrollment Number (Azure Portal).
    • To know how to get your enrollment access key, click How do I get enrollment access key? and follow the instructions. Alternatively, see Capturing Enrollment Access Key (Azure Portal).
    Figure. Add Azure Account - Enrollment Information

  5. Click Next .
    The App Configuration page appears.
  6. Enter App Configuration details.

    Beam uses the Azure Active Directory (AD) application to interact with Azure Portal.

    Note: Beam requires the AD configuration to receive your Azure infrastructure metadata. Beam uses this metadata to provide cost-saving recommendations.

    Enter the Domain Name , Tenant / Directory ID , Application ID , and Secret Key . Click How do I get it? to see on-screen instructions on getting the application configuration details from the Azure portal. Alternatively, see Configuring Azure App (Azure Portal).

    Note: Ensure that you enable multi-tenancy for the application in the Azure portal.
    Figure. Add Azure Account - App Configuration

    After you enter the required fields, click Next .
  7. Perform Role Assignment .

    Click Download Script to download the PowerShell script. Click How do I assign role? and follow the on-screen instructions to run the script in the Azure PowerShell . Alternatively, see Assigning Role (PowerShell).

    Ensure that you observe the guidelines related to the PowerShell script created by Beam. For more information, see PowerShell Script Guidelines.

    You can also assign roles manually. See Assigning Role (Without PowerShell).

  8. Select the I confirm I have executed the above powershell script checkbox and click Done .

    If the enrollment is successful, the enrollment and associated tenant and subscriptions get captured in Beam.

    Note:
      1. Application secret key activation may take up to 5 minutes.
      2. It takes a few hours for Beam to show your Azure cost data.

What to do next

Beam identifies all the Azure subscriptions associated with your Azure Enrollment and starts providing cost-saving recommendations for subscriptions for which you have owner access. If your enrollment has multiple tenants, add all those tenants to Beam. For more information, see Adding Azure Tenants.

To create a playbook for automating your regular workflows, you must configure an Azure Function App for your Enrollment account.

Adding Azure Subscriptions

You can create multiple subscriptions in your Azure account to create separation, for example, for billing or management purposes.

Before you begin

You must have Owner permission on the subscriptions you want to configure.

About this task

To add subscriptions to your Azure account, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
  2. Click Manage for the enrollment to which you want to add subscriptions.
  3. In the Actions column, click the plus icon for the tenant under which you want to add the subscription.
  4. In the Add Subscriptions page, click Download Script .
  5. Perform Role Assignment .
    Click Download Script to download the PowerShell script. Click How do I assign role? and follow the on-screen instructions to run the script in the Azure PowerShell . Alternatively, see Assigning Role (PowerShell).
    Ensure that you observe the guidelines related to the PowerShell script created by Beam. For more information, see PowerShell Script Guidelines.
  6. Select the I confirm I have executed the above powershell script checkbox and click Done .

    Any subscription for which you have Owner access and that is not already configured in Beam is added in a few minutes.

Adding Azure Tenants

About this task

A tenant is an organization that owns and manages a specific instance of Azure cloud services. If your enrollment has multiple tenants, you can add all those tenants to Beam.

To add tenants to your Azure account, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
    You can view the enrollment accounts added in Beam.
  2. Click Manage against the enrollment account in which you want to add a tenant.
  3. Click Add Tenant under the enrollment account.
  4. Enter the Tenant / Directory ID and Domain Name .
    Click How do I get tenant information? and follow the on-screen instructions to get the tenant information. Alternatively, see Retrieving Tenant Information (Azure Portal).
    Click Grant Permissions to grant permissions for this tenant. Clicking Grant Permissions redirects you to the Azure portal. On the Azure portal, log in to grant access to the application under the tenant.
    Note: You must have administrator access to the tenant to grant application permissions.
Verifying Azure Account Configuration

After your Azure account configuration in Beam is complete, you can verify if the configuration is correct.

About this task

To test if your Azure account is accessible, go to https://ea.azure.com/.

If you are able to access this page and no error message appears, your Azure access is correct.

Capturing Enrollment Number (Azure Portal)

About this task

To use the billing API, you need your Enrollment Number. It is the enrollment number that gets invoiced for total consumption.

To capture enrollment number, do the following.

Procedure

  1. Log on to your Azure Enterprise portal.
    Note: You must sign in as an enterprise administrator.
  2. Copy the Enrollment Number listed under Enrollment Detail .
    Figure. Azure Enterprise Portal - Enrollment
  3. In the Beam console, in the Enrollment Number box, paste the value you copied.
Capturing Enrollment Access Key (Azure Portal)

About this task

You need the enrollment access key for authentication when using the billing APIs.

To capture the enrollment access key, do the following.

Procedure

  1. In the Azure Enterprise portal , go to Reports > Download Usage > API Access Key .
  2. Click generate if the key is not set or click expand key and copy the value.
    Note: Ensure that the key is active by checking Effective Date . Click regenerate to generate a new key.
    Figure. Azure Enterprise Portal - Reports
  3. In the Beam console, in the Enrollment Access Key box, paste the value you copied.
Configuring Azure App (Azure Portal)

About this task

Beam uses an Azure Active Directory (AD) application to interact with the Azure portal. This section describes how to register the application and capture the Domain Name, Tenant/Directory ID, Application ID, and Secret Key that Beam requires.

To configure Azure app, do the following.

Procedure

  1. Log on to the Azure Portal, and go to Azure Active Directory .

    Azure Cloud - https://portal.azure.com/

    Azure Government Cloud - https://portal.azure.us/

    Figure. Azure Active Directory - Overview
  2. Click Overview .
  3. Copy the domain name (available in the area marked as 3 in the figure) and paste it in Domain Name field in the Beam console.
  4. To capture Tenant/Directory ID, do the following.
    1. In the Azure Active Directory page, click Properties .
      Figure. Azure Active Directory - Properties
    2. Copy the Directory ID and paste it in Tenant/Directory ID field in the Beam console.
  5. To capture the Application ID, do the following.
    1. In the Azure Active Directory page, click App registrations .
      Figure. Azure Active Directory - App registrations
    2. Click New Registration .
    3. In the Create page, in the Name field, enter the application name.
      Figure. App registrations - Create page
    4. In the Supported account types area, click Accounts in any organizational directory (Any Azure AD directory - Multitenant) .
      Note: You must select this option to support multiple tenants.
    5. In the Redirect URI (optional) area, select Web from the drop-down list and enter the URL as https://beam.nutanix.com for redirect URI, and click Create .
      Note: In the case of Azure Government cloud, enter the URL as https://beam.nutanix.us for redirect URI.
    6. Copy the Application ID and paste it in Application ID field in the Beam console.
      Figure. App registrations - Registered app page
  6. To capture the secret key, do the following.
    1. In the Registered app page that you just created, click Certificates & secrets .
    2. In the Client secrets area, click New client secret .
      The client secret is a secret string that the application uses to prove its identity when requesting a token.
      Figure. Certificates & secrets - creating client secret
    3. In the Add a client secret page, in the Description box, enter a description for the client secret.
      Figure. Certificates & secrets - creating client secret
    4. In the Expires area, click to select the expiry you want. Then click Add .
      The client secret value gets generated.
    5. Copy the generated client secret value and paste in the Secret Key field in the Beam console.
      Figure. Certificates & secrets - copying client secret
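
The portal URL and redirect URI used in the steps above differ between the Azure commercial and Government clouds. As a quick reference, the values from this section can be kept in a small lookup table (the `ENDPOINTS` name and function below are illustrative, not part of Beam):

```python
# Endpoints referenced in the steps above, keyed by cloud.
# Values are taken directly from this guide.
ENDPOINTS = {
    "azure": {
        "portal": "https://portal.azure.com/",
        "redirect_uri": "https://beam.nutanix.com",
    },
    "azure-gov": {
        "portal": "https://portal.azure.us/",
        "redirect_uri": "https://beam.nutanix.us",
    },
}

def redirect_uri(cloud: str) -> str:
    """Return the Beam redirect URI to enter during app registration."""
    return ENDPOINTS[cloud]["redirect_uri"]

print(redirect_uri("azure-gov"))  # https://beam.nutanix.us
```

Using the wrong cloud's redirect URI causes the app registration to point at the wrong Beam endpoint, so double-check which cloud you are onboarding.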
Troubleshooting Azure App Configuration Step

When the Azure App configuration credentials provided in the Beam console fail during validation, you can perform the following steps to validate them.

About this task

To troubleshoot the Azure app configuration step failure, do the following.

Procedure

  1. Click the cloudshell button (marked as 1 in the figure) in the top-right section, or go to shell.azure.com.
    Note: In the case of Azure Government cloud, click the cloudshell button. The link shell.azure.com is not valid for the Azure Government cloud.
    Figure. Azure Enterprise Portal - Cloud Shell
  2. If you are launching Cloudshell for the first time, select PowerShell (marked as 2 in the figure).
  3. In the Subscription drop-down menu, select a subscription. Click Create Storage .
    Figure. Azure Enterprise Portal - Subscription
  4. Copy and paste the following command in PowerShell.
    az login --service-principal --tenant TENANT_ID -u APPLICATION_ID -p SECRET_KEY --allow-no-subscriptions
    • Replace the TENANT_ID , APPLICATION_ID , and SECRET_KEY with the actual values from your Azure Portal and press Enter to run the command.
      • If you do not see any error message, it means that the credentials you provided are valid and Beam should accept them. Go to the Beam console and click Next in the App Configuration step (step 4) . This should work without any error.
      • If you see an error message like Get Token request returned http error… , it means that the credentials you provided are invalid. This usually happens when the APPLICATION_ID and SECRET_KEY have not yet come into effect in Azure, as it takes some time to propagate them.

        In such a case, wait for 2 to 5 minutes and then click Next in the App Configuration step (step 4) . This should work without any error.
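
If you automate this check, the outcome of the `az login` command above can be classified from its error output using the two cases just described. A minimal sketch (the function name is illustrative, not part of Beam or the Azure CLI):

```python
def classify_az_login_result(stderr: str) -> str:
    """Map the stderr of `az login --service-principal` to the documented outcomes."""
    if not stderr.strip():
        # No error output: the credentials are valid and Beam should accept them.
        return "valid"
    if "Get Token request returned http error" in stderr:
        # Credentials not yet propagated in Azure; wait 2 to 5 minutes and retry.
        return "wait-and-retry"
    # Any other error output indicates genuinely invalid credentials.
    return "invalid"

print(classify_az_login_result(""))
print(classify_az_login_result("Get Token request returned http error: 401"))
```

The middle case matters: a freshly created secret key often fails only because of propagation delay, so retrying before regenerating credentials saves a round trip through the portal.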

Assigning Role (PowerShell)

About this task

This section describes how to assign roles using Azure PowerShell.
Note: The PowerShell script allows you to provide permission for multiple subscriptions as a bulk update.

To assign a role, do the following.

Procedure

  1. Click the cloudshell button (marked as 1 in the figure) in the top-right section, or go to shell.azure.com.
    Note: In the case of Azure Government cloud, click the cloudshell button (marked as 1 in the figure). The link shell.azure.com is not valid for the Azure Government cloud.
    Figure. Azure Enterprise Portal - Cloud Shell
  2. If you are launching Cloudshell for the first time, select PowerShell (marked as 2 in the figure).
  3. In the Subscription drop-down menu, select a subscription. Click Create Storage .
    Figure. Azure Enterprise Portal - Subscription
  4. After PowerShell is up, click Upload and select the file you downloaded.
    Figure. Azure Enterprise Portal - PowerShell
  5. Run cd to switch to your home directory, and then run ls to verify that the uploaded script is present.
  6. Execute the uploaded script with the command : ./<file-name>.ps1
    Example: ./Beam-cg-roleProvision.ps1
    Ensure that you observe the guidelines related to the PowerShell script created by Beam. For more information, see PowerShell Script Guidelines.
Assigning Role (Without PowerShell)

You can manually assign roles from the Azure Portal. Log on to the Azure Portal (Azure Cloud - https://portal.azure.com/, Azure Government Cloud - https://portal.azure.us/) and go to the Subscriptions page.

About this task

Perform the following steps after the Configuring Azure App steps of the enrollment onboarding process.

Note: To perform the following steps, you must have at least owner permissions on the subscriptions for which you want to perform role assignment.

To manually assign roles, do the following.

Procedure

  1. In the Subscriptions page, select the subscription for which you want to perform role assignment.
    Figure. Subscriptions Page - Azure Portal
  2. Click Access control (IAM) .
  3. Go to Role assignments .
  4. Click Add to open the Add role assignment page.
  5. In the Role drop-down list, select the role you want to assign.
    Beam needs at least Reader permissions to provide cost savings recommendations. Select the Contributor role to perform Eliminate operations from the Beam console.
  6. Select the application registered in the Active Directory.
  7. Click Save to complete the role assignment for the selected subscription.
    Repeat these steps to perform a role assignment for all of your subscriptions.
    Note: To complete the onboarding process, you must download the PowerShell script, without which you cannot select I confirm I have executed the above powershell script (step 6 of Adding Azure Enterprise Agreement Account).
Retrieving Tenant Information (Azure Portal)

About this task

A tenant is an organization that owns and manages a specific instance of Azure cloud services. You need to add the tenant ID when performing the steps to add the Azure tenants in Beam.

To retrieve the tenant information, do the following.

Procedure

  1. Log on to the Azure Portal, and go to Azure Active Directory .
    Figure. Azure Active Directory - Overview
  2. Click Overview .
  3. Copy the domain name (available in the area marked as 3 in the figure) and paste it in Domain Name field in the Beam Console.
  4. In the Azure Active Directory page, click Properties .
    Figure. Azure Active Directory - Properties
  5. Copy the Directory ID and paste it in Tenant/Directory ID field in the Beam console.
Enabling Write Permissions for Azure Subscriptions

You can use the Edit Subscriptions option to grant write permissions for your subscriptions.

About this task

The following steps are involved when editing a subscription:
  • Select the check box in the Write column to grant write permissions for the subscriptions.
  • Perform role assignment.

To enable the write permissions for Azure subscriptions, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
    You can view the Azure Enrollments added in Beam.
  2. Click Manage against the Azure Enrollment that contains the subscription or subscriptions you want to edit.
    You can view a table that contains the tenants within the Azure Enrollment.
  3. In the Actions column, click the Edit Subscriptions icon against the tenant that contains your subscriptions.
    The Manage Subscriptions page appears.
    Figure. Azure Accounts page - Editing Subscriptions
  4. In the Click to Fix and Playbooks column, select the check box to grant write permissions to the subscriptions.
    The available options are as follows.
    • Optimize Recommendations : This option is selected by default. You cannot clear this option. Beam requires read permissions for spend analysis and optimize recommendations.
    • Click to Fix and Playbooks : Selecting this access type allows you to act on the saving insights, optimize the costs from the Beam console, deploy and invoke functions in an Azure Function App, and perform Eliminate actions.
  5. Perform Role Assignment .
    Click Download Script to download the PowerShell script. Click How do I assign role? and follow the on-screen instructions to run the script in the Azure PowerShell . Alternatively, see Assigning Role (PowerShell).
    Ensure that you observe the guidelines related to the PowerShell script created by Beam. For more information, see PowerShell Script Guidelines.
  6. Select the I confirm I have executed the above powershell script checkbox and click Done to complete.
PowerShell Script Guidelines

Beam simplifies onboarding by using PowerShell scripts, eliminating the manual steps of permission and role assignments for multiple subscriptions.

The PowerShell script does the following.
  • Creates a role for Beam to access your Azure account.
  • Assigns the permissions to read the resources.
  • Provides access to the API.

Observe the following guidelines related to the PowerShell script created by Beam.

  • Requirement of the PowerShell script for Azure onboarding - The PowerShell script grants the necessary read permissions to the registered applications that Beam uses to fetch the inventory from the Azure Portal for cost savings recommendations.
  • Storage Account - Beam does not create any storage in Azure. To execute the PowerShell script, you must create a storage account in Azure to upload the script to the Cloud Shell.
  • Cost incurred by creating the storage account and uploading the script - Azure pricing for standard storage is $0.06 per GB per month. The size of the script that Beam provides is about 2 KB, so the storage cost of the script is negligible. When you create a storage account for the Cloud Shell, Azure creates a file share that contains a 5 GB image, which incurs charges. For more information, see the Azure Cloud Shell documentation.
  • Resource Groups - Cloud Shell creates three resources when storage is created.
    • Resource group: cloud-shell-storage-<region>
    • Storage account: cs<uniqueGuid>
    • File share: cs-<user>-<domain>-com-<uniqueGuid>

    To avoid charges, delete the storage account after script execution.
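
The cost figures above are easy to verify with a little arithmetic. Using the quoted rate of $0.06 per GB per month, the 2 KB script costs effectively nothing, while the 5 GB Cloud Shell file share accounts for essentially all of the charge:

```python
RATE_PER_GB_MONTH = 0.06           # standard-storage rate quoted above, in USD
script_gb = 2 / (1024 * 1024)      # the ~2 KB Beam script expressed in GB
share_gb = 5                       # the Cloud Shell file-share image

script_cost = script_gb * RATE_PER_GB_MONTH
share_cost = share_gb * RATE_PER_GB_MONTH

print(f"{script_cost:.8f}")   # ~ $0.00000011 per month: negligible
print(f"{share_cost:.2f}")    # $0.30 per month for the file share
```

This is why the guideline recommends deleting the storage account after the script runs: the recurring cost comes from the file share, not from the script itself.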

Adding Azure Microsoft Customer Agreement (MCA) Account

This section describes the procedure to add your Azure MCA billing account in Beam. Beam fetches the billing data of all subscriptions associated with the given billing account and provides visibility into your spend.

Before you begin

Before attempting to add an Azure MCA billing account in Beam, make sure you satisfy the following requirements.
  • You must have administrator privileges for adding an Azure MCA billing account in Beam.
  • You must have Billing Account Owner permission for adding an Azure MCA billing account.
  • You must have or create an Azure AD application in the same tenant where the Azure MCA billing account exists.

About this task

To add your Azure MCA billing account, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
  2. Click Connect Azure Account .
    Figure. Connect Azure Account

    The Connect Azure Account page appears.
  3. From the types of the Azure contract, choose Microsoft Customer Agreement Account (MCA) and click Next .
  4. In the MCA Account Information page, enter the Account ID and a name for your MCA billing account.

    To know how to get your account ID from the Azure portal, click Read our documentation to see the on-screen instructions. Alternatively, see Capturing MCA Billing Account ID (Azure Portal).

    Figure. Add Azure MCA Billing Account Information

  5. Click Next .
    The App Configuration page appears.
  6. To enter your Azure Active Directory (AD) application details, do one of the following:
    • From the Use Existing Application drop-down, select an existing application.
    • Click Create a New App button to create a new Azure application.

    Beam uses the Azure AD application to interact with the Azure portal and to fetch billing, purchase, and pricing data.

    For the on-screen instructions on how to create an Azure AD application, click Read our documentation .

  7. If you select Create a New App , enter the App Name , Client ID , Secret Key , and Tenant ID to configure a new Azure application.

    For the on-screen instructions on how to get the application configuration details from the Azure portal, click How do I get client id? , How do I get secret key? , or How do I get tenant id? . Alternatively, see Configuring Azure MCA App (Azure Portal)

    Figure. Add Azure MCA Billing Account - App Configuration

  8. After you enter the required fields, click Next .
  9. Perform Role Assignment .
    Note: Before you perform Step 10, ensure that the application you just created has Billing account reader access on the given Azure MCA billing account so that Beam can fetch billing information.

    To provide the necessary permissions on the Azure portal, click Read our documentation to see on-screen instructions. Alternatively, see Assigning Billing Account Reader Role (Azure Portal).

  10. Select the I confirm I have provided the relevant permissions checkbox and click Save .

    If the Azure MCA billing account is onboarded successfully, the associated subscriptions get captured in Beam.

    Note: It takes up to 24 hours for Beam to show your Azure cost data and capture the associated subscriptions.

What to do next

Beam identifies all the Azure subscriptions associated with your Azure MCA billing account and you can configure these subscriptions in Beam to see the cost savings recommendations.
Capturing MCA Billing Account ID (Azure Portal)

About this task

To use the billing API, you need your Azure MCA billing account ID. It is the unique identifier for your billing account that gets invoiced for total consumption.

To capture the account ID, do the following.

Procedure

  1. Log on to the Microsoft Azure portal and go to Cost Management + Billing .
    Note: You must have a Billing Account Owner permission for adding an Azure MCA billing account.
  2. In the left navigation pane, click Billing Scopes to open that page in the pane on the right.
    The Azure portal displays the list of billing accounts for which you have access.
  3. Select the Azure MCA billing account that you want to onboard to Beam.
  4. In the left navigation pane, go to Settings and click Properties .
  5. Copy the ID listed under the General tab.
    Figure. Azure MCA Billing Account ID
  6. In the Beam console, in the Account ID box, paste the value that you copied.
Configuring Azure MCA App (Azure Portal)

About this task

Beam uses an Azure AD application to interact with the Azure portal and to fetch billing, purchase, and pricing data. This section describes how to register the application and capture the configuration details that Beam requires.

To configure Azure app, do the following.

Procedure

  1. Log on to Microsoft Azure Portal and go to Azure Active Directory .
    Figure. Azure Active Directory - Overview
  2. In the left navigation pane, go to App registrations and click New Registration .
  3. In the Register an application page, in the Name field, enter a display name for your application.
    Figure. Azure MCA App Registration
  4. In the Supported account types area, click Accounts in this organizational directory only (Nutanix only - Single tenant) .

    Beam currently supports only single-tenant applications.

  5. Do not change anything in the Redirect URI (optional) area.
  6. Click Register to complete the initial registration.

    The Azure portal displays the corresponding application's Overview pane.

  7. Copy the Display name , Application (client) ID , and Directory (tenant) ID (available in the areas marked as 2 and 3 in the figure) and paste them in the respective fields in the Beam console.
    Figure. Azure Application Overview Pane
  8. To capture the secret key, do the following.
    1. In the App registration page, select the application that you just created.
    2. Go to Certificates & secrets > Client secrets and click New client secret .
      The client secret is a secret string that the application uses to prove its identity when requesting a token.
      Figure. Certificates & secrets - creating client secret
    3. In the Add a client secret page, in the Description box, enter a description for the client secret.
    4. In the Expires area, click to select the expiry you want. Then click Add .
      Note: The client secret validity is limited to 24 months or less. You cannot specify a custom validity longer than 24 months.
      The client secret value gets generated.
    5. Copy the generated client secret value as it is never displayed again after you leave this page.
      Figure. Certificates & secrets - copying client secret
      Note: If the client secret key is expired, you need to regenerate the key as specified in Step 8.
    6. Paste in the Secret Key field in the Beam console.
Assigning Billing Account Reader Role (Azure Portal)

This section describes how to assign the Billing account reader role to the application from the Azure Portal.

About this task

Perform the following steps after the Configuring Azure MCA App (Azure Portal) steps of the MCA onboarding process.

Note: To perform the following steps, you must have at least Billing Account Owner permission on the billing account for which you want to perform the role assignment.

To assign Billing account reader role, do the following.

Procedure

  1. Log on to the Azure Portal and go to the Cost Management + Billing page.
  2. In the left navigation pane, click Billing scopes and select an Azure MCA billing account for which you want to perform role assignment.
    Figure. MCA Billing Account
  3. Click Access control (IAM) to open that page in the pane on the right.
  4. Click Add to open the Add permission page.
  5. In the Role drop-down list, select the role Billing account reader .
    Beam needs Billing account reader permissions to be able to fetch billing, purchase, and pricing data.
  6. Select the application registered in the active directory.
  7. Click Save to complete the role assignment for the selected application.
Managing Azure MCA Subscriptions and Applications

This section describes how to provide granular permissions at a subscription level to enable optimization and click-to-fix actions and manage your Azure AD applications.

Before you begin

You must have at least Owner permission for the subscriptions you want to configure.

About this task

To configure subscriptions and manage applications of your Azure MCA billing account in Beam, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
    You can view the Azure Enrollments and MCA billing accounts added in Beam.
  2. Click Manage against the Azure MCA billing account that contains the subscription or subscriptions you want to edit.
    A table that contains the subscriptions within the Azure MCA billing account is displayed.
    Figure. Azure MCA Subscriptions

  3. To enable a permission for the selected subscription, do the following.
    Note: Beam allows you to configure permissions for multiple subscriptions as a bulk update. For more details, see Modifying Azure MCA Billing Account, Applications and Subscriptions.
    1. In the Subscription tab, click the subscription for which you want to configure permissions, or click the down arrow at the right corner of its row.

      You can also search for the subscription by typing its name in the search box.

      Figure. Configure Permission for MCA Subscription

      The available options are as follows.
      • Visibility : This option is enabled by default. You cannot edit or clear this option.
      • Optimization : This permission allows you to see savings opportunities on unused and underutilized resources.
      • Click to Fix : This permission allows you to act on the savings insights and optimize costs from the Beam console ( Eliminate actions).
    2. In the Actions column, click the Configure icon against the permission (Optimization or Click to Fix) you want to enable.
      The Configure Permissions page appears.
    3. Do one of the following.
      • From the Use Existing Application drop-down, select an existing application.
      • Click Create a New App button to create a new Azure application.
    4. If you select Create a New App , enter the App Name , Client ID , Secret Key , and Tenant ID to configure a new Azure application.

      For the on-screen instructions on how to get the application configuration details from the Azure portal, click How do I get client id? , How do I get secret key? , or How do I get tenant id? . Alternatively, see Configuring Azure MCA App (Azure Portal).

    5. Click Download Script to download the PowerShell script to grant permissions for your subscriptions.

      To run the script in Azure PowerShell , click How do I execute the script? and follow the on-screen instructions. Alternatively, see Assigning Role (PowerShell).

      You can also assign roles manually. For more details, see Assigning Role (Without PowerShell).

    6. Select the I confirm I have executed the above powershell script checkbox and click Save .
      Repeat these steps to assign both the permissions for all the required subscriptions.
      Note: To complete the onboarding process, you must download the PowerShell script without which you cannot select I confirm I have executed the above powershell script .
  4. To manage your Azure AD application, select the Azure AD App tab.

    A table that contains the applications is displayed.

    Figure. Azure AD Applications

    1. Select the application for which you want to modify the details.
    2. In the Actions column, do the following as desired.
      • Modify the application details by clicking the Edit button.

        An Edit Azure App pop-up window appears, allowing you to update the App Name , Client ID , Secret Key , and Tenant ID .

        For the on-screen instructions on how to get the application configuration details from the Azure portal, click How do I get client id? , How do I get secret key? , or How do I get tenant id? . Alternatively, see Configuring Azure MCA App (Azure Portal).

        Click Save when you are done.

      • Delete the application by clicking the Delete button.

        Click Delete in the confirmation pop-up window.

Modifying Azure MCA Billing Account, Applications and Subscriptions

This section describes how to modify the Azure MCA billing account, application details, and configure permissions for multiple subscriptions within the Azure MCA billing account as a bulk update for billing or management purposes.

Before you begin

You must have at least Owner permission for the subscriptions you want to configure.

About this task

To modify an Azure MCA billing account, applications and subscriptions details, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
    You can view the Azure Enrollments and MCA billing accounts added in Beam.
  2. Click Manage against the Azure MCA billing account that contains the subscription or subscriptions you want to edit.
    A table that contains the subscriptions within the Azure MCA billing account is displayed.
    Figure. MCA Subscriptions - Actions

  3. On the top-right corner, click Actions to display the drop-down menu.
  4. To configure permissions for multiple subscriptions as a bulk update, click Configure Subscriptions from the drop-down menu and do the following.

    In the Edit permission for Subscriptions page, the permissions and entries for the subscriptions are displayed.

    Figure. Configure Permissions - Bulk Update

    1. Choose the permission under Update Configuration .
    2. Select the checkbox for each subscription whose configuration you want to update, and click Next .

      You can also search for the subscription by typing its name in the search box.

    3. Do one of the following.
      • From the Use Existing Application drop-down, select an existing application.
      • Click Create a New App button to create a new Azure application.
    4. If you select Create a New App , enter the App Name , Client ID , Secret Key , and Tenant ID to configure a new Azure application.

      For the on-screen instructions on how to get the application configuration details from the Azure portal, click How do I get client id? , How do I get secret key? , or How do I get tenant id? . Alternatively, see Configuring Azure MCA App (Azure Portal).

    5. Click Download Script to download the PowerShell script to grant permissions for your subscriptions.

      To run the script in Azure PowerShell , click How do I execute the script? and follow the on-screen instructions. Alternatively, see Assigning Role (PowerShell).

      You can also assign roles manually. For more details, see Assigning Role (Without PowerShell).

    6. Select the I confirm I have executed the above powershell script checkbox and click Save .
      Note: To complete the onboarding process, you must download the PowerShell script without which you cannot select I confirm I have executed the above powershell script .
  5. To modify the account name, click Edit Account Name from the Actions drop-down menu and do the following.
    1. In the Change Account Name pop-up window, update the name as desired.
    2. Click the Save button.
  6. To modify the application details, click Edit Azure Application from the Actions drop-down menu and do the following.
    1. In the Edit Azure App pop-up window, update the App Name , Client ID , Secret Key , and Tenant ID .

      For the on-screen instructions on how to get the application configuration details from the Azure portal, click How do I get client id? , How do I get secret key? , or How do I get tenant id? . Alternatively, see Configuring Azure MCA App (Azure Portal).

    2. Click the Save button.
  7. To delete your MCA billing account, click Delete MCA Account from the Actions drop-down menu.

    Click Delete in the confirmation pop-up window.

Assigning Role (PowerShell)

About this task

This section describes how to assign permissions for Azure MCA billing account using Azure PowerShell.
Note: The PowerShell script allows you to provide permission for multiple subscriptions as a bulk update.

To assign a permission, do the following.

Procedure

  1. Click the Cloud Shell button (marked as 1 in the figure) in the top-right section, or go to shell.azure.com.
    Figure. Azure Portal - Cloud Shell
  2. If you are launching Cloud Shell for the first time, select PowerShell (marked as 2 in the figure).
  3. In the Subscription drop-down menu, select a subscription. Click Create Storage .
  4. After PowerShell is up, click Upload and select the file you downloaded.
    Figure. Azure Portal - PowerShell
  5. Execute the commands cd and then ls to verify that the uploaded file is present.
  6. Execute the uploaded script with the command : .\<file-name>.ps1
    Example: .\Beam-cg-roleProvision.ps1
Assigning Role (Without PowerShell)

This section describes how to manually assign permissions from the Azure Portal.

About this task

Perform the following steps after you configure the Azure MCA application as mentioned in the Configuring Azure MCA App (Azure Portal) section of the onboarding process.

Note: To perform the following steps, you must have at least owner permissions on the subscriptions for which you want to perform role assignment.

To manually assign roles, do the following.

Procedure

  1. Log on to the Azure Portal and go to the Subscriptions page.
  2. In the Subscriptions page, select the subscription for which you want to perform role assignment.
    Figure. Role Assignment - Azure Portal
  3. In the left navigation pane, click Access control (IAM) to open that page in the pane on the right.
  4. Click Add to open the Add role assignment page.
  5. In the Role drop-down list, select the role you want to assign.
    Beam needs at least Reader permission to provide cost savings recommendations, and Contributor permission to perform eliminate operations from the Beam console.
  6. Click Next .
  7. Click Select members , and from the list, choose the application registered in the Active Directory.
    Figure. Select Members
  8. Click Select to complete the role assignment for the selected subscription.
    Repeat these steps to perform a role assignment for all of your subscriptions.
    Note: To complete the onboarding process, you must download the PowerShell script without which you cannot select I confirm I have executed the above powershell script .

Beam Gov(US) Cost Governance

Beam Gov(US) is an isolated instance running in the AWS GovCloud (US) region that caters to the regulatory compliance requirements of US government agencies at the federal, state, and local levels, government contractors, educational institutions, and other US customers who run sensitive workloads in AWS GovCloud and Azure Government Cloud and want to bring Cost Governance to their cloud infrastructure: gaining visibility into, optimizing, and controlling their spend. Beam Gov(US) provides a single consolidated view for Cost Governance.

The onboarding process for Beam Gov(US) users differs from the onboarding process for commercial account users.
  • A separate Beam instance is deployed in the AWS GovCloud region (US-West) with a separate login URL for Beam Gov(US) users.
  • Beam Gov(US) supports the cost governance for AWS, Azure, GCP, and Nutanix On-premises in the GovCloud.
  • Beam Gov(US) also supports commercial cloud accounts along with the GovCloud.
Note: Beam Gov(US) Cost Governance is an early access feature. For more information, contact Nutanix Support .

Beam Gov(US) Sign up

The Beam Gov(US) sign up includes two steps.
  1. Beam Gov(US) Sign up Request Approval - The sign up request approval process involves the following steps.
    • Sign-up Request

      Contact the sales team expressing an interest in using Beam Gov(US).

    • Customer Verification

      The request is forwarded for verification to the Nutanix GovCloud Security team. As part of the verification, you will receive a form through DocuSign that you must fill out and send back.

    • Beam Account Activation

      Once the request passes verification, Customer Service initiates the account creation process, and the primary user is notified of the login details through an email. The email contains a verification link; after verifying the email address, the user can log in to Beam Gov(US).

  2. Account Registration Completion - In the Beam Gov(US) login page, create a password and select a timezone. Then log in to the Beam console. You can enable or disable MFA from the Profile page in the Beam console. For more information on enabling or disabling MFA, see User Management.

GovCloud Account Configuration

After your Beam Gov(US) signup is complete, you can start onboarding your GovCloud account in Beam.

AWS GovCloud Onboarding

In AWS, a GovCloud account is always attached to a commercial account. This commercial account can be a linked account or a payer account.

Note: An AWS GovCloud account is always associated with one AWS commercial account for billing and payment purposes. All your AWS GovCloud billing is billed or invoiced to the associated AWS commercial account. You can view the AWS GovCloud account activity and usage reports through the associated AWS commercial account only.

To start with adding the GovCloud account in AWS, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > AWS Accounts .

You can view the list of AWS commercial accounts. Click Add Gov Account against the commercial account with which your GovCloud account is associated. The Add AWS Account page appears.

The steps to add a GovCloud account are the same as the steps to add an AWS commercial account. To add your GovCloud account, see Adding AWS Account (step 6 to step 10) .

After you complete the steps to add the GovCloud account, you can view the GovCloud account linked to the commercial account.

Figure. Configure - AWS GovCloud Account: Adding AWS GovCloud account

Azure Government Account Onboarding

In the case of Azure Government account onboarding, the steps are the same as commercial account onboarding.

Make sure that you select US Government Cloud in the Enrollment Type list to add a GovCloud account.

Figure. Configure - Azure Government Account: Adding Azure Government account

For the remaining step-by-step details, see Adding Azure Account.

User Management

You can add and manage users for GovCloud using the Beam console.

You can perform the following operations in Beam Gov(US).
  • Add a user

    To go to the User Management page, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > User Management . To add a user, see Adding a Beam User.

    In the case of AWS user management, the user may have access to the commercial account but no access to the GovCloud account linked to the commercial account. To give AWS GovCloud account access to a user, you must select the GovCloud account linked to the commercial account.

    Figure. User Management - GovCloud Account: Access to GovCloud Account
  • Resetting Password and MFA Management
  • Configuring Single Sign-On

Resetting Password and MFA Management

You can reset your password and enable or disable MFA from the Profile page in the Beam console.

About this task

To go to the Profile page, click the user drop-down menu on the top-right corner and select Profile .

To change your password, click the Change Password link.

Figure. Profile Page - Change Password Link

To enable MFA, do the following.

Procedure

  1. In the Profile page, click the Enable MFA link.

    The Enable Multi-factor Authentication page appears.

  2. Do one of the following.
    • Scan the QR code by using the virtual MFA application (Google or Microsoft Authenticator).
    • Click the secret code link to get the secret code. Enter the secret code in your virtual MFA application.
      Figure. Enable Multi-factor Authentication Page

      The virtual MFA application starts generating codes.

  3. Enter the codes in Code 1 and Code 2 boxes. Then click Set MFA to enable MFA.

    To disable MFA, click the Disable MFA link in the Profile page.

    If your MFA device is lost or not accessible, click the Login via OTP (for lost MFA)? link. Beam sends an OTP to your registered email ID. Enter the OTP to log in to Beam.

    Figure. Beam Login Page - Lost MFA Link
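The virtual MFA applications mentioned above (Google or Microsoft Authenticator) generate codes using the standard time-based one-time password algorithm (TOTP, RFC 6238), which derives a 6-digit code from the shared secret that the QR code or secret code link encodes. The following sketch shows how such a code is derived; it is the generic TOTP algorithm, not Beam-specific code, and the example secret is the RFC 6238 test value.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password from a shared secret."""
    counter = timestamp // step                    # number of 30-second steps elapsed
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; an authenticator app holds the secret decoded
# from the QR code (usually Base32) and recomputes this every 30 seconds.
secret = b"12345678901234567890"
print(totp(secret, 59))  # → "287082" (RFC 6238 test vector, SHA-1, 6 digits)
```

Because the code depends only on the shared secret and the current time window, the server can verify it without any further communication with the device.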

Configuring Single Sign-On

The following section describes how to configure the single sign-on feature for Beam Gov(US).

About this task

To configure the single sign-on feature, do the following.

Procedure

  1. Log on to the Beam Console.
  2. Click the user drop-down menu on the top-right corner and select Single Sign On .
  3. In the Single Sign On page, in the Application Id box, enter the application ID.
    Note: To get the Application Id, contact Nutanix Support .

    A success message appears, displaying that the single sign-on is successfully configured.

  4. Log out from the Beam Console to go to the login page.
  5. In the Login page, click Login with Single Sign On .
  6. In the Email box, enter your email ID.
    If there is a conflict while logging in with the email address, you can click Try with Application ID to log in with the application ID.
  7. Click Login .

    You are redirected to your organization’s single sign-on application page.

    Enter your credentials and login to Beam Gov(US).

Getting Started With Configurations

After you onboard your cloud accounts in Beam, the following information becomes available for you to consume:
  • Dashboard – Get a graphical view of your overall spend, spend analysis, Reserved Instances (RI) utilization, and spend efficiency.
  • Analyze – Provides deep visibility into your projected and current spend and allows you to drill down further into costs based on your services, accounts, or application workload.
  • Reports – Automatically generated reports help you to track cost consumption across all your cloud accounts.
  • Purchase – Provides RI purchase recommendations based on your resource utilization patterns.
The following are the configurations to perform so that you can begin controlling your cloud consumption:
  • Saving Opportunities – After configuring the AWS payer account, ensure that you add the associated linked accounts in Beam to get cost optimization recommendations. For more information, see Configuring Linked Accounts.

    After configuring the Azure account, ensure that you add all the tenants within your account in Beam for maximum savings recommendations. For more information, see Adding Azure Tenants.

  • Create Business Units and Cost Centers for Chargeback – You can define a business unit by combining a group of cost centers. Chargeback is built on the business unit and cost center configuration construct that you define for the resources across all your cloud accounts.
  • Budgeting - A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.
  • Custom Reports - You can create a customized report by aggregating and filtering using various native cloud dimensions and attributes to get insights on cloud usage and cost.
  • Playbooks - You can create a playbook to schedule action on your cloud resources and automate regular workflows. For example, you can shut down your test environments during non-business hours.
  • Scope - A scope is a logical group of your cloud resources that provides you with a custom view of your cloud resources. You can define scopes using cloud, accounts, and tags.
  • Integrations - You can integrate Beam with third-party applications such as Slack and ServiceNow. For example, integrate Beam with Slack to get notifications on your Slack channels.

AWS Cost Configuration

Beam allows you to define configuration settings for your cloud cost data. Cost Configuration is a set of rules that allow you to update cost inputs manually, and also select cost presentation options, allocation model options, and reporting rules that define how the cost of cloud resources get reported in Beam.

Beam provides you with multiple options when performing AWS cost configuration to let you choose how you want to present the cloud spend in Beam.

The options provided are as follows:
  • Cost Logic - You can choose between Amortized Cost or Absolute Cost.
  • Billing - You can choose between blended cost and unblended cost.
  • Credits and Refunds - You can either choose to include or exclude the credits and refunds in the cost report.
  • AWS EDP - You can either choose to include or exclude the EDP discount in the cost report.

All cost presented through Beam is generated depending on the options you select.

Cost Logic - Amortized Cost and Absolute Cost

The Cost Logic selection is applicable for determining how AWS RI, Savings Plan, EDP, Credits, Refunds, and Taxes get reported in Beam.

Amortized Cost Logic
Cost reporting is based on accrual-based accounting. For Reserved Instances (RI) and savings plan, the cost reporting is based on the effective cost column in the AWS CUR. For EDP, Credits and Refunds, and Taxes, Beam does the amortization calculation.

In the case of taxes, AWS reports the amortized cost of the tax at the account level. Beam does the amortization calculation at the service, region, resource level, and so on.

The Amortized Cost logic is useful in cases where you want to view the cost of resources on an accrual basis considering the usage of RI, Savings Plan, Discounts, and Taxes specific to that resource.

Absolute Cost Logic
Cost reporting is based on cash-based accounting. The RIs, Savings Plan, EDP, Credits and Refunds, and Taxes are not amortized over the billing period.

If you select the Absolute Cost option as the cost logic, the cost of RI, Savings Plan, EDP, Credits and Refunds is shown as a separate line item, and cloud resource cost is reported without the apportioned cost of RI, Savings Plan, and so on. This results in a spike on the day the monthly RI fee or upfront RI fee gets charged.
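The difference between the two cost logics can be sketched with a small example. Assume a hypothetical upfront RI fee charged on day 1 of a 30-day billing period: absolute (cash-based) reporting shows the fee as a spike on day 1, while amortized (accrual-based) reporting spreads it evenly across the period. This is illustrative only; Beam's actual amortized figures come from the effective cost columns in the AWS CUR.

```python
def absolute_daily_cost(upfront_fee: float, usage_per_day: float, days: int) -> list[float]:
    """Cash-based: the whole upfront fee lands on the day it is charged (day 1)."""
    costs = [usage_per_day] * days
    costs[0] += upfront_fee
    return costs

def amortized_daily_cost(upfront_fee: float, usage_per_day: float, days: int) -> list[float]:
    """Accrual-based: the upfront fee is spread evenly across the billing period."""
    return [usage_per_day + upfront_fee / days] * days

upfront, usage, days = 300.0, 10.0, 30
print(absolute_daily_cost(upfront, usage, days)[0])   # 310.0 -> spike on day 1
print(amortized_daily_cost(upfront, usage, days)[0])  # 20.0  -> evenly apportioned
# Both logics report the same total over the period:
print(sum(absolute_daily_cost(upfront, usage, days)))   # 600.0
print(sum(amortized_daily_cost(upfront, usage, days)))  # 600.0
```

The total spend is identical either way; only where the fee appears in the daily or monthly view changes.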

Billing - Blended Cost and Unblended Cost

You can select between blended cost and unblended cost.

Unblended Cost
If you select the Unblended Cost option, Beam reports the cost of resources based on usage. Selecting this option reports the cost of resources from the Unblended Cost column in the AWS CUR.
Blended Cost
If you select the Blended Cost option, Beam reports the cost of resources based on the average cost of usage across the consolidated billing family. Selecting this option reports the cost of resources from the Blended Cost column of the AWS CUR.
Note: If you select the Amortized Cost as the cost allocation basis, the Blended Cost option for billing cannot be selected.
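Blended cost can be sketched as the family-average rate applied to each account's usage. The numbers below are hypothetical; in practice the values come directly from the Blended Cost and Unblended Cost columns of the AWS CUR.

```python
# Two accounts in one consolidated billing family using the same instance type.
# Hypothetical rates: account A holds an RI (lower effective rate), account B
# pays the on-demand rate.
usage_hours = {"account_a": 100, "account_b": 50}
unblended_rate = {"account_a": 0.06, "account_b": 0.12}  # $/hour actually charged

def unblended_cost(account: str) -> float:
    """Cost based on what the account was actually charged for its usage."""
    return usage_hours[account] * unblended_rate[account]

def blended_cost(account: str) -> float:
    """Cost based on the average rate across the whole billing family."""
    total_cost = sum(unblended_cost(a) for a in usage_hours)
    total_hours = sum(usage_hours.values())
    blended_rate = total_cost / total_hours
    return usage_hours[account] * blended_rate

print(unblended_cost("account_b"))  # 6.0 (50 h at the on-demand rate)
print(blended_cost("account_b"))    # 4.0 (50 h at the family-average rate)
```

Note how the on-demand account looks cheaper under blended cost because the RI holder's discount is averaged across the family.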

Credits and Refunds

Beam provides you with the option to include or exclude the credits and refunds in the cost report irrespective of the Amortized Cost or Absolute Cost selection.

If you select the Include option, Credits and Refunds are shown in the cost report as a separate line item or amortized and apportioned to a resource depending on the selection between the absolute cost or amortized cost, respectively.

If you select the Exclude option, Credits and Refunds are not shown in the cost report to the user (for example, Scope or Cost Center viewers). It also gets excluded from the Analyze page.
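The effect of the Include/Exclude setting can be sketched as filtering billing line items before report totals are computed. The line items below are hypothetical.

```python
# Hypothetical billing line items; credits and refunds carry negative cost.
line_items = [
    {"type": "usage",  "service": "EC2", "cost": 120.0},
    {"type": "usage",  "service": "S3",  "cost": 30.0},
    {"type": "credit", "service": "EC2", "cost": -20.0},
    {"type": "refund", "service": "S3",  "cost": -5.0},
]

def report_total(items, include_credits_and_refunds: bool) -> float:
    """Total spend, optionally dropping credit and refund line items."""
    kept = [i for i in items
            if include_credits_and_refunds or i["type"] not in ("credit", "refund")]
    return sum(i["cost"] for i in kept)

print(report_total(line_items, include_credits_and_refunds=True))   # 125.0
print(report_total(line_items, include_credits_and_refunds=False))  # 150.0
```

With Include, the negative items reduce the reported total; with Exclude, the report shows gross usage cost only.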

AWS Enterprise Discount Program (AWS EDP)

AWS provides enterprises a discount on its services against a volume spend commitment. For example, an enterprise commits to spending $2 million on AWS services and receives a certain percentage of the discount. The enterprise must pay $2 million even if they do not spend the amount.

Beam provides you with the option to include or exclude the EDP discount in the cost report. EDP discount is applicable to both amortized and absolute cost logic.

If you select the Include option, EDP discount is shown in the cost report as a separate line item or amortized and apportioned to a resource depending on the selection between the absolute cost or amortized cost, respectively. If you select the Exclude option, EDP discount is not shown in the cost report to the user (for example, Scope or Cost Center viewers). It also gets excluded from the Analyze page.

You can add the EDP contract details in Beam. Adding the EDP contract details allows you to view the projected spend after the EDP discount in the Analyze page. Any discount from EDP from the current usage gets reported in CUR and Beam reports accordingly.

For future projections, you need to add contract details which Beam uses for projection.
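The projection described above can be sketched as applying the per-year discount percentage from the contract details to projected spend. The contract numbers below are hypothetical; for current usage, Beam reads the actual EDP discount from the CUR rather than computing it.

```python
# Hypothetical EDP contract: discount percentage per contract year, used to
# estimate projected spend after discount.
yearly_discount_pct = {1: 5.0, 2: 7.0, 3: 10.0}  # contract year -> % discount

def projected_after_edp(monthly_spend: float, contract_year: int) -> float:
    """Projected monthly spend after applying that year's EDP discount."""
    discount = yearly_discount_pct.get(contract_year, 0.0)  # no discount outside term
    return monthly_spend * (1 - discount / 100)

print(projected_after_edp(10_000.0, 1))  # 9500.0
print(projected_after_edp(10_000.0, 3))  # 9000.0
```

This is why the contract start date and term matter: they determine which year's discount applies to each projected month.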

Configuring the AWS Cost Logic

This section provides the steps to configure the AWS cost logic. The cost report gets generated according to the logic you configure. Also, the configured logic applies to all the available views in Beam.

About this task

To configure the AWS cost logic, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > AWS Cost Configuration .
    The AWS Cost Configuration page appears.
    Figure. Configure Cost - AWS
  2. In the Cost Logic area, select Amortized Cost or Absolute Cost .
    The Cost Logic selection determines how AWS RI, Savings Plan, EDP, Credits, Refunds, and Taxes get reported in Beam. For more information, see Cost Logic - Amortized Cost and Absolute Cost .
  3. In the Billing area, select Unblended Cost or Blended Cost .
    If you select Unblended Cost , Beam reports the cost of resources based on usage. If you select Blended Cost , Beam reports the cost of resources based on the average cost of usage across the consolidated billing family.
    Note: If you select Amortized Cost as the cost logic, the Blended Cost option for billing cannot be selected.
  4. In the Credits and Refunds area, select Include or Exclude .
    • Include - Includes the credits and refunds data in the cost report for historical spend.
    • Exclude - Excludes the credits and refunds data in the cost report for historical spend.
  5. In the Enterprise Discount Program (EDP) area, select Include or Exclude .
    • Include - Includes EDP discount in the cost report for projected spend. You need to add your EDP contract details (step 6).
    • Exclude - Excludes EDP discount from the cost report for projected spend.

    For more information, see AWS Enterprise Discount Program (AWS EDP) .

  6. In the EDP area, click Add EDP Contract to add your EDP contract details.
    The Add Details for an AWS EDP Contract page appears.

    AWS provides enterprises a discount on its services against a volume spend commitment. Adding the EDP contract details in Beam allows you to view the projected spend after the EDP discount in the Analyze page.

    Do the following:

    1. In the Payer Account list, select the payer accounts under your EDP discount contract.
    2. In the EDP Contract Start Date box, select the contract start date.
    3. In the EDP Term list, select the EDP term according to your contract.
      The Yearly Discounts area appears. You can add different discounts for each year according to your EDP contract.
    4. Click Save to save your contract details and close this page.

    You can add multiple EDP contracts. Also, you can click Manage against an EDP contract to change the contract details.

  7. Click Save to apply the AWS cost logic.
    The configured logic applies to all the available views in Beam.

Currency Configuration

Configure currency to view the cloud spend in your preferred currency.

The currency configuration reduces ambiguity in analyzing cloud spend from different cloud providers, who often provide billing data in different currencies. For example, GCP might report cloud consumption in INR while AWS reports cloud consumption in USD. Consolidating the billing data from these different sources without considering currency type results in ambiguous reports and dashboards. To simplify multicloud cost governance, Beam allows you to configure a single currency to calculate spend data across all views. You can also configure the corresponding conversion rates for the configured currency or use dynamic rates that Beam provides.

Source currency

Source currency is the currency in which your cloud service provider reports billing. When you onboard multiple cloud billing accounts from different cloud providers or regions, you can see a consolidated list of all the source currencies in the respective billing data on the Currency Configuration page. For example, if you onboard a GCP billing account reported in INR and an AWS payer account reported in USD, you see INR and USD in the source currency list.

Target currency

Target currency is your organization's preferred functional currency for viewing spend analysis across Beam. All the spend data shown in Beam is converted to this currency. Beam uses the target currency to provide a unified cost view when you onboard multiple cloud accounts with different source currencies. Furthermore, this configuration gives you the flexibility to set a single currency for the following views.
  • Multicloud views such as All-Clouds , Financial (Business Unit or Cost Center), or Scopes
  • Cloud overviews such as GCP Overview , AWS Overview , or Azure Overview

Currency conversion

When you select target currency, you must also select the conversion rates applicable for the selected currency. Beam supports the following currency conversion types.
  • Dynamic Currency Conversion : Beam uses the third-party API from https://exchangeratesapi.io/ to ingest the conversion rates for the corresponding source currencies.
    Note: Beam uses the conversion rate for the first of every month to calculate spend for the entire month. If the conversion rate for the first of every month is unavailable, Beam uses the last fetched monthly rates to calculate spend for that month.
  • Custom Defined Currency Conversion : Beam uses the user-defined exchange rate to calculate historic and projected spend data across all days/months/years.
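The dynamic-conversion rule in the note above, using the first-of-month rate and falling back to the last fetched monthly rate when it is unavailable, can be sketched as follows. The rates are hypothetical.

```python
# Hypothetical monthly USD->INR rates keyed by (year, month); a missing key
# models a month whose first-of-month rate could not be fetched.
monthly_rates = {(2022, 1): 74.5, (2022, 2): 75.1}  # (2022, 3) is unavailable

def rate_for_month(year: int, month: int) -> float:
    """Use the first-of-month rate; fall back to the last fetched monthly rate."""
    if (year, month) in monthly_rates:
        return monthly_rates[(year, month)]
    # Walk backward to the most recent month with a fetched rate.
    known = sorted(k for k in monthly_rates if k < (year, month))
    if not known:
        raise LookupError("no conversion rate fetched yet")
    return monthly_rates[known[-1]]

print(rate_for_month(2022, 2))  # 75.1 (first-of-month rate available)
print(rate_for_month(2022, 3))  # 75.1 (falls back to February's rate)
```

A single rate per month means all spend within that month converts consistently, at the cost of ignoring intra-month exchange-rate movement.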

Configuring Currency and Conversion Rates

You can use this configuration to select the target currency and the corresponding conversion rates applicable for spend analysis across Beam.

About this task

After you configure target currency, you see the following across Beam.
  • Spend data in the target currency for all cloud overview and multicloud views.
  • A currency toggle to select between source currency and target currency for individual cloud accounts.
For more information on general guidelines and considerations, see Currency Configuration Considerations .

To configure the currency, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu and then go to Configure > Currency .
    The Currency Configuration page appears.
    Figure. Configure Currency
  2. Click View Source Currencies to view the list of source currencies corresponding to the cloud resource billing data.
  3. Under Target Currency , select the preferred currency in the drop-down list.
    For example, if you select AMD Armenian Dram , costs across Beam are reported in AMD.
  4. In the Currency Conversion area, do one of the following to configure the conversion rates applicable for the selected currency.
    • To automatically fetch the currency exchange rates, click Dynamic Currency Conversion . Additionally, click View to see the ingested exchange rates for the last 12 months.
      Note: Currency exchange rates are updated by Nutanix on a monthly basis. Currency exchange rates are provided as estimates and for informational purposes only. The rates and conversion estimates should not be relied on for invoiced billing conversions by cloud service providers.
    • To manually define the currency exchange rates, click Custom Defined Currency Conversion and then click Customize .

      In the text box, enter the exchange rates as shown in the following figure.

  5. Click Save to apply the currency configuration.
    The configured currency applies to all the available views in Beam.

Currency Configuration Considerations

Limitations and guidelines to consider when using the currency configuration.

Dashboard

  • The Reserved Instances widget on the Dashboard always reports in the source currency across all AWS and Azure views.
  • By default, Beam provides the spend data in the target currency.
  • The currency toggle is available when you select individual cloud accounts in the view selector.
  • The currency toggle is not available when you select cloud overview or multicloud views in the view selector.
  • The currency toggle is not available when the configured target currency is the same as the source currency.

Analyze

  • By default, Beam provides the spend data in the target currency.
  • The currency toggle is available when you select individual cloud accounts in the view selector.
  • The currency toggle is not available when you select cloud overview or multicloud views in the view selector.
  • The currency toggle is not available when the configured target currency is the same as the source currency.
  • The reports in the Analyze page are in the currency that is selected at the time of report generation.
    • If you select source currency in the toggle, and then click download, share, or schedule, the reports are in the source currency.
    • If you select target currency in the toggle, and then click download, share, or schedule, the reports are in the configured target currency.
      Note: If you modify the target currency after scheduling a report, Beam uses the target currency that is configured at the time the scheduled report is generated. For example, if the target currency is INR when you schedule a report and you later change the target currency to GBP, Beam generates the scheduled report in GBP.
    • If the currency toggle is not available, the reports you download, share, or schedule are in the configured target currency.
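The toggle rules above amount to a small decision procedure for which currency a downloaded, shared, or scheduled report uses. Here is a minimal sketch; the function name and parameters are hypothetical and are not a Beam API.

```python
def report_currency(toggle_available, toggle_selection, source, target):
    """Currency used for Analyze reports at generation time.

    toggle_selection is "source" or "target"; it is ignored when the
    toggle is not shown (cloud overview and multicloud views, or when
    the source and target currencies are the same).
    """
    if not toggle_available:
        return target  # no toggle: reports use the configured target currency
    return source if toggle_selection == "source" else target

print(report_currency(True, "source", "USD", "INR"))  # USD
print(report_currency(False, None, "USD", "INR"))     # INR
```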

Chargeback

  • By default, Beam provides the spend data in the target currency. The currency toggle is not available in Chargeback .
  • The system report (Global Chargeback Report) that you can download, share, or schedule is in the target currency that is configured at the time of report generation.

Budget

  • By default, Beam provides the spend data in the target currency. The currency toggle is not available in Budget .
  • Beam shows the allocated cost of the budget in the target currency.

Reports

  • System reports are in the target currency that is configured at the time of report generation. For example, if the target currency is INR and you change it to GBP after 4 weeks, the weekly system reports for weeks 1 through 4 are in INR, and the report for week 5 is in GBP because GBP is the target currency at the time of generation.
    Note: The following reports are in source currency.
    • Daily reports: New EC2 RI Recommendation report and EC2 RI Utilization Report
    • Monthly reports: Expired RI Report , Expiring RI Summary Report , and On Demand vs Reserved Hours Cost
  • The currency in custom reports is based on the configuration in the create or edit page. The reports are in either the source currency or the target currency, as configured while creating the reports.

Save

  • By default, Beam provides the spend data in the target currency.
  • The currency toggle is available when you select individual cloud accounts in the view selector.
  • The currency toggle is not available when you select cloud overview or multicloud views in the view selector.
  • The currency toggle is not available when the configured target currency is the same as the source currency.
  • The system report (Cost Optimization Detailed Report) that you can download, share, or schedule in the Save page is in the target currency that is configured at the time of report generation.
  • The drill-down reports that you can download, share, or schedule in any of the tabs on the Save page use the selection in the currency toggle.
    • If you select source currency in the toggle, and then click download, share, or schedule, the reports are in the source currency.
    • If you select target currency in the toggle, and then click download, share, or schedule, the reports are in the configured target currency.
    • If the currency toggle is not available, the reports you download, share, or schedule are in the configured target currency.
    Figure. Save

Purchase

Beam uses the source currency provided in the billing data for assessing reserved instances and displays the data in source currency. Beam does not use the target currency provided in the currency configuration page for assessing reserved instances.

Cloud Account Onboarding

  • When you onboard your first cloud account, Beam configures the source currency as the default target currency with Dynamic Currency Conversion rates.
  • When you onboard a cloud account that has a new currency while the conversion type is Custom Defined Currency Conversion , Beam adds the latest dynamic currency conversion rate as a custom-defined conversion rate for the new currency.

Dashboard

The following sections describe the dashboard view for your cloud accounts (AWS, Azure, or GCP), business units and cost centers (financial view), All Clouds (multicloud), and scopes.

Dashboard (AWS)

The Dashboard provides a snapshot of your overall spend, spend analysis, Reserved Instances (RI) utilization, and spend efficiency.

Dashboard Views

To view the AWS dashboard, log on to the Beam console and select any of the connected AWS accounts.
Note: Beam provides a toggle to view spend data in your preferred currency. You can select the currency toggle at the top right to switch between target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations .
Spend Overview
Displays a pie graph of the cost breakup summary and spend trends across AWS accounts and services. By default, this view displays the spending summary of all the services linked to your AWS account. To view a cost summary of all the linked accounts in your AWS payer account, click the drop-down list at the top-right corner of the widget and select Accounts . To view details for your overall spend, click View All .
Figure. Spend Overview Widget
Spend Analysis
Displays a bar graph of the historical spend analysis for the selected period with an estimated and actual spend for the current month. By default, this view displays the monthly projection for the spend analysis. You can change the default view to display daily or quarterly projections. To view the analytics detail, click View Cost Analysis .
Figure. Spend Analysis Widget
Reserved Instances
Displays a line chart of the RI utilization summary of EC2 instances for the selected period and provides statistics for average and current RI coverage. You can hover over the chart to view the RI usage of a specific day. You can change the default view to display the EC2 and RDS RI top recommendation. To view the recommendations, click the View More button.
Note: Beam always uses the source currency provided in the billing data for assessing reserved instances and displays the data with the source currency. Beam does not use the target currency provided in the currency configuration page for assessing reserved instances.
Figure. Reserved Instances Widget
Spend Efficiency
Displays a line chart of the cost optimization efficiency for the selected period and provides statistics for potential savings, unused resources, and resizeable resources. To view the optimization details, click View Potential Savings .
Figure. Spend Efficiency Widget

You can also create a custom report to analyze the cost. To generate a custom report, click Build Custom Report at the bottom of the screen. For more information, see Creating Custom Reports (AWS).

Dashboard (Azure)

The dashboard for Azure provides a holistic overview of your overall Azure spend.

Azure dashboard enables you to do the following cost management tasks.

  • View monthly usage charges.
  • Compare current and previous month Azure cost.
  • Review cost projections (month and year).
To view the Azure dashboard, log on to the Beam console and select any of the connected Azure accounts.
Note: Beam provides a toggle to view spend data in your preferred currency. You can select the currency toggle at the top right to switch between target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations .
Table 1. Dashboard (Azure)
View Description
Spend Overview Provides the total cost of services and subscriptions for your Azure account. You can change the view of the graph by switching between Services and Subscription views. The graph displays the top four costing services and subscriptions, and the remaining gets grouped under Others . To view the details of the cost, click the View All button, which redirects you to the Analyze page.
Spend Analysis Provides the cost trends distributed over the period. You can view the spend analysis based on the Daily or Monthly filters. To view the details of the cost, click the View Cost Analysis button, which redirects you to the Analyze page.
Top Resource Groups Provides the cost associated with the resource groups in your Azure infrastructure. This widget displays the top four costing resource groups and groups the remaining cost under Others . Click the Monthly or Daily selector icons to see monthly or daily views. To view the details of your selection, click the View All button, which redirects to the Analyze page.
Spend Efficiency

The spend efficiency widget displays the spending efficiency as a percentage for the last seven days. The spend efficiency value helps you identify how well the Azure resources are being utilized.

The spend efficiency widget also shows the cost optimization opportunities as Potential Savings , Unused Resources , and Resizable Resources . To view the optimization details, click the View Potential Savings button, which redirects you to the Save page.

Dashboard (GCP)

The GCP dashboard provides a summarized and graphical view of your overall GCP spend across billing accounts and resources. The dashboard shows the current spend and projections for your GCP cloud resources.

To view the GCP dashboard, log on to the Beam console and select any of the connected GCP accounts.
Note: Beam provides a toggle to view spend data in your preferred currency . You can select the currency toggle at the top right to switch between target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations.
Figure. Dashboard - GCP
Table 1. Dashboard Views
View Description
Spend Overview - Projects Displays a pie chart of the spend breakup summary by GCP projects for the current month to date. To view details for your overall spend, click View Details , which redirects you to the Analyze page .
Spend Analysis Displays a bar chart of the historical spend analysis along with the summarized spend data based on the selected period.
  • Daily : Displays the average daily cost and cumulative spend data of the last seven days.
  • Monthly : Displays the estimated spend, actual spend, and variance. Variance is calculated based on the actual spend of the previous term and projected spend of the current term.
  • Quarterly : Displays the current year and projected yearly spend. Quarterly projections are not available when you select GCP Overview from the View Selector .
By default, this view displays the monthly projection for the spend analysis. You can change the default view to display daily or quarterly projections. To view the analytics details, click View Cost Analysis , which redirects you to the Analyze page .
Spend Overview - Services Displays a pie chart of the spend breakup summary by GCP services for the current month to date. To view details for your overall spend, click View Details , which redirects you to the Analyze page .
Spend Overview - Usage Type

Displays a pie chart of the spend breakup summary by GCP usage types for the current month to date. To view details for your overall spend, click View Details , which redirects you to the Analyze page.

GCP Compute Engine allows you to purchase committed use contracts in return for discounted prices for VM usage. You can purchase compute resources at a discounted price in return for committing to paying for those resources for one year or three years.
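As a rough illustration of that trade-off, the total cost of a commitment is the discounted hourly rate multiplied by the hours in the term. The numbers and the discount value below are hypothetical; actual GCP discounts vary by machine type, region, and term.

```python
def committed_use_cost(on_demand_hourly, discount, years):
    """Total cost of a committed use contract: the discounted hourly
    rate is paid for every hour of the 1- or 3-year term."""
    hours = years * 365 * 24
    return on_demand_hourly * (1 - discount) * hours

# Hypothetical: $0.10/hour on demand, 30% committed-use discount, 1-year term.
print(round(committed_use_cost(0.10, 0.30, 1), 2))  # 613.2
```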

Dashboard (Multicloud)

Dashboard provides a snapshot of your overall spend, spend analysis, and spend efficiency.

This section provides detail for the following views - All Clouds , Financial , and Scopes . Click View in the top-right corner to open the View Selector and select different views.

Dashboard Views

To view the multicloud dashboard, log on to the Beam console and select any of the multicloud views— All Clouds , Financial , and Scopes .
Note: The currency toggle is not available for multicloud views. You can view the spend data in the target currency on the multicloud dashboard. You can select the target currency in the currency configuration page. For more information on general guidelines and considerations, see Currency Configuration Considerations .
Spend Overview
Displays a pie graph of the cost breakup summary and spend trends across AWS, Azure, GCP, and Nutanix accounts and services within the business unit, cost center, or scope you select. By default, this view displays the spending summary of all the services within your business unit, cost center, or scope. To view a cost summary of all the linked accounts in your business unit or cost center, click the drop-down list at the top-right corner of the widget and select Accounts . To view details for your overall spend, click View Details , which redirects you to the Analyze page.
Figure. Spend Overview Widget
Spend Analysis
Displays a bar graph of the historical spend analysis for the selected period with an estimated and actual spend for the current month. By default, this view displays the monthly projection for the spend analysis across AWS, Azure, GCP, and Nutanix accounts and services. You can change the default view to display the daily projection. To view the analytics details, click View Cost Analysis , which redirects you to the Analyze page.
Figure. Spend Analysis Widget
Save
Displays a pie graph of the total savings realized by eliminating or optimizing AWS and Azure resources. To view the details of your total savings, click the View Potential Savings button, which redirects you to the Save page.
Figure. Save Widget
Cloud Spend Efficiency (All Clouds view)
The cloud spend efficiency widget displays the spending efficiency for your AWS and Azure clouds as a percentage for the last seven days. The spend efficiency value helps to identify how well cloud resources are getting utilized. You can hover over the line graph for the AWS and Azure clouds to view the total spend, optimized spend, and potential savings information.
Figure. Cloud Spend Efficiency Widget
Budget (Financial and Scopes views)
Displays a bar graph of the actual spend, budgeted spend, and estimated spend for each quarter. To view the details of your budget, click the View Details button, which redirects you to the Budget page.
Figure. Budget Widget

Analyze - Cost Analysis

Analyze allows you to drill down into your cloud spend and slice and dice all the cloud cost data. You can analyze each cloud in detail across accounts, services, purchase options, and other attributes. You can analyze data historically across days and past months.

Beam not only provides spend analytics on past data but also uses its built-in algorithms to report spend anomalies and forecast future spend patterns. This projection is available for the current month and the next six months across accounts or services.

To view the Analyze page, click the Hamburger icon in the top-left corner to display the main menu. Then, click Analyze . Click View in the top-right corner to open the View Selector Pop-up and select the Cloud , Financial , or Scope view you want. The view selected from Dashboard , Report , Save , or Purchase gets persisted in Analyze and defaults to the last selected view.

Figure. Analyze

The Analyze page displays the following:

  • Details according to the view you select.
    • All Clouds - shows detail for all the cloud accounts (AWS, Azure, GCP, and Nutanix) added in your Beam account.
    • Cloud Accounts ( AWS , Azure , and GCP ) - shows detail for the selected cloud account.
    • Financial - shows detail for the business unit or cost center you select.
    • Scopes - shows detail for the custom scopes you select.
  • Graphical and tabular view of spend data. The top portion of the Analyze page displays a graphical view of your data. The bottom portion of the Analyze page displays a tabular view of your spend data.
    Note:
    • By default, each view displays values in a line chart form. You can change the default view to a pie chart or bar graph.
    • The spend data for the last 2-3 days is faded out in the chart and table because the data in the billing files from the cloud provider is generally incomplete for those days.
    • Beam displays the date and time it last updated the spend data at the top-right corner of the Analyze view. Beam updates spend data every 6 hours. However, Beam skips the update if the data ingested from the cloud providers is unchanged since the last update.
  • Deep visibility into your historical, current, and projected spend based on the selected time period. You can use the drop-down menus on the tab to display the spend information by time period ( Day or Month ).
  • Customized view of your spend data according to the selected filter options. To create a filter, you can select the required options under Filters and click Apply . For more information about filters, see Filtering Options .
  • Schedule, share, and download customized reports under each view.
  • Cumulative spend analysis. To visualize the cumulative spend data, you can turn on the Cumulative toggle.
  • Anomaly detection. Beam analyzes the spending trend on your subscribed resources and reports cost spikes in your cloud accounts. For more information, see Anomaly Detection .
    Note: Beam supports anomaly detection for AWS and Azure cloud accounts only. Anomaly detection feature is not supported for GCP and Nutanix on-prem accounts.

How Spend Projection Works

Beam considers the last 150 days of cost data starting from T-3 days (where T is the current date) and applies ML algorithms to project the cost for the upcoming 6 months. The algorithms run daily and provide future cost projections.

If data is available for fewer than 150 days, Beam uses the available data to project the cost. If no cost data is available, Beam shows the projected spend as zero and starts displaying the projected spend as soon as data becomes available.
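The input-window rule above can be sketched as follows. Beam's actual ML model is not documented here, so a simple average stands in for the projection step; all names are hypothetical.

```python
from datetime import date, timedelta

def projection_window(today, daily_costs):
    """Select up to the last 150 days of cost data ending at T-3 days,
    where T is the current date (the documented input window)."""
    end = today - timedelta(days=3)
    start = end - timedelta(days=149)
    return sorted((d, c) for d, c in daily_costs.items() if start <= d <= end)

def project_daily_cost(window):
    """Toy stand-in for Beam's ML projection: average daily cost.
    With no cost data at all, the projected spend is shown as zero."""
    if not window:
        return 0.0
    return sum(c for _, c in window) / len(window)

today = date(2022, 7, 25)
# Only 57 days of history: fewer than 150 days, so all available data is used.
daily_costs = {today - timedelta(days=i): 10.0 for i in range(3, 60)}
window = projection_window(today, daily_costs)
print(len(window), project_daily_cost(window))  # 57 10.0
```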

Cost Analysis (AWS)

Analyze allows you to track cost consumption across all your cloud resources at both the aggregate and granular levels. You can drill down further into cost based on your services, accounts, or application workload.

Analyze Views

This view provides deep visibility into your projected and current spend. Analyze displays a line chart to project cost consumption that helps you view and analyze your cost and usage across your organization.

To view the AWS cost analysis, select Analyze from the main menu. Then, click View in the top-right corner to open the View Selector and select AWS .

Note:
  • Beam provides a toggle to view spend data in your preferred currency. You can select the currency toggle at the top right to switch between target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations .
  • You can schedule, share, or download reports under each view except for the Hourly time unit and custom time range. For more information, see Cost Reports.
  • You can create filters at the view level, and then apply filters to display the line chart according to the selected filter options. To create a filter, you can select the required options under Filters and click Apply . For more information about filters, see Filtering Options .

    Other subscriptions charges filter - Subscription charges apart from RI charges and Marketplace charges. For example, APN annual fees.

  • If you want to see the cost allocation tags in Beam, activate the cost allocation tags in AWS. You can use the cost allocation tags to track your AWS costs on a detailed level. To activate the cost allocation tags, see Activating the AWS-Generated Cost Allocation Tags.
Current Spend
The current spend view provides the current AWS cost and its breakdown based on the different usage and resource data.
Table 1. Current Spend
View Description
Overview Displays the total AWS spend (subscription charges and taxes).
Accounts Displays the total cost of the linked accounts in your AWS payer account.
Cost Centers Displays the total spend grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your AWS account.
Charge Types Displays the total spend grouped by the AWS line item type. For example, credit, fee, tax, usage and so on.
Services Displays the total spend grouped by services. By default, this view displays the cost breakup of the top five services that are used the most in your AWS account.
Regions Displays the total cost grouped by regions.
API Operation Displays the total cost of all the API operation requests on your AWS services.
Purchase Option Displays the breakup of Reserved, Unused Reservation, Savings Plan, Unused Savings Plan, Spot and On Demand Cost.
Tags Displays the total cost based on the service tag applied to your AWS resources. You can click the tag drop-down list to select a service tag.
Projected Spend

The projected spend view provides insight into the actual and projected cost for the selected period. You can select different time ranges from the Time range drop-down list in the Filters option.

Table 2. Projected Spend
View Description
Overview Displays a line chart of the projected and actual spend for the selected period.
Accounts Displays a line chart of the projected and actual spend for all the linked accounts in your AWS account.
Services Displays a line chart of the projected and actual spend of the top five services that are used the most in your AWS account.

You can track and set alerts for your AWS cloud budget. To set alerts, click Configure Budget Alerts at the top-right corner of the view. For more information, see Configuring Budget Alerts .

Compute

The Compute view provides the cost specific to all the compute resources configured on AWS. For example, EC2-Instance, EC2-NAT Gateway.

Table 3. Compute
View Description
Overview Displays a line chart of the total spend for all the compute resources for the selected period. You can hover over the stacked bar to view the total spend for a specific day.
Account Displays a line chart of the total cost for all the linked accounts in your AWS payer account.
Cost Centers Displays the total spend grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your AWS account.
Charge Types Displays the total spend grouped by the AWS line item type. For example, credit, fee, tax, usage and so on.
Sub Services Displays a line chart of the total cost for all the sub-services activated in your AWS account.
Instances Displays a line chart of the total cost for all the instances purchased.
Instance types Displays a line chart of the total cost according to your instance type.
Tags Displays a line chart of the total cost based on the service tag applied to your AWS resources. You can click the tag drop-down list to select a service tag.
Database
The Database view provides the cost specific to your database services configured on AWS. For example, RDS, PostgreSQL.
Table 4. Database
View Description
Overview Displays a line chart of the total spend for all your database services configured in your AWS account for the selected period. You can hover over the stacked bar to view the total spend for a specific day.
Accounts Displays a line chart of the total cost for all the linked accounts in your AWS payer account.
Cost Centers Displays the total spend grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your AWS account.
Charge Types Displays the total spend grouped by the AWS line item type. For example, credit, fee, tax, usage and so on.
Sub Services Displays a line chart of the total cost for all the sub-services activated in your AWS account.
Instances Displays a line chart of the total cost for all the instances purchased.
DB Engines Displays a line chart of the total cost for all the database engines.
Instance types Displays a line chart of the total cost according to the instance type purchased.
Tags Displays a line chart of the total cost based on the service tag applied to your AWS resources. You can click the tag drop-down list to select a service tag.
Storage
The storage view provides the cost specific to your storage resources running on AWS. For example, S3.
Table 5. Storage
View Description
Overview Displays a line chart of the total spend for all your storage resources running on AWS for the selected period. You can hover over the stacked bar to view the total spend for a specific day.
Accounts Displays a line chart of the total storage cost for all the linked accounts in your AWS payer account.
Cost Centers Displays the total spend grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your AWS account.
Charge Types Displays the total spend grouped by the AWS line item type. For example, credit, fee, tax, usage and so on.
Sub Services Displays a line chart of the total storage cost split for all the sub-services activated in your AWS account.
Buckets Displays a line chart of the total storage cost grouped by all the buckets created in your AWS accounts.
Tags Displays a line chart of the total cost based on the service tag applied to your AWS resources. You can click the tag drop-down list to select a service tag.
Data Transfer
The Data Transfer view provides the cost specific to all the data transfer across your AWS services. For example, Intra Region Transfer.
Table 6. Data Transfer
View Description
Overview Displays a line chart of the total spend incurred for all the data transfer across your AWS services for the selected period. You can hover over the stacked bar to view the total spend for a specific day.
Accounts Displays a line chart of the total cost for all the linked accounts in your AWS payer account that you are analyzing.
Services Displays the total cost for all the services activated in your AWS account.
Regions Displays a line chart of the total cost incurred in moving data according to the AWS regions.
Resources Displays a line chart of the data transfer cost grouped by the resources.
Tags Displays a line chart of the total cost based on the service tag applied to your AWS resources. You can click the tag drop-down list to select a service tag.

Cost Analysis (Azure)

Beam allows you to analyze your projected and current Azure cloud spend. Beam makes this planning process easy using machine intelligence and recommendation algorithms that analyze your workload patterns.

To view the Azure cost analysis, select Analyze from the main menu. Then, click View in the top-right corner to open the View Selector and select Azure .

Note:
  • Beam provides a toggle to view spend data in your preferred currency. You can select the currency toggle at the top right to switch between target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations .
  • You can schedule, share, or download reports under each view except for the custom time range. For more information, see Cost Reports.
  • You can create filters at the view level, and then apply filters to display the line chart according to the selected filter options. To create a filter, you can select the required options under Filters and click Apply . For more information about filters, see Filtering Options.

The Analyze page has the following tabs.

Current Spend
The current spend view provides the current Azure cost and its breakdown based on the different usage and resource data.
Table 1. Current Spend
View Description
Overview Actual and projected overall cost for the selected period.
Subscriptions Cost of subscriptions on your Azure account.
Cost Centers Total spend grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your Azure account.
Services Cost of Azure services based on the service category like data management, networking, and storage.
Service Types Cost based on the type of Azure service you consume.
Regions Cost based on the region in which the resource is hosted.
Azure Cost Centers Cost based on the configured Azure cost centers.
Department Cost based on the configured department.
Resource Groups Cost based on the configured resource group.
Tags Cost based on the tags applied on the Azure resources.
Projected Spend

The projected spend view provides insight into the actual and projected cost for the selected period. You can select different time ranges from the Time range drop-down list in the Filters option.

Table 2. Projected Spend
View Description
Overview The total cost of Azure billing accounts.
Subscriptions The actual and projected cost for the chosen Azure subscription for the selected period.
Services Cost of Azure services based on the service category like data management, networking, and storage.
Virtual Machine
The virtual machine view provides the Azure cost specific to the Virtual Machine (VM) resource type.
Table 3. Virtual Machine
View Description
Overview Monthly or daily total usage cost for the VMs running on Azure.
Sub Services Monthly or daily split cost for the VM usage based on the different flavors of the VM.
Cost Centers Monthly or daily cost grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your Azure account.
Regions Monthly or daily cost for the usage based on the regions in which the VMs are deployed.
Resource IDs Monthly or daily cost for the usage based on the resource IDs of the VMs.
Storage
The storage view provides the cost specific to your storage resources running on Azure. For example, Files, General Block Blob.
Table 4. Storage
View Description
Overview Monthly or daily total usage cost for the storage resource running on Azure.
Sub Services Monthly or daily split cost for the storage usage based on the different types of storage options configured.
Cost Centers Monthly or daily cost grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your Azure account.
Regions Monthly or daily cost for the usage based on the regions in which the storage resources are deployed.
Resource IDs Monthly or daily cost for the usage based on the resource IDs of the storage option.
Data Services
The Data Services option provides the cost specific to your database services configured on Azure.
Table 5. Data Services
View Description
Overview Monthly or daily total usage cost for the database service running on Azure.
Sub Services Monthly or daily split cost for the database service usage based on the different types of database services configured.
Cost Centers Monthly or daily cost grouped by Beam cost centers. This view includes the unallocated costs that are not tagged in your Azure account.
Regions Monthly or daily cost for the usage based on the regions in which the database service is configured.
Resource IDs Monthly or daily cost for the usage based on the resource IDs of the database service.

Cost Analysis (GCP)

Analyze allows you to drill down into your cloud spend and slice and dice all the cloud cost data. You can analyze the GCP cloud spend in detail across projects, services, usage types, regions, and other attributes. You can analyze data historically across days and past months.

Beam not only provides spend analytics on past data but also uses its built-in algorithms to forecast future spend patterns. This projection is available for the current month and the next six months across projects or services.
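Beam's forecasting algorithm is not documented here, but the general idea of projecting a month-end total from a recent daily run rate can be sketched as follows. This is an illustrative approximation only, and the function and parameter names are hypothetical; Beam's actual projection logic is certainly more sophisticated.

```python
from datetime import date
import calendar

def project_spend(daily_costs, today=None):
    """Project month-end spend from a recent daily run rate.

    daily_costs: list of the most recent daily cost values (floats).
    Returns (projected_current_month, [next_six_month_estimates]).
    Illustrative sketch only; not Beam's actual algorithm.
    """
    today = today or date.today()
    run_rate = sum(daily_costs) / len(daily_costs)  # average daily spend
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    spent_so_far = run_rate * today.day             # approximation
    projected_current = spent_so_far + run_rate * (days_in_month - today.day)
    # Flat-rate projection for each of the next six months.
    future = []
    month, year = today.month, today.year
    for _ in range(6):
        month += 1
        if month > 12:
            month, year = 1, year + 1
        future.append(run_rate * calendar.monthrange(year, month)[1])
    return projected_current, future
```

A flat run rate ignores seasonality and one-off purchases, which is why a real forecaster weighs the historical consumption pattern rather than a single average.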

To view the GCP cost analysis, select Analyze from the main menu. Then, click View in the top-right corner to open the View Selector and select GCP.

Analyze provides cost analysis capability for the GCP Billing account and the Project that you select from the View Selector. The view that you select from Dashboard and Report persists in Analyze and defaults to the last selected view.

Note:
  • Beam provides a toggle to view spend data in your preferred currency. You can use the currency toggle at the top right to switch between the target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations.
  • You can schedule, share, and download reports under each view. For more information, see Cost Reports.
  • You can create and apply filters to visualize the spend data according to the selected filter options. To create a filter, select the required options under Filters and click Apply. For more information about filters, see Filtering Options.

    The Folder list displays the information in the Project/Folder/Sub-folder format. For example, 952341568867/664245889765/897640334598.

Current Spend

The current spend view provides the current GCP cost and its breakdown based on the different usage and resource data.

Table 1. Current Spend
View Description
Overview

Displays the total GCP spend.

You can view the total spend for each day or month. Use the drop-down list in the top-right corner of the page to select Day or Month .

Projects Displays the total spend grouped by Projects associated with the GCP billing account.
Cost Centers Displays the total spend grouped by cost centers defined in Beam. This view includes the unallocated costs that are not tagged in your GCP billing account.
Charge Types Displays the total spend grouped by the GCP line item type.
Service Types Displays the total spend grouped by service type. For example: network, compute, and storage.
Services Displays the total spend grouped by services.
Usage Types Displays the total spend grouped by usage types. For example: OnDemand and Commit1Yr.
Regions

Displays the total spend grouped by regions.

Note: A region is a specific geographical location where you can host your resources. Regions have three or more zones. For example, the us-west1 region denotes a region on the west coast of the United States that has three zones: us-west1-a, us-west1-b, and us-west1-c.
Zones Displays the total spend grouped by zones.
Labels
Displays the total spend grouped by labels that you added for your GCP resources.
Note: You can use labels to group resources that are related or associated with each other. For example, you can label resources intended for production, staging, or development separately, so you can easily search for resources that belong to each development stage when necessary. You always add labels as key and value pairs.
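Because labels are always key and value pairs, grouping spend by one label key reduces to summing line-item costs per value of that key, with untagged items falling into an unallocated bucket (as the cost center views above describe). The sketch below is illustrative only; the line-item shape is hypothetical and not a Beam data structure.

```python
from collections import defaultdict

def spend_by_label(line_items, key):
    """Group cost line items by the value of one label key.

    line_items: iterable of dicts such as
      {"cost": 12.5, "labels": {"env": "production", "team": "web"}}
    Items missing the key are grouped under "unlabeled".
    Hypothetical shapes; illustrative only.
    """
    totals = defaultdict(float)
    for item in line_items:
        value = item.get("labels", {}).get(key, "unlabeled")
        totals[value] += item["cost"]
    return dict(totals)
```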
Projected Spend
The projected spend view provides insight into the actual and projected cost for the selected period. You can select different time ranges from the Time range drop-down list in the Filters option.
Table 2. Projected Spend
View Description
Overview Displays the projected and actual spend for the selected period.
Projects

Displays the projected and actual spend for all the projects in your GCP billing account.

Services Displays the projected and actual spend for all the services in your GCP billing account or project.
Compute
The Compute view provides the cost specific to all the compute resources configured on GCP. For example, Compute Engine, Cloud Dataflow, and so on.
Table 3. Compute
View Description
Overview Displays the total spend for all the compute resources for the selected period. You can hover over the graph to view the total spend for a specific day.
Projects Displays the total compute spend for all the linked projects in your GCP billing account.
Cost Centers Displays the total spend for all the compute resources grouped by cost centers.
Charge Types Displays the total spend for all the compute resources grouped by the GCP line item type.
Services Displays the total compute spend for all the services activated in the projects linked to the selected GCP billing account.
Usage Types Displays the total compute spend based on the usage types (OnDemand, Committed, and so on).
Regions Displays the total compute spend incurred across the GCP regions.
Zones Displays the total compute spend incurred across the GCP zones.
Labels Displays the total compute spend based on the labels applied to your GCP resources. You can click the Labels drop-down list to select a label.
Storage
The storage view provides the cost specific to your storage resources running on GCP. For example, Cloud Storage.
Table 4. Storage
View Description
Overview Displays the total spend for all your storage resources running on GCP for the selected period. You can hover over the graph to view the total spend for a specific day.
Projects Displays the total storage spend for all the linked projects in your GCP billing account.
Cost Centers Displays the total storage spend for all the resources grouped by cost centers.
Charge Types Displays the total storage spend for all the resources grouped by the GCP line item type.
Services Displays the total storage spend split for all the services activated in the projects linked to the selected GCP billing account.
Usage Types Displays the total storage spend based on the usage types (OnDemand, Committed, and so on).
Regions Displays the total storage spend incurred across the GCP regions.
Zones Displays the total storage spend incurred across the GCP zones.
Labels Displays the total storage spend based on the labels applied to your GCP storage resources. You can click the Labels drop-down list to select a label.
Network
The network view provides the cost specific to your networking resources running on GCP. For example, Cloud DNS, Virtual Private Network (VPN), and so on.
Table 5. Network
View Description
Overview Displays the total spend for all your network resources running on GCP for the selected period. You can hover over the graph to view the total spend for a specific day.
Projects Displays the total network spend for all the linked projects in your GCP billing account.
Cost Centers Displays the total network spend for all the network resources grouped by cost centers.
Charge Types Displays the total network spend for all the network resources grouped by the GCP line item type.
Services Displays the total network spend split for all the services activated in the projects linked to the selected GCP billing account.
Usage types Displays the total network spend based on the usage types (OnDemand, Committed, and so on).
Regions Displays the total network spend incurred according to the GCP regions.
Zones Displays the total network spend incurred according to the GCP zones.
Labels Displays the total network spend based on the labels applied to your GCP network resources. You can click the Labels drop-down list to select a label.

Cost Analytics (Multicloud)

The Analyze page allows you to track cost consumption across all your cloud resources (AWS, Azure, GCP, and Nutanix) at both the aggregate and granular level. You can drill down cost further based on the accounts, services, locations, or tags within your business unit or cost center.

Analyze

This view provides deep visibility into your projected and current spend. Analyze displays a line chart of projected cost consumption that helps you view and analyze your cost and usage across all clouds, business units, cost centers, or scopes you select.

To view the Analyze page, click the Hamburger icon in the top-left corner to display the main menu. Then, click Analyze . Click View in the top-right corner to open the View Selector pop-up and select All Clouds .

Note: The currency toggle is not available for multicloud views. You can view the spend data in the target currency on the multicloud dashboard. You can select the target currency on the currency configuration page. For more information on general guidelines and considerations, see Currency Configuration Considerations.
Current Spend
The current spend view provides the current costs across all configured clouds (AWS, Azure, GCP, and Nutanix) and its breakdown based on the different usage and resource data.
Table 1. Current Spend
View Description
Overview Displays the total spend across all configured clouds.
Accounts Displays the total cost of the linked accounts in your AWS payer account, the subscriptions of your Azure account, and the projects associated with your GCP billing account.
Cost Centers Displays the total spend grouped by cost centers across all configured clouds.
Services Displays the total spend grouped by services.
Regions Displays the total cost grouped by regions.
Tags Displays the total cost based on the service tag applied to your resources. You can click the tag drop-down list to select a service tag.
Projected Spend

The projected spend view provides insight into the actual and projected cost for the selected period. You can select different time ranges from the Time range drop-down list in the Filters option.

Filtering Options

The Analyze page allows you to track cloud spend across resources through various views—Current Spend, Projected Spend, Compute, and more. Clicking a view provides various group-by selections to analyze costs in aggregate across accounts, services, regions, cost centers, and other attributes. Each view also includes drill-down options to analyze granular information for that view.

In a view, you can drill down into the cloud spend by clicking Filters. The Filters pane includes a set of fields that vary according to the cloud type. Select the desired resources to drill down into your cloud spend and analyze only the resources of interest. You can then review the cloud spend analytics or download reports specifically for these resources.

Figure. Analyze - Filters

Time Period Type

You can select between Usage or Invoice to view the cloud spend analysis.
  • Usage. Beam uses the actual usage and cost data incurred during the selected date range. Beam uses Usage to analyze spend data by default.
  • Invoice. Beam uses the charges on invoices issued for the selected months. When you select Invoice as the time period type, Beam uses Month as the time unit and disables the Day or Month selection.
Note:
  • For the All Clouds , Financial , or Scopes view, you can select the Time Period Type filter only after you select the specific cloud—AWS, Azure, or GCP.
  • For AWS, the time period type depends on the AWS cost configuration.
    • Absolute Cost: The time period selection is configurable. You can select either Usage or Invoice.
    • Amortized Cost: The time period selection is Usage by default and the Invoice option is disabled.
  • The time period selection is not available for the following views.
    • Data Transfer for AWS
    • Projected Spend for all clouds
  • You can apply either the Include or Exclude option to the resource selections in the filters.
  • You can do a keyword search to select the relevant resource from the drop-down list.
  • Your selections in filters are available across all group-by selections in that view.
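The Include and Exclude options mentioned above amount to two complementary set-membership checks over the selected resources. A minimal sketch, assuming hypothetical line-item shapes (this is not Beam's filtering implementation):

```python
def apply_filter(items, field, values, mode="include"):
    """Keep or drop cost line items by a field's value.

    mode "include" keeps items whose field value is in values;
    mode "exclude" keeps everything else.
    Hypothetical shapes; illustrative only.
    """
    selected = set(values)
    if mode == "include":
        return [i for i in items if i.get(field) in selected]
    return [i for i in items if i.get(field) not in selected]
```

The same predicate applies regardless of which field (account, service, region, or tag) the filter targets, which is why one filter selection can persist across all group-by views.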

Anomaly Detection

The anomaly detection feature in Beam is an intuitive cost governance mechanism for public clouds (AWS and Azure).

Anomaly detection identifies and reports cost spikes in your cloud account by analyzing the spending trend on your subscribed resources.

You can use anomaly detection to discover cost anomalies and the resources causing them, and then take corrective actions, such as optimizing or shutting down resources that are not legitimate. Anomaly detection is useful in dynamic environments, where static budget thresholds are less effective because they do not account for historical consumption patterns.

The cost anomalies are displayed in the timeline view of the Analyze page. Also, you can view the list of anomalies (if you have access to the AWS payer account or Azure billing account) as a separate widget in the Dashboard .
Note: The AWS cost change reported in spend anomalies does not follow the cost configuration and is based on unblended and absolute cost.
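Since an anomaly is surfaced as a percentage increase in cost from the previous day, the core check can be sketched as a day-over-day comparison. The code below is an illustrative simplification with hypothetical names; Beam's detector also analyzes the longer-term spending trend rather than just the previous day.

```python
def detect_spikes(daily_costs, threshold_pct=50.0):
    """Flag days whose cost jumped sharply from the previous day.

    daily_costs: list of (date_string, cost) tuples in date order.
    Returns (date, pct_increase) for each day whose cost exceeds the
    previous day's cost by more than threshold_pct percent.
    Illustrative only; not Beam's actual detection algorithm.
    """
    anomalies = []
    for (_, prev), (day, cost) in zip(daily_costs, daily_costs[1:]):
        if prev > 0:
            pct = (cost - prev) / prev * 100
            if pct > threshold_pct:
                anomalies.append((day, round(pct, 1)))
    return anomalies
```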

Viewing Cost Anomalies

Beam allows you to view anomalies detected in your AWS and Azure cost.

About this task

Anomaly detection identifies and reports cost spikes in your cloud account by analyzing the spending trend on your subscribed resources.

To view cost anomalies detected by Beam, do the following.

Procedure

  1. In the Beam console, click the View option in the top-right corner to open the View Selector pop-up and select the AWS or Azure account.
  2. Do one of the following.
    • Go to the Analyze > Current Spend tab.

    • Go to the Dashboard and view the Spend Overview widget.

    Cost anomalies detected on your spending pattern are highlighted with an arrow sign pointing to the date on which the spike is detected. You can hover over the arrowhead icon to see the percentage increase in cost from the previous day.

    Figure. Viewing Cost Anomaly

  3. Click the arrowhead hyperlink to go to the Anomaly Details dialog box to see the cloud accounts and resources that contributed to the spike. Anomaly Details shows anomaly details like date, resource name, and percentage increase in the cost from the previous date.
    Note:
    • Beam requires the complete cost data for a particular date to detect an anomaly. It may take up to 48 hours to receive the complete cost data from AWS and Azure.
    • An anomaly detected at the linked account or subscription level may not be bubbled up to the payer account or billing account level, respectively. This is because an anomaly detected at the linked account or subscription level may be insignificant compared to anomalies at the payer account or billing account level.

Cost Reports

Beam generates reports to track cost consumption across all your cloud accounts at both aggregate and granular levels, like a functional unit, workloads, and applications.

The reports are generated automatically and are available to view, download, or share from the Beam console. The daily, weekly, and monthly reports are also sent to the registered Beam administrator email address automatically. To customize your email notification preferences, select Profile from the user menu in the top right corner. Then, select Email Preferences in the Preferences section.

Note:
  • For Custom reports, the time period type is configurable. See Custom Reports.
  • For System reports, the time period type is usage by default.
  • The System reports are in the target currency that is configured at the time of report generation. However, the following system reports are in the source currency.
    • Daily reports: New EC2 RI Recommendation report and EC2 RI Utilization Report
    • Monthly reports: Expired RI Report , Expiring RI Summary Report , and On Demand vs Reserved Hours Cost
  • The currency in the Custom reports is based on the configuration in the create or edit page. The reports are in either source currency or target currency as configured while creating the reports.
Table 1. Cost Reports
Report Description Clouds Supported
System Reports
Cost Report by Account Contains month-to-date detailed spend information based on accounts, regions, and services. The latest report is generated by the end of each day, according to your timezone. AWS, Azure, and GCP
New EC2 RI Recommendation Report Contains instance-wise details such as region, platform, type, and reservation coverage. The latest report is generated by the end of the day, according to your timezone. AWS
EC2 RI Utilization Report Contains EC2 reservation planning information based on the instance type, region, platform with a complete aggregated account view. The latest report is generated by the end of the day, according to your timezone. AWS
Cost Optimization Detailed Report Contains detailed information about unused resources and rightsizing opportunities for the infrastructure of the selected account. The latest report is generated by the end of the day, according to your timezone. AWS, Azure
Tag Based EC2 Instance Type-wise cost Report Contains tag-wise cost separation, which will help you monitor spend based on resource tags. It also contains a spending breakdown based on the instance type. The latest report is generated by the end of the day, according to your timezone. AWS
Bandwidth Summary Report Contains transfer quantity in GB segregated by transfer type, account, service, resource, and region. The latest report is generated by the end of the day, according to your timezone. AWS
EC2 Detailed Insight Report Contains resource-wise cost optimization details like incurred cost, rightsizing, RI, recommended action, and other information. The latest report is generated by the end of the day, according to your timezone every day. AWS
Weekly Reports
Cost Report by Account Contains detailed spend information based on accounts, regions, and services for the last week. The latest report is generated by Tuesday of every week. AWS, Azure
Spend Report by Tags Contains spend information for the last week based on various resource tags for the selected account. The latest report is generated by Tuesday of every week. AWS
Cost Comparison Report Contains cost comparison data according to accounts, regions, and services for the past two weeks. The latest report is generated by Tuesday of every week. AWS
Monthly
Cost Report by Account Contains detailed spend information based on accounts, regions, and services for the last month. The latest report is generated by the 7th of every month. AWS, Azure, and Nutanix
Spend Report by Tags Contains spend information of last month based on various resource tags for the selected account. The latest report is generated by the 7th of every month. AWS
EC2 Spend Report Contains spend information based on the EC2 instance type, platform, accounts, regions, and purchase option for the last month. The latest report is generated by the 7th of every month. AWS
Cost Comparison Report Contains cost comparison data as per accounts, regions, and services for the last two months. The latest report is generated by the 7th of every month. AWS
Expired RI Report Contains information regarding the RIs that expired in the last 90 days. The latest report is generated by the 7th of every month. AWS
Expiring RI Summary Report Contains information about RIs expiring in less than 30 days, between 31 to 60 days, and between 61 to 90 days as well as their renewal cost. The latest report is generated by the 7th of every month. AWS
On Demand vs Reserved Hours Cost Contains a comparison between On-Demand and Reserved hours based on the instance type. The comparison also includes savings incurred on reserved hours for the last month. The latest report is generated by the 7th of every month. AWS

Guidelines for the Share, Schedule, and Download report options:

  • The Share and Download options are available for all the views: Global view, Scopes, Business Unit, Cost Center, AWS (Overview, Payer, and Linked accounts), Azure (Billing Accounts and Subscription), and GCP (Overview and Billing accounts).
  • The Schedule option is available at the individual cloud level for AWS, Azure, and GCP (Payer and Linked accounts, Billing Accounts and Subscription).

Scheduling Reports

Beam application generates reports to track cost consumption across all your cloud accounts at both aggregate and granular levels, like a functional unit, workloads, and applications.

About this task

The application allows you to schedule reports at a desired time.

To schedule reports, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
    The System tab appears.
  2. Click View in the top-right corner to open the View Selector, and select the cloud account or Scope for which you want to schedule reports.
  3. Hover over any of the Daily , Weekly , or Monthly generated reports and click the schedule icon.
  4. Enter the Schedule Report Sharing details and click Schedule .
    All the scheduled reports are available under Reports > Scheduled Reports. You can edit, disable, or delete any scheduled report.

Downloading Reports

Beam application generates reports to track cost consumption across all your cloud accounts at both aggregate and granular levels, like a functional unit, workloads, and applications.

About this task

The application allows you to download selected reports for offline consumption.

To download reports, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
    The System tab appears.
  2. Click View in the top-right corner to open the View Selector, and select the cloud account or Scope for which you want to download reports.
  3. Hover over any of the Daily , Weekly , or Monthly generated report and click the download icon.

Sharing Reports

Beam generates reports to track cost consumption across all your cloud accounts at both aggregate and granular levels, like a functional unit, workloads, and applications.

About this task

Beam allows you to share selected reports with stakeholders over email.

To share reports, do the following:

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
    The System tab appears.
  2. Click View in the top-right corner to open the View Selector, and select the cloud account or Scope for which you want to share reports.
  3. Hover over any of the Daily , Weekly , or Monthly generated reports and click the share icon.
  4. Enter the Recipients , Report Name , and Message .
  5. Click Share to complete.

Custom Reports

Custom reports help in strategic decision making and planning in the cloud. You can create a customized report by aggregating and filtering using various native cloud dimensions and attributes to get insights on cloud usage and cost. The Custom Reports page displays a list of all the custom reports created.

You can create a new report by clicking Add Custom Report at the top-right corner of the Reports > Custom tab. You can view, edit, schedule, share, download (CSV), or delete a previously created report by selecting the options next to the report name. To find a previously created report, start typing the report name in the search bar and select from the auto-populated options.

For more information, see Creating Custom Reports (AWS) and Creating Custom Reports (Azure).

Note:
  • A custom report can be created for AWS payer and linked accounts, and Azure billing accounts and subscriptions.
  • To create a custom report, the user must have Write access to the specified AWS payer and linked accounts, or Azure billing accounts and subscriptions.
  • The saved reports are always available on the Beam console. For every custom report, a maximum of 100 entries is displayed. To get the complete list of entries, or for offline consumption of the custom reports, download the custom report using the download icon.

Creating Custom Reports (AWS)

You can create a customized report to analyze the cost and usage across your AWS Accounts, Services, Region, and so on based on the defined range and filters.

About this task

To create a custom report, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu and then, click Reports .
  2. Click the Custom tab.
  3. Click View in the top-right corner to open the View Selector, and select the cloud account or Scope for which you want to create a custom report.
  4. Click Add Custom Report in the top-right corner.
    Figure. Custom Report Details

  5. In Custom Report Name , enter a name for the report.
  6. In the Dimensions list, select a dimension for the required report output. Click Add Dimension if you want to select multiple dimensions.
    Note: A report must have at least one dimension. You can select up to ten dimensions.
  7. In Time area, you can select from the following options:
    1. In the Period Type list, click the drop-down list to select either Usage or Invoice type.
    2. In the Unit list, click the drop-down list to select either Day or Month to specify the time unit that you want to generate a report for.
      Note: The Invoice period type is only available for the Month time unit.
    3. In the Range list, click the drop-down list to select either date or month range based on the time unit selected. You can generate a custom report for the last 3 months.
  8. In Metrics area, you configure the following:
    1. Cost - After you select the Cost checkbox, the following report cost presentation options appear.
      • A currency toggle. Select the currency with which you want to generate the custom reports. For more information, see Currency Configuration .
        Note: If the target currency is the same as the source currency, the toggle is not available.
      • Amortized Unblended
      • Absolute Blended
      • Absolute Unblended
      • Include Credit and Refunds
      • Include EDP Discounts
      Note:
      • The default selection is based on your AWS cost-logic configuration. For more information, see AWS Cost Configuration .
      • The Amortized Unblended option is only available when you select Usage as Period Type .
    2. Usage - Specify the usage quantity. By default, the Any filter is applied. You can use the drop-down list to apply the filter according to your requirement. The available options are Any , Greater Than Equal To , Less Than Equal To , and Range . The usage quantity unit corresponds to the selected service. For example, the usage unit is hours for EC2, bytes for S3, and so on.
    Note: Ensure that you select at least one of the following metrics - Amortized Unblended , Absolute Blended , Absolute Unblended , and Usage .
  9. To add more filters to your report, click Filters and do the following.
    1. In the Accounts list, select the accounts for which you want to generate a custom report.
    2. In the Service list, select the services from the available list.
    3. In Sub Service , select the sub-services you want.
    4. In Availability Zone list, select the availability zones you want.
    5. In API Operation list, select the API operations you want to include in the custom report.
    6. In Purchase Options list, select the purchase options you want.
    7. In Tag Key and Tag Value lists, select a key and value pair. Click Add another Tag Key Value Pair to add multiple keys and values.
  10. Click Generate.
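Taken together, the steps above define a report specification with a few hard constraints: one to ten dimensions, the Invoice period type only with the Month unit, and at least one cost or usage metric. A minimal validation sketch, with hypothetical field names that are not Beam's API:

```python
def validate_report_spec(spec):
    """Check a custom-report specification against the documented rules.

    spec: dict with hypothetical keys "name", "dimensions" (list),
    "period_type" ("Usage" or "Invoice"), "unit" ("Day" or "Month"),
    and "metrics" (list of selected cost/usage metrics).
    Returns a list of constraint violations (empty if valid).
    """
    errors = []
    if not 1 <= len(spec.get("dimensions", [])) <= 10:
        errors.append("a report needs between 1 and 10 dimensions")
    if spec.get("period_type") == "Invoice" and spec.get("unit") != "Month":
        errors.append("Invoice period type requires the Month time unit")
    if not spec.get("metrics"):
        errors.append("select at least one cost or usage metric")
    return errors
```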

Creating Custom Reports (Azure)

You can create a customized report to analyze the cost and usage across your Azure Subscriptions, Products, Region, and so on based on the defined range and filters.

About this task

To create a custom Azure report, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports .
  2. Click the Custom tab.
  3. Click View in the top-right corner to open the View Selector pop-up, and select the cloud account or Scope for which you want to create a custom report.
  4. Click Add Custom Report in the top-right corner.
    Figure. Custom Report - Create
  5. In Custom Report Name , enter a name for the report.
  6. In Dimensions , select the dimensions for the desired report output. Click Add Dimension if you want to select multiple dimensions.
    Note: A report must have at least one dimension. You can select up to 10 dimensions.
  7. In Time area, you can select from the following options:
    1. In the Period Type list, click the drop-down list to select either Usage or Invoice type.
    2. In the Unit list, click the drop-down list to select either Day or Month to specify the time unit that you want to generate a report for.
      Note: The Invoice period type is only available for the Month time unit.
    3. In the Range list, click the drop-down list to select either date or month range based on the time unit selected. You can generate a custom report for the last 3 months.
  8. In Metrics area, do the following to configure Cost .
    1. In the currency toggle, select the currency with which you want to generate the custom reports. For more information, see Currency Configuration.
    2. In the cost filter drop-down list, select the cost range by using one of the following logical operators.
      • Any . By default, the Any filter is applied.
      • Greater Than Equal To
      • Less Than Equal To
      • Range
  9. To add more filters to your report, click Filters and do the following:
    1. In the Subscriptions list, select the subscriptions for which you want to generate a custom report.
    2. In the Service Categories list, select the categories from the available list.
    3. In the Regions list, select the regions you want.
    4. In the Products list, select the products you want to include in your custom report.
    5. In the Cost Centers list, select the cost centers you want.
    6. In the Resource Groups list, select the resource groups from the available list.
    7. In the Departments list, select the departments you want.
    8. In Tag Key and Tag Value lists, select a key and value pair. Click Add another Tag Key Value Pair to add multiple keys and values.
  10. Click Generate. A preview of the report is displayed. You can click Save to save the report, or click Edit to change the report parameters.

Report (Multicloud)

Beam generates reports to track cost consumption across all your cloud accounts at both aggregate and granular levels like a functional unit, workloads, and applications.

The reports are generated automatically and are available to view, download, or share from the Beam console. Beam sends the daily and yearly reports to the registered Beam administrator email address automatically.
Note: You can view daily reports only at the business unit and cost center levels.

To view the reports, click the Hamburger icon in the top-left corner to display the main menu. Then, click Reports . You can use the View Selector to select the business unit, cost center, or scope.

You can click the drop-down menu at the top-right corner to select the Daily Reports or Yearly Reports .

Table 1. Multicloud Cost Reports
Report Description
Cost Summary Reports Contains month-to-date detailed spend information based on accounts, regions, and services. Beam generates the latest report by the end of each day, according to your time zone.
Cost Optimization Detailed Reports Contains detailed information about unused resources and rightsizing opportunities for the infrastructure of the selected business unit or cost center. Beam generates the latest report by the end of each day, according to your time zone.
Historical Budget Reports Contains past financial year budget reports along with the quarterly summary data and service-wise cost breakdown for each quarter. Click View All to view the list of reports for the past financial years.

Chargeback

The Chargeback feature enables financial control of your cloud spend by providing the ability to allocate cloud resources to departments based on definitions. Chargeback also provides a consolidated financial view across all your cloud accounts in a single pane.

Note:
  • Chargeback is built on the business unit and cost center configuration construct that you define for the resources across all your cloud accounts.
  • You cannot build chargeback on a custom scope because a scope is defined using accounts and resources, and the same resource can be added to multiple scopes.
  • For AWS, the time period type selection depends on the cost configuration.
    • If Absolute Cost type is selected, the time period type is invoice-based.
    • If Amortized Cost type is selected, the time period type is usage-based.
  • For Azure and GCP, the selection of time period type is invoice-based.
  • You can view the spend data in the configured target currency. For more information on general guidelines and considerations, see Currency Configuration Considerations .

Business Units

A business unit is defined as a collection of cost centers. You can use business units to define hierarchies between the departments in your organization. You do not need to define a business unit to view chargeback; you can also define chargeback based on cost centers alone.

Cost Center

A cost center is a collection of resources within a single or multiple cloud accounts (AWS, Azure, and GCP). You can assign the resources to the cost center based on tags. You can either allocate a complete account or resources within an account to a cost center.

Note:
  • If a cloud account is assigned to a cost center with the tag definition as All Tags , then you cannot share this account with another cost center.
  • An account or tag that is used in one cost center definition cannot be reused in another. This prevents double-counting of resource costs.
  • The resources that do not belong to any of the cost centers are grouped under Unallocated Resources . You can manually allocate any unallocated AWS and Azure resources into a cost center. Beam does not support manual allocation of unallocated GCP resources.

Unallocated Resources

Unallocated resources are the resources that do not belong to any of the cost centers based on definitions.

Unallocated resources include the following.

  • Accounts not allocated to any cost center (includes all the resources within the account)
  • Resources within an account not allocated to any cost center

The following image describes an example of a multicloud configuration. The cost center consists of Nutanix, AWS, Azure, and GCP resources.

Figure. Multicloud Configuration

Adding a Business Unit

Before you begin

You can create a business unit only if you have an Admin role in Beam.

About this task

You can define a business unit by combining a group of cost centers. Chargeback is built on the business unit and cost center configuration construct that you define for the resources across all your cloud accounts. You can select the owners and viewers for the business unit. Both owners and viewers have read-only access to the business unit.

To add a business unit, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
  2. In the Business Unit Configuration page, click the Create list and select Business Unit .
    The Create Business Unit page appears.
  3. In the Name box, enter a name for the business unit.
  4. In the Owners list, click to select owners for the business unit you are creating.
    Note: The business unit owner is financially accountable for the business unit. You can select multiple owners for the business unit.
  5. In the Viewers list, click to select viewers for the business unit you are creating.
  6. In the Cost Centers list, select the cost centers that you want to map to your business unit.
  7. Click Save Business Unit to complete.
    The business unit you just created appears in the Business Unit Configuration page. You can use the business unit to build chargeback.

Editing a Business Unit

You can edit (or delete) an existing business unit.

Before you begin

You can edit or delete a business unit only if you have an Admin role in Beam.

About this task

To edit a business unit, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
    You can view the list of business units in the Business Unit configuration page. You can use the drop-down to filter the list by the business unit.
  2. Click Edit against the business unit that you want to edit.
    The Edit Business Unit page appears.
  3. Click Save Business Unit after you edit the fields according to your requirement.

Adding a Cost Center

Before you begin

You can create a cost center only if you have an Admin role in Beam.

About this task

A cost center is a department to which you can allocate cloud accounts and resources based on the definition. You can define a cost center by selecting resources by accounts and tags across different clouds. You can select the owners and viewers for the cost center. Both owners and viewers have read-only access to the cost center.

To add a cost center, do the following.

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
  2. In the Business Unit configuration page, click the Create list and select Cost Center .
    The Create Cost Center page appears.
  3. In the Name box, enter a name for the cost center.
  4. In the Owners list, click to select owners for the cost center you are creating.
    Note: The cost center owner is financially accountable for the cost center. You can select multiple owners for the cost center.
  5. In the Viewers list, click to select viewers for the cost center you are creating.
  6. Click Define Cost Center to open the Define Cost Center page.
    You define the cost center by selecting the accounts and tags across different clouds.
  7. In the Define Cost Center page, do the following.
    1. In the Cloud list, select the cloud type ( AWS , Azure , or GCP ).
    2. In the Parent Account list, select the parent account.
    3. In the Sub Accounts list, select the sub accounts. You can select multiple sub accounts.
    4. In the Tag Pair area, select the key and value pairs to further refine the definition of your cost center. You can click the plus icon to add more key and value pairs.
    5. Click Save Filter to save the filter. You can click Add Filter to add more filters.
    6. Click Save Definition to save your cost center definition and close the Define Cost Center page.
  8. Click Save Cost Center to complete.
    Beam takes around 24 to 48 hours to display the cost data of the newly added cost center.
    Note: If cost allocation tags for an AWS account are not visible in Beam, activate the cost allocation tags in AWS. To activate the cost allocation tags, see Activating the AWS-Generated Cost Allocation Tags.

Editing a Cost Center

Beam allows you to edit (or delete) an existing cost center.

Before you begin

You can edit a cost center only if you have an Admin role in Beam.

About this task

To edit a configured cost center, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Chargeback .
    You can view the list of cost centers in the Business Unit configuration page. You can use the drop-down list in the top-right corner to filter the list by the cost center.
  2. Click Edit against the cost center that you want to edit.
    The Edit Cost Center page appears.
  3. Click Save Cost Center after you edit the fields according to your requirement.
    Beam takes around 24 to 48 hours to display the cost data of the edited cost center.

Allocating Unallocated Resources To Cost Center

Before you begin

You can allocate an unallocated resource only if you have an Admin role in Beam.

About this task

Unallocated resources are the resources that do not belong to any of the cost centers based on definitions. Beam allows you to allocate an unallocated AWS or Azure resource to a cost center.
Note: Beam does not support manual allocation of unallocated GCP resources.

To allocate an unallocated resource for chargeback, do the following.

Procedure

  1. In the Beam console, select Finance from the View Selector pop-up and go to the Chargeback page using the Hamburger icon in the top-left corner.
  2. In the top-right corner of the page, click the Unallocated button.
  3. In the Unallocated Cost table, click the expand icon against the account or subscription to view the list of services.
  4. Click View Details against the service item to view the list of resources.
  5. Click the Allocate button against the resource item that you want to allocate to a cost center.
    The Select a cost center pop-up window appears.
  6. In the Cost Center Name list, select the cost center for the resource.
  7. In the Percentage Split box, enter the resource cost percentage you want to allocate to the cost center you selected.
    If you want to split the cost of the resource between two or more cost centers, click Add a split to specify the cost centers and the percentage of cost split between the cost centers.
  8. Click Allocate Resource to complete.
    Beam takes up to 24 hours to display the cost changes after you allocate an unallocated resource.
    The resource you just allocated appears in the Unallocated Cost table with the status Allocating .
    Figure. Allocating Unallocated Resource
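
The percentage split in the procedure above is simple arithmetic: the splits across cost centers must total 100 percent of the resource cost. The following is a hypothetical sketch, not part of Beam's API; the function and cost-center names are illustrative only.

```python
# Hypothetical sketch: splitting one resource's monthly cost across
# cost centers by percentage, as entered in the Percentage Split box.
def split_cost(monthly_cost, splits):
    """splits maps cost-center name -> percentage; percentages must total 100."""
    if sum(splits.values()) != 100:
        raise ValueError("Percentage splits must add up to 100")
    return {cc: monthly_cost * pct / 100 for cc, pct in splits.items()}

# Example: split a $300 resource 70/30 between two cost centers.
allocation = split_cost(300.0, {"Engineering": 70, "QA": 30})
print(allocation)  # {'Engineering': 210.0, 'QA': 90.0}
```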

Chargeback Views

Chargeback View (Administrator)

An administrator can do the following.

  • Assign unallocated resources to a cost center
  • Create a business unit and cost center

You can select the Allocated and Unallocated options in the top-right corner of the page to view the cost details for allocated and unallocated resources.

In the Unallocated Cost table, you can use the Allocate button to allocate resources to a cost center. To allocate the unallocated resources to cost centers, see Allocating Unallocated Resources To Cost Center.

In the Allocated Cost table, an administrator can browse through all the business units and cost centers (created by the administrator) to view detailed information.

Figure. Chargeback - Global View

Table 1. Chargeback Views (Administrator)
View Description
Spend Overview Displays a pie graph of the cost breakup summary for the allocated and unallocated resources.
Spend Analysis - Unallocated Cost

Displays a bar graph of the historical spend analysis for the unallocated resources with an actual and projected spend for the last three months.

Spend Analysis - Allocated Cost Displays a bar graph of the historical spend analysis for the allocated resources with an actual and projected spend for the last three months.
Top Spend Displays the services and accounts that are consumed the most in your business unit or cost center. You can use the drop-down list in the right corner to select Top services or Top accounts .
Unallocated Cost Displays detailed information for the unallocated resources that include the following.
  • Cloud type ( AWS , Azure , GCP , or Nutanix )
  • Name of the account, subscription, or cluster
  • Account, subscription, or cluster ID
  • Total cost incurred for the account, subscription, or cluster.

You can also allocate unallocated resources to a cost center. For more information, see Allocating Unallocated Resources To Cost Center.

Allocated Cost Displays detailed information for the allocated resources that include the following.
  • Business unit or cost center
  • Owner of the business unit and cost center
  • Total cost of the allocated resources
  • Definition of the business unit or cost center. For example, a business unit constituted of four cost centers.

In the Actions column, you can click the Edit and Delete buttons to edit the resource details or delete the resource.

You can use the drop-down list in the top-right corner of the Allocated Cost table to filter the resources by business unit, cost center, or business unit and cost center.

You can also click the share and download icons in the top-right corner to share or download the resource details.

Chargeback View (Owners and Viewers)

The owners and viewers can view the business units and cost centers for which they have access.

You can use the View Selector (public cloud) and View Selector (Nutanix) to select the business unit or cost center.

Note: Owners and viewers have view-only access to business units and cost centers unless they are also administrators. The owner designation identifies who is financially accountable for the business unit or cost center.

The following table describes the widgets available for business unit and cost center views.

Table 2. Chargeback Views (Owners and Viewers)
View Description
Spend Overview Displays the total spend cost (month to date) and the projected spend cost for the business unit or cost center.
Spend Analysis - Allocated Cost Displays a bar graph of the historical spend analysis for the allocated resources with an actual and projected spend for the last three months.
Top Spend Displays the services and accounts that are consumed the most in your business unit or cost center. You can use the drop-down list in the right corner to select Top services or Top accounts .
Allocated Cost Displays detailed information for the allocated resources (within a cost center) that includes the following.
  • Cloud type ( AWS , Azure , GCP , or Nutanix )
  • Name of the account, subscription, or cluster
  • Account, subscription, or cluster ID
  • Total cost incurred for the account, subscription, or cluster.
You can use the expand icon to view details about services within the account or subscription.
Note: If Pulse was not available for a Nutanix cluster for a given number of days in a month, the Unmetered Cluster Cost line item appears, showing the cluster cost incurred for those days. For more information, see Cost Analysis (Nutanix).

You can click View Details against each service to view the resource details.

You can also click the share and download icons in the top-right corner to share or download the detailed report for the cost center.

Budget

The Budget feature in Beam extends the budgeting capability for a business unit, cost center, or scope. A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined. You can also create custom budgets.

Beam monitors and tracks consumption continuously, and you can track any threshold breaches for the budgets that you have configured using the Budget page in the Beam console. Organization-level budgets can be tracked on a quarterly or yearly basis for the selected business unit, cost center, or custom budget.

Note:
  • For AWS, the time period type selection depends on the cost configuration.
    • If Absolute Cost type is selected, the time period type is invoice-based.
    • If Amortized Cost type is selected, the time period type is usage-based.
  • For Azure and GCP, the selection of time period type is invoice-based.

The Budget page allows you to view the budget cards based on the budgets that you have configured for your cloud accounts.

Figure. Budget View

The budget card for a business unit or cost center displays the Financial Year , Budgeted , Current Spend , Estimated , and Data Updated information. It also displays the budget spend and the actual spend in a graphical format. You can also edit the budget details or delete an existing budget using the Budget page.

You can define and track a budget based on the following resource groups.

  • Business Unit/Cost Centre based Budget – Only a Beam administrator or the owner of the business unit or cost centre can create this budget, and only for a business unit or cost centre to which they have access.
  • Custom Budget – You can use a combination of the cloud (AWS, Azure, and GCP), billing account, services, regions, and tags to define the custom resource group.
  • Scope based Budget - Only a Beam administrator or the creator of the scope can create a budget. Only one budget can be created for each scope.

You can add a threshold for your budget alerts. When the budget reaches the threshold you specified, Beam sends a notification to the email addresses you enter when creating the budget alerts.
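
A budget threshold check of this kind can be sketched as follows. This is a hypothetical illustration of the alerting arithmetic, not Beam's implementation.

```python
# Hypothetical sketch: determine which budget alert thresholds the
# current spend has reached (thresholds are percentages of the budget).
def breached_thresholds(budget, current_spend, thresholds_pct):
    """Return the threshold percentages that the current spend has reached."""
    spend_pct = current_spend / budget * 100
    return [t for t in sorted(thresholds_pct) if spend_pct >= t]

# With an $8,000 spend against a $10,000 budget, the 50% and 75%
# alerts fire, but the 90% alert does not.
print(breached_thresholds(10_000, 8_000, [50, 75, 90]))  # [50, 75]
```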

Expired Budgets

In the top-right corner, you can click the Expired button to view the expired budgets.

You can do the following in the expired budgets view.

  • Click the View Details option in any of the expired budget cards to view the details.
  • Click the Renew option to renew the latest expired budget for the current financial year.
    Note: The Renew option is available only for the latest expired budget. In case a budget for a resource group is already created for the current financial year, the renew option for the expired budget of the same resource group is not available.

Creating a Budget Goal

Beam allows you to create budgets based on the resource groups that you have defined.

About this task

A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.

To add a global Budget, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Budget .
  2. Click Create a Budget .
  3. In the Select Type page, select Custom Scopes based Budget , Business Unit/Cost Centre based Budget , or Custom Budget . Then click Next to go to the Define Resource Group page.
  4. If you select Business Unit/Cost Centre based Budget or Custom Scopes based Budget as the resource group type, do the following.
    1. Select the business units and cost centers or Scopes from the resource drop-down menu to define the scope for the budget and click Next to go to the Allocate Budget page.
      Note: You can select the business unit and cost center from the drop-down menu only if you have access to the resource group item.
    2. Go to step 8.
  5. If you select Custom Budget as the resource group type, click Define Custom Resource Groups for Budget Allocation to create a resource group.
    The resource group represents a combination of the cloud (AWS, Azure, and GCP), billing account, services, regions, and tags.
  6. In the Create Resource Group page, do the following.
    1. In the Cloud list, click to select the cloud type you want.
    2. Select the payer account (AWS), enterprise/MCA billing account (Azure), billing account (GCP) from the Parent Account drop-down menu.
      Note: You must select a cloud type and account to create a resource group.
    3. Select the Sub Accounts , Services , Regions , and Tags linked to the payer account (AWS), enterprise/MCA billing account (Azure) or Projects (GCP).
      Note: If you want to see the cost allocation tags in Beam, activate the cost allocation tags in AWS. To activate the cost allocation tags, see Activating the AWS-Generated Cost Allocation Tags.
    4. Click Add another filter to add multiple resource groups.
    5. Click Save Filter .
      You can use Edit and Remove buttons to edit and remove the resource groups that you have added.
    6. Click Next to go to the Allocate Budget page.
  7. In the Budget Name box, enter a budget name you want.
  8. In the Financial Year list, select the financial year session.
  9. In the Allocation Type area, select Automatic Allocation or Manual Allocation .
    Note: Selecting Automatic Allocation allows Beam to allocate budget based on your spend. Beam uses the last 40 days of data to project the budget for the current and next month.
  10. If you select Manual Allocation , do the following.
    1. In the Set Annual Budget field, enter the Annual Budget for the selected business center and cost center.
    2. To distribute the annual budget equally for all the quarters, click Distribute Equally .
      Alternatively, you can set the budget for each quarter manually. Similarly, you can either distribute the quarterly budget manually for each month or click Distribute Equally to distribute the quarterly budget equally for all the months. The monthly and quarterly budgets must add up to the annual budget that you have entered.
  11. Click Next .
    The Add Alerts to your Budget page appears.

    You can add percentages of the total budget value. When the budget reaches the threshold value you entered, Beam sends a notification via an email to the email addresses you enter.

    You can add alerts for the following periods.

    • Monthly Budget Alerts
    • Quarterly Budget Alerts
    • Yearly Budget Alerts
  12. Click Create against the period for which you want to create a budget alert.
    The Budget Alert page appears.
  13. In the Threshold box, enter the threshold value in percentage. Then click Save .
  14. In the Alert Notification box, enter the email addresses to which you want to send the alerts. Then click Save to create the budget.
    Note: Beam sends the budget alert notifications to the owners of the business unit or cost centers by default.
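
The Distribute Equally behavior and the sum constraint in step 10 can be sketched as follows. This is a hypothetical illustration; Beam's actual rounding rules are not documented here.

```python
# Hypothetical sketch: split an annual budget into equal parts
# (e.g. 4 quarters, or a quarter into 3 months). Any remainder from
# rounding goes to the last part so the parts always sum to the total.
def distribute_equally(total, parts=4):
    share = round(total / parts, 2)
    amounts = [share] * (parts - 1)
    amounts.append(round(total - share * (parts - 1), 2))
    return amounts

quarters = distribute_equally(10_000)        # [2500.0, 2500.0, 2500.0, 2500.0]
months = distribute_equally(quarters[0], 3)  # one quarter into 3 months
assert sum(quarters) == 10_000               # must add up to the annual budget
```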

Budget Details

You can view your budget details from the Budget page in Beam. To view budget details, click the View Details option in any of the budget cards. The Year Breakup and Cost Breakup for the budget are displayed.

Figure. Budget Details

To download or share the cost breakup report, click the download or share icon.

Editing Budget Alerts

You can edit the budget alerts for the cost centers and business units in the Budget page. In the Budget page, you can view the budget cards for your cost centers and business units.

About this task

A budget allows you to centralize the budget at the business unit, cost center, or scope levels to ensure that your consumption is within the budget that you have defined.

To edit the budget alerts for your cost centers, business units, or scopes, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Budget to open the Budget page.
  2. Click Edit against the business unit, cost center, or scope to edit the budget alerts.
    The Edit the Budget page appears.
  3. Click the Budget Alert tab.
    You can view the details of the alerts in the Categories area.
    You can view the following icons in the Action column.
    • Disable Alert or Enable Alert icon - Click to disable or enable the alert.
    • Edit Alert icon - Click to edit the alert.
    • Remove Alert icon - Click to remove the alert.
  4. Click the Edit Alert icon to edit the alert.
    The Budget Alert page appears.

    You can edit the period and threshold values.

  5. Click Save to save your changes.
  6. In the Alert Notifications box, enter the email addresses to which you want to send the alerts.
  7. Click Update to update the budget alert.

Save - Cost Saving

The following sections describe the Save page for the AWS and Azure accounts.

Cost Saving (AWS)

Save allows you to optimize your cloud usage by saving money on your total AWS bill. Beam helps you identify underutilized resources for EBS, EC2, ElastiCache, RDS, and Redshift.

Save Views

This view helps you save money on your currently incurring spend. Beam identifies unused and underutilized resources and provides specific recommendations for optimal consumption.

Note:
  • Beam provides a toggle to view spend data in your preferred currency. You can select the currency toggle at the top right to switch between target and source currency. For more information on general guidelines and considerations, see Currency Configuration Considerations .
  • You can schedule, share, or download reports under each view. For more information, see Cost Reports.
  • You can create filters at the view level, and then apply filters to display the line chart according to the selected filter options. To create a filter, you can select the required options under Filters and click Apply .
  • If you want to see the cost allocation tags in Beam, activate the cost allocation tags in AWS. You can use the cost allocation tags to track your AWS costs on a detailed level. To activate the cost allocation tags, see Activating the AWS-Generated Cost Allocation Tags.
  • Cost change reported in spend anomalies does not follow the cost configuration and is based on unblended and absolute cost.
  • All cost data shown in Save is for 30 days (720 hours); for example, the current cost and projected cost for an underutilized EC2 instance.
  • Save supports EC2 instances with shared tenancy, dedicated instances, and bare metal instances. Bare metal recommendations are cross-family and appear as part of the underutilized EC2 recommendations.
  • Save recommendations are not provided for AWS EC2 instances that have hyper-threading enabled. To receive recommendations, disable hyper-threading for your EC2 instances.
Table 1. Overview
View Description
Projected Savings Displays a line chart of the cost-efficiency in terms of cost, saving opportunities, and the resources being utilized. This view provides the following metrics.
Spend Efficiency
Your account's effective spend, that is, the monthly cost projection minus the potential savings amount.
Potential savings
The cumulative monthly savings that you incur by deleting or optimizing unused and underutilized resources.
Opportunity Count
The number of resources that you can reduce or resize for cost savings.
Money saved till date
Money that you have saved to date.
Time saved till date
Time that you have saved to date.
Optimization Opportunities Displays total savings of the unused resources and underutilized resources. This view helps you in optimizing your AWS resources. This provides the following metrics:
Unused Cloud Resources
Displays the total count of unused resources across multiple audits and the savings that accrue after you delete them. To delete an unused resource, click Eliminate .
Cloud Rightsizing
Displays the total count of underutilized resources across multiple audits and the savings that accrue after you optimize them. To optimize an underutilized resource, click Optimize .
Savings History
Displays the total time and money savings after deleting your unused resources. To view the savings history, click Savings History .
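
Reading the Projected Savings metrics above together, spend efficiency can be interpreted as the monthly cost projection minus the potential savings. The following is a hypothetical sketch of that interpretation, not Beam's documented formula.

```python
# Hypothetical sketch: effective spend after subtracting identified
# savings (an interpretation of the Spend Efficiency metric above).
def spend_efficiency(projected_monthly_cost, potential_savings):
    return projected_monthly_cost - potential_savings

print(spend_efficiency(12_000, 1_500))  # 10500
```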
Table 2. Eliminate
View Description
Eliminate Unused Resources Displays a list of the resource category and the total saving incurred in each category. You can delete unused resources to reduce your AWS expenditures. You can view the unused resources under each category by clicking the View List option. The detailed list displays all the resources along with resource name, region, and potential savings per month accrued if you eliminate the resource.
Edit Cost Policy You can configure individual audits by clicking the Edit Cost Policy option.
Eliminate If you want to delete an unused resource, select the check box in front of the resource name and click Eliminate . You can delete multiple resources at once by selecting the check box for the resources that you want to delete and click Eliminate Selected .
Suppress If you want to remove the suggestions for deleting an unused resource, select the check box in front of the resource name. Under Reason for suppress , type a reason to suppress the suggestion and click Suppress . You can remove suggestions for multiple resources at once by selecting the check box for the resources that you want to suppress and click Suppress Selected .
Table 3. Optimize
View Description
Optimize Cloud Resources Displays a list of underutilized resources and the total savings accrued after rightsizing the underutilized resources. To view a detailed list of the underutilized resources, select a resource category and click View List . The detailed list displays all the resources along with resource name, region, and potential savings per month accrued if you optimize the resource according to the suggestion.
Edit Cost Policy You can configure individual audits by clicking the Edit Cost Policy option.
How to optimize If you want to optimize a resource, select the check box in front of the resource name and click How to optimize . This displays the details for optimizing the resource.
Suppress If you want to remove the suggestions for optimizing an underutilized resource, select the check box in front of the resource name. Under Reason for suppress , type a reason to suppress the suggestion and click Suppress . You can remove suggestions for multiple resources at once by selecting the check box for the resources that you want to suppress and clicking Suppress Selected .
Optimization Chart You can expand the resource item to view the optimization chart. For more information, see Optimization Recommendation Metrics Chart (AWS).
Table 4. Suppressed
View Description
Suppress Displays a list of resources that you have suppressed. You can restore these resources if required. To restore, click Restore next to the resource name that you want to restore.

You can view potential savings on the suppressed resources. You can also expand the suppressed items to view the resource information.

Table 5. History
View Description
History Displays the cost and time saved after deleting your unused AWS resources. You can also view the user who carried out the one-click fix. You can filter the list based on regions and audits. History also displays the potential savings on your unused resources that you have deleted. You can also expand the History items to view the resource information.

Configuring Memory Metrics for EC2 Instances

This section describes the steps to configure memory metrics for the underutilized EC2 instances in Beam. Memory metrics data allows Beam to provide better recommendations for the underutilized EC2 instances.

Before you begin

Ensure that you configure Amazon CloudWatch Agent to collect memory metrics from the operating systems for your EC2 instances. For more information, see Collecting Metrics and Logs from Amazon EC2 Instances and On-Premises Servers with the CloudWatch Agent in the AWS Documentation . When performing the following task in Beam, you need to provide the key names that you have configured for the memory metrics in Amazon CloudWatch Agent. These keys enable Beam to fetch your CloudWatch data and provide better recommendations.
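
As a hedged example, a minimal CloudWatch agent configuration that collects memory metrics under the default CWAgent namespace might look like the following; the exact measurement names you configure here are the key names you later enter in Beam, so verify them against the AWS documentation referenced above.

```json
{
  "metrics": {
    "namespace": "CWAgent",
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent", "mem_used", "mem_available"],
        "metrics_collection_interval": 60
      }
    }
  }
}
```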

About this task

To configure memory metrics for the EC2 instances, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Save > Optimize .
    You can view the Optimize Cloud Resources list.
  2. Click Underutilized EC2 to view the list of underutilized EC2 instances recommended based on CPU, Network I/O, and so on.
  3. Click Configure Memory Metrics in the top-right corner.
    The Configure EC2 Memory Metrics window appears.
  4. Enter the key names for the memory metrics.
    The available metrics are memory used, memory available, and memory utilization.
    Note:
    • Ensure that the key name for each metric is the same for all the underutilized EC2 instances. For example, suppose there are five EC2 instances in the list and you enter a key name for the memory used metric. If that key name is configured on only the first three instances, Beam pulls memory metrics from CloudWatch only for those three instances.
    • Ensure that the value of the namespace field in the metrics section is CWAgent (default value). For more information, see Manually Create or Edit the CloudWatch Agent Configuration File in the AWS Documentation .
  5. Click Submit to complete.
    You can enter the memory usage (in percentage) in the Underutilized EC2 field of your cost policy. For more information, see Configuring Cost Policy.
    Note: Ensure that you configure the memory metrics for each linked account.
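The key names you enter in Beam must match the measurement names configured in the CloudWatch agent's metrics section. As an illustrative sketch only (assuming the agent's default CWAgent namespace and its standard mem measurements; your actual key names may differ), the relevant fragment of the agent configuration could look like this, built here as a Python dict:

```python
import json

# Illustrative fragment of a CloudWatch agent configuration (assumption:
# your agent may collect different measurements; Beam needs whichever key
# names you actually configured).
metrics_section = {
    "metrics": {
        "namespace": "CWAgent",  # Beam expects the default namespace
        "metrics_collected": {
            "mem": {
                # These measurement names are what you would enter in the
                # Configure EC2 Memory Metrics window in Beam.
                "measurement": [
                    "mem_used",
                    "mem_available",
                    "mem_used_percent",
                ],
            },
        },
    },
}

print(json.dumps(metrics_section, indent=2))
```

The same measurement names must then be entered, verbatim, as the key names in the Configure EC2 Memory Metrics window.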

Optimization Recommendation Metrics Chart (AWS)

You can expand the resource item to view the optimization chart. Beam provides utilization for the metrics that it considers for the recommendation.

You can view the optimization chart based on the following parameters.
  • Number of days you configure in the Cost Policy.
  • Metrics and the corresponding aggregations (minimum, maximum, or average), which Beam uses for the recommendation.

This option is available for the Underutilized EC2 audit.

Figure. Optimization Recommendation Metrics Chart Click to enlarge display of optimization chart

Cost Saving (Azure)

Beam allows you to save on your currently incurring Azure spends based on the deep visibility it has into your Azure resources and subscriptions. Beam helps you identify underutilized resources for Virtual Machines, Redis Cache, SQL Data Warehouse, Azure SQL Databases, PostgreSQL Server, Managed Disks, and MySQL Server.

Beam proactively identifies idle and underutilized resources, and delivers specific recommendations to right‑size infrastructure services and ensure optimal consumption.

In addition to cost monitoring, Beam allows the following actionable insights with the Save option:

  • Implement effective cost efficiency of allocated resources.
  • Identify unused and underutilized resources using the default and custom cost policy.
  • One-click fix to eliminate unused resources.
Note:
  • Beam provides a toggle to view spend data in your preferred currency. You can select the currency toggle at the top right to switch between target and source currency. For information on general guidelines and considerations, see Currency Configuration Considerations .
  • All the cost data shown in Save are for 30 days (720 hours). For example, current cost and projected cost for an underutilized virtual machine.
  • In the case of Windows virtual machines, the OS license cost is included in all the cost data (for example, current or projected cost) shown in Save . You can identify a Windows VM by viewing the Current VM Meter Name and Recommended VM Meter Name metadata in the Optimize tab. The meter name contains the Windows keyword. For example, Virtual Machines BS Series Windows - B4ms - EU West .
    Figure. Save - Viewing Meter Name Click to enlarge []
Table 1. Current Spend
View Description
Overview The Overview option provides actionable insights with a graphical representation of the cost-saving opportunities determined by looking into all the running resources like compute, storage, and services.
Eliminate

The Eliminate option provides the list of unused resources that you can delete to reduce your Azure expenditure. Click any resource category, or select View List , to see the detailed list of unused resources within that category.

The detailed list provides the resource name, region, and potential savings per month accrued if the resource is eliminated. You can also drill down further to see the details of the resource.

Click Eliminate to delete unused resources from the list of recommendations. You can view the list of deleted resources in the History tab.
Note:
  • The Eliminate one-click fix is available for the following resource types.
    • Unused Disk
    • Unused Public IP
    • Old Snapshots
    • Unused Redis Cache
    • Unused Application Gateways
    • Unused Standard Load Balancers
    • Old VM Images
    • Paused SQL Data Warehouse
    • Unused SQL Data Warehouse
    • Unused PostgreSQL Servers
    • Unused Azure SQL Databases
    • Unused MySQL Servers
  • To execute eliminate actions on the resources associated with a subscription, you must have Write permissions for the Azure subscription.

Click Suppress to suppress the particular resource from being suggested in the Eliminate analytics.

Click Edit Cost Policy to configure individual audits.

Optimize

The under-utilization of the resources is determined by analyzing the metrics of the resources and comparing them against a standard set of criteria for different Azure resources and services.

The Optimize option shows a list of all the underutilized resources you can optimize for saving cost. It also shows the total savings accrued after rightsizing such resources.

Click any resource category, or select View List , to see the detailed list of unused resources within that category. The detailed list provides the resource name, region, and potential savings per month accrued if the resource is optimized per the suggestion. You can also drill down further to see the details of the resource or click Suppress to suppress the particular resource from being suggested in the optimize list.

Click Edit Cost Policy to configure individual audits.

You can expand the resource item to view the optimization chart. For more information, see Optimization Recommendation Metrics Chart (Azure).

Suppressed

The Suppressed option shows the list of resources that are suppressed from eliminate and optimize analytics. While suppressing a resource, you are prompted to add a reason for suppressing the particular resource.

To restore a particular resource from the suppressed list of resources, click Restore .

You can view potential savings on the suppressed resources. You can also expand the suppressed items to view the resource information.

History Displays the cost and time saved after deleting your unused Azure resources. You can also view the user who carried out the one-click fix. You can filter the list based on regions and audits. History also displays the potential savings on your unused resources that you have deleted. You can also expand the History items to view the resource information.

Enabling Write Permissions for Azure Subscription

About this task

To enable write permissions for a subscription, do the following.

Procedure

  1. In the Beam console, click the Collapse menu in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
  2. Click the edit icon against the tenant for which you want to enable the write permissions on your subscription.
  3. Click to enable the check box in the Write column.
  4. Click Download Script to download the PowerShell script.
  5. Click How do I assign role? and follow the on-screen instructions to run the script in the Azure PowerShell .
    Alternatively, see Assigning Role (PowerShell). Ensure that you observe the guidelines related to the PowerShell script created by Beam. For more information, see PowerShell Script Guidelines.
  6. Click to enable the I confirm I have executed the above powershell script check box and click Done .

Optimization Recommendation Metrics Chart (Azure)

You can expand the resource item to view the optimization chart. Beam provides utilization for the metrics that it considers for the recommendation.

You can view the optimization chart based on the following parameters.
  • Number of days you configure in the Cost Policy.
  • Metrics and the corresponding aggregations (minimum, maximum, or average) which Beam uses for the recommendation.
This option is available for the following audits.
  • Underutilized Managed Disks
  • Underutilized PostgreSQL Servers
  • Underutilized Azure SQL Databases
  • Underutilized Virtual Machine
  • Underutilized MySQL Servers
Figure. Optimization Recommendation Metrics Chart Click to enlarge display of optimization chart

Cost Saving (Multicloud)

The Save page allows you to optimize your cloud usage by saving money on your total AWS and Azure bills within your business unit, cost center, or scope. In the financial and scopes views, you can only view the resources that are unused or underutilized since scopes have read-only access. You need account-level access to perform eliminate or suppress actions.

This section provides details for the following views - All Clouds , Financial , and Scopes . Click View in the top-right corner to open the View Selector pop-up and select different views.

Save Views

These views help you save money on your currently incurring spends. Beam identifies unused and underutilized resources and provides specific recommendations for optimal consumption.

Note:
  • You can share or download reports under each view.
  • You can create filters at the view level, and then apply filters to display the line chart according to the selected filter options. To create a filter, you can select the required options under Filters and click Apply .
  • Currency toggle is not available for multicloud views. You can view the spend data in target currency . You can select the target currency in the currency configuration page. For information on general guidelines and considerations, see Currency Configuration Considerations .
Table 1. Save views
View Description
Overview The Overview option provides actionable insights with a graphical representation of the cost-saving opportunities determined by looking into all the running resources like compute, storage, and services.
Eliminate

You can view the list of unused resources along with the potential savings by eliminating those resources. The Eliminate option is not available for business units, cost centers, and scopes.

Optimize

The under-utilization of the resources is determined by analyzing the metrics of the resources and comparing them against a standard set of criteria for different cloud resources and services.

The Optimize view shows a list of all the underutilized resources you can optimize for saving cost. It also shows the total savings accrued after rightsizing such resources.

The Optimize option is not available for business units, cost centers, and scopes.

History Displays the cost and time saved after deleting your unused resources. You can also view the user who carried out the one-click fix. You can filter the list based on accounts, regions, and audits. History also displays the potential savings on your unused resources that you have deleted. You can also expand the history items to view the resource information.

Configuring Cost Policy

An audit is a set of rules to filter resources based on the usage parameters used to determine if a specific resource is unused or underutilized. The Beam application allows you to define various parameters of an audit. A collection of these audits forms a cost policy.

About this task

The application allows you to define a specific policy for each cloud account. By default, the application provides you with a system policy for each cloud. You can clone the system policy to create custom policies. You can create multiple custom policies with different configurations and assign them to different cloud accounts.

Note: For existing users, the current cost policy is considered a custom policy and is available for the users to modify according to their requirements.
To access the Cost Policy page, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Cost Policy .
Note: Only an administrator can configure cost policies.

In the top-right corner of the page, there are two types of views available for you to select.

  • Policy View - Lists all the policies along with other details, for example, accounts assigned, cloud type, count of accounts assigned, and so on. To create a custom policy, clone the system policy or an existing custom policy, edit the available fields according to your requirement, and assign single or multiple accounts to that policy. In this view, you can view, clone, edit, and delete a policy.
  • Accounts View - Lists all the accounts along with other details, for example, the attached policy name, cloud type, and so on. In this view, you can click the Reassign button to assign a cost policy to a particular account. You can also select multiple accounts and click the Bulk Reassign button to assign a cost policy to those selected specific cloud accounts.

Configuring a custom cost policy includes the following steps.

  • Step 1 - In the Policy View , create a custom policy by cloning an existing policy (system or custom) and editing the available fields according to your requirements. Click the Clone button on the right side to create a policy. The policy you created appears in the list.
  • Step 2 - In the Accounts View , click the Reassign button against an account to assign the custom policy you just created to that account. You can also select multiple accounts and click the Bulk Reassign button to assign a cost policy to those selected specific cloud accounts.

To configure a cost policy, do the following.

Procedure

  1. Select Beam from the application selection menu in main menu bar.
  2. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Cost Policy .
    The Cost Policy page appears.
  3. To create a policy, do the following.
    1. In the top-right corner, click Policy View .
      The list of the existing policies appears.
    2. Click Clone against the policy that you want to clone and use to create a custom policy.
      The Clone Cost Policy page appears.
    3. Enter a name and description for your custom policy and edit the available fields according to your requirements.
      Note: The description must contain at least one alphanumeric character. Optionally, you can add spaces and hyphens.
    4. Click Clone Policy to create the policy.
      The policy appears in the list. Now, you can assign the cost policy you created to a single or multiple accounts.
  4. To assign the cost policy to accounts, do the following.
    1. Click Accounts View .
    2. Select the account to which you want to assign the cost policy and click the Reassign button on the right side.
      The Assign To Different Policy page appears.
    3. In the Select new policy list, select the policy you want to assign to the selected account. Then click Apply to complete assigning the policy to the selected account.
      You can also select multiple accounts and click the Bulk Reassign button in the top-right corner to perform a bulk assignment.

What to do next

You can designate a custom cost policy or system policy as a default policy. When you on-board an account in Beam, the policy you selected as a default policy gets assigned to that account. In the Policy View , click the More list on the right side against a policy and click Make Default . The policy gets set as a default policy.

Also, you can modify a policy. In the More list, click Edit to open the Edit Cost Policy page. Edit the fields you want and click Update Policy to save your changes.

Purchase - Recommendations

The following sections describe the Purchase page for the AWS, Azure, and Nutanix accounts.

Note: View Selector in the Purchase page only shows AWS, Azure, and Nutanix.
The purchase (reserved instances) recommendations are visible only at a cloud level.
Note: The following are the reasons:
  • The resources in different scopes can overlap, which leads to incorrect recommendations.
  • The scope of a reservation is limited to a location or an account. That means a reservation can be applied within the account or location where it was purchased. This would lead to incorrect reporting.
  • Change recommendations cannot be done at a scope level since the resources can overlap, and reservation is limited to a location or an account.
  • Beam uses the source currency provided in the billing data for assessing reserved instances and displays the data in the source currency. Beam does not use the target currency provided in the currency configuration page for assessing reserved instances.

Purchase - Reserved Instances (AWS)

A Reserved Instance (RI) allows you to reserve resources for a specific availability zone within a region.

Note: You can create filters at the view level, and then apply filters to display the line chart according to the selected filter options. To create a filter, you can select the required options under Filters and click Apply .

Overview Tab

The following table describes the information you can view in the Overview tab.

Table 1. Overview
View Description
RI Coverage Displays a line chart of the RI coverage for the selected period by comparing the average RI coverage with the current RI coverage.
RI Status Displays the used, unused, and underutilized reservations status.
RI Calculator Displays the potential savings, on demand cost, total RI cost, and percentage saving. To view more details, click View RI Recommendations .
Top Beam Recommendations Displays the top three cost saving recommendations. To view all the recommendations, click View All Recommendations .

Coverage Tab

The following table describes the information you can view in the Coverage tab.

Table 2. Coverage
View Description
RI Coverage Displays a line chart of the RI usage for the time range selected and provides a percentage of average RI coverage and current RI coverage over the selected time range. The RI Coverage report helps you to discover how much of your overall instance usage is covered by RIs to help you make informed decisions about when to purchase or modify a RI to ensure maximum coverage.

Portfolio Tab

The following table describes the information you can view in the Portfolio tab.

Table 3. Portfolio
View Description
Portfolio Displays detailed information about the EC2 reserved instances in your AWS account. In the Utilization column, you can view if the reserved instance is used, unused, or underutilized.

Change Tab

Beam provides three RI change recommendations for unused RIs (convertible type) along with the steps to exchange RI.

The RI Change recommendations depend on the following.
  • Upfront cost already paid for the existing RI
  • New upfront cost to be paid for the changed RI
  • Remaining term of the existing RI
Beam identifies the usage of your existing RIs (convertible type) and provides a recommendation to change them to a new type along with potential savings incurred during the change process.

Example

Consider that you purchased a convertible RI and paid $800 upfront for one year. However, after 3 months, the RI is no longer actively used. Beam identifies the potential to change this unused RI to a different RI type that you can use continuously.

Amount you already consumed: (3 months / 12 months) * $800 = $200

Amount of consumption left: ($800 - $200) = $600

The new RI change recommendation identified by Beam requires you to pay $700 upfront for a further one-year term. The remaining credit of $600 applies towards the new RI recommendation, so you pay only another $100. The new RI is available for use for 12 months from the date of the RI change and aligns better with your ongoing consumption of instances. Thus, the RI change recommendations dynamically provided by Beam help you realize significant savings in the long term based on your real consumption.

Savings Calculation
Upfront cost paid: $800
Additional cost paid according to Beam recommendation: $100
Total cost paid for RIs: $900
RI value used for the first three months: $200
RI value available for the next twelve months: $700
Savings realized by using Beam’s recommendations: ($800+$700) - ($900) = $600 over a 15-month period
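The savings calculation above can be reproduced with a few lines of arithmetic. This is a sketch of the documented example only, not Beam's internal pricing logic:

```python
# Reproducing the example: an $800, 12-month convertible RI used for only
# 3 months, then exchanged for a new RI with a $700 upfront cost.
original_upfront = 800.0
months_used = 3
term_months = 12
new_upfront = 700.0

value_consumed = (months_used / term_months) * original_upfront   # $200
remaining_credit = original_upfront - value_consumed              # $600
true_up_cost = new_upfront - remaining_credit                     # $100
total_paid = original_upfront + true_up_cost                      # $900
savings = (original_upfront + new_upfront) - total_paid           # $600

print(f"true-up cost: ${true_up_cost:.0f}, savings: ${savings:.0f}")
```

The true-up cost of $100 matches the documented difference between the new RI's value and the remaining value of the existing RI.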
The three RI change recommendation types are as follows.
  • Best-Fit - This option identifies the maximum savings per dollar of the amount you spend further (true-up cost).
  • Min True-Up Cost - This option minimizes your true-up costs while still identifying some reasonable savings.
  • Max Savings - This option identifies the maximum savings regardless of the amount of upfront true-up spend.
Figure. Change Tab - RI Exchange Recommendations Click to enlarge Exchange Options

Beam also displays the potential savings per year for each change recommendation type. If a Reserved Instance is not in use for more than seven days, Beam considers the RI as unused and provides change recommendations (wherever applicable) for the RI.

To view the change recommendations, go to the Change tab.

You can view the list of RI change recommendations. You can expand each row to view further details. You can view the three recommendation types - Best-Fit , Min True-Up Cost , and Max Savings .

You can click View Details for each recommendation type to view the following details.
  • Your existing reservation details
  • Target reservation details
  • Savings calculation
  • Steps to change
Note: True-Up cost is the difference between the value of the new RI configuration you are buying and your existing RI value. The new RI configuration must be of equal or greater value than the remaining value of your existing RIs.

The following table describes the information you can view in the Change tab.

Table 4. Change
View Description
Change Displays a list of RI change recommendations. You can view the account and the reserved instance ID within that account, which Beam recommends for exchange.

You can expand each line item to view the recommendation types ( Min True-up Cost , Max Savings , and Best-Fit ).

View Details You can click View Details for each recommendation type to view detailed information (contract details, savings, steps to exchange the RI, and so on).
Existing Contract Displays detailed information about the existing contract that includes the type of RI, family, location, remaining time, remaining upfront value, and so on.
Target Contract Displays detailed information about the target contract that includes the recommended RI type, family, quantity, true-up cost, and so on.
Savings Displays the savings detail. Beam calculates the savings based on the difference between the on-demand cost and the RI cost of the recommended RI type.
Steps to Exchange RI Displays the steps to perform the RI change.
Utilization Chart Displays the utilization of reserved versus on demand hourly coverage for the selected time range.
Breakeven Analysis Displays the spend comparison of the selected RI payment option versus on-demand considering complete RI utilization.
Matching Instance List Displays the resources that are running or ran in the time range selected. You can use the View Options area in the bottom-right corner to filter the matching instance according to the accounts and tags.

Buy Tab

The following table describes the information you can view in the Buy tab.

Table 5. Buy
View Description
RI Purchase Recommendation Provides a list of instances that you can purchase and displays on-demand cost, reservation cost, and savings for those instances. You can view further details about instances by clicking the arrow next to the instance name.
Utilization Chart Displays the utilization of reserved versus on demand hourly coverage for the selected time range.
Breakeven Analysis Displays the spending comparison of the selected RI payment option versus on-demand, considering the RI is completely utilized. It also shows the exact date when the on-demand cost overtakes the RI cost.
Matching Instance List Displays the resources that are running or ran in the time range selected. You can use the View Options area in the bottom-right corner to filter the matching instance according to the accounts and tags.

You can use the Filters section at the right corner to select the recommendation parameters you want. Click Apply after you select the recommendation parameters you want.

The following are the available recommendation parameters.
  • Term : You can purchase reserved instances for a one-year or three-year term.
  • Payment Type : There are three payment options available - No Upfront , Partial Upfront , and All Upfront . For more information, see Reserved Instance Payment Options section on the AWS Documentation website .
  • Offering Type : AWS offers two reserved instance offering classes - Standard Reserved Instance and Convertible Reserved Instance. For more information, see Types of Reserved Instances (Offering Classes) section on the AWS Documentation website.
  • Lookback Period : Beam provides reserved instances recommendations based on the past few days of data. The available options for lookback period are 7 days , 14 days , and 30 days .
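The breakeven analysis described above compares cumulative on-demand spend against the RI spend and reports the date when on-demand cost overtakes RI cost. A minimal sketch of that comparison, using assumed illustrative rates (not Beam's actual pricing model):

```python
import math

def breakeven_hour(on_demand_rate, ri_upfront, ri_hourly=0.0):
    """First hour at which cumulative on-demand spend exceeds RI spend.

    Returns None if the RI's effective hourly cost is not lower than
    on-demand (the RI never pays off). All rates here are illustrative
    inputs, not values taken from Beam.
    """
    if on_demand_rate <= ri_hourly:
        return None
    return math.ceil(ri_upfront / (on_demand_rate - ri_hourly))

# Assumed example: $0.10/hr on demand vs. a $500 all-upfront one-year RI.
hour = breakeven_hour(0.10, 500.0)
print(hour, "hours, about day", math.ceil(hour / 24))
```

With these assumed rates the RI breaks even after 5000 hours, i.e. roughly seven months into the one-year term; a shorter breakeven means a safer purchase.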

Purchase - Reserved Instances (Azure)

Azure reservations provide a billing discount on reserving resources or capacity for either one-year or three-year periods.

Beam provides single-pane-of-glass management to help you realize maximum savings on your reserved VM instances (RIs) on Azure. Beam reduces the complexity involved in planning, budgeting, and managing RIs by providing insights on overall RI usage statistics, recommended RI purchases, your existing RI portfolio, and RI coverage.

You can view the Azure RI option in Beam at the department, account, or subscription levels.

Table 1. Overview tab
Task Description
RI Coverage The RI Coverage widget displays a graph of RI usage. The RI usage graph provides a percentage of your infrastructure covered under reservation benefits. This widget also displays average RI coverage and current RI coverage over the selected time range (default 7 days). You can click View Coverage to be redirected to the Coverage tab where you can view the RI coverage in detail by using the Filters .
RI Status The RI Status widget displays the status of used, unused, and underutilized reservations as percentages and reservation units. The units of reservations shown here are the sum of the smallest VM size in the same VM series.
RI Calculator The RI Calculator is an interactive widget for pricing comparison of on-demand and reserved instances in Azure. The widget provides the potential savings, on-demand cost, Total RI cost, and percentage savings based on the period (1 year or 3 years) and scope (single or shared) of reservation parameter that you select.
RI Top Recommendations You can click View RI Recommendations to be redirected to the RI Recommendations tab to see detailed RI purchase recommendations.

Coverage tab

The RI Coverage tab displays a graph of existing RI usage. In the RI tab, you can view the percentage of your infrastructure covered under reservation benefits.

RI Coverage displays the average RI coverage and current RI coverage over the selected time range. The RI Coverage report allows you to discover how much of your overall instance usage is covered by RIs to help you make informed decisions about when to purchase or modify a RI to ensure maximum coverage.

Also, you can use Filters to sort the coverage report for specific reservation parameters.

Portfolio Tab

The Portfolio tab displays the detailed information about the VM reserved instances in your account. In the Status column, you can view if the reserved instance is used, unused, or underutilized.

Buy tab

Beam provides Reserved Instance (RI) purchase recommendations for your virtual machine instances running on Azure. The recommendations are based on your past usage and indicate potential opportunities for savings as compared to on-demand usage.

To refine the available recommendations, you can adjust the Recommendation Parameters by using the View Options .

The following are the available recommendation parameters.
  • Term : You can purchase Azure reserved instances for a one-year or three-year term.
  • Lookback Period : Beam provides reserved instances recommendations based on the past few days of data. The available options for lookback period are 7 Days , 14 Days , 30 Days , and 60 Days .
  • Scope : Azure offers two types of reservation scope - Single and Shared . If you select Shared scope when purchasing reserved instances, reservation applies to all the subscriptions in your Azure billing accounts. If you select Single subscription scope when purchasing reserved instances, reservation applies only to your subscription. For more information, see What are Azure Reservations? section on the Azure documentation website .

Also, use Filters to sort the recommendations based on Sizes and Regions .

You can click View Details to view utilization chart, breakeven analysis, cost comparison, matching instance list, and applicable filters.

Table 2. Buy tab
Task Description
RI Purchase Recommendation Provides a list of instances that you can purchase and displays on-demand cost, reservation cost, and savings for those instances. You can view further details about instances by clicking the arrow next to the instance name.
Utilization Chart Shows the utilization of on-demand versus reserved hours for a chosen time period.
Breakeven Analysis Displays the spending comparison of the selected RI payment option versus on-demand, considering RI is completely utilized. It also shows the exact date when the on-demand cost overtakes the RI cost.
Cost Comparison Shows the spending comparison of all upfront RI versus on-demand considering complete RI utilization.
Matching Instance List Shows the list of VM instances to which the selected RI may apply.

Playbooks

Playbooks let you automate actions on your public cloud environment for better cloud management. They help you improve your operational efficiency by reducing manual intervention. You can create a playbook to schedule actions on your cloud resources and automate regular workflows. For example, you can shut down your test environments during non-business hours.

Creating a playbook consists of the following three steps:
  1. Define a trigger: The schedule on which the playbook runs. You can select a daily or scheduled trigger, or a CRON expression.
  2. Define the resource group: The resource group represents a combination of the cloud (AWS or Azure), account, regions, service, and tags. Select the resources on which the playbook will get executed.
  3. Define the Action:
    • Select system or custom action templates.
    • Select the notification action (email).
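Conceptually, the three steps above combine into a single playbook definition. The following is a hypothetical sketch only; the field names are illustrative and do not represent Beam's API:

```python
# Hypothetical playbook definition; all field names are illustrative.
playbook = {
    "name": "stop-test-vms-after-hours",
    # Step 1 - trigger: a 5-field CRON expression, 8:00 PM Mon-Fri.
    "trigger": {
        "type": "cron",
        "expression": "0 20 * * 1-5",
    },
    # Step 2 - resource group: cloud, account, regions, service, and tags.
    "resource_group": {
        "cloud": "AWS",
        "account": "dev-account",
        "regions": ["us-east-1"],
        "service": "EC2",
        "tags": {"Environment": "test"},
    },
    # Step 3 - actions: an action template plus an email notification.
    "actions": [
        {"type": "template", "name": "stop-ec2-instances"},
        {"type": "notify", "channel": "email"},
    ],
}

print(playbook["name"])
```

This mirrors the example of shutting down test environments during non-business hours: the trigger fires each weekday evening, the resource group narrows the scope to tagged test instances, and the action template performs the shutdown.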

Refer to the following diagram to get an understanding of the Playbooks workflow.

Figure. Playbooks Workflow Click to enlarge []

You must perform a few configuration tasks before creating a playbook. The following sections describe each of these configurations in more detail.

Enable Playbooks

To execute a playbook, Beam needs permission to create and invoke certain functions in your public cloud environment.

The functions and required permissions are:
  • AWS Lambda: Permission to create and invoke a Lambda function.
  • Azure Function: Permission to deploy and invoke functions in an Azure Function Application.
You need to perform separate onboarding steps (AWS and Azure) for enabling accounts for Playbooks. For more information, see Playbooks Enablement.

Permissions Required for Playbook Action Execution

Beam does not require write permissions on your cloud resources. You can add the action logic in the action template and it gets executed in your configured runtime environment. For example, you can add logic to start or stop an AWS EC2 instance in the action template. The specified function is invoked in the AWS Lambda environment according to the schedule (time-based trigger) you define when creating a playbook in the Beam console.

The function logic is defined in the system action templates provided by Beam. You can also create custom templates according to your requirements. The action template corresponds to the script that would take action when deployed in a runtime environment, that is, AWS Lambda or Azure Function. For more information, see Action Template.

Define the Required Permissions for Action

To successfully execute the Playbooks on the defined resource group, you must provision permissions for write operations in the respective cloud environment that corresponds to the action logic defined in the action template.
  • For AWS Playbooks, you must provide the ARN with the necessary permissions while creating a playbook. For more information, see Creating an AWS IAM Role .
    Figure. AWS Playbooks - Permissions Click to enlarge []
  • For Azure Playbooks, you must create an Active Directory application and provide the corresponding Tenant ID, Client ID, and secret key while creating a Playbook. For more information, see Creating an Active Directory App for Playbooks .
    Figure. Azure Playbooks - Permissions Click to enlarge []

Playbooks Enablement

Beam needs certain permissions on the AWS and Azure environments to execute Playbooks. You need to perform separate onboarding steps (AWS and Azure) for enabling accounts for Playbooks.

AWS Enablement for Playbooks

Beam needs permission to create and invoke the AWS Lambda function in your AWS environment to execute a playbook.

The required code is written in the Action Template. You have to select an action template when creating a playbook. Beam provides the code for system templates. Provide the code if you are using a custom template. For more information, see Action Template.

Beam requires the following set of permissions to create and invoke the Lambda function:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:CreateFunction",
                "lambda:TagResource",
                "lambda:InvokeFunction",
                "lambda:GetFunction",
                "lambda:ListAliases",
                "lambda:UpdateFunctionConfiguration",
                "lambda:GetFunctionConfiguration",
                "lambda:UntagResource",
                "lambda:UpdateAlias",
                "lambda:UpdateFunctionCode",
                "lambda:GetFunctionConcurrency",
                "lambda:ListTags",
                "lambda:DeleteAlias",
                "lambda:DeleteFunction",
                "lambda:GetAlias",
                "lambda:CreateAlias"
            ],
            "Resource": "arn:aws:lambda:*:*:function:*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": ["lambda:ListFunctions","iam:PassRole"],
            "Resource": "*"
        }
    ]
}

To provide these sets of permissions to Beam, you must execute the CloudFormation Template (CFT) when onboarding an AWS account. Also, ensure that you select the write access permissions when performing the account onboarding workflow.

There are two scenarios:
  • If you are onboarding a new account, executing the CFT is a part of the onboarding workflow, so Beam gets the required permissions automatically.
  • For existing AWS accounts in Beam, you must execute the CFT for the AWS accounts in which you want to use Playbooks.
For more information, see Adding AWS Account.

Azure Enablement for Playbooks

In the case of Azure, you must create an Azure Function App in the Azure portal and configure the same application in Beam for the onboarded accounts.

Note: Currently, Playbooks are enabled only for subscriptions from an Azure enterprise account.
Azure Functions are the individual functions created in an Azure Function App. You must have an Azure Function App to host the execution of your functions. Playbooks need access to the Azure Function app to create Azure functions.
Note: Azure Function App is chargeable on the Azure account of the user.
The following steps have to be performed:
  1. Create an Azure Function application in the Azure portal.
  2. Fetch the information that has to be added when configuring the Azure Function app in Beam. The required pieces of information are Function App Name , Deployment User , Deployment Password , and Function App Master Key .
  3. Configure the Azure Function application in Beam.
Creating the Azure Function App (Azure Portal)

Playbooks need access to the Azure Function app to create Azure functions. Refer to this section to create the Azure Function App in the Azure Portal.

Before you begin

Before you start creating an Azure Function App, review the following notes:
  • Beam Administrator access to the Azure account is required to create an Azure Function App.
  • Suppose you create a function app in Azure account 1 and, in the Beam console, register the function app under Azure account 1. You can also register the same function app for other Azure accounts onboarded in Beam.
  • Ensure that the function app that you are creating is utilized only for Beam. Do not add custom functions that you will be using for other purposes or applications because Beam may override your custom functions.

About this task

To create an Azure Function App in the Azure portal, do the following:

Procedure

  1. Log in to the Azure Portal with your Azure account.
  2. In the Home page, in the top-right corner, click the hamburger menu. Then, click Create a resource .
  3. In the New page, select Compute > Function App .
    The Create Function App page appears.
  4. In the Basics page, enter the following information:
    Option Description
    Subscription Select the subscription under which you want to create the function app.
    Resource Group Select the resource group in which you want to create the function app.
    Function App name Enter a name that identifies your new function app. Valid characters are a-z (case insensitive), - and 0-9.
    Note: The function app name must be globally unique.
    Runtime Stack Select Python .
    Version Select python version 3.8.

    Click Next: Hosting to go to the Hosting page.

  5. In the Hosting page, enter the following information:
    Option Description
    Storage account Create a storage account to be used by your function app.

    Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. You can also use an existing account, which must meet the storage account requirements. For more information, see Azure Functions Documentation .

    Operating system An operating system is pre-selected for you based on your runtime stack selection. Linux is selected for the Python runtime stack. You must not change the default selection, that is, Linux.
    Plan Type Select Premium or App service plan .

    Beam does not support a function app with the Consumption (Serverless) plan type. This plan type does not support the deployment service that Beam uses to deploy functions.

    Click Next: Monitoring to go to the Monitoring page.

  6. In the Monitoring page, in the Enable Application Insights area, click No .
    Currently, Beam does not support Application Insights because it would incur additional costs to the user.

    Then, click Next : Tags to go to the Tags page.

  7. In the Tags page, enter a name and value pair for the function app.

    Then, click Next : Review + create .

  8. Review the app configuration selections. Then, click Create to provision and deploy the function app.
    Now, you must perform a few additional configurations in the Azure Function App.
  9. Go to your function app home page and do the following:
    1. Under Settings , click Configuration .
    2. In the Application Settings area, click New application setting .
      The Add/Edit application setting page appears.
    3. Enter the following name and value pair one by one:
      • Name as SCM_DO_BUILD_DURING_DEPLOYMENT , value as true
      • Name as ENABLE_ORYX_BUILD , value as true

      These entries are needed to perform a remote build when deploying functions in the Function App.

    4. Select the Deployment slot setting checkbox.
    5. Click OK to add the name and value pair.
    Figure. Azure Function App - Additional Configurations Click to enlarge []
  10. In the top of the function app page, click Save to complete.
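The naming rules stated in the Basics page (function app names: letters, hyphens, and digits; storage account names: 3 to 24 lowercase letters and numbers) can be checked locally before you submit the form. The following is a minimal sketch; the helper names are illustrative and not part of any Azure SDK, and global uniqueness of the function app name can only be verified by Azure itself:

```python
import re

# Illustrative helpers (not part of any Azure SDK) that check the
# naming rules stated above before you submit the Create Function App
# form. Global uniqueness still has to be verified by Azure.

def is_valid_function_app_name(name: str) -> bool:
    # Valid characters: a-z (case insensitive), hyphen, and 0-9.
    return bool(re.fullmatch(r"[A-Za-z0-9-]+", name))

def is_valid_storage_account_name(name: str) -> bool:
    # 3-24 characters, lowercase letters and numbers only.
    return bool(re.fullmatch(r"[a-z0-9]{3,24}", name))
```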
Configuring Azure Function App in Beam

Refer to this section to configure the Azure Function App for an Azure account in the Beam console.

About this task

To configure the Azure Function App for an Azure account, do the following:

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Azure Accounts .
  2. Click Manage against the Azure account in which you want to configure the function app.
  3. Click the Function App tab. Then, click Add Function App .
    Figure. Azure Function App - Configure Click to enlarge []

    The Create Azure Function App window appears.

    Figure. Azure Function App - Create Click to enlarge []
  4. Enter the function app name, deployment user and password, and master key in the respective fields. For more information about how to get these values, see Fetching Azure information .
  5. Click Save to complete.
Fetching Azure Information

When configuring the Azure Function app in Beam, you need to enter the function app name, deployment user, deployment password, and function app master key. This section describes the steps to fetch this information from the Azure portal.

About this task

Refer to the following points to know how to fetch the information required to configure the Azure Function App in Beam.
  • Function App Name : Name of the function app that you created in the Azure portal.
  • Deployment User and Deployment Password are the credentials using which you will be deploying functions in the function app. Navigate to the function app home page. Then, click Get publish profile to download the XML file.
    Figure. Azure Portal - Download Function App Profile Click to enlarge []

    The XML file contains the deployment user and deployment password.

    Figure. Azure Portal - Deployment User and Password Click to enlarge []
  • Function App Master Key is required to invoke the functions within the function app. The master key is pre-generated in the function app and has authority over all the functions created within the function app.

    In the function app home page, click App Keys . Click on the hidden value and copy the master key. Refer to the following image for better understanding.

    Figure. Azure Portal - Function App Master Key Click to enlarge []

Action Library

The Action Library page provides a wide range of action templates that you can use in Playbooks.

Figure. Playbooks - Action Library Click to enlarge []
You can do the following on this page:
  • Create custom-action templates. For more information, see Creating a Custom Action Template .
  • Clone or delete a custom template. Note that the delete option is not available for the system template.
    Note:
    • A template once created cannot be modified. You can only clone or delete the template.
    • A template attached with an existing playbook cannot be deleted.
  • Use the drop-down list in the top-right corner to filter system- or custom-templates.
  • View the details of each template.
    Figure. Playbooks - View Template Details Click to enlarge []

Action Template

The action template corresponds to the script that would take action when deployed in a runtime environment, that is, AWS lambda or Azure function. For example, an action can be Stop AWS EC2 Instances , Stop AWS RDS Instances , or Start Azure VM Instances .

An action template consists of the following:
  • Required input parameters
  • A python script or code that takes actions on the resources according to the specified logic
  • Name and description of the template.
Note: The action template is reusable and can be inserted into multiple playbooks instances.
Figure. Action Template - Details Click to enlarge []

Let us understand the input parameters in detail.

Input Parameters

The Input Parameters consist of static inputs and dynamic resource-level inputs.

Refer to the following table to understand the various input types and their use cases.
Input Parameter Description
Params The static inputs correspond to each playbook execution. This means that these inputs remain the same for each execution of the playbook.

There are two types of Params: Plain Text and Protected .

Plain Text

The value of the key gets stored as plain text. When creating the template, you can define multiple keys. You have to enter the value for each key while creating a playbook.

Example: For the Start Azure VM Instances template, the defined keys are tenant Id, client Id, and secret key. When creating an Azure playbook for starting a VM, you have to enter the values of these keys. These values get stored in plain text.

Protected

The value of the key is stored as a ciphertext. For example, select Protected as the input type for the secret key.

Note:
  • The protected value is always stored as encrypted in the system. This value only gets decrypted while invoking the function.
  • The encrypted value cannot be viewed once entered in the playbook. You can override the value using the Edit option in the playbook.
Dimensions Corresponds to the data for each resource within the defined resource group.

Example: The dimensions defined for the Start Azure VM Instances template are VM name , Subscription Id , and resource group name . Suppose you are creating an Azure playbook to start VM instances using this system template. While executing the playbook, Beam searches for all the VMs within the defined resource group and adds the corresponding information to the input object of the script. The input object will contain the dimensions as a list with the name resourceList .

See the following sample input object:
{
    "params": {
        "tenantId": "t_values",
        "clientId": "c_values"
    },
    "resourceList": [
        {
            "vmName": "v1",
            "subscriptionId": "subsId1",
            "resourceGroupName": "rgn1"
        },
        {
            "vmName": "v2",
            "subscriptionId": "subsId1",
            "resourceGroupName": "rgn2"
        },
        {
            "vmName": "v3",
            "subscriptionId": "subsId1",
            "resourceGroupName": "rgn3"
        }
    ]
}
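The action script receives this object as its input. The following sketch shows how the params and resourceList sections might be read; start_vm is a hypothetical placeholder for the real azure-mgmt-compute call, not a Beam or Azure API:

```python
# Hypothetical sketch of a script consuming the input object above.
# start_vm stands in for the real azure-sdk call.
def start_vm(vm_name, subscription_id, resource_group_name):
    # Placeholder for the SDK call that starts the VM.
    return True

def run(input_object):
    # params holds the static inputs (e.g. tenantId, clientId) that are
    # the same for every execution of the playbook.
    params = input_object["params"]
    started = []
    # resourceList holds one entry per resource in the resource group.
    for resource in input_object["resourceList"]:
        if start_vm(resource["vmName"],
                    resource["subscriptionId"],
                    resource["resourceGroupName"]):
            started.append(resource["vmName"])
    return started
```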

Supported Dimensions

The following table lists the supported dimensions for each service.
Resource Type Supported Dimensions
AWS EC2 instanceId , region , instanceName , accountId , serviceType , tags , instanceType , state .
AWS EC2 AMI imageId , name , creationDate , imageState , region , imageOwnerAlias , tags , accountId
AWS EC2 EBS Snapshot snapshotId , accountId , serviceType , region , tags , description , isEncrypted , snapshotStartTime , ownerId , ownerAlias , state
AWS EC2 Auto Scaling Group name , desiredCapacity , maxSize , minSize , region , createdTime , autoScalingGroupARN , accountId , serviceType , tags
AWS RDS region , accountId , dbInstanceIdentifier , serviceType , tags , dbInstanceArn , dbInstanceClass , engine , dbInstanceStatus .
Azure VM instanceId , accountId , serviceType , region , tags , powerState , osType , size , resourceGroupName , name .
System Template

Beam provides predefined templates that already contain a python script for executing a specific action.

Note: System templates use asynchronous APIs for taking actions. This means that Beam submits the execution request to the cloud and considers it a success when the request gets accepted. Beam does not check periodically whether the operation got completed in AWS or Azure environments.

Refer to the following table to know about the supported system templates and their corresponding input parameters.

Table 1. Beam System Templates
Resource Type System Template Description Additional Parameters
AWS EC2 Start AWS EC2 Instances Used to start EC2 instances. None
Stop AWS EC2 Instances Used to stop EC2 instances. None
Create AMI For EC2 Instance Used to create AWS EBS-backed AMI for the selected EC2 instance.
  • CopyEC2InstanceTags

    [default: true]

  • NoReboot

    [default: false]

Create EBS Snapshots Used to create snapshots of all the EBS volumes attached to the selected EC2 instance.
  • ExcludeBootVolume
  • CopyVolumeTagsToSnapshot
  • CopyEC2InstanceTagsToSnapshot

    [default: true]

Cleanup AMIs by Max Images To Retain Used to clean up backed-up EC2 AMIs, retaining only the specified number of the latest AMIs and deleting the remaining images.
Note: This action only considers the AMIs created through the Create AMI For EC2 Instance system template.
maxImageToRetain
Cleanup EBS Snapshots By Snapshots To Retain Used to clean up EBS snapshots of all the EBS volumes attached to the selected EC2 instance.

This action retains the specified number of the latest snapshots for all volumes attached to the EC2 instance.

Note: Playbook will not delete the snapshot of the root device of an EBS volume that is used by a registered AMI. To delete those snapshots, you must unregister the AMI first.
maxSnapshotsToRetain
AWS EC2 AMI Cleanup AMIs by Retention Period Used to clean up backed-up EC2 AMIs.

This action deregisters the AMI if its creation date is older than the specified retention period (in days).

retentionTimePeriodInDays
AWS EC2 EBS Snapshot Cleanup EBS Snapshots by Retention Period Used to clean up EBS snapshots older than the specified retention period (in days).
Note: Playbook will not delete the snapshot of the root device of an EBS volume that is used by a registered AMI. To delete those snapshots, you must unregister the AMI first.
retentionTimePeriodInDays
AWS EC2 Auto Scaling Group Update Configs for EC2 Auto Scaling Groups Used to update configurations (minSize, maxSize, and desiredCapacity) for EC2 auto-scaling groups.
  • desiredCapacity
  • minSize
  • maxSize
AWS RDS Start AWS RDS Instances Used to start RDS instances. None
Stop AWS RDS Instances Used to stop RDS instances. None
Azure VM Start Azure VM Instances Used to start VM instances. None
Stop Azure VM Instances Used to stop VM instances. None
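The retention-based cleanup templates above share one selection rule: act on anything whose creation date is older than retentionTimePeriodInDays. A minimal sketch of that rule in pure Python (the actual system templates apply it through AWS APIs; select_expired is an illustrative name, not Beam code):

```python
from datetime import datetime, timedelta, timezone

# Sketch of the selection rule behind the "Cleanup ... by Retention
# Period" templates: pick resources older than the retention cutoff.
def select_expired(resources, retention_days, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    # Each resource is assumed to carry a creationDate datetime.
    return [r for r in resources if r["creationDate"] < cutoff]
```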
Custom Template

Beam provides you with the option to create custom templates according to your needs and use cases.

You can define static input parameters and dimensions according to your needs and use cases to create a custom template.

Let us take an example to understand the use case of Dimensions.

Suppose you want to create a playbook to stop Windows VMs in the Azure cloud. Start with creating a custom action template for this playbook. In the template, select osType as the parameter and define the value accordingly. Beam will search all the VMs within the resource group and will pass the instance Id and osType data as input to the script. The input parameters and the input object structure are explained in the Action Template section .
Figure. Custom Template - Use Case Click to enlarge []
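The Windows-VM use case above can be sketched as the following script body. This is a hedged illustration of the filtering logic only; stop_vm is a hypothetical stand-in for the real azure-mgmt-compute call, and the dimension names match those listed in the Supported Dimensions table:

```python
# Sketch of the custom-template logic described above: stop only the
# Windows VMs from the resourceList that Beam passes to the script.
def stop_vm(instance_id):
    return True  # placeholder for the real azure-sdk call

def stop_windows_vms(input_object):
    stopped, failed = 0, 0
    for vm in input_object["resourceList"]:
        if vm.get("osType") != "Windows":
            continue  # filter on the osType dimension from the template
        if stop_vm(vm["instanceId"]):
            stopped += 1
        else:
            failed += 1
    return {"stopped": stopped, "failedResourceCount": failed}
```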

Script Format - Guidelines

This section contains important points for you to consider when planning to write the python script in the custom action template.

Note: You must adhere to these recommendations; otherwise, the script will not execute correctly, and you will not get the desired result.
  • Beam provides a basic template for the script body. We recommend that you write your code on top of the provided format.
  • (Azure) You can only import the following supported libraries:
    • azure-identity~=1.4.0
    • azure-mgmt-compute~=17.0.0
    • azure-mgmt-network~=16.0.0
    • azure-mgmt-resource~=15.0.0
    • azure-mgmt-storage~=16.0.0
    • requests~=2.24.0
  • (AWS) You can import the libraries that are present in the AWS Lambda-python environment by default. For example, JSON and boto3 .
  • In the return statement, we recommend that you add the totalResourceCount and failedResourceCount fields. These fields correspond to the total number of resources on which the Python script got executed and the number of failed instances. Beam uses these fields to decide whether the execution was a success or a failure. The values that the script returns get displayed in the output section of the Run Logs page in JSON format.
    Note:
    • If the value of the failedResourceCount field is higher than zero, the execution attempt is marked as failed.
    • If the return object is an invalid JSON, the failedResourceCount field is missing, or the value of this field is not an integer, Beam will only consider function invocation to decide the execution status. For more information, see Run Logs .
    Figure. Custom Script - Guidelines Click to enlarge []
  • (Azure) To display log data in the Run Logs page, you need to create a log variable in the return statement. The log data that the script returns gets displayed in the Run Logs page.
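Following these guidelines, the tail of a custom script might look like the sketch below. The surrounding action logic is elided; the exact key name for the Azure log variable ("log") is an assumption based on the guideline above:

```python
# Sketch of the recommended return object for a custom action script.
# Beam reads totalResourceCount and failedResourceCount to decide
# whether the execution succeeded; "log" (Azure, assumed key name) is
# shown in the Run Logs page.
def build_result(total, failed, log_lines):
    return {
        "totalResourceCount": total,    # resources the script acted on
        "failedResourceCount": failed,  # > 0 marks the attempt as failed
        "log": "\n".join(log_lines),    # (Azure) displayed in Run Logs
    }
```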

Creating a Custom Template

You can use the templates to apply actions on the resources in your public cloud (AWS and Azure) environment. Refer to this section to create a custom template.

About this task

To create a custom template, do the following:

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Playbooks .
  2. Click Action Library to view the system- and custom-templates.
  3. In the top-left corner of the page, click Create Action Template .
    The Create Action Template page appears.
  4. In the Code screen, do the following:
    1. In the Input Parameter area, select the inputs according to your requirement.
    2. In the Python Script box, enter the action logic code.

      The code gets executed in your cloud environment through the AWS lambda function or Azure function.

      Caution: Ensure that you enter the correct logic code. Beam does not check your code and is only responsible for invoking the AWS lambda function or Azure function.
    3. Click Next to go to the Details screen.
  5. In the Details screen, do the following:
    1. In the Name box, enter a name that identifies your custom template.
    2. In the Description box, enter a short description for the custom action.
      AWS Lambda or Azure Function gets auto-selected according to the cloud type you selected in the Code screen.
    3. Click Save to complete.
      The custom template appears in the Action Library page.

Creating a Playbook

This section explains how to create a playbook using the system or custom action templates. The template consists of the python script that would take action when deployed in a runtime environment, that is, AWS lambda or Azure function. You can use an action template (system or custom) with multiple playbook instances.

Before you begin

  • You need write permissions on the accounts in which you want to create a playbook.
  • Ensure that you enabled the accounts in which you want to create a playbook. For more information, see Playbooks Enablement .

About this task

An action template is the building block of the playbook. The template is defined through the input parameters and the Python script. Before you start creating a playbook, we recommend that you get an understanding of the action template. For more information, see Action Template .

To create a playbook for automating your regular workflows, do the following:

Procedure

  1. In the Beam console, click the Hamburger icon in the top-left corner to display the main menu. Then, click Playbooks .

    The Playbooks page appears.

  2. Click Create a Playbook .
    The Create Playbook page appears.
    Playbook creation consists of the following three steps:
    • Defining the schedule for the execution of the playbook
    • Defining the resource group on which the playbook will get executed
    • Defining the action which includes selecting the action template and email notification.
  3. In the Schedule screen, do the following:
    1. Select the schedule type according to your requirements.
      The available options are Once , At a recurring time interval , and CRON Expression .
    2. If you select the Once option, enter the details as shown in the following image.
      Figure. Schedule the Playbook - Once Option Click to enlarge []
    3. If you select the At a recurring time interval option, enter the details as shown in the following image.
      Figure. Schedule the Playbook - Recurring Option Click to enlarge []
      Note:
      • Daily : The Every Days value is limited to a month. Suppose you enter 5 days as the value. The trigger will happen on the following dates of the month: 1, 6, 11, 16, 21, 26, and 31. If you enter the value as 40 days, the trigger will only happen on the first day of every month.
      • Monthly : The Day of every month value is limited to a year. Suppose you enter Day 10 of every 5 month as the value. The trigger will happen on the 10th of January, June, and November.
    4. If you select the CRON Expression option, select the time zone, and enter the expression.
      Note:
      • Six field cron-expressions are not supported.
      • Minute level crons are not supported.
      • For the Day of the week field, the allowed values are 0-6 or Mon , Tue , Wed , Thu , Fri , Sat , and Sun .
      • For the Month field, the allowed values are 1 to 12.

      After you define the schedule, click Next in the top-right corner of the page.

  4. In the Resource Group screen, do the following:
    1. In the Cloud list, select AWS or Azure .
    2. In the Account list, select the account that contains the resources on which you want to automate actions.
      Note: Beam allows you to select a single account to avoid issues with cross-account access while executing the function.
    3. In the Service list, select the cloud service for which you want to create a playbook.
      The list only shows the services for the cloud you select. The available services are AWS RDS , AWS EC2 , and Azure Virtual Machine .
    4. In the Region list, select the regions you want.
      The playbook execution will happen only on the resources belonging to the selected regions.
    5. In the Tag Pair list, select at least one key-value pair to further narrow down the resources.
    6. Click Next .
  5. In the Select Action screen, do the following:
    1. Select an action template.
      This page contains all the system and custom templates for the cloud and service defined in the Resource Group screen.

      You can use the drop-down list in the top-right corner for filtering the system or custom templates. If you have not already created a custom template from the Action Library page, use the Create Action Template option. For more information, see Creating a Custom Action Template .

    2. To successfully execute the playbook on the defined resource group, you must provision permissions for write operations in the respective cloud that corresponds to the action logic defined in the action template.
      • For AWS Playbooks, you must provide the ARN with the necessary permissions.
      • For Azure Playbooks, you must create an Active Directory application and provide the corresponding tenant Id, client Id, and secret key while creating the playbook.
    3. If you are creating an AWS playbook, do the following in the action template:
      • Select the region in which you want to create the lambda function.
      • In the Input an ARN with the required permissions box, enter the ARN. For more information, see Creating an AWS IAM Role .
      • In the Input Parameter field, select the required AWS dimensions. For more information, see Action Template
      Figure. AWS Playbook - Required Inputs Click to enlarge []
    4. If you are creating an Azure playbook, do the following in the action template:
      • In the Azure Function App list, select the application in which you want to deploy the function.
      • In the Input Parameter field, enter the tenant Id, clientId, and secret key. For more information, see Creating an Active Directory App for Playbooks .
        Figure. Azure Playbook - Required Inputs Click to enlarge []
    5. (Optional) In the left-side pane, click Add Notification and then select Email Notification .
      Do the following:
      • In the Recipients list, select the Beam users to whom you want to send a notification upon the execution of a playbook instance.
      • Enter the subject and message body.

        You can use the Parameters option to add tags in your message body.

        Figure. Playbook - Configure Email Notification Click to enlarge []
    6. Click Save & Close in the top-right corner of the page.
    7. Enter a name and a description for the playbook.
    8. Click the Status toggle button to decide on the state of the playbook after it gets created and initialized.
      If marked as active, the playbook automatically gets activated after initialization. If it is marked inactive, you need to manually activate the playbook using the toggle button available in the Dashboard .
    9. Click Save Playbook to complete the workflow.
      The playbook will appear in the Dashboard .
      Note: After saving the playbook, the initialization process will start. For AWS, this process takes around 1 to 5 minutes. For Azure, it takes around 10 to 15 minutes.
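The Daily schedule rule described in step 3 (triggers restart from day 1 of each month, and values of a month or more trigger only on day 1) can be reproduced with a small calculation. A sketch assuming a 31-day month; the function name is illustrative, not part of Beam:

```python
# Sketch of the "Every N Days" rule from step 3: within a month,
# triggers fall on days 1, 1+N, 1+2N, ... and restart each month.
def daily_trigger_days(every_n_days, days_in_month=31):
    if every_n_days >= days_in_month:
        return [1]  # values of a month or more trigger only on day 1
    return list(range(1, days_in_month + 1, every_n_days))
```

For example, a value of 5 days yields triggers on days 1, 6, 11, 16, 21, 26, and 31, matching the note above.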

Creating an AWS IAM Role

The Lambda function uses an IAM role to obtain the permissions required to execute the Python script. You need to enter the ARN information while creating a playbook.

About this task

To create an IAM role for executing the python script, do the following:

Procedure

  1. Log on to your AWS Management Console and open the IAM console.
  2. In the navigation pane of the console, click Roles and then click Create role .
    The Create Role page appears.
  3. Select the AWS service role type.
  4. In the Choose a use case area, click Lambda . Then, click Next: Permissions at the bottom right side of the page.
  5. Click Create Policy to open a new browser tab and create a new policy from scratch.
    1. In the JSON tab, paste the IAM policy required to successfully execute the python script defined in the Beam’s action template.
      You can copy the required IAM policy from the Create Playbook page.
      Note: If you are creating a custom action template in the Beam console and using it in the Playbook, make sure you create an AWS IAM policy with the required permissions to execute your custom python script.
      Figure. AWS IAM Role - Copy IAM Policy Click to enlarge []
    2. Click Review policy to proceed.
    3. In the Review policy page, enter a name that identifies your policy.
      Adding a description is optional.
    4. Click Create policy .
      After you create the policy, close the tab and return to your original tab. The policy you created appears in the list. You can use the search box to find your policy if the list is long.
  6. Select the check box next to the policy that you created and click Next: Tags .
  7. (Optional) In the Add Tags page, add metadata to the role by attaching tags as key-value pairs. Then, click Next: Review .
  8. In the Review page, enter a role name that identifies your IAM role.
  9. Click Create role to complete.
    The IAM role appears in the list.
    Figure. AWS IAM Role - List Click to enlarge []

    Click on the IAM role to open the Summary page. This page displays the Role ARN which you require to enter in the Beam console.

    Figure. AWS IAM Role - Copy ARN Click to enlarge []
    Note:
    • You can create multiple policies for each action type (for example, start AWS EC2 instance, stop AWS EC2 instance, start AWS RDS instance, and so on) and attach these policies to an IAM role.
    • You can also add the required permissions for executing the python script to a single policy and attach the policy to an IAM role.
    • You can also create different IAM roles for each policy.
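For illustration, an IAM policy of the kind pasted in the JSON tab might look like the following sketch for a script that stops EC2 instances. The actions listed are assumptions for this example only; always copy the actual policy from the Create Playbook page.

```python
import json

# Hypothetical IAM policy granting the permissions a playbook script
# might need to stop EC2 instances. The action list is illustrative;
# copy the real policy from the Create Playbook page.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StopInstances",
            ],
            "Resource": "*",
        }
    ],
}

# json.dumps produces the text you would paste into the JSON tab.
print(json.dumps(policy, indent=2))
```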

Creating an Active Directory App

To execute the Python script added in the action template of the playbook, you need to create an Active Directory (AD) application with write permissions and provide the corresponding tenantId, clientId, and secret-key values in the Beam console.

About this task

To create an AD application for a playbook, do the following:

Procedure

  1. Log on to the Azure Portal, and go to Azure Active Directory .
  2. In the left side pane, click App registrations .

    The App registrations page appears.

  3. Click New registration .
    The Register an application page appears.
    Figure. Azure Portal - Registering an Application

    Do the following:

    1. Enter a name that identifies your application.
    2. In the Supported account types area, click Accounts in this organizational directory only (Nutanix only - Single tenant) .
    3. Click Register to create the AD application.

      The AD application page appears. Now, you need to add the Azure Service Management API permissions.

  4. In the left side pane, click API permissions .

    The Request API permissions pop-up appears.

    Figure. Azure Portal - Request API Permissions
  5. Scroll down and click Azure Service Management .
  6. Select the Access Azure Service Management as organization users check box. Then, click Add permissions .
    The next step is to create a client secret.
  7. To add a client secret, do the following:
    1. In the left side pane, click Certificates & secrets > New client secret .
    2. Enter a description and select an expiry period. Then, click Add .
      You can view the secret key for your Azure AD application.
      Figure. Azure Portal - Viewing Secret Key

      You can view the client ID and tenant ID in the Overview page.

      Figure. Azure Portal - Viewing Tenant Id and Client Id

      You need to enter these three values in the Input Parameter fields of the action template when creating a playbook.
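The tenantId, clientId, and secret-key values collected above are the inputs for an OAuth2 client-credentials token request against Azure AD, which is how a script can obtain a management-API token. The sketch below only builds the request URL and body with the Python standard library using placeholder values; it does not send the request.

```python
from urllib.parse import urlencode

# Placeholder values; substitute the tenantId, clientId, and secret-key
# from the AD application created above.
tenant_id = "<tenantId>"
client_id = "<clientId>"
client_secret = "<secret-key>"

# Standard Azure AD v2.0 token endpoint for the client-credentials flow.
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Form-encoded body; the Azure Service Management permission granted
# above corresponds to the management scope below.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": "https://management.azure.com/.default",
})
# POSTing body to token_url (Content-Type:
# application/x-www-form-urlencoded) would return a bearer token.
```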

Playbooks Dashboard

The Playbooks dashboard allows you to view the instances of playbook execution and perform various actions.

You can do the following:
  • Create a playbook using the button in the top-left corner.
  • View detailed information of a playbook. Click on a playbook to view its details and log information.
  • Last Run column : Displays the status of the last execution of the playbook. Hover over the status (success or failed) to view detailed information such as last run time, success rate, total runs, and so on.
  • Status column : Use the toggle button to change the status of a playbook from active to inactive and vice versa.
  • Perform various operations (clone, rename, edit, and delete) on a playbook using the icon on the right side.
    Clone option : Suppose there is an existing playbook, and you want to create a new playbook with a similar configuration but a different schedule. Clone the existing playbook and change the schedule accordingly.
    Note: A playbook in an error state must be deleted. This happens only when Beam is unable to create the Lambda function or the Azure function defined in the playbook. In such cases, you can clone the playbook after a few minutes and delete the playbook with the error.
    Figure. Playbooks Dashboard

Playbook Details

In the Playbooks dashboard, click a playbook to view its details, including run logs and change logs.

The Details page displays the configuration details such as schedule, resource group, and action. You can perform edit and delete operations and change the status of the playbook, for example, from active to inactive.
Figure. Playbook - Details Page

Run Logs

The Run Logs page displays the list of the executions that occurred according to the configured schedule and the status of each execution, that is, success or failure.
Figure. Playbook - Run Logs

You can click View Details against an execution instance to view detailed information such as input (JSON), output (for example, status code, total resource count, failed resource count), and logs.

Suppose a user changes the definition of a playbook (for example, resource group). The next trigger gets executed according to the new definition, that is, on the new resource group. Each execution instance displays the exact configuration (schedule time and resource group) at the time of the trigger. You can go to the Change Logs page to view the details, that is, the user who made the changes and the modifications.

The Attempts section displays the number of attempts made before the playbook gets executed successfully.

Each action item has an Attempts section which displays the following information:
  • Execution Status : The status displayed depends on the following two factors:
    • Whether the function was invoked successfully or failed in the cloud environment.
    • Whether the response displayed in the Output field is valid JSON and the value of the failedResourceCount field is an integer. If the value is greater than zero, the attempt is unsuccessful, and the status is displayed as failed.
      Note:
      • We recommend that you add the totalResourceCount and failedResourceCount fields to the return statement of the Python script. For more information, see Script Format - Recommendations.
      • If the JSON format is invalid, the failedResourceCount field is missing, or the value of this field is not an integer, Beam will only consider function invocation and display the status accordingly.
  • Input : Displays the input that you provided in the AWS Lambda or the Azure function.
  • Output : Displays the response received after the provided script gets executed.
  • Logs : Displays the log generated by the AWS Lambda or the Azure function for the selected execution instance of the playbook.
    Note:
    • For AWS, the last 4 KB of logs get fetched from the Lambda execution.
    • For Azure, no logs are fetched from the function execution. Only the data present in the logs variable in the response of the script gets displayed as the execution log.
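Following the recommendation above, a playbook script's return value and Beam's described status check can be sketched as follows. Only the totalResourceCount, failedResourceCount, and logs field names come from the documentation; the run helper, its resource format, and the attempt_status function are illustrative.

```python
import json

def run(resources):
    """Illustrative playbook action returning the recommended fields."""
    failed = [r for r in resources if not r.get("ok")]
    return json.dumps({
        "totalResourceCount": len(resources),
        "failedResourceCount": len(failed),
        "logs": "processed %d resources" % len(resources),
    })

def attempt_status(output):
    """Sketch of the check described above: the output must be valid
    JSON with an integer failedResourceCount; a value above zero means
    the attempt failed."""
    try:
        failed = json.loads(output)["failedResourceCount"]
    except (ValueError, KeyError):
        # Invalid JSON or missing field: Beam falls back to considering
        # only whether the function invocation itself succeeded.
        return "unknown"
    if not isinstance(failed, int):
        return "unknown"
    return "failed" if failed > 0 else "success"
```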

Change Logs

The Change Logs page displays the historical data of the changes made to the selected playbook.

A line item gets added to the table for each change. You can expand each line item to view the changes made in that version compared to the previous version in the list. The changes made by a user are highlighted.
Figure. Playbook - Change Logs

Scopes

A scope is a logical group of your cloud resources that provides you with a custom view of your cloud resources. You can define scopes using cloud, accounts, and tags. The administrator can assign read-only access to a user. The user gets read-only access for the resources within the scope and not the cloud accounts that constitute the scope. After you create a scope, click View in the top-right corner to open the View Selector and select the scope.

Creating a Scope

A scope is a logical group of your cloud resources that provides you with a custom view of your cloud resources.

About this task

This section describes the procedure to create a scope.
Note: Only a Beam administrator can add viewers to a scope.

To create a scope, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Scopes .
  2. Click Create a Scope to open the Scope Creation page.
    In the Scope Creation page, you need to enter a scope name and add viewers from the list available. Then you need to define the scope based on cloud, account, and tags.
  3. In the Name box, enter a name for the scope.
  4. In the Viewers list, select the viewers for the scope you are creating.
  5. Click Define Scope to select the cloud, account, and tags.
  6. In the Define Scope page, do the following.
    1. In the Cloud list, select the cloud type.
      The available options are AWS , Azure , or GCP .
    2. In the Parent Account list, select the parent account for which you want to create a scope.
    3. In the Sub Accounts list, select the sub accounts from the list available.
      The Sub Accounts list displays all the sub accounts within the parent account you selected.
    4. In the Tag Pair area, select the key and value pairs.
      You can click the plus icon to add multiple key and value pairs.
    5. Click Save Filter to save your filter.
      You can click Add Filter to create more filters.
    6. Click Save Resource Group to save your scope definition and close the Define Scope page.
  7. In the Scope Creation page, click Save Scope to create the scope.
    You can select and view the scope you just created using the View Selector.

Editing a Scope

Before you begin

Only a Beam administrator or the creator of the scope can edit a scope.

About this task

To edit a configured scope, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Scopes .
    You can view the list of scopes in the Scopes Configuration page. You can use the search box to find the scope that you want to edit.
  2. Click Edit against the scope that you want to edit.
  3. Update the field values as desired in the Scope Creation page and then click the Save Scope button.

Deleting a Scope

Before you begin

Only a Beam administrator or the creator of a scope can delete a scope.

About this task

To delete a configured scope, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Scopes .
    You can view the list of scopes in the Scopes Configuration page. You can use the search box to find the scope that you want to delete.
  2. Click Delete against the scope that you want to delete.
  3. Do one of the following:
    • Click Delete to delete the scope.
    • Click Cancel to return to the Scope Configuration page.

Cloud Invoice (AWS)

The Invoice page generates invoices for your customers as a reseller or for the business units within your company based on the AWS accounts. This page displays a list of account names with the account ID and bill amount.

For information on configuring invoices, see Configuring Invoices.

You can use the search bar to view the bill for a specific account. If you want to generate reports, you can use generate, share, or download report options. For more information on reports, see Cost Reports.

Note: By default, Beam updates the invoices for the previous month on the seventh of every month. To change configurations and generate invoices on demand, click the refresh button at the top of the page.

Configuring Invoice (AWS)

You can configure settings for invoices generated for all the linked accounts under a payer account.

About this task

To configure invoice settings, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Invoice Configuration .
  2. In Logo Preview , click Upload to upload a picture.
  3. In Company Address , enter your company address.
  4. To select a way to add cost, choose one of the following.
    • Extra line item
    • Distribute proportionally
  5. If you selected the Extra line item option in step 4, do the following.
    1. Move the slider to the right to change the commission type from fixed to percentage.
    2. Enter a commission value.
    3. Enter the line description.
  6. If you selected the Distribute proportionally option in step 4, do the following.
    1. Move the slider to the right to change the commission type from fixed to percentage.
    2. Enter a commission value.
  7. To configure additional settings, click the Additional Settings (optional) drop-down to expand the menu.
    1. In Currency Type , enter the value for currency type and commission rate.
      Note: By default, the currency type is USD. Enter the commission value in the same currency.
    2. Enable Include AWS credits in invoices , if necessary.
    3. To select the cost type for generating invoices, select either Blended Cost or Unblended Cost .
  8. In Customize for linked account , select a linked account from the drop-down list.
  9. Click Submit .

Licensing

This section provides information about the supported licenses for your Beam accounts, license entitlement for Nutanix on-premises, and subscription entitlement for public clouds.

The supported licenses are as follows. If you have any of the following licenses, you can convert your Beam account to a paid state.
  • NCM Ultimate
  • NCM Pro
  • Prism Ultimate
The Licensing page allows you to:
  • View information about the license entitlement (required for accessing cost management features for Nutanix On-premises).
    • Status can be In Trial , License Applied , Trial Expired , or License Expired .
  • Activate your license ( Apply License or Validate License ) to convert from In Trial or Expired to the License Applied status.
  • View information about Beam public cloud subscription (required for accessing cost management features for Public Cloud).
    • Status can be In Trial , Subscription Active , Trial Expired , or Subscription Expired .
  • Purchase public cloud subscriptions using the Purchase Online Now option.
Note: Only the Beam account owner can view the details on the Licensing page.

To access the Licensing page, click Hamburger Icon > Configure > Licensing .

Figure. Licensing Page

License Entitlement for Nutanix - Status

Refer to the following table to know about the various statuses available for your Beam account.

Status Description
In Trial A banner gets displayed notifying you about the trial period and suggesting that you purchase a license.
Trial Expired or License Expired The Expired status appears after your trial period ends or the applied license expires. You cannot use the Beam features and need to purchase a license.
License Applied Indicates that your license is applied. You can also view the expiry date of the license.

License Entitlement for Nutanix - Actions

The following two cases may occur after the license gets applied:

System detects a valid license
Beam checks if the supported license is available for the Beam account in which you are logged in.
  • If a license gets detected, the License is Found status gets displayed along with the Apply License option to apply the license manually. After you perform this action, the status changes to License Applied along with the expiry date.
  • You need to perform the Apply License action only the first time after you buy a license and log on to Beam. Beam periodically checks for any new licenses that you purchased, and then applies the license or updates the expiry date accordingly.
    • Scenario 1: Beam detects L1 (expiry date as 31 December 2021) on 1 January 2021 and you click Apply License to activate the license. On 31 December 2021, Beam detects L2 (expiry date as 31 December 2022) and automatically applies the license.
    • Scenario 2: Suppose you buy multiple licenses with different expiry dates. For example, L1 and L2 with expiry dates as 21 June and 21 December. Beam will show the license with the farthest expiry date (21 December) once the data about the new purchases get discovered through periodic checks.
System does not find a valid license
Suppose you have one of the supported licenses, but Beam is unable to identify or recognize your license for some reason and shows the License Not Found status. The Validate License option allows you to manually enter and validate your license key.
The validation may fail due to the following reasons:
  • You entered an invalid license key.
  • You entered a valid license key that belongs to another MyNutanix account. The reason for validation failure is that the Beam account can only be linked to a single MyNutanix account. Contact Nutanix support for further help.

Subscription Entitlement for Public Clouds - Status

For public clouds (AWS and Azure), one of the following statuses gets displayed on the Licensing page:

Status Description
In Trial A banner gets displayed to notify you about the trial period, with an option to purchase a subscription. To purchase a subscription, click Purchase Online Now , which redirects you to the Nutanix Billing Center.
Subscription Active This status indicates that you have an active subscription. The expiration date of the active subscription is also displayed.
Trial Expired or Subscription Expired The Expired status appears after your trial period ends or the active subscription expires. You cannot use the Beam features and need to purchase or renew the subscription.

Feature Violations

You must have a valid license and subscription to use the Beam features for your Nutanix on-premises and public clouds. If you do not purchase a license or subscription, a banner gets displayed at the top of the Beam page indicating that there are feature violations.

Feature violations occur in the following cases:
  • You have an active license but you are using the public cloud features in Beam without an active subscription.
  • You have active public cloud subscriptions but you are using the Nutanix features in Beam without a license.

Beam API

Beam provides API services that allow you to programmatically retrieve data from the Beam platform and perform different configurations.

The following are a few scenarios for using the API services:
  • You can programmatically retrieve analysis data for your cloud accounts using the API. You can select the resource group to raise a query. For example, you can raise a query for a business unit or a cost center. For more information, see the Analysis v2 or Analyze sections in the Beam API Reference Guide.
  • You can use APIs for remote onboarding of your cloud accounts without the need to log in to the Beam GUI. For more information, see dev.beam.nutanix.com.
  • You can perform CRUD operations on business units, cost centers, and scopes using APIs. For more information, see the Cost Governance - Multicloud section in the Beam API Reference Guide .
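As a rough sketch, each of the scenarios above amounts to an authenticated HTTPS request. The path and authorization scheme below are placeholders, not the actual Beam API; consult the Beam API Reference Guide at dev.beam.nutanix.com for real endpoints and authentication details.

```python
from urllib.request import Request

def build_request(token: str, path: str) -> Request:
    """Build an authenticated request; base URL is the Beam console
    host, but the path and bearer-token scheme are assumptions."""
    req = Request("https://beam.nutanix.com" + path)
    req.add_header("Authorization", "Bearer " + token)
    req.add_header("Accept", "application/json")
    return req

# Placeholder token and path; "/api/..." is not a real endpoint.
req = build_request("<access-token>", "/api/...")
# urllib.request.urlopen(req) would perform the call; omitted here.
```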

External Integrations

You can integrate Beam with third-party applications such as Slack and ServiceNow.

The following sections describe the integrations you can perform in the Beam application.

Integrating Slack with Beam

You can integrate the Beam application with Slack to get notifications on your Slack channels. This section describes how to integrate Slack with Beam.

About this task

To integrate the Slack application with Beam, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Integrations .
  2. Click Add Integration in the Slack dialog box. Then click Add Slack .
  3. Enter a name for your Slack integration.
  4. Select the accounts for which you want to receive notifications on Slack .
  5. Log on to your Slack account and do the following.
    Make sure that you have the permissions to add incoming webhooks.
    1. Click the drop-down menu at the top-left corner (next to the name of your team).
    2. Click Apps .
    3. Search for Incoming Webhooks and select it.
      Figure. Slack - Incoming Webhooks

    4. Click Add Configuration .
    5. Select the Channel on which you want to receive your alerts. Alternatively, create a Slack channel.
    6. Click Add Incoming Webhooks Integration .
    7. Copy the WebHook URL so that you can enter this URL in Beam.
  6. Paste the copied URL in the Webhook field.
  7. Click Save Settings .
  8. Select the events for which you want to receive the notifications.
  9. Click Integrate to complete the setup.
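The webhook URL copied above accepts the standard Slack incoming-webhook payload: a JSON object with a text field posted over HTTPS. A minimal sketch with a placeholder URL:

```python
import json
from urllib.request import Request

# Placeholder; use the WebHook URL copied from Slack.
webhook_url = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

# Standard incoming-webhook payload: a JSON object with a "text" field.
payload = json.dumps({"text": "Test message from Beam integration"})

req = Request(
    webhook_url,
    data=payload.encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would deliver the message; omitted here.
```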

ServiceNow Integration with Nutanix Beam

ServiceNow provides IT services management (ITSM), and IT operations management (ITOM) as a software service.

The Nutanix Beam & Security Central Plugin (ServiceNow Beam application) helps you integrate your ServiceNow instance with your Beam account. You can create a Change Request for Beam's cost optimization recommendations. Your Beam account also receives updates on any changes made to the change requests created through this integration.

ServiceNow integration in Beam consists of the following steps.

  1. Install Beam application in ServiceNow

    Go to the ServiceNow Store to get the Nutanix Beam & Security Central Plugin .

    To install the Beam application in your ServiceNow instance, see ServiceNow Documentation.

  2. Create ServiceNow integration in Beam

    The Integrations page allows you to create a ServiceNow integration in Beam. Beam generates Webhook URL and Security Token . You need to enter the webhook URL and security token when creating an Integration Definition in the ServiceNow Beam application .

    After the Integration Definition in your ServiceNow instance is complete, you can click Verify and Save to complete the ServiceNow Integration. The Default Change Request Template gets created by default. You can add more templates if you want.

  3. Create Integration Definition in the ServiceNow Beam Application

    In your ServiceNow instance, you need to create an Integration Definition. When creating the definition, you need to enter the webhook URL and security token (auto-generated by Beam in the Integrations page) in the Endpoint and Token fields.

    Note: If you did not copy the webhook URL and security token when creating the ServiceNow integration in Beam , go to the Integrations page and click Edit against the ServiceNow integration you created. The Edit ServiceNow Integration page appears. Then you can copy the webhook URL and security token.
    Figure. Integrations Page - Webhook URL and Security Token Details
  4. Creating a Template in Beam

    After you click Verify and Save in the Integrations page to complete creating the ServiceNow Integration in Beam, the Default Change Request Template gets created by default.

    You can also add more templates. In the Integrations page, click the templates link against the ServiceNow Integration to go to the ServiceNow Templates page. In the top-left corner, click Add Template .

    You can do the following:
    • Select the task type - Change Request .
    • Add more fields and parameters by using the Add optional fields list and Parameters link respectively.
  5. Create a ServiceNow ticket

    You can create a ticket for the selected resources from the Optimize tab.

Creating ServiceNow Integration in Beam

You can use the Configure menu to go to the Integrations page.

About this task

In the Integrations page, you can create a ServiceNow integration.

To create a ServiceNow integration in Beam, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Configure > Integrations .
    The Integrations page appears.
    Figure. Integrations Page - Creating ServiceNow Integration
  2. In the ServiceNow area, click Add Integration .
    The Add ServiceNow Integration page appears.
  3. In the Name and Description fields, enter a name and description for your ServiceNow integration.
    Note: The Description field is optional.
    The Webhook URL and Security Token fields are auto-populated by Beam. Copy and enter the webhook URL and security token details in the Endpoint and Token fields when creating an Integration Definition in the ServiceNow Beam application. For more information, see Creating Integration Definition in ServiceNow Beam Application .
  4. Select I confirm that I've installed the Beam’s ServiceNow app with above mentioned parameters. check box after you complete creating an Integration Definition in the ServiceNow Beam application in your ServiceNow instance. Then click Verify and Save to complete the ServiceNow integration in Beam.
    The Default Change Request Template gets created by default.
    You can also create a template. To create a template, see Creating a ServiceNow Template .

Creating an Integration Definition in the ServiceNow Beam Application

You need to create an integration definition for your ServiceNow Beam application in your ServiceNow instance. You need to enter the webhook URL and security token (auto-generated by Beam in the Integrations page) in the Endpoint and Token fields.

About this task

To create an Integration Definition in the ServiceNow Beam application, do the following.

Procedure

  1. In the ServiceNow home page, in the Filter navigator search box, search for the Nutanix Beam application.
  2. Under Configuration , click Integration Definitions .
    The Integration Definitions page appears.
    Figure. ServiceNow Beam Application - Integration Definitions page
  3. Click New to create an integration definition.
  4. In the Name box, enter a name for your integration definition.
  5. Select the Active check box.
    Note: If you do not select this check box, the integration between your Beam account and ServiceNow instance will not get established.
    Figure. New Record - Integrations Definitions
  6. In the Poll Interval (in secs) box, enter a value.
    The system checks for new messages after the interval value you enter in this field. The default value is 120 seconds.
    Note: The recommended value is from 30 to 240 seconds. If you enter a value less than 30 seconds, the system throws an invalid insert error.
  7. In the Description box, enter a short description for your integration definition.
  8. In the Endpoint box, enter the Webhook URL that Beam auto-generated when creating the ServiceNow integration in Beam.
    Note: You need to click the lock icon on the right side to unlock the Endpoint box.
  9. In the Token box, enter the Security Token that Beam auto-generated when creating the ServiceNow integration in Beam.
  10. Click Submit to complete creating the integration definition.
    The integration definition you created appears in the Integration Definitions page.
    Now, go to the Integrations page in Beam and click Verify and Save ( step 4 of Creating ServiceNow Integration in Beam ).

Creating a ServiceNow Template in Beam

You can create a template by adding parameters and fields according to your requirements.

About this task

After you click Verify and Save in the Integrations page to complete creating the ServiceNow Integration in Beam , the Default Change Request Template gets created by default.

You can also create a template.

To create a template, do the following.

Procedure

  1. In the Integrations page, expand the ServiceNow area to view the list of integrations already created.
  2. In the Templates column, click the link to open the ServiceNow Templates page.
    Figure. Integrations page - ServiceNow
  3. In the top-left corner, click Add template to open the Add ServiceNow Integration Template page.
  4. In the Name and Description boxes, enter a name and description for your template.
    Figure. Add ServiceNow Integration Template page
  5. In the Task Type list, click Change Request .
  6. Click the Add Optional fields list to add more fields in your template.
    You can add various parameters in the fields.
  7. Click the Parameters link to view the list of triggers. Then click the required items to add in the field.
    Example: For an underutilized EBS volume, you can add a description and summary as follows.
    • Description - The EBS volume, {resourceId}, in {account}, is not attached to any EC2 instance, and deleting it can potentially save {currency}{potentialSavings} per month.
    • Summary - Delete this unused EBS volume.
      Note: You can use the Summary parameter in the Short description field.
  8. Click Save to complete adding a template.
    The template appears in the ServiceNow Templates list.

What to do next

Now, you can create ServiceNow tickets for your resources. For more information, see Creating a ServiceNow Ticket .

Creating a ServiceNow Ticket

You can create ServiceNow tickets for your resources in the Optimize tab.

About this task

To create a ServiceNow ticket, do the following.

Procedure

  1. Click the Hamburger icon in the top-left corner to display the main menu. Then, click Save > Optimize .
  2. In the Optimize tab, click View List against a cloud resource. You can view the list of resources.
    Select the resources for which you want to create a ticket. In the top-right corner, in the Actions list, click Create ServiceNow Ticket .

    The Create ServiceNow Ticket page appears.

    Note: If you select multiple issues, a ServiceNow ticket gets created for each issue.
  3. In the Select Integration list, select the ServiceNow integration you created.
    The Import from template to fill the form and Task Type lists appear after you select an integration.
  4. (Optional) In the Import from template to fill the form list, select the template you created before.
    For more information, see Creating a ServiceNow Template in Beam.
    The Task Type list gets populated with the task type you selected when creating the template.
    Note: You can also edit the form after you import the template. Editing of the form will not change the original template.
  5. In the Task Type list, select Change Request.
    If you selected a template in step 4, skip this step.

    The Optional fields area appears after you select a template or a task type.

  6. In the Optional fields area, you can do the following.
    • Click the Add Optional fields list to add more fields.
    • Click the Parameters link to view the list of triggers. Then click the required items to add in the field.
      Example: For an underutilized EBS volume, you can add a description and summary as follows.
      • Description - The EBS volume, {resourceId}, is not attached to any EC2 instance, and deleting it can potentially save {currency}{potentialSavings} per month.
      • Summary - Delete this unused EBS volume.
        Note: You can use the Summary parameter in the Short description field.
  7. Click Create Ticket to complete.
    Note: The ticket creation process takes some time depending on the poll interval value you enter in Step 6 of Creating Integration Definition .
    You can hover over the ticket link to view the status of the ticket.
    • Ticket Number - Click the link to open the ticket in your ServiceNow instance.
    • Ticket State - Shows the ServiceNow status of the ticket. After the ticket gets created, the status is shown as New . If the status gets updated in your ServiceNow instance, the same status is reflected in this column.
    • Ticket Creation State - Shows the Beam status of the ticket. After the process is complete, the status changes from In Progress to Created .
    Figure. ServiceNow Ticket - Status (In Progress)
    Figure. ServiceNow Ticket - Status (Created)
Calm Administration and Operations Guide

Calm 3.5

Product Release Date: 2022-04-27

Last updated: 2022-11-10

Introduction

Introduction to Calm

Calm allows you to seamlessly select, provision, and manage your business applications across your infrastructure for both the private and public clouds. Calm provides application automation, lifecycle management, monitoring, and remediation to manage your heterogeneous infrastructure, for example, VMs or bare-metal servers.

Calm supports multiple platforms so that you can use the single self-service and automation interface to manage all your infrastructure. Calm provides an interactive and user-friendly graphical user interface (GUI) to manage your infrastructure.

Figure. Calm Model

Calm Key Benefits

Calm is a multi-cloud application management framework that offers the following key benefits:

  • IT Agility and Human Error Elimination

    Calm simplifies the setup and management of custom enterprise applications by incorporating all important elements, such as the relevant VMs, configurations, and related binaries into an easy-to-use blueprint. These blueprints make the deployment and lifecycle management of common applications repeatable and help infrastructure teams eliminate extensive and complex routine application management.

  • Unified Multi-Cloud Orchestration

    Calm unifies the management of all your clouds into a single-pane-of-glass, removing the need to switch between portals. Calm automates the provisioning of multi-cloud architectures, scaling both multi-tiered and distributed applications across different cloud environments, including AWS, GCP, Azure, and VMware (on both Nutanix and non-Nutanix platforms).

  • Automated Self-Service with Centralized Control

    Calm empowers different groups in the organization to provision and manage their own applications, giving application owners and developers an attractive alternative to public cloud services. Calm provides powerful, application-centric self-service capabilities with role-based access control. All activities and changes are logged for end-to-end traceability, aiding security teams with key compliance initiatives.

  • Nutanix Marketplace

    The marketplace offers preconfigured application blueprints that infrastructure teams can instantly consume to provision applications. The marketplace also provides the option to publish shareable runbooks. A runbook is a collection of tasks that run sequentially at different endpoints. Infrastructure teams can define endpoints and use runbooks to automate routine tasks and procedures that span multiple applications without involving a blueprint or an application.

  • Cost Governance

    With native integration into Beam, Calm also shows the overall utilization and true cost of public cloud consumption to help you make deployment decisions with confidence.

  • Application Development and Modernization

    Combined with Nutanix Karbon or your choice of certified Kubernetes, Calm provides the tools required to modernize applications without losing control of policy. Additionally, Calm natively integrates with Jenkins to empower CI/CD pipelines with automatic infrastructure provisioning or upgrades for all applications.

  • Calm DSL - Infrastructure-as-Code (IaC)

    Calm DSL is a simple Python 3-based Domain Specific Language (DSL) for writing Calm blueprints. The DSL offers all the richness of the Calm user interface along with the additional benefits of being human-readable, version-controllable code that can handle even the most complex application scenarios. The DSL can also be used to operate Calm from a CLI.

    As Calm uses Services, Packages, Substrates, Deployments and Application Profiles as building blocks for a blueprint, these entities can be defined as Python classes. You can specify their attributes as class attributes and define actions on those entities (procedural runbooks) as class methods.

    Calm DSL also accepts appropriate native data formats such as YAML and JSON that allow reuse into the larger application lifecycle context of a Calm blueprint.

    For technical articles, videos, labs and resources on Calm DSL, see Nutanix Calm DSL on Nutanix.dev.
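As a conceptual illustration of this modeling style, the building blocks of a blueprint can be expressed as Python classes with attributes and action methods, and then serialized for reuse as JSON or YAML. This is a minimal sketch using plain Python classes; the class and attribute names here are hypothetical and do not reflect the actual `calm.dsl.builtins` API:

```python
# Conceptual sketch of the DSL modeling style described above.
# Plain Python classes for illustration only, not the real calm.dsl API.

class Service:
    """A blueprint service; class attributes become service properties."""
    name = "WebServer"
    port = 80

    @classmethod
    def start(cls):
        # An "action" on the entity, modeled as a class method.
        return f"systemctl start {cls.name.lower()}"

class Substrate:
    """The VM or cloud resource the service runs on."""
    provider = "AHV"
    vcpus = 2
    memory_gb = 4

class Deployment:
    """Ties a service to a substrate with replica counts."""
    service = Service
    substrate = Substrate
    min_replicas = 1
    max_replicas = 3

def render(deployment):
    """Serialize the class-based model to a dict (cf. JSON/YAML reuse)."""
    return {
        "service": deployment.service.name,
        "provider": deployment.substrate.provider,
        "vcpus": deployment.substrate.vcpus,
        "memory_gb": deployment.substrate.memory_gb,
        "replicas": [deployment.min_replicas, deployment.max_replicas],
    }

print(render(Deployment))
```

The serialized dict can then be dumped with `json.dumps` or a YAML library, mirroring how the DSL round-trips blueprints between code and native data formats.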

Calm Prerequisites

Pre-configuration for Using Calm

You must configure the following components before you start using Calm.

  • Image configuration for VM: To use Nutanix as your provider, you must add and configure the images for your VMs. Images are provider-specific. You can upload images to Prism Central or the Prism web console and use those images when you configure your blueprints. For more information, see the Image Management section in the Prism Central Guide.
  • SSH key (optional): Generate SSH keys so that you can use them for authentication when you configure or launch your blueprints. For more information, see Generating SSH Key on a Linux VM or Generating SSH Key on a Windows VM.
  • Lightweight Directory Access Protocol (LDAP): Configure LDAP to authenticate users or groups of users to use Calm. For more information about LDAP, see the Configuring Authentication section in the Prism Central Guide.
  • SSP: The Prism Self Service feature allows you to create projects within an enterprise. Users or groups of users can provision and manage VMs in a self-service manner without involving IT in their day-to-day operations. For more information, see the Prism Self Service Administration section in the Prism Central Guide.

Prerequisites to Enable Calm

Before you enable Calm from Prism Central, ensure that you have met the following prerequisites.

  • The Prism Central version is compatible with Calm.

    You can go to the Software Product Interoperability page to verify the compatible versions of Calm and Prism Central.

  • The Prism Central instance on which Calm runs is registered with the cluster.
    Note:
    • Calm is supported only on AHV and ESXi hypervisors on a Nutanix cluster.
    • Calm is not supported on Hyper-V cluster.
    • Calm does not support the vSphere Essentials edition. If you try to enable Calm with a vSphere Essentials license, enablement fails because this edition does not support hot-pluggable virtual hardware.
  • A unique data services IP address is configured on the cluster that is registered to Prism Central. For more information about configuring the data services IP address, see Modifying Cluster Details in the Prism Web Console Guide.
    Note: Do not change the data services IP address after you enable Calm.
  • A minimum of 4 GB of additional memory is allocated for a small Prism Central and 8 GB for a large Prism Central. For more information, see Calm Benchmarks.
  • All the required ports are open to communicate between a Prism web console and Prism Central. For more information about ports, see Port Reference.
  • The DNS server is reachable from Prism Central. An unreachable DNS server can cause slow deployment of Calm.

Calm Benchmarks

Nutanix certifies the following benchmarks for single-node (non-scale-out) and three-node (scale-out) deployment profiles. Each benchmark contains scale numbers across different Calm entities. Because the scaling properties of these entities often depend on each other, changes to one entity might affect the scale of other entities. For example, if your deployment has a smaller number of VMs than the benchmarked number, you can have a higher number of blueprints, projects, runbooks, and so on.

Use these guidelines as a good starting point for your Calm installation. You might have to allocate more resources over time as your infrastructure grows.

Calm Single-Node Profile

The following table shows the Calm benchmarks for a single-node Prism Central profile.

Prism Central size | Prism Central configuration | Number of VMs | Number of single-VM blueprints | Number of single-VM applications | Number of projects | Number of runbooks
Small (1 node) | 6 vCPUs and 30 GB of memory for each node | 2000 | 400 | 2000 | 50 | 250
Large (1 node) | 10 vCPUs and 52 GB of memory for each node | 7000 | 1400 | 7000 | 250 | 500

Calm Three-Node (Scale-Out) Profile

The following table shows the Calm benchmarks for a three-node Prism Central profile. If high availability is preferred, use the scale-out deployment.

Prism Central size | Prism Central configuration | Number of VMs | Number of single-VM blueprints | Number of single-VM applications | Number of projects | Number of runbooks
Small (3 nodes, scale out) | 6 vCPUs and 30 GB of memory for each node | 3500 | 700 | 3500 | 100 | 500
Large (3 nodes, scale out) | 10 vCPUs and 52 GB of memory for each node | 12500 | 2500 | 12500 | 500 | 1000
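The four benchmark profiles above can be encoded as data for a quick capacity check before sizing a deployment. The numbers come from the two tables; the `fits()` helper itself is illustrative, not a Nutanix tool:

```python
# Calm benchmark profiles from the tables above, keyed by (size, node count).
# The fits() helper is an illustrative capacity check, not a Nutanix tool.
BENCHMARKS = {
    ("small", 1): {"vms": 2000, "blueprints": 400, "apps": 2000, "projects": 50, "runbooks": 250},
    ("large", 1): {"vms": 7000, "blueprints": 1400, "apps": 7000, "projects": 250, "runbooks": 500},
    ("small", 3): {"vms": 3500, "blueprints": 700, "apps": 3500, "projects": 100, "runbooks": 500},
    ("large", 3): {"vms": 12500, "blueprints": 2500, "apps": 12500, "projects": 500, "runbooks": 1000},
}

def fits(size, nodes, **planned):
    """Return True if every planned entity count is within the benchmark."""
    limits = BENCHMARKS[(size, nodes)]
    return all(planned[entity] <= limits[entity] for entity in planned)

print(fits("large", 3, vms=10000, projects=300))  # within the scale-out limits
```

Remember that these are starting-point guidelines: as noted above, a lower count for one entity can leave headroom for a higher count of another.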

Benchmark Considerations

The following considerations are applicable for both Calm single-node and three-node (scale-out) profiles:

  • When you enable Calm, an additional 4 GB memory per node is added to the Prism Central small deployment profile and 8 GB of memory per node is added to the Prism Central large deployment profile.
  • The listed application and blueprint numbers include both running and deleted applications and blueprints. Data related to deleted applications and blueprints is cleaned up after 3 months.
  • All the listed VM numbers are for the VMs that are managed by Calm and not Prism Central overall.
  • These performance tests are done by using single-service blueprints and applications. Results might be lower when blueprints with multiple services are deployed.
  • Calm automatically archives the logs every 3 months to clean up the database and to free up the resources. You can also increase the memory (RAM) of your deployed Prism Central VM to increase the Calm capacity.

Calm Throughput

The maximum throughput on a large three-node (scale-out) deployment profile is 400 VMs per hour.

Note: For customized higher capacity Calm configurations, work with your Nutanix sales representative.
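Given the 400 VMs-per-hour ceiling, you can make a rough lower-bound estimate of provisioning time for a batch of VMs. A back-of-envelope sketch (illustrative only, actual throughput depends on blueprint complexity as noted in the benchmark considerations):

```python
# Benchmarked ceiling for a large three-node (scale-out) profile, from above.
MAX_VMS_PER_HOUR = 400

def estimated_hours(vm_count, vms_per_hour=MAX_VMS_PER_HOUR):
    """Lower-bound wall-clock estimate for provisioning vm_count VMs."""
    return vm_count / vms_per_hour

print(estimated_hours(1000))  # 2.5 hours at the benchmarked ceiling
```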

Port Information in Calm

For a list of required Calm ports, see Port Reference. The Port Reference section provides detailed port information for Nutanix products and services, including port sources and destinations, service descriptions, directionality, and protocol requirements.

Calm Enablement

Enabling and Accessing Calm

Calm is integrated into Prism Central and does not require you to deploy any additional VMs. To start using Calm, you only have to enable Calm from Prism Central.

Before you begin

Ensure that you have met the prerequisites to enable Calm. For more information, see Prerequisites to Enable Calm.
Note:

If a cluster is unregistered from Prism Central and the application blueprints have subnets, images, or VMs on that cluster, Calm functionality is impacted.

Procedure

  1. Log on to Prism Central with your local admin account.
    For detailed information on how to install and log on to Prism Central, see the Prism Central Guide.
  2. From the Prism Central UI, click Services > Calm .
  3. Click Enable .
    Note: The Enable option appears only when you log on to Prism Central with a local admin account.
    When you enable Calm, an additional 4 GB memory per node is added to the Prism Central small deployment profile and 8 GB of memory per node is added to the Prism Central large deployment profile.
  4. To access Calm, click Services > Calm from the entities menu.

What to do next

With Calm, you get an out-of-the-box blueprint, a default project, and a preconfigured application profile with your Nutanix account. You can use the blueprint, project, and application profile to instantaneously launch your first application. For more information, see Launching a Blueprint Instantaneously.

Checking Calm Version

You can check the version of your Calm instance from the Calm user interface.

Procedure

  1. From the Prism Central entities menu, click Services > Calm .
  2. Click the icon on the bottom-left corner.
    The About Nutanix Calm page appears displaying the version number of Calm. For example:
    Figure. Version Number

Calm VM Deployment

Calm VM is a standalone VM that you can deploy on AHV and ESXi hypervisors to use Calm functionality without the Nutanix infrastructure.

You can deploy the Calm VM using the image on the Nutanix Support Portal - Downloads page and manage your applications across a variety of cloud platforms. Calm VM deployment eliminates the need for the complete Nutanix infrastructure to use Calm features.

Note:
  • Calm VM currently supports Calm version 3.5.
  • Calm VM supports scale-out deployment. See Setting up Scale-Out Calm VM.
  • The supported ESXi versions are 6.0.0 and 6.7 (vCenter version 6.7).
You can deploy Calm VM on ESXi in one of the following ways:
  • Using the vSphere Web Client. See Deploying Calm VM using the vSphere Web Client
  • Using the vSphere CLI. See Deploying Calm VM using the vSphere CLI

For information on Calm VM deployment on AHV, see Deploying Calm VM on AHV.

Deploying Calm VM on AHV

This section describes the steps to deploy a Calm VM on AHV.

Before you begin

Ensure that you have the URL of the OVA file.

Procedure

  1. From the Prism Central entities menu, click Compute & Storage > OVAs .
  2. Click Upload OVA .
    The Upload OVA page appears.
  3. Under OVA Source, select the URL radio button and do the following.
    Figure. Upload OVA

    1. Provide a name in the Name field.
    2. Provide the URL of the OVA file of the Calm VM in the OVA URL field.
    3. Click Upload .
  4. On the OVAs page, select the uploaded OVA from the list.
  5. From the Actions list, select Deploy as VM .
    Figure. Deploy VM

  6. On the Deploy as VM page, do the following:
    Figure. Deploy as VM - Configuration

    1. Under the VM Properties section, specify the values for CPU , Cores Per CPU , and Memory .
      The CPU and memory requirements of the Calm VM deployment are equivalent to the Calm single-node profile. For the benchmark values, see Calm Benchmarks.
    2. Click Next .
    3. To configure the networks, associate the required subnets for your Calm VM instance.
    4. Click Next .
    5. Click Create VM to start the deployment of the Calm VM instance.
  7. From the Prism Central entities menu, go to Compute & Storage > VMs and do the following:
    Figure. VM Power On

    1. Select the Calm VM instance that you deployed.
    2. Click Actions and then click Power On to power on the Calm VM instance.
    3. Wait for the Calm services to be up and running.

What to do next

To manage and administer your applications, use the Calm VM IP address and the following default credentials to log in to the Calm VM for the first time:
  • Username: admin
  • Password: Nutanix/4u
You can change the default credentials after you log in.

Deploying Calm VM using the vSphere Web Client

You must create a VM with a specific Open Virtualization Format (OVF) image to access the Calm UI.

Before you begin

Ensure that you have the URL of the OVF file, or you have saved the OVF file on your local machine.

Procedure

  1. Log on to the vSphere web client.
  2. Click the cluster on which you want to deploy the Calm VM.
    Figure. vSphere Web Client - Deploying OVF Template
  3. Click Actions > Deploy OVF Template .
    Figure. Deploy OVF Template - Window
  4. In the Deploy OVF Template window, do the following.
    1. Click the Local File option to browse and upload the OVF file from your local machine.
      Go to Nutanix Support Portal - Downloads and download the OVF file.
    2. Click Next and select the cluster where you want to deploy the Calm VM.
    3. Click Next and configure the storage and networks for the VM. Then, click Finish .
    The system starts importing and deploying the file on the selected VMware cluster. After the deployment is complete, power on the Calm VM.
    Note: Calm services take around 30 minutes to start after the VM is powered on.

    For more information, see the Deploying OVA Template on VMware vSphere section in the VMware documentation.

What to do next

To manage and administer your applications, use the Calm VM IP address and the following default credentials to log in to the Calm VM for the first time:
  • Username: admin
  • Password: Nutanix/4u
You can change the default credentials after you log in.

Deploying Calm VM using the vSphere CLI

This section describes the steps to deploy a Calm VM by using the vSphere CLI (govc).

Before you begin

Ensure that you have installed all the libraries of the vSphere CLI client (govc). For more information, see the govc documentation.

Procedure

  1. Log in to the SSH client.
    Note: Ensure that you have installed the govc libraries.
  2. Run the following command.
    $ govc import.ova -name 5.17.1-prismcentral-3.0.0.1 http://endor.dyn.nutanix.com/GoldImages/calm-vm

    If you have downloaded the OVF file on your system, replace http://endor.dyn.nutanix.com/GoldImages/calm-vm with the location of the OVF file.

    Figure. vSphere CLI (govc) - Deploy Calm

    Running the command starts the uploading process. Once the uploading is complete, power on the Calm VM from the vSphere web client.

    Note: Calm services take around 30 minutes to start after the VM is powered on.

What to do next

To manage and administer your applications, use the Calm VM IP address and the following default credentials to access Calm VM for the first time:
  • Username: admin
  • Password: Nutanix/4u
You can change the default credentials after you log in.

Setting up Scale-Out Calm VM

Use the following procedure to set up the scale-out version of the Calm VM.

Before you begin

Ensure that:
  • Your VMware vCenter version is later than 6.0.
  • You downloaded the Calm VM OVA file from the Downloads page.
  • You uploaded the Calm VM OVA file on ESXi using the vSphere CLI (govc) or vSphere Web Client and marked it as a template. See Deploying Calm VM using the vSphere Web Client and Deploying Calm VM using the vSphere CLI.
  • You uploaded the Calm VM OVA file on AHV using the Prism Central User Interface. See Deploying Calm VM on AHV.
  • You have taken a backup of Calm data in case you are planning to extend an existing Calm VM.

Procedure

  1. Create three Calm VMs using the templates you uploaded.
  2. Do one of the following:
    • To assign static IP addresses to the VMs, see Launching Calm VM with a Static IP Address.
    • If DHCP is enabled, continue to step 3.
  3. SSH to the three Calm VMs and wait until all the cluster services, including Calm and Epsilon, are up and running.
  4. Run the following commands on all three VMs.
    cluster stop
    cluster destroy
    Note: Run these commands only after taking a backup of the Calm data in case you are extending an existing Calm VM.
  5. Run the cluster create command on one of the three VMs.
    • To create a simple cluster, run the following command:
      cluster --cluster_function_list="multicluster" -s <ip1>,<ip2>,<ip3> create

      For example:

      cluster --cluster_function_list="multicluster" -s 10.46.141.71,10.46.138.20,10.46.138.26 create
    • To create an advanced cluster with cluster name and virtual IP, run the following command:
      cluster --cluster_function_list="multicluster" --cluster_name "<Cluster Name>" -s <ip1>,<ip2>,<ip3> --cluster_external_ip=<vip> create

      For example:

      cluster --cluster_function_list="multicluster" --cluster_name "Demo" -s 10.46.141.71,10.46.138.20,10.46.138.26 --cluster_external_ip=10.46.141.70 --dns_servers 10.40.64.15,10.40.64.16 create
  6. Run the following command on one of the three VMs.
    cd /home/nutanix/bin
    python enable_calm.py
  7. Run the following command to verify epsilon and Calm services status:
    cluster status
  8. Enable policy engine for Calm VM. For more information, see Enabling Policy Engine for Calm VM.
  9. Run the following command to set up policy engine on one of the VMs.
    docker cp /home/nutanix/bin/set_policy_calmvm.pyc nucalm:/home
    docker cp /home/nutanix/bin/set_policy.sh nucalm:/home
    docker exec nucalm /bin/sh -c '/home/set_policy.sh <POLICY_VM_IP> <POLICY_VM_UUID>'
    

What to do next

To manage and administer your applications, use the Calm VM IP address and the following default credentials to log in to the Calm VM for the first time:
  • Username: admin
  • Password: Nutanix/4u
You can change the default credentials after you log in.
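The `cluster create` invocation in step 5 of the procedure above can be assembled and sanity-checked with a small helper before you paste it into the VM. This is a hypothetical illustration, not a Nutanix tool; it only builds and validates the command string shown in the examples:

```python
import ipaddress

def build_cluster_create(node_ips, name=None, vip=None, dns=None):
    """Build the scale-out `cluster create` command from step 5.

    Validates that exactly three node IPs are given, as required for the
    three-VM scale-out setup. Illustrative helper, not a Nutanix tool.
    """
    if len(node_ips) != 3:
        raise ValueError("scale-out Calm VM requires exactly three node IPs")
    for ip in node_ips + ([vip] if vip else []):
        ipaddress.ip_address(ip)  # raises ValueError on a malformed address

    cmd = ['cluster', '--cluster_function_list="multicluster"']
    if name:
        cmd.append(f'--cluster_name "{name}"')
    cmd += ["-s", ",".join(node_ips)]
    if vip:
        cmd.append(f"--cluster_external_ip={vip}")
    if dns:
        cmd.append(f"--dns_servers {','.join(dns)}")
    cmd.append("create")
    return " ".join(cmd)

print(build_cluster_create(
    ["10.46.141.71", "10.46.138.20", "10.46.138.26"],
    name="Demo", vip="10.46.141.70",
))
```

Catching a typo in an IP address here is cheaper than re-running `cluster destroy` after a failed create.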

Enabling Policy Engine for Calm VM

Use the following steps to enable policy engine for Calm VM.

Before you begin

  • Ensure that you have the accounts configured. For more information, see Provider Account Settings in Calm.
  • Ensure that you have created a project for the blueprint. For more information, see Projects Overview.
  • The project to which you upload the blueprint must have the account in which you want to create the policy engine VM.

Procedure

  1. To enable policy engine on a new deployment of Calm VM, do the following:
    1. Download the blueprint from one of the following locations.
      • To download the blueprint for a single-node Calm VM, click here.
      • To download the blueprint for a scale-out Calm VM, click here.
    2. Upload the downloaded blueprint to a default project.
      The project to which you upload the blueprint must have the account in which you want to create the policy engine VM.
      Note: To avoid blueprint errors after the upload, enter any string as the credential password and save the blueprint.
    3. Set values for the following variables in the blueprint.
      • Desired policy engine IP address. This IP address must be in the same network as that of the Calm VM.
      • VIP of the Calm VM
      • Netmask of the Calm VM network
      • Gateway of the Calm VM network
      • DNS IP of the Calm VM
      • NTP IP of the Calm VM
      • Public key of the Calm VM. For a scale-out Calm VM, provide the public keys of all VMs.
      • Disable check-login in case the check-login is enabled by default.
    4. Launch the blueprint.
    5. Wait for the application that you created by launching the blueprint to get into the running state.
    6. SSH into the Calm VM as a Nutanix user and run the following command after making the required changes.
      docker cp /home/nutanix/bin/set_policy_calmvm.pyc nucalm:/home
      docker cp /home/nutanix/bin/set_policy.sh nucalm:/home
      docker exec nucalm /bin/sh -c '/home/set_policy.sh <POLICY_VM_IP> <POLICY_VM_UUID>'
  2. To enable policy engine on an upgraded Calm VM, do the following:
    1. SSH to the policy engine VM as a Nutanix user.
    2. Wget the policy-engine.tar.gz file from the Downloads page on to the policy engine VM.
    3. Untar (extract) the policy-engine.tar.gz file.
    4. Locate and run upgrade.sh .
    5. Run the docker ps command to check the status of policy containers, and wait for the containers to get healthy.

Launching Calm VM with a Static IP Address

By default, the Calm VM uses a DHCP IP address. You can use the following procedure to launch the Calm VM with a static IP address.

Procedure

  1. Log on to vCenter.
  2. In the Navigator pane, go to Home > Policies and Profiles > Customization Specification Manager .
  3. Click Create a new specification and do the following:
    Figure. Create New Specification

    1. In the Target VM Operating System drop-down menu, select Linux .
      Figure. Specify Properties

    2. In the Customization Spec Name field, enter a name for the specification.
    3. (Optional) Add a description in the Description field.
    4. Click Next .
  4. On the Computer Name page, do the following.
    Figure. Set Computer Name

    1. Select the Use the virtual machine name radio button.
    2. Enter a domain name in the Domain name field.
    3. Click Next .
  5. On the Time Zone page, specify the time zone and then click Next .
  6. On the Configure Network page, do the following:
    Figure. Configure Network

    1. Select the Manually select custom settings radio button.
    2. Click the Edit icon for the NIC to open the IP details page.
      Figure. IP Details

    3. On the IPv4 page, select the Prompt the user for an address when the specification is used radio button.
    4. Enter the subnet mask in the Subnet Mask field.
    5. Enter the default gateway in the Default Gateway field.
    6. Click OK .
    7. On the Configure Network page, click Next .
  7. On the Enter DNS and Domain Settings page, enter the DNS and DNS search path details, and then click Next .
  8. On the Ready to complete page, verify the details you entered, and then click Finish .
  9. To launch the VM, do the following:
    1. Click Actions and then select New VM from This Template... .
      Before you launch the VM, ensure that you have uploaded the Calm VM OVA file to vCenter and converted it as a template.
    2. On the Select a name and folder page, enter a name for the VM, select the datacenter location for the VM, and then click Next .
      Figure. Name and Datacenter

    3. On the Select a compute resource page, select a cluster or node, and then click Next .
    4. On the Select storage page, select the datastore and click Next .
    5. On the Select clone options page, select the following check boxes.
      • Customize the operating system
      • Customize this virtual machine’s hardware
      • Power-on virtual machine after creation
    6. On the Customize guest OS page, select the customization spec that you created.
      Figure. Customization Spec

    7. On the User Settings page, enter the IP address that you want to assign to the VM, and then click Next .
      The User Settings page appears because you selected the Prompt the user for an address when the specification is used radio button during the setup.
      Figure. User Settings

    8. On the Customize hardware page, update the CPU, memory, and network requirements, and then click Next .
    9. On the Ready to complete page, verify the details you entered, and then click Finish .
    Repeat these steps to launch the other VMs in case of a scale-out Calm VM.

What to do next

To set up scale-out Calm VM, see Setting up Scale-Out Calm VM.

Getting Started with Calm

Calm Overview

The following table lists the different tabs in Calm, their icons, and their usage:

Table 1. Calm Tabs
Tab Usage
Marketplace tab To instantly consume application blueprints to provision applications. See Marketplace Overview.
Blueprint tab To create, configure, publish, and launch single-VM or multi-VM blueprints. See Calm Blueprints Overview.
Application tab To view and manage applications that are launched from blueprints. See Applications Overview.
Library tab To create and use variable types and tasks. You use variables and tasks while configuring a blueprint. See Library Overview.
Runbooks tab To automate routine tasks and procedures that span multiple applications without involving any blueprints or applications. See Runbooks Overview.
Endpoints tab To create and manage target resources where the tasks defined in a runbook or in a blueprint can run. See Endpoints Overview.
Settings tab

To enable or disable general settings. See General Settings in Calm.

To configure and manage provider account. See Provider Account Settings in Calm.

To configure and manage credential provider. See Configuring a Credential Provider.

Policies tab To schedule application actions and runbook executions. See Scheduler Overview.
Marketplace Manager tab To manage approval and publishing of application blueprints. See Marketplace Manager Overview.
Projects tab To create users or groups and assign permissions to use Calm. Projects tab also allows you to configure environment for your providers. See Projects Overview.

Exploring Calm

You can use the following procedure to explore Calm user interface and get an overview of the Calm components.

Procedure

  1. Click the Tour icon on the bottom-left pane and click Explore Calm .
  2. Do one of the following.
    1. To navigate through the Calm components, click the right or left arrow.
    2. To skip the tour, click Skip tour .

Accessing Calm REST API Explorer

You can use the following procedure to access the Calm REST API explorer console from the Calm user interface.

Procedure

  1. Click the icon on the bottom-left corner.
    The About Nutanix Calm page appears.
  2. Click the Rest API Explorer link.
    The Calm REST API explorer interface appears.
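Outside the explorer console, the same API can be driven programmatically against Prism Central. The sketch below only builds the request pieces for a blueprints list call; the endpoint path and payload shape follow the common Nutanix v3 list-call pattern, so verify them against your own REST API explorer before use:

```python
import base64
import json

def build_blueprint_list_request(pc_host, username, password, length=20, offset=0):
    """Build (url, headers, body) for a Calm v3 blueprints/list call.

    The path and payload shape follow the Nutanix v3 list-call pattern
    shown in the REST API explorer; confirm them against your console.
    No network call is made here.
    """
    url = f"https://{pc_host}:9440/api/nutanix/v3/blueprints/list"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"length": length, "offset": offset})
    return url, headers, body

url, headers, body = build_blueprint_list_request("pc.example.com", "admin", "secret")
print(url)
```

The tuple can then be passed to any HTTP client, for example `requests.post(url, headers=headers, data=body)` against a lab Prism Central instance (`pc.example.com` and the credentials above are placeholders).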

Role-Based Access Control in Calm

Calm manages the role-based access control using projects. Projects are logical groupings of user roles, accounts, VM templates, and credentials that are used to manage and launch blueprints and applications within your organization. For more information, see Projects Overview.

Users or groups are allowed to view, launch, or manage applications based on the roles that are assigned within the projects. Calm has the following roles for users or groups:

  • Project Admin

    Project admins have full control of the project. They can perform reporting and user management, create blueprints, launch blueprints, and run actions on the applications.

  • Developer

    Developers can create blueprints, launch blueprints, and run actions on the applications. They are, however, not allowed to perform reporting and user management.

  • Consumer

    Consumers can launch new blueprints from the marketplace and run actions on the applications. They are, however, not allowed to create their own blueprints.

  • Operator

    Operators have minimum access and are allowed only to run actions against existing applications. They are not allowed to launch new blueprints or edit any existing blueprints.

Note: A Prism Admin is a super user within Calm and within the rest of Prism Central who has full access to all the features and functionalities of Calm.

The following table details the roles and responsibilities in Calm:

Table 1. Roles and Responsibilities Matrix
Prism Admin Project Admin Developer Consumer Operator
Marketplace Enable and Disable X
Manage X
App publishing request X X X
Send App publishing request to the Administrator X X
Clone and edit App blueprint X X X
Blueprint Create, update, delete, and duplicate X X X
Read-only X X X X
Launch X X X X
Applications Complete App summary X X X X X
Run functions X X X X X
App debug mode X X X X X
Function edit X X X
Create App (brownfield import) X X X
Delete App X X X X
Settings CRUD X
Task Library View X X X X X
Create and Update X X X
Delete X
Sharing with Projects X
Projects Add project X
Update project X X
Add VMs to projects X
Custom roles
Users Add users to the system and change roles X
Add and remove users to or from a project X X
Change user roles in a project X X
Create Administrator X
Create Project Administrator X X
Runbooks Create and Update X X X
View X X X X X
Delete X X X
Execute X X X X X
Endpoints Create and Update X X X
View X X X X X
Delete X X X
Scheduler Create, delete, and clone jobs X X X X
Read job and view execution status X X X X X
Update job name, schedule, executable, and application action X X X X
Edit operations on a blueprint launch X X X X
Edit operations on runbook executions X X X X
Edit operations on application actions X X X X
Edit operations on Marketplace launch X X X X
Note: Scheduler does not support custom roles in this release.

Calm: Quick Start

Launching a Blueprint Instantaneously

When you enable Calm, you get an out-of-the-box blueprint, a default project, and a preconfigured application profile with your Nutanix account. You can use the blueprint, project, and application profile to instantaneously launch your first application.

About this task

Video: Launching a Blueprint Instantaneously

Procedure

  1. Click the Tour icon in the bottom-left pane and click Launch Blueprint .
    The Quick Launch a Blueprint page appears.
  2. On the Select Blueprint tab, click Next .
    The default selection for launch is ExpressLaunch.
  3. On the Select Project tab, select the Default project and click Next .
    Projects are logical groupings of user roles, providers, VM templates, and credentials used to manage and launch blueprints within your organization. For more information, see Projects Overview.
    Note: The Default project is configured with the Nutanix account.
  4. On the Select App Profile tab, click Next .
    The default selection for launch is an application profile that is configured with your Nutanix account.
    Application profiles are profiles for different datacenters or cloud services where you want to run your application. For more information, see Calm Blueprints Overview.
  5. On the Preview and Launch tab, type a name for your application and click Launch and Create App .
    The Preview and Launch tab displays the VM details of your application.
    The blueprint application is created and provisioned.

What to do next

You can view the details of the blueprint application on the Applications tab. For more information, see Applications Overview.

Provisioning a Linux or Windows Infrastructure as a Service (IaaS)

To quickly provision a Linux or Windows Infrastructure as a Service (IaaS) for your end users, you can configure and launch a single-VM blueprint in Calm.

Before you begin

Ensure that you have enabled Calm from your Prism Central instance. For more information, see Enabling and Accessing Calm.

About this task

Provisioning a Linux or Windows IaaS involves configuring the single-VM blueprint VM specifications and launching the blueprint.

Procedure

  1. Log on to Prism Central.
  2. From the Prism Central UI, click Services > Calm .
  3. Click the Blueprint icon in the left pane.
  4. Click + Create Blueprint > Single VM Blueprint .
  5. On the Blueprint Settings tab, enter a name and description for your blueprint, and select a project and an environment.
    Figure. Blueprint Settings

    If you have not configured any project and environment, keep the default project and environment selected. For detailed information about creating projects and configuring environments, see Projects Overview.
  6. On the VM Details tab, enter a VM name, and then select an account and an operating system.
    Figure. VM Details

    If you have not configured any account, then keep the default Nutanix account selected. For detailed information about configuring provider accounts, see Provider Account Settings in Calm.
    You can select Linux or Windows as the operating system of your IaaS.
  7. On the VM Configuration tab, do the following:
    1. Enter the number of vCPUs, the number of cores per vCPU, and the total memory to configure the processing unit of the VM.
    2. Provide the guest customization, if required.
    3. Based on the operating system you selected, select a Linux image or a Windows image from the image service under the Disks section.
    4. Select a network configuration under the NICs section.
    Figure. VM Configuration

    For detailed information about the VM configuration for different provider accounts, see VM Configuration.
  8. Click Save to save the blueprint.
  9. Click Launch to launch the blueprint.
    Figure. Blueprint Launch

  10. Provide an application name, description, environment, and application profile, and then click Deploy .
    If you have not configured any environment or application profile, keep the default environment and application profile selected.
  11. Navigate to the Applications tab to view and access the application.

What to do next

  • You can publish the blueprint to the marketplace so that other project users can also launch the blueprint and use the IaaS. For more information, see Submitting a Blueprint for Approval.
  • You can create single-VM blueprints with different provider accounts and launch them. For more information, see Creating a Single-VM Blueprint.

General Settings in Calm

The Settings tab allows you to control the overall administrative functionalities of the Calm instances. You must be a Prism Central administrator to access the Settings tab.

You can use the Settings > General tab to control the following functionalities:

  • Enable the Default Landing Page option to make Calm the default landing page in Prism Central.
  • Enable the availability of ready-to-use application blueprints in the marketplace manager. For more information, see Marketplace Manager Overview.
  • Enable showback to estimate the overall service cost of the applications running on your on-prem cloud. For more information, see Showback.
  • Enable the policy engine to enforce resource quota policy for the infrastructure resources (compute, memory, and storage) on Nutanix and VMware. For more information, see Policy Engine Overview.
  • Set up quota defaults for vCPU, memory, and disk so that they can populate automatically when you allocate quotas to your provider accounts. For more information, see Setting up Quota defaults.
  • Disable policy engine quota enforcement for your Calm instance in case the policy engine VM does not respond or encounters any connectivity issues. For more information, see Disabling Policy Enforcement.
  • Download application logs that are archived by the system periodically to clear resources. For more information, see Application Log Archive.

Enabling Nutanix Marketplace Applications

Enable Nutanix Marketplace Applications to view and launch ready-to-use application blueprints. These application blueprints appear on the Marketplace Manager tab for publishing. You can publish the blueprints to the marketplace after associating them with a project.

Procedure

  1. Click the Settings icon in the left pane.
  2. Under the General tab, click the Nutanix Marketplace Apps toggle button.
    The application blueprints appear on the Marketplace Manager tab.

What to do next

You can associate the ready-to-use application blueprints with a project and publish them to the marketplace. For more information, see Approving and Publishing a Blueprint or Runbook.

Showback

Showback allows you to estimate the overall service cost of the applications running on your on-prem cloud. You can also view the graphical representation of the cost of the applications.

Calm supports showback for the following platforms.
  • Nutanix
  • VMware through vCenter

To enable and configure showback, see Enabling Showback.

Note: Starting with AOS 5.10 or NCC 3.6.3, Prism Central generates the following NCC alerts for showback:
  • Beam connectivity with Prism Central
  • Prism Central and Prism web console (also known as Prism Element) registration or de-registration

Enabling Showback

Enable Showback to configure the resource cost of your applications and monitor them while you configure a blueprint or manage an application. Showback is applicable only for the Nutanix platform and the VMware through vCenter platform.

About this task

Video: Enabling Showback

Procedure

  1. Log into Prism Central as an administrator.
  2. Click the Settings icon in the left pane.
  3. Under the General tab, click the Enable Showback toggle button.
    The Enable Showback window is displayed.
  4. Click the supported provider for which you want to define the cost.
  5. To configure the resource usage cost, click Edit for the selected provider, and configure the cost of the following resources.
    1. In the vCPU field, enter the cost of vCPU consumption for each hour in dollars.
      The default value is $0.01 for each vCPU for each hour.
    2. In the Memory field, enter the cost of memory consumption for each hour in dollars.
      The default value is $0.01 for each GB of usage for each hour.
    3. In the Storage field, enter the cost of storage consumption for each hour in dollars.
      The default value is $0.0003 for each GB of usage for each hour.
  6. Click Enable Showback .
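As an illustration only (this is not part of Calm or its API), the following sketch applies the default per-hour rates listed above to a hypothetical VM to show how an hourly showback estimate is composed:

```python
# Default showback rates from the procedure above (USD per hour).
RATES = {"vcpu": 0.01, "memory_gb": 0.01, "storage_gb": 0.0003}

def hourly_cost(vcpus, memory_gb, storage_gb, rates=RATES):
    """Estimate the hourly showback cost of a VM at the given rates."""
    return (vcpus * rates["vcpu"]
            + memory_gb * rates["memory_gb"]
            + storage_gb * rates["storage_gb"])

# Example: a hypothetical 4 vCPU / 8 GB / 100 GB VM at the default rates.
print(f"${hourly_cost(4, 8, 100):.4f}/hour")  # → $0.1500/hour
```

Adjusting the rates in the Enable Showback window scales the corresponding terms of this sum.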

Disabling Showback

Disable showback to stop monitoring the resource cost of your application blueprints.

Procedure

  1. Log into Prism Central as an administrator.
  2. Click the Settings icon in the left pane.
  3. Under the General tab, click the Disable Showback toggle button.
    The Disable Showback window is displayed.
  4. To disable Showback, click Disable Showback .

Policy Engine Overview

The policy engine is a single-VM setup for a single or scale-out Prism Central. When you enable the policy engine for your Calm instance, a new VM is created and deployed for the policy engine. The policy engine VM only requires an available IP address on the same network as your Prism Central VM.

As an administrator, you can enable the policy engine to:

  • Enforce resource quota policy for the infrastructure resources (compute, memory, and storage) on Nutanix and VMware. The quota policy enforcement allows better governance on resources across infrastructures at the provider and project levels. See Quota Policy Overview.
  • Orchestrate applications through tunnels on a virtual private cloud (VPC) on Nutanix accounts. See VPC Tunnels for Orchestration.
  • Enforce approval policies to manage resources and control actions in your environment. See Approval Policy Overview.
  • Schedule application actions and runbook executions. See Scheduler Overview.

Enabling Policy Engine

The policy engine is a single-VM setup for a single or scale-out Prism Central.

About this task

When you enable the policy engine for your Calm instance, a new VM is created and deployed for the policy engine. The policy engine VM only requires an available IP address on the same network as your Prism Central VM.

Note:
  • If your Prism Central is on ESXi, add to your Calm instance the VMware provider account for the vCenter that manages the host where the Prism Central VM resides.
  • For quota consumption of running applications, you can either wait for the next platform sync to happen or run platform sync manually after the policy engine enablement. For more information on how to run platform sync, see Synchronizing Platform Configuration Changes.
  • When you upgrade Calm to the latest version as part of the Prism Central upgrade and the policy engine is enabled, ensure that you also upgrade your policy engine to the latest version using LCM.
  • If an HTTP proxy is configured, ensure that the IP address you provide for the policy engine VM is added to the HTTP-proxy whitelist.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. On the Policies tab, enter the IP address in the IP Address for Policy Engine field.
    The IP address must be an available IP address and must belong to the same network as that of your Prism Central VM.
    Figure. Enable Policy Engine

  3. Click the Enable button.
  4. Click the Confirm button to enable the policy engine.

What to do next

  • To get quota consumption of running applications for the first time after enabling the policy engine, you can wait for the platform sync to happen or run the platform sync manually. After the first update, all future updates will happen instantly. For information on how to run platform sync, see Synchronizing Platform Configuration Changes.
  • Allocate resource quotas to provider accounts. See Allocating Resource Quota to an Account.
  • Allocate resource quotas to projects. See Managing Quota Limits for Projects.
  • Create VPC tunnels on Nutanix accounts. See Creating VPC Tunnels.
  • Create approval policies for runbook executions, application launch, or application day-2 operations. See Creating an Approval Policy.
  • Schedule application actions and runbook executions. See Creating a Scheduler Job.

Enabling Policy Engine at a Dark Site

You can enable the policy engine at a dark site.

Before you begin

If your Prism Central is on ESXi, add to your Calm instance the VMware provider account for the vCenter that manages the host where the Prism Central VM resides.

Procedure

  1. Download the policy engine image of the version that is compatible with your Calm version from the Downloads page of the Support & Insights Portal:
    https://portal.nutanix.com/page/downloads?product=calm
  2. Do one of the following:
    • If your Prism Central is on AHV, upload the image on Prism Central with the following name:

      <Calm version number>-CalmPolicyVM.qcow2

    • If your Prism Central is on ESXi, manually upload the image as template on the vCenter host where the Prism Central VM resides with the following name:

      <Calm version number>-CalmPolicyVM.ova

  3. After uploading the image, enable the policy engine on the Settings page. For more information on enabling the policy engine, see Enabling Policy Engine.

Setting up Quota defaults

After you enable the policy engine, you can set up the default quota values for vCPU, memory, and disk. This step is optional.

About this task

Setting up quota defaults saves you from repeatedly entering vCPU, memory, and disk quota values for each cluster. After you set the quota defaults, the default quota values populate automatically when you allocate quotas to your provider accounts.

Note: The quota defaults are visible in the accounts only after the next platform sync. You can also run the platform sync manually. For information on how to run platform sync, see Synchronizing Platform Configuration Changes.

Before you begin

Ensure that you enabled the policy engine for your Calm instance. For information about enabling the policy engine, see Enabling Policy Engine.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. On the Policies tab, click the Quotas tab in the left pane.
  3. Select the Set Quota Defaults check box.
  4. Specify the values for vCPU , Memory , and Disk .
  5. Click the Save icon next to the fields.

Viewing Policy Engine VM Details

After you enable the policy engine, review the policy engine VM configuration, network configuration, and cluster information on the Policies tab of your Settings page. For example, you can view the power status, protection status, or cluster name of the policy engine VM.

Before you begin

Ensure that you have enabled the policy engine for your Calm instance. For information about enabling the policy engine, see Enabling Policy Engine.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. On the Policies tab, click the Policy Settings tab in the left pane.
  3. Expand the Policy Engine VM Details section to view the policy engine VM details.

Disabling Policy Enforcement

Disable policy enforcement for your Calm instance if the policy engine VM encounters connectivity issues or does not respond.

About this task

When you disable policy enforcement, policies are not enforced for quota checks and approval policies.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. On the Policies tab, click the Policy Settings tab in the left pane.
  3. Select the Skip Policy Checks check box to disable the policy enforcement.
  4. Click the Confirm button.

Enabling Approvals

You can enable approvals for your Calm instance from the settings page.

About this task

Caution: This feature is currently in technical preview and is disabled by default. Do not use any technical preview features in a production environment.

When you enable approvals, events such as runbook executions, application launch, and application day-2 operations that match the conditions defined in the approval policy go through the approval process.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. On the Policies tab, click the Approvals tab in the left pane.
  3. Use the Approvals toggle button to enable approvals.

Disabling Approvals

You can disable approvals for your Calm instance from the Settings page.

About this task

When you disable approvals, events such as runbook executions, application launch, and application day-2 operations do not go through the approval process even when they match the conditions defined in the approval policy.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. On the Policies tab, click the Approvals tab in the left pane.
  3. Use the Approvals toggle button to disable approvals.

Viewing Approval Email Templates

You can view the configuration details and email template on the Policies tab of the Settings page.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. On the Policies tab, click the Approvals tab in the left pane.
  3. Under the General Configuration section, view details such as the number of days after which a request expires and the frequency of the email sent to the approver and requester.
  4. Under the Email Content section, click the For Approver or For Requester tabs to view the template of the emails that are sent with each request.
    Figure. Email Template

    The content of the email templates for approver or requester can be modified only using the APIs. You can use the following supported email template variables.

    • Approver
    • Requester
    • ConditionDetails
    • Event
    • EntityType
    • EntityName
    • State
    • PCIP
    • CreationTime
    • ExpirationTime
    • NutanixLogo

    You can use these variables with the {{}} syntax. For example, {{.PCIP}} .
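As an illustration of the {{}} variable syntax, the following sketch performs the same kind of placeholder substitution in Python. The template text and values here are hypothetical examples; the actual templates can be modified only through the Calm APIs:

```python
import re

def render_template(template, values):
    """Replace {{.Name}} placeholders with the corresponding values."""
    return re.sub(r"\{\{\.(\w+)\}\}", lambda m: values[m.group(1)], template)

# Hypothetical sample body using variables from the supported list above.
body = "Approval request from {{.Requester}} for {{.EntityName}} on {{.PCIP}}"
print(render_template(body, {
    "Requester": "alice",
    "EntityName": "web-tier-runbook",
    "PCIP": "10.0.0.5",
}))  # → Approval request from alice for web-tier-runbook on 10.0.0.5
```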

Application Protection Status

You can view the protection and recovery status of a Calm application when:

  • The VMs of the application running on a Nutanix platform are protected by a protection policy in Prism Central.
  • You enabled the option to show application protection status in Calm.

You can view the protection and recovery status of the application on the Application Overview page. For more information, see Overview Tab.

Note:
  • The option to show application protection status is available only when at least one VM of the application is protected by a protection policy in Prism Central.
  • You can view the protection and recovery status of the Calm applications if the versions of Prism Central and Prism Element are 5.17 or later.
  • If the target recovery location is set to another Prism Central, Calm still displays the correct protection status. However, the recovery is not tracked, and there is no recovery status available for the application. The Calm application still points to the old VMs.

To enable the option to show application protection status, see Enabling Application Protection Status View.

Enabling Application Protection Status View

Enable the Show App Protection Status toggle button to view the protection and recovery status of a Calm application that is deployed on a Nutanix platform. You must be a Prism Central administrator to enable or disable the toggle button.

Before you begin

  • Ensure that the versions of Prism Central and Prism Element are 5.17 or later.
  • Ensure that at least one VM of the application is protected by a protection policy in Prism Central.

Procedure

  1. Log on to Prism Central as an administrator.
  2. From the Prism Central UI, click Services > Calm .
  3. Click the Settings icon in the left pane.
  4. Under the General tab, click the Show App Protection Status toggle button.

What to do next

You can view the protection and recovery status of the application in the application Overview page. For more details, see Overview Tab.

Application Log Archive

Calm automatically archives run logs of the deleted applications and custom actions that are older than three months. You can download the archives within seven days of archive creation.

For a running application, data is not archived for the system-generated Create actions.

You can get the following information for Start, Restart, Stop, Delete, and Soft Delete system-generated actions and user-created actions.

  • Started by
  • Run by
  • Status

Calm archives all action details of a deleted application.

Only an administrator can view and download the application log archive. For more information, see Downloading Application Log Archive.

Downloading Application Log Archive

Calm periodically archives application logs to clear resources. You can download the archived application logs from the Settings tab.

Procedure

  1. Log into Prism Central as an administrator.
  2. From the Prism Central UI, click Services > Calm .
  3. Click the Settings icon in the left pane.
  4. Under Application log archive is ready to download , click Download .
    The tar.gz file is downloaded.

Provider Account Settings in Calm

Provider accounts are cloud services, bare-metal servers, or existing machines that you can use to deploy, monitor, and govern your applications. You can configure multiple accounts of the same provider.

Use the Settings > Accounts tab to configure provider accounts. You configure provider accounts (by using the provider credentials) to enable Calm to manage applications by using your virtualization resources.

Calm supports the following provider accounts:

Table 1. Provider Accounts
Provider Accounts Description
Nutanix All the AHV clusters that are registered to the Prism Central instance are automatically added as providers.
Note: If you want to add a remote Prism Central (PC) instance as a provider in a multi-PC setup, you must add the remote PC instance as an account in Calm. For more information, see Configuring a Remote Prism Central Account.
VMware To configure a VMware account, see Configuring a VMware Account.
AWS To configure an AWS account, see Configuring an AWS Account.
Azure To configure an Azure account, see Configuring an Azure Account.
GCP To configure a GCP account, see Configuring a GCP Account.
Kubernetes To configure a Kubernetes account, see Configuring a Kubernetes Account.
Xi Cloud To configure Xi Cloud as a provider, see Configuring a Xi Cloud Account.

Nutanix Account Configuration

All AHV clusters that are registered to your Prism Central instance are automatically added as provider accounts to Calm.

You can also configure any remote Prism Central (PC) as an account in Calm to deploy applications on the remote PC. For more information, see Support for Multi-PC Setup.

Support for Multi-PC Setup

In a multiple Prism Central (multi-PC) setup, a central Calm instance (called the global Calm instance) runs only on one of the PCs (called the host or parent PC) and all the other PCs are connected to the central Calm instance as remote PCs.

The global Calm instance can manage the applications deployed on the geographically distributed Prism Centrals (also called remote PCs) without the need for a separate Calm instance for every PC. A remote PC is used only to provision tasks for the deployed applications.

In a multi-PC environment, every remote PC is added as an account to the host PC and you can add the account to your project before creating and launching a blueprint.

For more information about adding a remote PC as an account, see Configuring a Remote Prism Central Account.

For more information about adding the account to a project, see Adding Accounts to a Project.

Figure. Multi-PC Setup

Configuring a Remote Prism Central Account

To deploy an application on a remote PC, you must configure the remote PC as an account in Calm.

About this task

You must have the Prism Admin role to configure a remote PC account.

For more information about multiple Prism Central setup support, see Support for Multi-PC Setup.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The account inspector panel appears.
  3. Click +Add Account .
    The Account settings page appears.
  4. In the Name field, type a name for the PC account.
  5. From the Provider list, select Nutanix .
    Figure. Remote Prism Central Account

  6. In the PC IP field, type the IP address of the remote PC.
    Applications are provisioned on the remote PC at this IP address.
  7. In the PC Port field, type the port number for the IP address.
  8. In the User name field, type the administrator username of the remote PC.
  9. In the Password field, type the administrator password of the remote PC.
  10. In the Account Sync Interval field, specify the interval after which the platform sync must run for a cluster.
    Calm uses platform sync to synchronize any configuration changes that occur in Calm-managed resources, such as IP address changes, disk resizing, and so on. Platform sync enables Calm to maintain accurate quota and Showback information.
  11. Click Save .
    The account list displays the account that you created.
  12. To verify the credentials of the account, click Verify .
    Calm adds the remote PC as a Nutanix account after credential authentication and account verification.

VPC Tunnels for Orchestration

Calm allows you to create VMs within overlay subnets that are associated with a Virtual Private Cloud (VPC) when you use your Nutanix or remote PC account. A VPC is an independent and isolated IP address space that functions as a logically isolated virtual network. VMs that you create with VPC subnets cannot communicate with VMs outside the VPC, and VMs outside the VPC cannot reach the VMs within it.

In the absence of this direct communication, you can set up tunnels to communicate with the VMs within the VPC for orchestration activities and to run script-based tasks. You can set up the tunnel VM in any one of the subnets within the VPC.

Figure. VPC Tunnels

To set up tunnels for your VPCs, you must:

  • Have the VPC and its subnets created for the account in Prism Central. For more information on VPCs, see the VPC Management section in the Flow Networking Guide .
  • Enable the Advanced Network Controller.
  • Attach an external subnet (VLAN) to the VPC to enable the tunnel VM to reach the policy engine VM and establish a tunnel connection. An external subnet allows VMs inside the VPC to reach the VMs that are outside the VPC network.
  • Ensure that the VMs inside the VPC are able to ping the policy engine VM and Prism Central.
  • Permit traffic from the tunnel VM to the Policy Engine VM on port 2222 using TCP connections.
  • Allow connections on the default port 22 for SSH script execution or the default port 5985 for PowerShell script execution on the target VM.
  • Enable the policy engine in Calm to allow the tunnel VM to communicate with Calm.
  • Have 2 vCPUs, 2 GiB of memory, and 10 GiB of disk space for the tunnel VM.

For more information on creating VPC tunnels, see Creating VPC Tunnels.
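Before creating a tunnel, you may want to confirm the port reachability requirements listed above. The following sketch is a generic TCP reachability check, not a Calm tool; the hostnames are placeholders for your policy engine VM and target VMs:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

# Ports the VPC tunnel setup relies on (see the prerequisites above);
# these hostnames are placeholders for your own environment.
CHECKS = [
    ("policy-engine-vm.example.local", 2222),  # tunnel VM -> policy engine
    ("target-vm.example.local", 22),           # SSH script execution
    ("target-vm.example.local", 5985),         # PowerShell (WinRM) execution
]

if __name__ == "__main__":
    for host, port in CHECKS:
        state = "open" if port_reachable(host, port) else "unreachable"
        print(f"{host}:{port} {state}")
```

Run the check from a machine on the same network segment as the tunnel VM to get a representative result.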

Creating VPC Tunnels

In your Nutanix account, you set up tunnels to get access to the VMs that are created within the VPCs.

About this task

The tunnels that you create enable you to perform check-login tasks and run script-based execution tasks on the VMs that use the overlay subnets of the VPC.

If a tunnel is not configured for the selected VPC, you can only perform basic operations (such as VM provisioning) on the VPC.

Before you begin

Ensure that you have enabled the policy engine on the Settings page. For details about enabling the policy engine, see Enabling Policy Engine. For other tunnel setup requirements, see VPC Tunnels for Orchestration.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
  3. Select the Nutanix account in the left pane.
    The Account Settings page appears.
  4. In the VPC Tunnels section, do the following to set up the tunnel.
    1. Click Create Tunnel .
      The Create Tunnel for VPCs window appears.
      Figure. Create Tunnel

    2. From the Select VPC list, select the VPC on which you want to set up the tunnel.
    3. From the Select VPC Subnet list, select the subnet to use for the NIC of the tunnel VM.
    4. Under the Tunnel Configuration section, select the cluster on which you want to place the VM from the Select Cluster list.
    5. In the Tunnel Name field, edit the name of the tunnel. This step is optional.
      The tunnel name is auto-generated to ensure uniqueness. You can edit or append to it as required.
    6. In the Tunnel VM Name field, edit the tunnel VM name. This step is optional.
      The tunnel VM name is auto-generated to ensure uniqueness. You can edit or append to it as required.
    7. Click Create .

Configuring a VMware Account

Configure your VMware account in Calm to manage applications on the VMware platform.

About this task

Note:
  • If you do not have an administrator user account in vCenter while configuring the account, then you can also use a user account with required permissions. See Permission Required in vCenter.
  • You cannot enable Calm with vSphere Essentials edition license because vSphere Essentials edition does not support hot-pluggable virtual hardware.
  • With VMware accounts, Calm also supports virtual switch (vSwitch) networks and VMware NSX-based networks.

To refer to the video about setting up VMware as provider, click here.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The account inspector panel is displayed.
  3. Click +Add Account .
    The Account Settings page appears.
    Figure. Account - VMware

  4. In the Name field, type a name for the account.
    Note: The name you specify appears as the account name when you add the account to a project.
  5. From the Provider list, select VMware .
  6. In the Server field, type the server IP address of the vCenter server.
  7. In the Username field, type the user name of the vCenter account.
    If a domain is part of the username, then the username syntax must be <username>@<domain> .
  8. In the Password field, type the password of the account.
  9. In the Port field, type the port number as 443 .
  10. Click Save .
  11. From the Datacenter list, select the datacenter.
    A VMware datacenter is the grouping of servers, storage networks, IP networks, and arrays. All the datacenters that are assigned to your vCenter account are available for your selection.
  12. In the Account Sync Interval field, specify the interval after which the platform sync must run for a cluster.
    Calm uses platform sync to synchronize any configuration changes that occur in Calm-managed VMware resources, such as IP address changes, disk resizing, and so on. Platform sync enables Calm to maintain accurate quota and Showback information.
  13. To monitor the operating cost of your applications, configure the cost of the following resources. This step is optional.
    Note: Ensure that you have enabled showback. For more information about enabling showback, see Enabling Showback.
    • In the vCPU field, type the cost of vCPU consumption for each hour in dollars. The default value is $0.01 for each vCPU for each hour.
    • In the Memory field, type the cost of memory consumption for each hour in dollars. The default value is $0.01 for each GB of usage for each hour.
    • In the Storage field, type the cost of storage consumption for each hour in dollars. The default value is $0.0003 for each GB of usage for each hour.
  14. Click Save .
  15. To verify the credentials of the account, click Verify .
    Calm lists the account as an active account after successful credential authentication and account verification.

What to do next

Create a project and configure a VMware environment, see Creating a Project and Configuring VMware Environment.

Permission Required in vCenter

The following table provides the complete list of permissions that you need to enable in vCenter before you configure your VMware account in Calm.

Table 1. Permissions required in vCenter
Entity Permission
Datastore
  • Allocate space
  • Browse datastore
  • Low level file operation
  • Update virtual machine files
Network
  • Assign Network
  • Configure
  • Move Network
Resource
  • Assign virtual machine to resource pool
vSphere Tagging
  • All
Virtual Machine > Change Configuration
  • Add existing disk
  • Add new disk
  • Add or remove device
  • Change CPU count
  • Change memory
  • Modify device settings
  • Configure raw device
  • Rename
  • Set annotation
  • Change settings
  • Upgrade virtual machine compatibility
Virtual Machine > Interaction
  • Configure CD media
  • Connect devices
  • Power On
  • Power off
  • Reset
  • Install VMware tools
Virtual Machine > Edit Inventory
  • Create from existing
  • Remove
Virtual Machine > Provisioning
  • Clone template
  • Customize guest
  • Deploy template
  • Read customization specifications

You must define the custom role at the vCenter level instead of the Datacenter level. For information on how to enable permissions in vCenter, see the vSphere Users and Permissions section in the VMware documents.

Supported vSphere Versions

Calm supports the following versions of vSphere.

  • 7.0
  • 6.7
  • 6.5
  • 6.0

Configuring an AWS Account

Configure your AWS account in Calm to manage applications on the AWS platform.

Before you begin

Ensure that you have the following accounts and details.
  • An AWS account with valid credentials.
  • An IAM user account. For information on how to create an IAM user account, refer to AWS Documentation .
  • A user account with full EC2 access and IAM read-only access.
  • The access key ID and the secret access key for the IAM user account.
Note:
  • Ensure that you have configured the domain name server (DNS). To verify the DNS configuration, from the Prism Central UI, click Prism Central > Gear icon > Name Servers and run the following command.
    nutanix@cvm$ ncli cluster get-name-servers
  • If you are configuring DNS now, then you must restart the Prism Central VM.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The account inspector panel appears.
  3. Click +Add Account .
    The Account Settings page appears.
    Figure. Account - AWS

  4. In the Name field, type a name for the account.
  5. From the Provider list, select AWS .
  6. In the Access Key ID field, type the access key ID of your AWS account.
  7. In the Secret Access Key field, type the secret access key of your AWS account.
  8. From the Regions list, select the geographical regions.
    By default, Calm includes all regions except China and GovCloud in the account. You can remove a region from the account. You can also clear the All Regions check box and select regions from the Regions list.
    Warning: Removing a region from an existing AWS account impacts the deployed VMs in that region.
  9. In the Search Public Image field, search the public image applicable to your region, and select the public image. This step is optional.
    You must authenticate the credentials before searching. You can select multiple public images and use any of the selected public images when you create a blueprint for AWS.
  10. Click Save .
  11. To verify the credentials of the account, click Verify .
    Calm lists the account as an active account after successful credential authentication and account verification.

What to do next

To create a project and configure an AWS environment, see Creating a Project and Configuring AWS Environment.

Configuring AWS C2S Provider on Calm

GovCloud (US) is an isolated AWS region to help the United States government agencies and federal IT contractors host sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements.

About this task

With AWS C2S support in Calm, you can configure your GovCloud authentication and then create or manage your workload instances in the AWS GovCloud region as you do for other AWS regions. AWS GovCloud provides the same high level of security as other AWS regions; however, the Commercial Cloud Services (C2S) and the C2S Access Portal (CAP) are used to grant controlled access to the C2S Management Console and C2S APIs for government users and applications.
Note:

The AWS GovCloud (US) region supports the management of regulated data by restricting physical and logical administrative access to U.S. citizens only.

Before you begin

Ensure that you have the following accounts.
  • An AWS GovCloud (US) account with valid credentials.
  • A C2S account configured in AWS.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The inspector panel appears.
    Figure. Provider - AWS

  3. In the Name field, type the name of the account.
  4. From the Type list, select AWS C2S .
  5. In the C2S account address field, type the C2S account IP address.
  6. In the Client Certificate field, type or upload the client certificate.
  7. In the Client Key field, type or upload the client key.
  8. In the Role field, type the required IAM role.
  9. In the Mission field, type the mission.
  10. In the Agency field, type the agency.
  11. Select the All GovCloud Regions check box to select all the GovCloud regions.
  12. Click Save .
  13. To verify the credentials of the account, click Verify .
    Calm lists the account as an active account after successful credential authentication and account verification.

What to do next

You can use the configured AWS C2S provider while you create a blueprint.

Configuring AWS User Account with Minimum Privilege

To manage applications on the AWS platform using Calm, you must have a privileged AWS user account with an appropriate policy.

Before you begin

Ensure that you have an AWS administrator user account.

Procedure

  1. Log on to the AWS console with your AWS administrator account.
  2. Click Services > IAM .
  3. To add a user, click Users > Add User .
  4. On the Add User page, do the following.
    1. In the User name field, type a user name.
    2. In the Access Type area, select the check boxes next to the Programmatic access and AWS Management Console access fields, and then click Next: Permission .
      Note: Do not configure any fields on the Set Permission page.
    3. Click Next: Tags .
    4. To add a tag to a user, type the key and value pair in the Key and Value fields.
      For more information about IAM tags, see AWS Documents .
    5. Click Next: Review .
    6. Click Create User .
      An IAM user is created.
    7. To display the credential of the user, click Show in the Access key ID , Secret access Key , and Password fields.
      Note: Copy the credentials to a file and save the file on your local machine. You need the credentials when you configure AWS as an account in Calm to manage applications.
  5. To assign permission to the user, click the user you created on the Users page.
    The Summary page appears.
  6. On the Permissions tab, click + Add inline policy .
  7. On the Create Policy page, click the JSON tab and use the following JSON code in the code editor area.
    
    {
    	"Version": "2012-10-17",
    	"Statement": [
        	{
            	"Effect": "Allow",
            	"Action": [
                	"iam:ListRoles",
                	"iam:ListSSHPublicKeys",
                	"iam:GetSSHPublicKey",
                	"iam:GetAccountPasswordPolicy",
                	"ec2:RunInstances",
                	"ec2:StartInstances",
                	"ec2:StopInstances",
                	"ec2:RebootInstances",
                	"ec2:CreateTags",
                	"ec2:CreateVolume",
                	"ec2:CreateSnapshot",
                	"ec2:CreateImage",
                	"ec2:ModifyImageAttribute",
                	"ec2:ModifyInstanceAttribute",
                	"ec2:AttachVolume",
                	"ec2:DetachVolume",
                	"ec2:ModifyVolume",
                	"ec2:AssociateIamInstanceProfile",
                	"ec2:ReplaceIamInstanceProfileAssociation",
                	"ec2:DisassociateIamInstanceProfile",
                	"ec2:RegisterImage",
                	"ec2:DeregisterImage",
                	"ec2:DeleteSnapshot",
                	"ec2:GetConsoleOutput",
                	"ec2:Describe*",
                	"ec2:DeleteTags",
                	"ec2:TerminateInstances"
            	],
            	"Resource": "*"
        	},
        	{
            	"Effect": "Allow",
            	"Action": ["iam:ListUserPolicies"],
            	"Resource": ["arn:aws:iam::*:user/${aws:username}"]
        	},
        	{
            	"Effect": "Allow",
            	"Action": ["iam:PassRole"],
            	"Resource": ["arn:aws:iam::*:role/*"]
        	}
    	]
    }
    
  8. Click Review Policy .
  9. On the Review Policy page, in the Name field, type a name for the policy and click Create policy .
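The policy JSON can also be validated offline before pasting it into the console editor, and the same user and inline policy can be created from the AWS CLI. A minimal sketch (the user and policy names are illustrative placeholders, and the file shown here is truncated to a few of the actions from the policy above):

```shell
# Offline sanity check (sketch): validate the policy JSON before using it.
# This file is truncated to a handful of the actions listed above.
cat > calm-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:RunInstances", "ec2:TerminateInstances",
                 "ec2:Describe*", "iam:ListRoles"],
      "Resource": "*"
    }
  ]
}
EOF
python3 -m json.tool calm-policy.json > /dev/null && echo "policy JSON valid"

# Equivalent CLI flow to the console steps above (commented out because it
# needs configured AWS credentials; user and policy names are placeholders):
#   aws iam create-user --user-name calm-user
#   aws iam put-user-policy --user-name calm-user \
#       --policy-name calm-minimum-policy \
#       --policy-document file://calm-policy.json
```

A malformed policy fails the json.tool step with a parse error, which is cheaper to catch locally than in the console editor.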

What to do next

You can configure AWS as a provider on the Settings page. For more information, see Configuring an AWS Account. You can also assign different policy privileges to the user. For more information, see AWS Policy Privileges.

AWS Policy Privileges

The following table displays the list of user policy privileges and the corresponding JSON attributes that you can add in the JSON syntax to assign different privileges to a user.

Table 1. User Privileges and the JSON attributes
To create
  • EC2 instances: ec2:RunInstances
  • Volumes: ec2:CreateVolume
  • Snapshot: ec2:CreateSnapshot
  • Image (AMI): ec2:CreateImage
To list or get
  • SSH public keys for all users: iam:ListSSHPublicKeys
  • IAM roles: iam:ListRoles
  • EC2 attributes: ec2:Describe*
  • EC2 instance console output: ec2:GetConsoleOutput
  • IAM user policies for the user: iam:ListUserPolicies
To update
  • Image (AMI) attributes: ec2:ModifyImageAttribute
To delete
  • EC2 instances: ec2:TerminateInstances
  • Instance tags: ec2:DeleteTags
  • Snapshot: ec2:DeleteSnapshot
  • Images (deregister images): ec2:DeregisterImage
Others
  • Start/stop/restart instances: ec2:RunInstances, ec2:StartInstances, ec2:StopInstances, ec2:RebootInstances
  • Pass an IAM role to a service: iam:PassRole

Configuring a GCP Account

Configure your GCP account in Calm to manage applications on the GCP platform.

Before you begin

Ensure that you have the service account file of your GCP account in a JSON format saved on your local machine. To create a GCP service account file, see the GCP documentation .

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The account inspector panel is displayed.
  3. Click +Add Account .
    The Account Settings page appears.
    Figure. Account - GCP

  4. In the Name field, type a name for the account.
  5. From the Provider list, select GCP .
  6. To import the service account file from your local machine, click Service Account File .
    A service account file is a special Google account file that you can use to upload the details of your GCP account.
    The values in the Project ID , Private Key , Client Email , and Token URI fields are auto-filled after you upload the file.
  7. From the Regions list, select the geographical regions.
    By default, Calm includes all regions in the account. You can remove a region from the account. You can also clear the All Regions check box and select regions from the Regions list.
    Warning: Removing a region from an existing GCP account impacts the deployed VMs in that region.
  8. Select the Enable GKE check box to enable Google Kubernetes Engine (GKE). This step is optional.
  9. In the Server IP field, type the GKE leader IP address.
  10. In the Port field, type the port number.
  11. Click Save .
  12. To verify the credentials of the account, click Verify .
    Calm lists the account as an active account after successful credential authentication and account verification.

What to do next

To troubleshoot some common issues, see KB-5616. To create a project and configure a GCP environment, see Creating a Project and Configuring GCP Environment.

Configuring an Azure Account

Configure your Azure account in Calm to manage applications on the Azure platform.

About this task

Note:
  • For detailed description of the required fields, refer to the Azure documentation .
  • Only authorized organizations can use restricted regions like Australia Central through Calm. For more information, refer to the Microsoft documentation .

Before you begin

  • Assign appropriate role to your application. For detailed information, refer to the Microsoft documentation .

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The account inspector panel appears.
    Figure. Account - Azure

  3. In the Name field, type a name for the account.
  4. From the Provider list, select Azure .
  5. Do the following under the Service Principal Credentials section.
    1. In the Directory/Tenant ID field, type the directory/tenant ID of your Azure application.
    2. In the Application/Client ID field, type the application/client ID.
    3. In the Client Key/Secret field, type the client key or secret.
  6. From the Subscriptions list, select your Azure subscriptions.
    The subscriptions you select provide access to the associated resource groups during blueprint configuration. The subscriptions allow you to launch VMs in the associated resource groups with a single account. When you do not select any subscriptions, Calm provides access to all the subscriptions available in the Azure service principal.
  7. From the Default Subscription list, select a default subscription. This step is optional.
    Specify the default subscription if Azure VM configurations, such as blueprints and marketplace items created in earlier versions of Calm, require backward compatibility.
  8. From the Cloud Environment list, select a cloud environment.
    You can select Public Cloud , US Government Cloud , China Cloud , or German Cloud .
  9. Click Save .
  10. To verify the credentials of the account, click Verify .
    Calm lists the account as an active account after successful credential authentication and account verification.

What to do next

To create a project and configure an Azure environment, see Creating a Project and Configuring Azure Environment.

Configuring Azure User Account with Minimum Privilege

You must have a privileged Azure user account to manage applications on an Azure platform using Calm.

About this task

To watch a video about assigning the minimum privileges required to configure an Azure account to work with Calm, click here.

Procedure

  1. Log on to Azure portal with your administrator account.
  2. Open https://shell.azure.com and select bash.
  3. Create a .json file with the following content.
    {
      "Name": "Calm Admin",
      "IsCustom": true,
      "Description": "For calm to manage VMs on azure provisioned from calm applications",
      "Actions": [
        "Microsoft.Storage/storageAccounts/read",
        "Microsoft.Storage/storageAccounts/write",
        "Microsoft.Storage/checknameavailability/read",
        "Microsoft.Storage/skus/read",
        "Microsoft.Network/virtualNetworks/subnets/*",
        "Microsoft.Network/virtualNetworks/read",
        "Microsoft.Network/networkSecurityGroups/*",
        "Microsoft.Network/networkInterfaces/*",
        "Microsoft.Network/publicIPAddresses/*",
        "Microsoft.Network/publicIPPrefixes/*",
        "Microsoft.Compute/availabilitySets/vmSizes/read",
        "Microsoft.Compute/availabilitySets/read",
        "Microsoft.Compute/availabilitySets/write",
        "Microsoft.Compute/disks/*",
        "Microsoft.Compute/images/read",
        "Microsoft.Compute/images/write",
        "Microsoft.Compute/locations/publishers/read",
        "Microsoft.Compute/locations/publishers/artifacttypes/offers/read",
        "Microsoft.Compute/locations/publishers/artifacttypes/offers/skus/read",
        "Microsoft.Compute/locations/publishers/artifacttypes/offers/skus/versions/read",
        "Microsoft.Compute/skus/read",
        "Microsoft.Compute/snapshots/*",
        "Microsoft.Compute/locations/vmSizes/read",
        "Microsoft.Compute/virtualMachines/*",
        "Microsoft.Resources/subscriptions/resourceGroups/read",
        "Microsoft.Resources/subscriptions/resourceGroups/write",
        "Microsoft.Resources/subscriptions/resourceGroups/delete",
        "Microsoft.GuestConfiguration/*/read",
        "Microsoft.GuestConfiguration/*/write",
        "Microsoft.GuestConfiguration/*/action",
        "Microsoft.Compute/galleries/read",
        "Microsoft.Compute/galleries/images/read",
        "Microsoft.Compute/galleries/images/versions/read",
        "Microsoft.KeyVault/vaults/read",
        "Microsoft.KeyVault/vaults/deploy/action"
      ],
      "NotActions": [],
      "AssignableScopes": [
        "/subscriptions/<subscription id>"
      ]
    } 
  4. In the Azure cloud shell, run the following command.
    az role definition create --role-definition <file>.json
    Replace <file>.json with the file you created in step 3.
    The Calm Admin role is created.
  5. In the Azure cloud shell, run the following command to create an Azure Service Principal. The command returns all the information required to add the Azure account in Calm.
    az ad sp create-for-rbac -n "CalmAccount" --role "Calm Admin"
  6. Copy the values for appId , password , and tenant . You need these values to add the Azure account in Calm.
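Before running `az role definition create`, you can substitute your subscription ID into the role file and validate the JSON locally. A minimal sketch (the file content is truncated here to the fields being edited, and the subscription ID is a placeholder):

```shell
# Sketch: fill in the subscription ID in the role file from step 3 and
# validate the JSON before uploading. This file is truncated to the
# fields being edited; the subscription ID below is a placeholder.
cat > calm-admin-role.json <<'EOF'
{
  "Name": "Calm Admin",
  "IsCustom": true,
  "Actions": ["Microsoft.Compute/virtualMachines/*"],
  "AssignableScopes": ["/subscriptions/<subscription id>"]
}
EOF
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"   # placeholder
sed "s|<subscription id>|$SUBSCRIPTION_ID|" calm-admin-role.json > role.json
python3 -m json.tool role.json > /dev/null && echo "role JSON valid"
# Then, in the Azure cloud shell:
#   az role definition create --role-definition role.json
```

Validating locally catches a stray comma or an unreplaced `<subscription id>` token before the Azure CLI reports a less specific error.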

Configuring a Kubernetes Account

Configure your Kubernetes account in Calm to manage applications on the Kubernetes platform.

Before you begin

Ensure that you meet the following requirements.
  • You have a compatible version of Kubernetes. Calm is compatible with Kubernetes 1.16 and 1.17.
  • You have necessary RBAC permissions on the Kubernetes server.
  • You have the authentication mechanism enabled on the Kubernetes cluster.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The account inspector panel is displayed.
  3. Click +Add Account .
    The Account Settings page appears.
    Figure. Provider - Kubernetes

  4. In the Name field, type a name for the account.
  5. From the Provider list, select Kubernetes .
  6. From the Type list, select one of the following.
    • Select Vanilla for Kubernetes clusters that you deploy yourself.
    • Select Karbon to add Karbon as the provider type. Nutanix Karbon is a curated turnkey offering that provides simplified provisioning and operations of Kubernetes clusters.
  7. If you selected Vanilla as the Kubernetes type, do the following.
    1. In the Server IP field, type the Kubernetes leader IP address.
    2. In the Port field, type the port number of the Kubernetes server.
    3. From the Auth Type list, select the authentication type.
      You can select one of the following authentication types.
      • Basic Auth : Basic authentication is a method for an HTTP user agent, for example, a web browser, to provide a user name and password when making a request.
      • Client Certificate : A client certificate is a digital certificate protected with a key for authentication.
      • CA Certificate : A client authentication certificate is a certificate that is used to authenticate clients during an SSL handshake. The certificate authenticates users who access a server by exchanging the client authentication certificate.
      • Service Account : A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
    4. If you selected Basic Auth , do the following.
      • In the Username field, type the username. If the domain is a part of the username, then the username syntax should be <username>@<domain> .
      • In the Password field, type the password.
    5. If you selected Client Certificate , do the following.
      • Under Client Certificate , upload the client certificate.
      • Under Client Key , upload the private key.
    6. If you selected CA Certificate , do the following.
      • Under CA Certificate , upload the CA certificate.
      • Under Client Certificate , upload the client certificate.
      • Under Client Key , upload the private key.
    7. If you selected Service Account , do the following.
      • Under Token , upload the service account authentication token.
      • Under CA Certificate , upload the CA certificate.
  8. If you selected Karbon as the Kubernetes type, do the following.
    1. From the Cluster list, select the Kubernetes cluster that you want to add.
  9. Click Save .
  10. To verify the credentials of the account, click Verify .
    Calm lists the account as an active account after successful credential authentication and account verification.

Configuring Amazon EKS, Azure Kubernetes Service, or Anthos

For Calm to manage workloads on Amazon EKS, Azure Kubernetes Service (AKS), or Anthos, enable the generic authentication mechanism and create a service account on the Kubernetes cluster. You can then use the service account to communicate with the cluster.

Procedure

  1. Create a service account by running the following Kubernetes command:
    kubectl create serviceaccount ntnx-calm
    A service account is a user that the Kubernetes API manages. A service account is used to provide an identity for the processes that run in a pod.
  2. Bind the cluster admin role to the Calm service account using the following command:
    kubectl create clusterrolebinding ntnx-calm-admin --clusterrole cluster-admin --serviceaccount default:ntnx-calm
  3. Get the service account secret name using the following command:
    SECRET_NAME=$(kubectl get serviceaccount ntnx-calm -o jsonpath='{$.secrets[0].name}')
    Secrets are objects that contain sensitive data such as a key, token, or password. Placing such information in a Secret allows better control and reduces the risk of exposure.
  4. Get the service account token using the following command:
    kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 --decode
  5. Get the CA certificate using the following command:
    kubectl config view --minify --raw -o jsonpath='{.clusters[*].cluster.certificate-authority-data}' | base64 --decode
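Kubernetes stores Secret data base64-encoded, which is why the token and CA certificate are piped through `base64 --decode` before use. A minimal offline sketch of that decoding step (the token value is an illustrative stand-in, not a real service account token):

```shell
# Kubernetes Secret data is base64-encoded; decode it before use.
# The token below is an illustrative stand-in, not a real token.
token_b64=$(printf 'example-sa-token' | base64)
printf '%s' "$token_b64" | base64 --decode
# prints example-sa-token
```

Note the flag is `--decode` (two hyphens); pasting a typographic dash from a rendered page causes `base64` to reject the option.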

What to do next

After receiving the service token, you can add the account in Calm and use the token to communicate with the cluster. For more information on adding the account, see Configuring a Kubernetes Account.

Configuring a Xi Cloud Account

To manage workloads on Nutanix Xi Cloud, add your Xi Cloud as an account in Calm if your Prism Central is paired with a Xi cloud. Calm automatically discovers the availability zones of the Xi Cloud and allows you to add the Xi Cloud account as a provider account.

Before you begin

Ensure that you meet the following conditions.
  • You enabled Xi Leap in Prism Central.
  • Your Prism Central is paired with Xi Cloud.
  • Your Prism Central and Xi Cloud are connected to a VPN.
  • You added the routes to Xi gateway in Prism Central.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
    The account inspector panel appears.
  3. Click +Add Account .
    The Account Settings page appears.
  4. In the Name field, type a name for the Xi cloud.
  5. From the Provider list, select Xi .
    Calm automatically adds the paired availability zones to the Availability Zones field.
  6. Click Save .
  7. To verify the credentials of the account, click Verify .
    Calm lists the account as an active account after successful credential authentication and account verification.

What to do next

You can use the configured Xi cloud to host blueprints and applications by using Calm. For more information, see Calm Blueprints Overview.

Platform Sync for Provider Accounts

Calm automates the provisioning and management of infrastructure resources for both private and public clouds. When any configuration changes are made directly to the Calm-managed resources, Calm needs to sync up the changes to accurately calculate and display quotas and Showback information.

Platform sync enables Calm to synchronize any changes in the clusters that are managed by Calm on connected providers. These changes can include IP address changes, disk resizing, unavailability of VMs, and so on.

For example, when a VM is powered off externally or deleted, platform sync updates the VM status in Calm. Calm then adds the infrastructure resources consumed by the VM (memory and vCPU) to the total available quota.

You can specify an interval after which the platform sync must run for a cluster. For more information, see Configuring a Remote Prism Central Account and Configuring a VMware Account.

Note: Platform sync is supported for Nutanix, VMware, and AWS. Calm provides automatic platform sync for AWS with a predefined sync interval of 20 minutes. You can, however, sync up the configuration changes instantly for your Nutanix or VMware account. For more information, see Synchronizing Platform Configuration Changes.

Synchronizing Platform Configuration Changes

Platform sync enables Calm to synchronize any changes in the clusters that are managed by Calm on connected providers. These changes can include IP address changes, disk resizing, unavailability of VMs, and so on. You can sync up the configuration changes instantly for your accounts.

About this task

Note: Platform sync is supported for Nutanix, VMware, and AWS. Calm provides automatic platform sync for AWS with a predefined sync interval of 20 minutes. The following steps are applicable only to the Nutanix and VMware accounts.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. Click the Accounts tab.
  3. Select the Nutanix or VMware account for which you want to sync up configuration changes in the left pane.
  4. On the Account Settings page, click the Sync Now button.
    Figure. Sync Now

Allocating Resource Quota to an Account

Allocate resource quotas to your accounts to have better control over the infrastructure resources (compute, memory, and storage) that are provisioned through Calm. Based on the resource quota you allocate, the policy engine enforces quota checks when applications are launched, scaled out, or updated.

About this task

Note: You can allocate resource quotas to Nutanix and VMware accounts only.

Before you begin

  • Ensure that you configured your Nutanix or VMware account. For more details about configuring an account, see Provider Account Settings in Calm.
  • Ensure that you enabled the policy engine on the Settings page. For more details about enabling the policy engine, see Enabling Policy Engine.

Procedure

  1. Click the Settings icon in the left pane.
    The Settings page appears.
  2. Click the Accounts tab.
  3. Select the Nutanix or VMware account in the left pane.
  4. On the Account Settings page, select the Quotas check box.
    If you have set up the quota defaults on the General tab of the Settings page, the default values populate automatically in the vCPU , Memory , and Disk fields of the discovered clusters.
    Figure. Quota Definition

  5. Allocate required quota values to the discovered clusters.
    The Physical Resources row below the quota fields shows the physical resource already used and the total physical resource of the cluster. You can use this information when you allocate resource quotas to the account.
  6. Click Save .

Viewing Quota Utilization Report

Use the utilization report to analyze how the projects to which the cluster is assigned consumed the allocated resources of the cluster. For example, if a Nutanix cluster is assigned to three different projects, you can analyze how the assigned projects consumed the allocated resources of that cluster.

Before you begin

  • Ensure that you have enabled the policy engine on the Settings tab. For more details about enabling the policy engine, see Enabling Policy Engine.
  • Ensure that you have allocated resource quotas to the provider. For more details, see Allocating Resource Quota to an Account.
  • Ensure that you have allocated resource quotas to projects. For more details, see Adding Accounts to a Project.

Procedure

  1. Click the Settings icon in the left pane.
  2. Click the Accounts tab.
  3. Select the Nutanix or VMware account in the left pane.
    The Account Settings page appears.
  4. In the Quotas section, do the following to view the resources consumed for each cluster:
    1. In the Quota Utilization | View Report row, hover your mouse over the status bar of a resource.
      The ToolTip displays the resources consumed and the resources allocated to the cluster.
      Figure. Quota Utilization Status Bar

      Note: You can also use the status bar to view the overall status and percentage of resources consumed.
    2. Click View Report .
      The Utilization Report window appears.
      Figure. Utilization Report

    3. View project-wise consumption of resources along with the amount of compute, memory, and storage that the projects used.

Credentials in Calm

Credentials abstract identity settings when connecting to an external system. Credentials are used to authenticate a user to access various services in Calm. Calm supports key-based and password-based authentication methods.

Credentials Overview

Credentials are used in multiple Calm entities and workflows.

  • Environment

    Environment allows a Project Admin to add multiple credentials and configure VM default specifications for each of the selected providers as a part of project and environment configurations.

    Project admins must configure an environment before launching an application from the marketplace. The recommendation is to have at least one credential of each secret type (SSH key or password) defined under each environment in the project. These values get patched wherever the credential values are empty when you launch your marketplace items.

  • Blueprints and runbooks

    Developers can add credentials to a blueprint. These credentials are referenced after the VM is provisioned. Credentials defined within an environment of a project have no significance or impact on the credentials you define within the blueprint.

    Calm supports export and import of blueprints across different Prism Central or Calm instances along with the secrets. The developer uses a passphrase to encrypt credentials and then decrypts credentials in a different instance using the same passphrase to create a blueprint copy.

  • Marketplace

    All global marketplace items have empty credential values. However, locally published blueprints can have the credential values if the developer published the blueprint with the Publish with Secrets option enabled.

    When you launch a marketplace item, credentials are patched wherever the value is empty. In case there are multiple credentials of a particular type configured within the environment of a project, you get the option to select a credential for the launch.

  • Applications

    Owners can change the credential value of an application multiple times until the application is deleted. The latest value of a credential that is available at that point in the application instance is used when an action is triggered.

    Any change in the credential value at the application level does not impact the credential value at the corresponding blueprint level.

Calm allows managing the following types of credentials:

  • Static Credentials

    Static credentials in Calm store secrets (a password or SSH private key) in the credential objects within the blueprints, which the applications copy.

  • Dynamic Credentials

    Calm supports external credential store integration for dynamic credentials. A credential store holds username and password or key certificate combinations and enables applications to retrieve and use credentials for authentication to external services whenever required. As a developer, you can:

    • Define credential attributes that you want to pass on to the credential provider from the blueprint during execution.
    • Define variables that the credential provider must use. By default, Calm defines the secret variable when you configure your credential provider.
      Note: To use the credential in the blueprint, the variables in the credential and the blueprint must match.
    • Define a runbook with eScript tasks in the dynamic credential provider definition. The tasks you define in the runbook can set the username, password, private key, or passphrase values for the credential.

    For more information about configuring a credential provider, see Configuring a Credential Provider.

    When a blueprint uses a dynamic credential, the secret (password or SSH private key) is not stored in the credential objects within the blueprint. The secret values are fetched on demand by executing the runbook within the credential provider that you configure in Calm and associate with the blueprint.

    Note:
    • You cannot add a dynamic credential when you configure an environment in your project. You can, however, allow the credential provider in the environment.
    • You cannot use dynamic credentials for HTTP variables, such as profile variables, service variables, runbooks, and so on.
    • You cannot use dynamic credentials for HTTP endpoints and Open Terminal in applications.
    • For ready-to-use blueprints or blueprints that are published without secrets, the empty credential values are patched with the credential along with its associated runbook and variable values.
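As a schematic illustration of the flow described above, the sketch below shows a runbook-style task returning a username and password whose variable names match the credential variables in the blueprint. The `fetch_from_provider` function, the vault path, and all values are hypothetical stand-ins, not a Calm or provider API:

```python
# Schematic only: fetch_from_provider is a stand-in for whatever call
# your credential provider exposes; it is not a Calm or provider API.
def fetch_from_provider(server, secret_path):
    # In a real runbook task, this would query the credential store
    # using the provider server address and secret configured in Calm.
    vault = {"apps/db": {"username": "svc_db", "password": "s3cret"}}
    return vault[secret_path]

cred = fetch_from_provider("vault.example.com", "apps/db")

# These variable names must match the variables defined on the
# credential in the blueprint, or the lookup cannot be wired up.
username = cred["username"]
password = cred["password"]
print(username)
```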

Configuring a Credential Provider

Calm supports external credential store integration for dynamic credentials.

About this task

As a developer, you can define variables, runbooks, and attributes in a dynamic credential provider definition.

Procedure

  1. Click the Settings icon in the left pane.
  2. On the Credential Providers tab, click Add Credential Provider.
    Figure. Add Credential Provider

  3. On the Credentials Provider Account Settings page, enter a name for the credential provider in the Name field.
    Figure. Credential Provider

  4. Enter the server address of the credential provider in the Provider Server Address field.
  5. Enter the secret for the account in the Provider Secret field. The secret for the provider can be a key, token, or password.
  6. To add the credential attributes, click the + icon next to Credential Attributes and do the following:
    1. Enter a name for the credential attribute in the Name field.
    2. Select a data type for the credential attribute in the Data Type field.
    3. Provide a value for the data type you selected in the Value field. You can select the Secret check box to hide the value of the credential attribute.
      Credential attributes are the variables that you pass on to the credential provider from the blueprint during execution. Developers require these attributes in blueprints and runbooks to use the credential provider.
  7. To add variables, click the + icon in the Variables section and do the following:
    1. Enter a name for the variable in the Name field.
    2. Select a data type for the variable in the Data Type field.
    3. Provide a value for the data type you selected in the Value field. You can select the Secret check box to hide the value of the variable.
      The variables that you add during the credential provider configuration are only used in the runbooks that you define for the credential provider.
      To use the credential in the blueprint, the variables in the credential and the blueprint must match.
  8. Configure a runbook for the credential provider. For information on runbook configuration, see Runbooks Overview.
    You can define a runbook with an eScript task. The tasks are used to set the username, password, private key, or passphrase values for the credential.
    Note: When the runbook uses an eScript task, do the following in the Set Variable eScript task to fetch SSH keys or multi-line secrets from the credential provider:
    • Encode the secrets.
    • Set the is_secret_encoded variable to True.
    The runbook uses the variables you defined during the credential provider configuration. You can click the Variable/Attributes button within the runbook to view, edit, or add variables. After your runbook is defined, you can click Test to test the runbook.
  9. Click Save.
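The note in step 8 says to encode SSH keys or multi-line secrets and to set is_secret_encoded to True. The following Python sketch shows the encoding step only; the key material is a placeholder, and the variable names are illustrative rather than Calm-defined:

```python
import base64

def encode_multiline_secret(secret):
    """Base64-encode a multi-line secret (for example, an SSH private
    key) so a Set Variable task can return it as a single line."""
    return base64.b64encode(secret.encode("utf-8")).decode("ascii")

# Placeholder multi-line SSH private key fetched from the provider.
ssh_key = ("-----BEGIN OPENSSH PRIVATE KEY-----\n"
           "b3BlbnNzaC1rZXk=\n"
           "-----END OPENSSH PRIVATE KEY-----")

encoded = encode_multiline_secret(ssh_key)
is_secret_encoded = True  # signals that the returned value is base64-encoded

print(encoded)
```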

What to do next

You can add the credential provider to a project. For more information, see Adding Accounts to a Project.

Projects and Environments in Calm

In Calm, a project is a set of Active Directory users with a common set of requirements or a common structure and function, such as a team of engineers collaborating on an engineering project.

Environment configuration involves configuring VMs and adding credentials for the accounts that you added to your project. You can configure multiple environments in a project and set one of the environments as the default environment for the project.

Projects Overview

The project construct defines various environment-specific settings. For example:

  • Permissions, such as the user accounts and groups that can deploy a marketplace application.
  • The networks that can be used when deploying an application.
  • Default VM specifications and deployment options, such as vCPUs, vRAM, storage, base images, Cloud-Init or Sysprep specs.
  • Credentials.
Figure. Projects

Projects provide logical groupings of user roles to access and use Calm within your organization. To assign different roles to users or groups in a project, you use the Active Directory configured in Prism Central.

A project in Calm offers the following user roles. Each role has predefined functions that a user with the assigned role can perform in a project. For more information, see Role-Based Access Control in Calm.

  • Project admin
  • Developer
  • Consumer
  • Operator
Note: A Prism Admin is a super user within Calm and within the rest of Prism Central who has full access to all the features and functionalities of Calm.

As part of the project configuration, you also configure environments. Environment configuration involves configuring VMs and adding credentials for the accounts that you added to your project. You use a configured environment either during your blueprint creation or during an application launch. When you select a configured environment while launching an application from the marketplace, the values for the application launch are picked up from the selected environment.

Creating a Project

You create a project to map your provider accounts and define user roles to access and use Calm.

About this task

Use this procedure to create and define the basic setup of your project.

Before you begin

Ensure that you configure the provider accounts that you want to add to your project. For more information, see Provider Account Settings in Calm.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears listing all your existing projects.
  2. Click the +Create Project button to create a new project.
    The Create Project window appears.
    Figure. Create Project window

  3. Enter a name for the project in the Project Name field.
  4. Enter a description for the project in the Description field.
  5. (Optional) Enter an admin for the project in the Project Admin field.
    The project automatically adds you as a Project Admin when you create it. You can add more users in the subsequent steps of project configuration.
  6. Check the Allow Collaboration check box to allow project users to collaboratively manage VMs and applications within the project.
    The Allow Collaboration check box appears when you add your first user to the project. By default, the Allow Collaboration check box is checked and enables a project user to view and manage VMs and applications of other users in the same project. If you uncheck the Allow Collaboration check box, project users can manage only the VMs and applications that they create. You cannot change this configuration after you add the first user and save the project. To change the configuration, you need to remove all users from the project.
  7. Click the Create button.
    The Overview tab for the project appears.
    Figure. Project Overview

    Note: You can view the status of your project creation next to the project name.
  8. To configure your project further, do the following:
    1. Use the Users, Groups & Roles tab to add users to your project. For more information, see Adding Users to a Project.
    2. Use the Accounts tab to add accounts to your project. For more information, see Adding Accounts to a Project.
    3. Use the Environments tab to add credentials and configure VMs for the provider accounts that you selected for your project. For more information, see Configuring Environments in Calm.
    4. Use the Policies tab to define your quota and snapshot policies. For more information, see Quota Policy Overview and Creating a Snapshot Policy.
    You can also use the Users, Groups & Roles, Accounts, and Environments tiles to add or modify users, providers, and environments.
  9. Click the Save button on the page.

What to do next

After you have configured users, providers, and environments, you can use the project to configure blueprints and launch applications.

Adding Users to a Project

Calm uses projects to assign roles to particular users and groups. Based on these roles, users are allowed to view, launch, or manage applications.

About this task

Use this procedure to add users with different roles to your project.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears listing all your existing projects.
  2. Do one of the following:
    • Click the +Create Project button to create a new project and add users to the project. For more information about creating a project, see Creating a Project.
    • Click a project name in the list of existing projects to add users to that project.
  3. On the Overview tab, click the Add Users button in the Users, Groups & Roles tile.
    The Add Users window appears.
    Figure. Add Users

  4. If you have multiple Active Directory domains configured, ensure that the directory service from which you want to add users and groups is selected. Do the following:
    1. Click the gear icon next to the + Add User link.
      The Search Directories window appears.
      Figure. Search Directories Window

    2. Select the radio button for the Active Directory that you want to use to add users and groups.
    3. Click the Save button.
    Note: Local users are not supported in a project. You can only add users from your configured directory service.
  5. Click the + Add User link to add users or groups to the project.
    A blank row is added with the Name and Role columns.
  6. In the Name column, enter the Active Directory name of a user or a group (typically in the form of name@domain).
  7. In the Role column, select a user role from the list.
    The default value in the Role column is Project Admin. You can select a value from the list to change the user role. For more information about the user roles and their permissions, see Role-Based Access Control in Calm.
    The Allow Collaboration check box appears when you add the first user to your project. By default, the Allow Collaboration check box is checked and enables a project user to view and manage VMs and applications of other users in the same project. If you uncheck the Allow Collaboration check box, project users can manage only the VMs and applications that they create. You cannot change this configuration after you add the first user and save the project. To change the configuration, you need to remove all users from the project.
  8. Click the Save Users and Project button.
    The Users, Groups & Roles tile on the Overview tab displays the number of users you added to the project.
    Figure. Users, Groups & Roles

    Note:
    • If you add a group to a project, users in the group might not appear in the project members list until they log in.
    • Nested groups (groups within a group) are not supported. For example, if a selected group (Group1) includes a nested group (Group1.1) along with individual names, only the individual names are added to the project. The members of Group1.1 are not added to the project.

Add Accounts

You can add multiple accounts of the same provider or different providers you configured in Calm to your projects. Calm supports accounts of the following providers:

  • Nutanix
  • VMware
  • AWS
  • Azure
  • GCP
  • Kubernetes
  • Xi

When you add Nutanix accounts to your project, you can allow clusters and their corresponding VLAN or overlay subnets.

A VLAN subnet is bound to a Prism Element cluster. When you use the VLAN subnet to provision a VM, the VM is automatically placed on that Prism Element cluster.

However, an overlay subnet can span multiple clusters, up to all the clusters managed by your Prism Central. Therefore, when you configure your Nutanix account in your project, Calm enables you to allow clusters before allowing the subnets.

Allowing clusters before their subnets enables you to have projects where you can allow VPCs and their corresponding subnets without allowing any VLAN subnets.

For Nutanix and VMware accounts within your project, you can also define resource quota limits for quota checks. For more information, see Quota Policy Overview.

Adding Accounts to a Project

You can add multiple provider accounts or credential provider accounts that you configured in Calm to your project. You can also define resource quota limits for quota checks for Nutanix and VMware accounts within the project.

About this task

Use this procedure to add provider accounts or credential provider accounts to your project.

Before you begin

  • Ensure that you have configured your provider accounts or credential provider accounts on the Settings tab. For information about configuring a provider account, see Provider Account Settings in Calm. For information about configuring a credential provider, see Configuring a Credential Provider.
  • For resource quota limit definition, ensure that you have enabled the policy engine on the Settings tab. For more details about enabling the policy engine, see Enabling Policy Engine.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears listing all your existing projects.
  2. Do one of the following:
    • Click the +Create Project button to create a new project and add providers to the project. For more information about creating a project, see Creating a Project.
    • Click a project name in the list of existing projects to add providers to that project.
  3. On the Overview tab, click the Add Accounts button in the Accounts tile.
    The Add Accounts page appears.
    Figure. Add Accounts

  4. Click Select Account and select a provider account or a credential provider account from the list.
    Note: For Nutanix accounts, you can map one or more AHV clusters to your project and allow subnets from the selected clusters. Allowing the subnets provides you the flexibility to manage your application workloads across AHV clusters within a blueprint.
  5. If you selected a credential provider account, then go to step 8.
  6. If you selected a Nutanix account, do the following.
    1. Click Configure Resources.
    2. On the Select Clusters and VLANs tab, select the cluster that you want to allow in the project from the Select clusters to be added to this project list.
      Figure. Select Clusters and VLANs

      Selecting a cluster for the account is mandatory. You must select a cluster irrespective of whether you want to allow VLAN subnets or not.
    3. Under the Select VLANs for the above clusters section, click Select VLANs to view and select the VLANs that you want to allow in the project. This step is optional.
    4. Click the Select VPCs & Subnets button.
      The Select VPCs & Subnets button or tab appears only on VPC-enabled setups.
    5. On the Select VPCs & Subnets tab, select the VPC from the Select VPCs to view overlay subnets below list to view the associated subnets. This step is optional.
      The Select overlay subnets section displays the overlay subnets associated with the VPC you selected.
      Figure. Select VPCs and Subnets

      If a tunnel is not configured for the selected VPC, you can perform only basic operations, such as VM provisioning, on the VPC. To perform check login and orchestration, ensure that you create a tunnel for the VPC. For more information, see Creating VPC Tunnels.
    6. Under the Select overlay subnets section, select the overlay subnets that you want to allow in the project. This step is optional.
    7. Do one of the following:
      • For the local Nutanix account, click Confirm and Select Default, select a default VLAN subnet, view the configuration summary, and then click Confirm.
      • For a remote PC account, click Confirm, view the configuration summary, and then click Confirm.
      The default VLAN subnet is used when you create a virtual machine in Prism Central.
    Note: You can select one or more AHV clusters. You can allow one or more subnets per cluster. When you configure a blueprint, only the allowed networks and clusters appear for the user to select during network configuration. If the network selection is a runtime attribute, only the allowed networks and clusters are available to update while launching a blueprint.
  7. If you selected a Nutanix or VMware account, you can define resource quota limits.
    1. Select the Quotas check box.
      The vCPU, Memory, and Disk fields are enabled.
      Figure. Quota Definition

    2. Enter quota values for vCPU, Memory, and Disk for the selected clusters.
      The Available/Total row shows the available resources quota and the total quota allocated to the provider. The Physical Capacity row shows the used and total physical capacity. Use these details while defining resource quota limits.
  8. Click Save Accounts and Project.
    The Accounts tile on the Overview tab displays the number of accounts you added to the project.
    Figure. Accounts in a Project

    The Tunnels section at the bottom of the Project Setup page displays the tunnels inherited from the VPCs configured in the project. For information on VPC tunnels, see VPC Tunnels for Orchestration.

Modifying a Project

Calm allows you to modify the users, accounts, and quota details of a saved project. You can also delete an account from a project.

About this task

Use this procedure to modify an existing project.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears.
  2. Click the project that you want to modify.
    The Overview tab of the project appears.
  3. (Optional) Click Edit under the Project Description section to edit the description of the project.
  4. To add users to the project, click the Users, Groups & Roles tab, and then click + Add User.
  5. To remove users from the project, click Delete in the user or group row.
  6. To add accounts to the project, click the Accounts tab, and then click + Add Account to select and add an account from the list.
    If you select a Nutanix or VMware account, you can also define resource quota limits. For more information, see Adding Accounts to a Project.
  7. To remove accounts from the project, do the following:
    1. Click Remove next to an existing account in the left pane to remove the account from the project.
      You can remove an account from the project only when no applications, blueprints, or environments are associated with the account. Dissociate all applications, blueprints, and environments before removing an account from the project.
      Note: For Nutanix accounts, before removing a subnet from a saved cluster, ensure that the subnet is not associated with any application or blueprint.
    2. In the Remove Account window, click Delete .
  8. Click Save to save the changes.

Deleting a Project

You can delete a project that is not associated with any application or blueprint. If the project is already used to create any applications or blueprints, you cannot delete the project. In such cases, a dialog box appears displaying the association of the project with different applications or blueprints.

About this task

Use this procedure to delete a project.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears.
  2. Select the check box adjacent to the project that you want to delete.
  3. From the Action list, select Delete.
    Calm verifies the association of the project with any application or blueprint. After successful verification, a confirmation message appears.
  4. Click Delete .
    The project is deleted from the Projects tab.

Environment Overview

An environment is a subset of a project. When you create a project, you can add multiple accounts of the same provider, or accounts of different providers, that you configured in Calm. You can then configure one or more environments in the project.

When you create an application blueprint, you select a project and use the environments you configured for that project for application deployments. You can also optionally select one environment for each application profile in the blueprint.

Note: Environment is not supported for Kubernetes.

Configuring Environments in Calm

You configure environments as a part of your project creation so that you can use the configured environments when you create blueprints or launch marketplace application blueprints. You can configure multiple environments in your project.

About this task

Video: Configuring Environments

Before you begin

Ensure that you have configured a project and have added the required accounts to your project. For more information, see Creating a Project and Adding Accounts to a Project.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears listing all your existing projects.
  2. Click the project in which you want to configure environments.
  3. On the Overview tab, click the Create Environment button in the Environments tile.
    The General tab of the Create Environment page appears.
  4. Enter a name for the environment in the Name field.
  5. Provide a description for the environment in the Description field.
  6. Select the Set as default environment check box to use the environment as the default environment for launching applications in the project.
    The Set as default environment check box is selected by default for the first environment you configure in your project.
  7. Click Next.
  8. On the Accounts tab, click the Select Account list, and select an account.
    The Select Account list shows the accounts that you added to your project. For more information, see Adding Accounts to a Project. You can click + Add Account to select and add multiple provider accounts to the environment.
  9. Expand the VM Configuration section to configure the virtual machine details for each provider you added.
    • For more information on how to configure a VM for Nutanix, see Configuring Nutanix Environment.
    • For more information on how to configure a VM for AWS, see Configuring AWS Environment.
    • For more information on how to configure a VM for VMware, see Configuring VMware Environment.
    • For more information on how to configure a VM for GCP, see Configuring GCP Environment.
    • For more information on how to configure a VM for Azure, see Configuring Azure Environment.
  10. Click Next.
  11. On the Credentials tab, add credentials for your environment. For more information, see Adding Credentials to the Environment.
  12. Click Save Environment & Project.
    The Environments tile on the Overview tab displays the number of environments you configured for the project. You can go to the Environments tab to view the details of each environment you configured for your project.

Configuring Nutanix Environment

Environment configuration involves configuring VMs and adding credentials for the accounts that you added to your project.

About this task

Use this procedure to configure environment variables for Nutanix.

Before you begin

Ensure that you have configured a project and selected a Nutanix account. For more information, see Configuring Environments in Calm.

Procedure

  1. On the Overview tab, click the Create Environment button in the Environments tile.
    Figure. Create Environment

    The General tab of the Create Environment page appears.
  2. Enter a name and a description for the environment.
  3. Select the Set as default environment check box if you want to use it as the default environment to launch applications for the project.
    The Set as default environment check box is selected by default for the first environment you configure in your project.
  4. Click Next.
  5. On the Accounts tab, click the Select Account list, and select a Nutanix account.
    Figure. Select Account

    The Create Environment page appears.
  6. Select the Nutanix account in the left pane.
    The Resource Configuration section displays the cluster, VLAN subnets, and overlay subnets you selected in the project for the account.
    When you configure the environment for the first time, the cluster and subnets that you selected for the project are selected for the environment by default. However, you can click the Configure Resources button to configure resources specific to the environment. You can select additional subnets for the environment or remove any subnets that you have already selected for the project.
  7. Expand the VM Configuration section, and select either Windows or Linux as the operating system for the VM.
    Figure. VM Configuration

  8. In the Cluster list, select the cluster where you want to place the VM.
    The Cluster list displays the clusters that you allowed in the project.
    The VLAN subnets have direct association with the cluster. When you select a VLAN subnet under the Network Adapters section, the associated cluster is auto-populated in the Cluster list. However, if you intend to use overlay subnets, you must select the cluster in the list.
  9. Enter a name for the VM in the VM Name field.
    You can use Calm macros to provide a unique name to the VM. For example, vm-@@{calm_time}@@. For more information on Calm macros, see Macros Overview.
  10. Configure the processing unit of the VM by entering the number of vCPUs, the number of cores per vCPU, and the total memory in GiB of the VM in the vCPU, cores per vCPU, and Memory (GiB) fields.
  11. If you want to customize the default OS properties of the VM, select the Guest Customization check box.
    Figure. Guest Customization

    Guest customization allows you to modify the properties of the VM operating system. You can prevent conflicts that might result due to the deployment of virtual machines with identical settings, such as duplicate VM names or same SID. You can also change the computer name or network settings by using a custom script.
    1. Select Cloud-init for Linux or Sysprep for Windows, and enter or upload the script in the Script panel.
      For Sysprep, you must use a double backslash for all escape characters. For example, \\v.
    2. For a Sysprep script, select the Join a Domain check box and configure the following fields.
      • Enter the domain name of the Windows server in the Domain Name field.
      • Select a credential for the Windows VM in the Credentials list. You can also add new credentials.
      • Enter the IP address of the DNS server in the DNS IP field.
      • Enter the DNS search path for the domain in the DNS Search Path field.
  12. To add a virtual disk to the VM, click the + icon next to the DISKS section and do the following.
    Figure. Disks

    1. Select the device for the image from the Device Type list.
      You can select either CD-ROM or Disk.
    2. Select the device bus from the Device Bus list.
      You can select IDE or SATA for CD-ROM and SCSI, IDE, PCI, or SATA for Disk.
    3. From the Operations list, select one of the following.
      • To allocate the disk memory from the storage container, select Allocate on Storage Container.
      • To clone an image from the disk, select Clone from Image Service.
    4. If you selected Allocate on Storage Container, enter the disk size in GiB in the Size (GiB) field.
    5. If you selected Clone from Image Service, select the image you want to add to the disk in the Image field.
      All the images that you uploaded to Prism Central are available for selection. For more information about image configuration, see Image Management section in the Prism Central guide.
    6. Select the Bootable check box for the image that you want to use to start the VM.
    Note: You can add more than one disk and select the disk with which you want to boot up the VM.
  13. Under the Boot Configuration section, select a firmware type to boot the VM.
    • To boot the VM with legacy BIOS firmware, select Legacy BIOS.
    • To boot the VM with UEFI firmware, select UEFI. UEFI firmware supports larger hard drives, faster boot time, and provides more security features.
  14. (For GPU-enabled clusters only) To configure a vGPU, click the + icon under the vGPUs section and do the following:
    1. From the Vendor list, select the GPU vendor.
    2. From the Device ID list, select the device ID of the GPU.
    3. From the Mode list, select the GPU mode.
  15. Under the Categories section, select a category in the Key: Value list.
    Use this option to tag your VM to a defined category in Prism Central. The list options are available based on your Prism Central configuration. If you want to protect your application with a protection policy, select the category defined for the policy in your Prism Central. The Categories list is available only for Nutanix.
  16. To add a network adapter, click the + icon next to the Network Adapters (NICS) field.
    Figure. NIC

    The NIC list shows all the VLAN and overlay subnets. The VLAN subnets have direct association with the cluster. Therefore, when you select a VLAN subnet, the associated cluster is auto-populated in the Cluster list.
    The NICs of a VM can either use VLAN subnets or overlay subnets. For example, if you select an overlay subnet in NIC 1 and then add NIC 2, the NIC 2 list displays only the overlay subnets.
    If you select a VLAN subnet in NIC 1, any subsequent VLAN subnets belong to the same cluster. Similarly, if you select an overlay subnet, all subsequent overlay subnets belong to the same VPC.
  17. Configure the connection in your environment. For more information, see Configuring Connection in your Environment.
  18. Click Next.
  19. Add credentials for the environment. For more information, see Adding Credentials to the Environment. This step is optional.
  20. Click Save Environment & Project.
    The Environments tile on the Overview tab displays the number of environments you configured for the project. You can go to the Environments tab to view the details of the environment you configured for your project.
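The VM settings collected in steps 7 through 16, and the constraints called out there (a cluster is mandatory, one disk is selected as bootable, and NICs cannot mix VLAN and overlay subnets), can be sketched as a simple validation in Python. The field names and values below are illustrative only and do not reflect the Calm API schema:

```python
# Illustrative sketch of the VM settings collected in steps 7-16.
# Field names and values are examples, not the Calm API schema.

def validate_vm_config(cfg):
    """Check a few of the rules the environment editor enforces."""
    errors = []
    if not cfg.get("cluster"):
        errors.append("Selecting a cluster is mandatory.")
    bootable = [d for d in cfg["disks"] if d.get("bootable")]
    if len(bootable) != 1:
        errors.append("Exactly one disk must be marked bootable.")
    # NICs must use either VLAN subnets or overlay subnets, not a mix.
    kinds = {nic["subnet_type"] for nic in cfg["nics"]}
    if len(kinds) > 1:
        errors.append("NICs cannot mix VLAN and overlay subnets.")
    return errors

vm_config = {
    "os": "Linux",
    "cluster": "cluster-01",          # assumed cluster name
    "vm_name": "vm-@@{calm_time}@@",  # Calm macro for a unique name
    "vcpus": 2,
    "cores_per_vcpu": 1,
    "memory_gib": 4,
    "boot_type": "UEFI",
    "disks": [
        {"device_type": "Disk", "device_bus": "SCSI",
         "operation": "Clone from Image Service",
         "image": "centos-base", "bootable": True},
    ],
    "nics": [
        {"subnet": "vlan-110", "subnet_type": "VLAN"},
    ],
}

errors = validate_vm_config(vm_config)
print(errors)  # → []
```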

What to do next

You can use the environment details while configuring a blueprint for Nutanix or launching a blueprint.

Configuring AWS Environment

Environment configuration involves configuring VMs and adding credentials for the accounts that you added to your project.

About this task

Use this procedure to configure environment variables for AWS.

Before you begin

Ensure that you have configured a project and selected an AWS account. For more information, see Configuring Environments in Calm.

Procedure

  1. On the Overview tab, click the Create Environment button in the Environments tile.
    Figure. Create Environment

    The General tab of the Create Environment page appears.
  2. Enter a name and a description for the environment.
  3. Select the Set as default environment check box if you want to use it as the default environment to launch applications for the project.
    The Set as default environment check box is selected by default for the first environment you configure in your project.
  4. Click Next .
  5. On the Accounts tab, click the Select Account list, and select an AWS account.
    Figure. Select Account

    The Create Environment page appears.
  6. Select the AWS account in the left pane.
  7. Expand the VM Configuration section, and select either Windows or Linux as the operating system for the VM.
    Figure. VM Configuration

  8. Enter the name of the instance in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  9. Select the Associate Public IP Address check box to associate a public IP address with your AWS instance.
    If you do not select the Associate Public IP Address check box, ensure that the AWS account and Calm are on the same network for the scripts to run.
  10. Select an AWS instance type from the Instance Type list.
    Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
    The list displays the instances that are available in the AWS account. For more information, see AWS documentation.
  11. Select the region from the Region list and configure the following.
    Note: The list displays the regions that you selected while configuring the AWS settings.
    1. Select the availability zone from the Availability Zone list.
      An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS region. Availability Zones allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.
    2. Select the machine image from the Machine Image list.
      An Amazon Machine Image is a special type of virtual appliance that is used to create a virtual machine within the Amazon Elastic Compute Cloud. It serves as the basic unit of deployment for services delivered using EC2.
    3. Select the IAM role from the IAM Role list.
      An IAM role is an AWS Identity and Access Management entity with permissions to make AWS service requests.
    4. Select the key pairs from the Key Pairs list.
      A key pair (consisting of a private key and a public key) is a set of security credentials that you use to prove your identity when connecting to an instance.
    5. Select the VPC from the VPC list.
      Amazon Virtual Private Cloud (Amazon VPC) allows you to provision a logically isolated section of the AWS cloud where you can launch AWS resources in your defined virtual network.
      • Select the Include Classic Security Group check box to enable security group rules.
      • Select security groups from the Security Groups list.
  12. Enter or upload the AWS user data in the User Data field.
  13. Enter the AWS tags in the AWS Tags field.
    AWS tags are key and value pair to manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria.
  14. Under the Storage section, configure the following to boot the AWS instance with the selected image.
    Figure. Storage

    1. In the Device field, select the device to boot the AWS instance. The available options are based on the image you have selected.
    2. In the Size (GiB) field, enter the required size for the bootable device.
    3. In the Volume Type list, select the volume type. You can select General Purpose SSD , Provisioned IOPS SSD , or EBS Magnetic HDD .
      For more information on the volume types, see AWS documentation.
    4. (Optional) Select the Delete on termination check box to delete the storage when the instance is terminated.
    You can also add secondary storage devices by clicking the + icon next to the Storage section.
  15. Configure the connection in your environment. For more information, see Configuring Connection in your Environment.
  16. Click Next .
  17. (Optional) Add credentials for the environment. For more information, see Adding Credentials to the Environment.
  18. Click Save Environment & Project .
    The Environments tile on the Overview tab displays the number of environments you configured for the project. You can go to the Environments tab to view the details of the environment you configured for your project.
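
The settings from steps 8 through 14 roughly correspond to parameters of the EC2 RunInstances API. The following is a hedged sketch, not Calm's implementation: it builds a RunInstances-style parameter dictionary locally, and every value (AMI ID, instance type, key pair name, device name, sizes) is a placeholder assumption, not a default.

```python
import base64

# User data (step 12): a shell script the instance runs on first boot.
user_data = "#!/bin/bash\nyum -y update\n"

run_params = {
    "ImageId": "ami-0123456789abcdef0",               # Machine Image (step 11)
    "InstanceType": "t3.medium",                      # Instance Type (step 10)
    "KeyName": "my-key-pair",                         # Key Pairs (step 11)
    "Placement": {"AvailabilityZone": "us-east-1a"},  # Availability Zone (step 11)
    # The raw EC2 API expects user data as base64 (boto3 encodes it for you).
    "UserData": base64.b64encode(user_data.encode()).decode(),
    "TagSpecifications": [{                           # AWS Tags (step 13)
        "ResourceType": "instance",
        "Tags": [{"Key": "environment", "Value": "dev"}],
    }],
    "BlockDeviceMappings": [{                         # Storage (step 14)
        "DeviceName": "/dev/xvda",                    # boot device
        "Ebs": {
            "VolumeSize": 30,                         # Size (GiB)
            "VolumeType": "gp2",                      # gp2 = General Purpose SSD
            "DeleteOnTermination": True,              # Delete on termination
        },
    }],
}
print(run_params["BlockDeviceMappings"][0]["Ebs"]["VolumeType"])  # gp2
```

Building the dictionary separately from any API call makes it easy to inspect or log the exact instance configuration before provisioning.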

What to do next

You can use the environment details while configuring a blueprint for AWS or launching a blueprint.

Configuring VMware Environment

Environment configuration involves configuring VMs and adding credentials for the accounts that you added to your project.

About this task

Use this procedure to configure environment variables for VMware.

Before you begin

Ensure that you have configured a project and selected a VMware account. For more information, see Configuring Environments in Calm.

Procedure

  1. On the Overview tab, click the Create Environment button in the Environments tile.
    Figure. Create Environment

    The General tab of the Create Environment page appears.
  2. Enter a name and a description for the environment.
  3. Select the Set as default environment check box if you want to use it as the default environment to launch applications for the project.
    The Set as default environment check box is selected by default for the first environment you configure in your project.
  4. Click Next .
  5. On the Accounts tab, click the Select Account list, and select a VMware account.
    Figure. Select Account

    The Create Environment page appears.
  6. Select the VMware account in the left pane.
  7. Expand the VM Configuration section, and select either Windows or Linux as the operating system for the VM.
    Figure. VM Configuration

  8. Select the Compute DRS Mode check box to enable load sharing and automatic VM placement.
    Distributed Resource Scheduler (DRS) is a utility that balances computing workloads with available resources in a virtualized environment. For more information about DRS mode, see the VMware documentation .
    • If you selected Compute DRS Mode , then select the cluster where you want to host your VM from the Cluster list.
    • If you have not selected Compute DRS Mode , then select the host name of the VM from the Host list.
  9. Do one of the following:
    • Select the VM Templates radio button and then select a template from the Template list.

      Templates allow you to create multiple virtual machines with the same characteristics, such as resources allocated to CPU and memory or the type of virtual hardware. Templates save time and avoid errors when configuring settings and other parameters to create VMs. The VM template retrieves the list options from the configured vCenter.

      Note:
      • Install the VMware Tools on the Windows templates. For Linux VMs, install Open-vm-tools or VMware-tools and configure the Vmtoolsd service for automatic start-up.
      • Open-vm-tools is supported. When using Open-vm-tools , install Perl on the template.
      • Do not use a SysPrepped image as the Windows template image.
      • If you select a template that has an unsupported version of VMware Tools, a warning appears stating VMware tool or version is unsupported and could lead to VM issues .
      • You can also edit the NIC type when you use a template.

      For more information, refer to VMware KB articles.

    • Select the Content Library radio button, a content library in the Content Library list, and then select an OVF template or VM template from the content library.

      A content library stores and manages content (VMs, vApp templates, and other types of files) in the form of library items. A single library item can consist of one file or multiple files. For more information about the vCenter content library, see the VMware Documentation .

      Caution: Content Library support is currently a technical preview feature in Calm. Do not use any technical preview features in a production environment.
  10. If you want to use the storage DRS mode, then select the Storage DRS Mode check box and a datastore cluster from the Datastore Cluster list.
    Datastore clusters are referred to as storage pods in vCenter. A datastore cluster is a collection of datastores with shared resources and a shared management interface.
  11. If you do not want to use storage DRS mode, then do not select the Storage DRS Mode check box, and select a datastore from the Datastore list.
  12. In the VM Location field, specify the location of the folder in which the VM must be created when you deploy the blueprint. Ensure that you specify a valid folder name already created in your VMware account.
    To create a subfolder in the location you specified, select the Create a folder/directory structure here check box and specify a folder name in the Folder/Directory Name field.
    Note: Calm gives preference to the VM location specified in the environment you select while launching an application. For example, you specify a subfolder structure as the VM location in the blueprint and the top-level folder in the environment. When you select this environment while launching your application, Calm considers the VM location you specified in the environment and creates the VM at the top-level folder.
    Select the Delete empty folder check box to delete the subfolder you create within the specified location, in case the folder does not contain any VM resources. This option helps you to keep a clean folder structure.
  13. Enter the instance name of the VM in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  14. Select the CPU Hot Add check box if you want to increase the VCPU count of a running VM.
    Support for CPU Hot Add depends on the Guest OS of the VM.
  15. Update the vCPUs and Core per vCPU count.
  16. Select the Memory Hot Plug check box if you want to increase the memory of a running VM.
    Support for Memory Hot Plug depends on the Guest OS of the VM.
  17. Update the memory in the Memory field.
  18. Under Controller , click + to add the type of controller.
    You can select either SCSI or SATA controller. You can add up to three SCSI and four SATA controllers.
  19. Under the Disks section, click the + icon to add vDisks and do the following:
    1. Select the device type from the Device Type list.
      You can either select CD-ROM or DISK .
    2. Select the adapter type from the Adapter Type list.
      You can select IDE for CD-ROM.
      You can select SCSI , IDE , or SATA for DISK.
    3. Enter the size of the disk in GiB.
    4. In the Location field, select the disk location.
    5. If you want to add a controller to the vDisk, select the type of controller in the Controller list to attach to the disk.
      Note: You can add either SCSI or SATA controllers. The available options depend on the adapter type.
    6. In the Disk mode list, select the type of the disk mode. Your options are:
      • Dependent : Dependent disk mode is the default disk mode for the vDisk.
      • Independent - Persistent : Disks in persistent mode behave like conventional disks on your physical computer. All data written to a disk in persistent mode is written permanently to the disk.
      • Independent - Nonpersistent : Changes to disks in nonpersistent mode are discarded when you shut down or reset the virtual machine. With nonpersistent mode, you can restart the virtual machine with a virtual disk in the same state every time. Changes to the disk are written to and read from a redo log file that is deleted when you shut down or reset.
  20. Under the Tags section, select tags from the Category: Tag pairs field.
    You can assign tags to your VMs so you can view the objects associated with your VMs in your VMware account. For example, you can create a tag for a specific environment and assign the tag to multiple VMs. You can then view all the VMs that are associated with the tag.
  21. (Optional) To customize the default OS properties of the VM, select the Enable check box under VM Guest Customization and select a customization from the Predefined Guest Customization list.
  22. If you do not have any predefined customization available, select None and do the following.
    1. Select Cloud-init or Custom Spec .
    2. If you selected Cloud-init , enter or upload the script in the Script field.
    3. If you selected Custom Spec , enter the network details for the VM in the following fields:
      • Enter the hostname in the Hostname field.
      • Enter the domain in the Domain field.
      • Select timezone from the Timezone list.
      • Select Hardware clock UTC check box to enable hardware clock UTC.
      • Click the + icon to add network settings.
      • To automatically configure the network by using a DHCP server, select the Use DHCP check box and then skip to the DNS Settings section.
      • Enter a name for the network configuration you are adding to the VM in the Setting name field. The setting name identifies the saved configuration of the network that you want to connect to your VM.
      • Enter values in the IP Address , Subnet Mask , Default Gateway , and Alternative Gateway fields.
      • Under the DNS Settings section, enter values in the DNS Primary , DNS Secondary , DNS Tertiary , and DNS Search Path fields.
  23. Configure the connection in your environment. For more information, see Configuring Connection in your Environment.
  24. Click Next .
  25. (Optional) Add credentials for the environment. For more information, see Adding Credentials to the Environment.
  26. Click Save Environment & Project .
    The Environments tile on the Overview tab displays the number of environments you configured for the project. You can go to the Environments tab to view the details of the environment you configured for your project.
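
If you select Cloud-init in step 22, the Script field takes a standard cloud-config document. The following is a minimal hedged example; the hostname, username, and SSH key are placeholders, and the packages/runcmd entries reflect the template note above (install open-vm-tools and start the vmtoolsd service), not a Calm requirement.

```python
# A sample #cloud-config document of the kind you can paste into the
# Script field. All names and the key value below are placeholders.
cloud_config = """#cloud-config
hostname: web-01
users:
  - name: calmadmin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAAB3Nza-placeholder calm-user
packages:
  - open-vm-tools
runcmd:
  - systemctl enable --now vmtoolsd
"""

# cloud-init only processes the document if the first line is "#cloud-config".
print(cloud_config.splitlines()[0])  # #cloud-config
```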

What to do next

You can use the environment details while configuring a blueprint for VMware or launching a blueprint.

Configuring GCP Environment

Environment configuration involves configuring VMs and adding credentials for the accounts that you added to your project.

About this task

Use this procedure to configure environment variables for GCP.

Before you begin

Ensure that you have configured a project and selected a GCP account. For more information, see Configuring Environments in Calm.

Procedure

  1. On the Overview tab, click the Create Environment button in the Environments tile.
    Figure. Create Environment

    The General tab of the Create Environment page appears.
  2. Enter a name and a description for the environment.
  3. Select the Set as default environment check box if you want to use it as the default environment to launch applications for the project.
    The Set as default environment check box is selected by default for the first environment you configure in your project.
  4. Click Next .
  5. On the Accounts tab, click the Select Account list, and select a GCP account.
    Figure. Select Account

    The Create Environment page appears.
  6. Select the GCP account in the left pane.
  7. Expand the VM Configuration section, and select either Windows or Linux as the operating system for the VM.
    Figure. VM Configuration

  8. Under VM Configuration , enter the instance name of the VM in the Instance Name field. This field is pre-populated with a macro as a suffix to ensure name uniqueness.
  9. Select the zone from the Zone list.
    A zone is a physical location where you can host the VM.
  10. Select a machine type from the Machine Type list.
    The machine types are available based on your zone. A machine type is a set of virtualized hardware resources available to a virtual machine (VM) instance, including the system memory size, virtual CPU (vCPU) count, and persistent disk limits. In Compute Engine, machine types are grouped and curated by families for different workloads.
  11. Under the Disks section, click the + icon to add a disk.
    You can also mark the added vDisks runtime editable so you can add, delete, or edit the vDisks while launching the blueprint. For more information about runtime editable attributes, see Runtime Variables Overview.
  12. To use an existing disk configuration, select the Use existing disk check box, and then select the persistent disk from the Disk list.
    Figure. Disks

  13. If you have not selected the Use existing disk check box, then do the following:
    1. Select the type of storage from the Storage Type list. The available options are as follows.
      • pd-balanced : Use this option as an alternative to SSD persistent disks when you want to balance performance and cost.
      • pd-extreme : Use this option to use SSD drives for high-end database workloads. This option has higher maximum IOPS and throughput and allows you to provision IOPS and capacity separately.
      • pd-ssd : Use this option to use SSD drives as your persistent disk.
      • pd-standard : Use this option to use HDD drives as your persistent disk.
      The persistent disk types are durable network storage devices that your instances can access like physical disks in a desktop or a server. The data on each disk is distributed across several physical disks.
    2. Select the image source from the Source Image list.
      The images available for your selection are based on the selected zone.
    3. Enter the size of the disk in GB in the Size in GB field.
    4. To delete the disk configuration after the instance is deleted, select the Delete when instance is deleted check box under the Disks section.
  14. To add a blank disk, click the + icon under the Blank Disks section and configure the blank disk.
  15. To add networking details to the VM, click the + icon under the Networking section.
  16. To configure a public IP address, select the Associate Public IP address check box and configure the following fields.
    1. Select the network from the Network list and the subnetwork from the Subnetwork list.
    2. Enter a name of the network in the Access configuration Name field and select the access configuration type from the Access configuration type list.
      These fields appear when you select the Associate Public IP address check box.
  17. Under the SSH Key section, click the + icon and enter or upload the username key data in the Username field.
  18. Select the Block project-wide SSH Keys check box to block project-wide SSH keys on this instance.
  19. Under the Management section, do the following:
    1. Enter the metadata in the Metadata field.
    2. Select the security group from the Network Tags list.
      Network tags are text attributes you can add to VM instances. These tags allow you to make firewall rules and routes applicable to specific VM instances.
    3. Enter the key-value pair in the Labels field.
      A label is a key-value pair that helps you organize the VMs created with GCP as the provider. You can attach a label to each resource, then filter the resources based on their labels.
  20. Under the API Access section, do the following:
    1. Specify the service account in the Service Account field.
    2. Under Scopes, select Default Access or Full Access .
  21. Configure the connection in your environment. For more information, see Configuring Connection in your Environment.
  22. Click Next .
  23. (Optional) Add credentials for the environment. For more information, see Adding Credentials to the Environment.
  24. Click Save Environment & Project .
    The Environments tile on the Overview tab displays the number of environments you configured for the project. You can go to the Environments tab to view the details of the environment you configured for your project.
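
GCP enforces a strict format on the label key-value pairs you enter in step 19. The following validator is a hedged sketch covering the commonly documented ASCII subset (keys start with a lowercase letter, at most 63 characters, lowercase letters, digits, underscores, and hyphens only); it is an illustration, not Calm's or GCP's actual validation code.

```python
import re

# Key: must start with a lowercase letter; 1-63 chars total.
KEY_RE = re.compile(r"^[a-z][a-z0-9_-]{0,62}$")
# Value: may be empty; at most 63 chars from the same character set.
VALUE_RE = re.compile(r"^[a-z0-9_-]{0,63}$")

def valid_label(key: str, value: str) -> bool:
    """Return True if the key-value pair fits the documented label format."""
    return bool(KEY_RE.match(key)) and bool(VALUE_RE.match(value))

print(valid_label("env", "prod"))   # True
print(valid_label("Env", "prod"))   # False: keys must start with a lowercase letter
```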

What to do next

You can use the environment details while configuring a blueprint for GCP or launching a blueprint.

Configuring Azure Environment

Environment configuration involves configuring VMs and adding credentials for the accounts that you added to your project.

About this task

Use this procedure to configure environment variables for Azure.

Before you begin

  • Ensure that the following entities are already configured in the Azure account.
    • Resource group
    • Availability set
    • Network security group
    • Virtual network
    • Vault certificates
  • Ensure that you have configured a project and selected an Azure account. For more information, see Configuring Environments in Calm.

Procedure

  1. On the Overview tab, click the Create Environment button in the Environments tile.
    Figure. Create Environment

    The General tab of the Create Environment page appears.
  2. Enter a name and a description for the environment.
  3. Select the Set as default environment check box if you want to use it as the default environment to launch applications for the project.
    The Set as default environment check box is selected by default for the first environment you configure in your project.
  4. Click Next .
  5. On the Accounts tab, click the Select Account list, and select an Azure account.
    Figure. Select Account

    The Create Environment page appears.
  6. Select the Azure account in the left pane.
  7. Expand the VM Configuration section, and select either Windows or Linux as the operating system for the VM.
    Figure. VM Configuration

  8. Under VM Configuration , enter the instance name of the VM in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  9. Select a resource group from the Resource Group list or select the Create Resource Group check box to create a resource group.
    Each resource in Azure must belong to a resource group. A resource group is simply a logical construct that groups multiple resources together so you can manage the resources as a single entity. For example, you can create or delete resources as a group that share a similar life cycle, such as the resources for an n-tier application.

    The Resource Group list displays the resource groups that are associated with the subscriptions you selected in your Azure account. If you have not selected any subscriptions, Calm considers all the subscriptions available in the Azure service principal to display the resource groups. Each resource group in the list also displays the associated subscription.

  10. If you selected a resource group from the Resource Group list, then do the following:
    1. Select the geographical location of the datacenter from the Location list.
    2. Select Availability Sets or Availability Zones from the Availability Option list.
      You can then select an availability set or an availability zone. An availability set is a logical grouping that keeps VM resources isolated from each other to provide high availability within an Azure datacenter. An availability zone allows you to deploy your VM into a different datacenter within the same region.
    3. Select the hardware profile as per your hardware requirements from the Hardware Profile list.
      The number of data disks and NICs depends on the selected hardware profile. For information about the sizes of Windows and Linux VMs, see Windows and Linux Documentation.
  11. If you selected the Create Resource Group check box to create a resource group, then do the following:
    1. Select a subscription associated to your Azure account in the Subscription field.
    2. Enter a unique name for the resource group in the Name field.
    3. Select the geographical location of the datacenter that you want to add to the resource group in the Location list.
    4. Under Tags , enter a key and value pair in the Key and Value fields respectively.
      Tags are key and value pairs that enable you to categorize resources. You can apply a tag to multiple resource groups.
    5. If you want to automatically delete a resource group that contains no resources when you delete an application, select the Delete Empty Resource Group check box.
    6. Specify the location and hardware profile.
  12. Under the Secrets section, click the + icon and do the following:
    1. Enter a unique vault ID in the Vault ID field.
      These certificates are installed on the VM.
    2. Under Certificates , click the + icon.
    3. Enter the URL of the configuration certificate in the URL field.
      The URL of the certificate is uploaded to key vault as a secret.
    4. Enter the certificate store for the VM in the Store field.
      • For Windows VMs, specify the certificate store on the virtual machine to which the certificate is added. The specified certificate store is implicitly created in the LocalMachine account.

      • For Linux VMs, the certificate file is placed under the /var/lib/waagent directory with the file name <UppercaseThumbprint>.crt for the X509 certificate file and <UppercaseThumbprint>.prv for the private key. Both of these files are .pem formatted.

  13. (For Windows) Select the Provision Windows Guest Agent check box.
    This option indicates whether to provision the virtual machine agent on the VM. When this property is not specified in the request body, it defaults to true. This ensures that the VM agent is installed on the VM so that extensions can be added to the VM later.
  14. (For Windows) To indicate that the VM is enabled for automatic updates, select the Automatic OS Upgrades check box.
  15. Under the Additional Unattended Content section, click the + icon and do the following:
    1. Select a setting from the Setting Name list.
      You can select Auto Logon or First Logon Commands .
      Note: Guest customization is applicable only to images that allow or support guest customization.
    2. Enter or upload the XML content. See Sample Auto Logon and First Logon Scripts.
  16. Under the WinRM Listeners section, click the + icon and do the following:
    1. Select the protocol from the Protocol list.
      You can select HTTP or HTTPS .
    2. If you selected HTTPS, then select the certificate URL from the Certificate URL list.
  17. Under the Storage Profile section, select the Use Custom Image check box to use a custom VM image created in your subscription.
    You can then select a custom image or publisher-offer-SKU-version from the Custom Image list.
  18. Under the VM Image Details section, select an image type in the Source Image Type list.
    You can select Marketplace , Subscription , or Shared Image Gallery .
    Do one of the following:
    • If you selected Marketplace , then specify the publisher, offer, SKU, and version for the image.
    • If you selected Subscription , then select the custom image.
    • If you selected Shared Image Gallery , then select the gallery and the image.
  19. Under the OS Disk Details section, do the following:
    Figure. OS Disk Details

    1. Select the storage type from the Storage Type list.
      You can select Standard HDD , Standard SSD , or Premium SSD .
    2. Select a disk storage account from the Disk Storage list.
      This field is available only when the Use Custom Image check box is enabled.
    3. Select disk caching type from the Disk Caching Type list.
      You can select None , Read-only , or Read write .
    4. Select disk create option from the Disk Create Option list.
      You can select Attach , Empty , or From Image .
  20. Under the Network Profile section, add NICs as per your requirement and do the following for each NIC:
    Figure. Network Profile

    1. Select a security group from the Security Group list.
    2. Select a virtual network from the Virtual Network list.
    3. Under Public IP Config , enter a name and select an allocation method.
    4. Under Private IP Config , select an allocation method.
      If you selected Static as the allocation method, then enter the private IP address in the IP Address field.
  21. (Optional) Enter tags in the Tags field.
  22. Configure the connection in your environment. For more information, see Configuring Connection in your Environment.
  23. Click Next .
  24. (Optional) Add credentials for the environment. For more information, see Adding Credentials to the Environment.
  25. Click Save Environment & Project .
    The Environments tile on the Overview tab displays the number of environments you configured for the project. You can go to the Environments tab to view the details of the environment you configured for your project.
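
The certificate file naming described in step 12 for Linux VMs can be expressed as a small helper. This is a hedged sketch for illustration only: the helper name is hypothetical, the thumbprint value is a placeholder, and only the directory and file-name pattern come from the documentation above.

```python
import os

def waagent_cert_paths(thumbprint: str, base: str = "/var/lib/waagent"):
    """Return the (.crt, .prv) paths waagent uses for a vault certificate."""
    tp = thumbprint.upper()  # file names use the uppercase thumbprint
    return os.path.join(base, tp + ".crt"), os.path.join(base, tp + ".prv")

# Placeholder 40-hex-character thumbprint, not a real certificate.
crt, prv = waagent_cert_paths("ab12cd34ef56ab12cd34ef56ab12cd34ef56ab12")
print(crt)  # /var/lib/waagent/AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12.crt
```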

What to do next

You can use the environment details while configuring a blueprint for Azure or launching a blueprint.

Configuring Connection in your Environment

Perform the following steps to configure connection in your environment.

Procedure

  1. Expand the Connection section.
  2. To check the log-on status after creating the VM, select the Check log-in upon create check box.
  3. In the Credential list, select Add New Credential to add a new credential and do the following:
    1. Enter a name for the credential in the Credential Name field.
    2. Enter a username in the Username field.
    3. Select the secret type from the Secret Type list.
      You can either select Password or SSH Private Key .
    4. Do one of the following.
      • If you selected password, enter the password in the Password field.
      • If you selected SSH Private Key, enter or upload the SSH private key in the SSH Private Key field.
      (Optional) If the private key is password protected, click +Add Passphrase to provide the passphrase.
    5. If you want this credential as your default credential, select the Use as default check box.
    6. Click Done .
  4. Select an address from the Address list.

    You can either select the public IP address or private IP address of a NIC.

  5. Select the connection from the Connection Type list.
    Select SSH for Linux or Windows (Powershell) for Windows.
    The Connection Port field is automatically populated depending upon the selected Connection Type . For SSH, the connection port is 22 and for PowerShell the connection port is 5985 for HTTP and 5986 for HTTPS.
  6. If you selected Windows (Powershell) , then select the protocol from the Connection Protocol list. You can select HTTP or HTTPS .
  7. Enter the delay in seconds in the Delay field.

    The delay timer defines how long the system waits after the VM starts before running the check log-in script. This delay gives the guest customization script, IP assignment, and other services time to come up before the check log-in script runs.

  8. In the Retries field, enter the number of times the system must retry the log-on after each log-on failure.
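
The port selection in step 5 can be summarized as a small lookup. This is a hedged sketch of the defaults stated above (SSH on 22, WinRM on 5985 for HTTP and 5986 for HTTPS); the function itself is illustrative, not part of Calm.

```python
def connection_port(connection_type: str, protocol: str = "HTTP") -> int:
    """Return the default connection port for a Calm connection type."""
    if connection_type == "SSH":
        return 22                                  # SSH always uses 22
    if connection_type == "Windows (Powershell)":
        return 5986 if protocol == "HTTPS" else 5985  # WinRM ports
    raise ValueError(f"unknown connection type: {connection_type}")

print(connection_port("SSH"))                            # 22
print(connection_port("Windows (Powershell)", "HTTPS"))  # 5986
```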

Support for Multiple Credentials

Credentials abstract identity settings used to connect to an external system. You can configure multiple credentials of the same type (either SSH key or password) and define them under the Environment tab. You can then use the configured credentials when launching an application blueprint.

Adding Credentials to the Environment

Credentials are used to authenticate a user to access various services in Calm. Calm supports key-based and password-based authentication methods.

About this task

Use this procedure to add credentials.

Before you begin

Ensure that you have configured a project and created environments for the project. For more information, see Configuring Environments in Calm.

Procedure

  1. On the Credentials tab, click + Add Credentials .
  2. Enter a name of the credential in the Credential Name field.
  3. Enter a username in the Username field.
  4. Select a secret type from the Secret Type list.
    You can either select Password or SSH Private Key .
  5. Do one of the following.
    • If you selected password, enter the password in the Password field.
    • If you selected SSH Private Key, enter or upload the SSH private key in the SSH Private Key field.
    Optionally, if the private key is password protected, click +Add Passphrase to provide the passphrase.
    The type of SSH key supported is RSA. For information on how to generate a private key, see Generating SSH Key on a Linux VM or Generating SSH Key on a Windows VM.
  6. Click Save Environment & Project .

Quota Policy Overview

Quota policies enforce a usage limit on an infrastructure resource for projects and restrict project members from using more than their specified quota limits. Quotas ensure that a single project, or a few projects, do not overrun the infrastructure. If the cluster runs out of a resource, project members cannot use the resource even if the project has not reached its specified limit.

Quota policies also enforce a usage limit on an infrastructure resource at the provider account level to ensure that the resource consumption is within the specified quota limits across all projects of that provider account.

Note: Quotas do not reserve any specific amount of infrastructure resources.

Quota Allocation

Quotas are allocated at the account and project levels. Enforcement of resource quota depends on the following factors:

  • The status of the policy engine.

    You must enable the policy engine to enforce resource quota policies. For more information, see Enabling Policy Engine.

  • The resource quotas you allocate to the Nutanix and VMware provider accounts. For more information, see Allocating Resource Quota to an Account.
  • The resource quotas you allocate for a project at the project level. For more information, see Managing Quota Limits for Projects.
  • The resource quotas you allocate to the Nutanix and VMware accounts within a project. For more information, see Adding Accounts to a Project.

Project-Level Quota Allocation

You can define resource quota limits at the project level for the projects that you create in Calm or in Prism Central. The Policies tab of the Project page in Calm provides a unified view of all the resource quota limits that you defined for the project and the accounts within the project. You can manage all your project-specific quota definitions on the Policies tab. For more information on how to manage project-level quota limits, see Managing Quota Limits for Projects.

You must consider these conditions while allocating quotas at the project level.

  • You can define resource quota limits for a project without defining quota limits for the associated Nutanix or VMware accounts.
  • The quota limits you define at the project level cannot be less than the sum of quotas you allocated to different accounts within the project. You can, however, increase the project-level quota limits.
  • The project-level quota limits cannot be more than the sum of quotas allocated at the account level to the associated providers. For example, if your project has a Nutanix account and a VMware account, the project-level quota limits cannot be more than the sum of quotas allocated globally to the associated Nutanix and the VMware accounts.
  • When you disable quota checks at the project level, Calm does not perform quota checks for the actions that are performed within the project from the time it is disabled.
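The two constraints above can be expressed as a simple validity check: the project-level limit must lie between the sum of the account quotas inside the project and the sum of the global allocations for the associated providers. The following sketch illustrates this for a single resource such as vCPU; the function and the example numbers are illustrative, not part of Calm.

```python
def validate_project_quota(project_limit, account_limits_in_project, global_account_limits):
    """Sketch of the project-level quota constraints: the project limit cannot
    be less than the sum of per-account quotas within the project, and cannot
    exceed the sum of the quotas allocated globally to those providers."""
    lower = sum(account_limits_in_project)
    upper = sum(global_account_limits)
    return lower <= project_limit <= upper

# A project with a Nutanix account (8 vCPU) and a VMware account (4 vCPU),
# with global allocations of 16 and 8 vCPU respectively:
print(validate_project_quota(12, [8, 4], [16, 8]))  # True
print(validate_project_quota(10, [8, 4], [16, 8]))  # False: below the in-project sum
```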

Quota Checks

Calm performs quota checks for every resource provisioning request. Quota check happens for multi-VM applications, single-VM applications, and the VMs that are created from Prism Central within a project.

  • For multi-VM applications, quota check happens when you launch a blueprint, update an application, or perform a scale-out action to increase the number of replicas of a service deployment.
  • For single VM applications, quota check happens when you launch a blueprint or update an application.
  • For successful launch of a blueprint, application update, or scale-out, the requested resource must be within the quota limit allocated to the associated provider and project.
  • For quota consumption of running applications after enabling the policy engine, you can wait for the platform sync to happen or run the platform sync manually. After the first update, all future updates will happen instantly. For more information on how to run platform sync, see Synchronizing Platform Configuration Changes.
  • In case of a quota violation, a notification is displayed with details such as the associated project, the associated account (if applicable), and the reasons for the violation.
  • In case of a scale down or application delete action, the consumed resources are released back as available quotas and added to the total available quota.
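The check-and-release behavior of quota enforcement can be sketched as a simple ledger: a provisioning request is admitted only if it fits within the remaining quota, and a scale-down or delete returns the consumed amount to the available pool. This is a conceptual model, not Calm's implementation.

```python
class QuotaLedger:
    """Hypothetical sketch of quota checks for a single resource."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def request(self, amount):
        """Admit a launch/update/scale-out only if it fits within the limit."""
        if self.used + amount > self.limit:
            return False          # quota violation: reject the request
        self.used += amount
        return True

    def release(self, amount):
        """Scale down or application delete returns quota to the pool."""
        self.used = max(0, self.used - amount)

ledger = QuotaLedger(limit=10)
print(ledger.request(8))   # True
print(ledger.request(4))   # False: would exceed the limit
ledger.release(8)          # application deleted
print(ledger.request(4))   # True
```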

Reporting

The Prism Admin and Project Admin can view project-wise usage of infrastructure resources for each cluster. For more information, see Viewing Quota Utilization Report and Managing Quota Limits for Projects.

Managing Quota Limits for Projects

The Policies tab of a project provides you the options to define and manage quota limits for the project and its associated Nutanix and VMware accounts.

About this task

You can do the following as part of the quota limit management at the project level:

  • Enable or disable project-level quota checks.
  • Assign quota limits for the project.
  • Edit quota limits for the associated Nutanix and VMware accounts within the project. For information on allocating resource quota limits to the associated accounts, see Adding Accounts to a Project.
  • View quota utilization report.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears listing all your existing projects.
  2. Do one of the following:
    • Click the +Create Project button to create a new project and define quota limits. For more information about creating a project, see Creating a Project.
    • Click a project name in the list of existing projects to define quota limits for that project.
  3. Configure your project with users, accounts, and environments. For more information, see Projects Overview.
  4. Click the Policies tab of the project.
    Figure. Policies Tab

  5. Ensure that the Quotas tab is selected in the left pane.
  6. To enable quota checks at the project level, enable the Quotas toggle button in the right pane.
  7. To assign resource quota limits at the project level, enter quota values for vCPU , Memory , and Disk for the project.
    • The quota limits you define at the project level for vCPU, Memory, and Disk must be equal to or more than the sum of quotas you allocated to different accounts within the project.
    • The Project/Global Quota row shows the total quota limit defined for the different associated accounts within the project and globally at the account levels. The project-level quota limits must be equal to or less than the sum of the quota allocated at the account level to the associated providers globally.
    • If you hover your mouse over the status bar of a resource in the Quota Utilization row, you can view the resources consumed and the resources allocated to the project.
  8. To manage the provider quota limits within the project, expand the provider account in the Provider Quotas section and do the following:
    Figure. Quota Definition

    1. View the quota limits (vCPU, memory, and disk) allocated to the provider account within the project.
    2. Click Edit to enable and add the quota limit or modify the existing quota limit.
      The Edit Account window opens where you can enable quotas for the account, add quota limits, or modify the existing limits.
    The Available/Total row shows the available resources quota and the total quota allocated to the provider. The Physical Capacity row shows the used and total physical capacity. Use these details while allocating resource quotas to the cluster.
  9. To view the resource utilization, quota utilization, and application quota utilization at the project level, click Quota Utilization Report .
    • Use the Resource Utilization tab to view the total utilization of vCPU, Memory, and Disk at the project level. You can also view the utilization of infrastructure resources by each cluster of the accounts associated with the project.
    • Use the Quota Utilization tab to view detailed utilization data at the level of each cluster of the associated accounts. You can view the quota allocated and quota used by each account. You can also view the quota allocated for each cluster and the percentage of quota utilization at the cluster level.
    • Use the App Quota Utilization tab to review the resource quota utilization in terms of applications in Calm.
  10. To disable quota checks for the actions that are performed within the project, disable the Quotas toggle button.

Creating a Snapshot Policy

A snapshot policy allows you to define rules to create and manage snapshots of application VMs that run on a Nutanix platform. The policy determines the overall intent of the snapshot creation process and the snapshot expiration. You can create rules in your snapshot policy to manage your snapshots on a local cluster, on a remote cluster, or both. Perform the following steps to create your snapshot policy.

About this task

For information on the snapshot configuration and creation, see Snapshot and Restore for Nutanix Platform.

Procedure

  1. Click the Projects icon in the left pane.
    The Projects page appears listing all your existing projects.
  2. Do one of the following:
    • Click the +Create Project button to create a new project and create snapshot policies within the project. For more information about creating a project, see Creating a Project.
    • Click a project name in the list of existing projects to create snapshot policies within the project.
  3. Ensure that you have your project configured with users, accounts, and environments. For more information, see Projects Overview.
  4. On the Policies tab, click Snapshot in the left pane.
  5. Click +Create Snapshot Policy .
    The Create Snapshot Policy page appears.
    Figure. Create Snapshot Policy

  6. In the Policy Name field, enter a name for the snapshot policy.
  7. In the Policy Description field, enter a description for the snapshot policy.
  8. Select the Set as default snapshot policy check box to make this your default snapshot policy.
  9. Under Primary Site, select a primary environment.
  10. Select an account in the primary environment to which you want to associate the snapshot policy.
    Selecting an account in the Account list enables Local Snapshots . You can view all the clusters that you allowed for the account in the Primary Cluster column under Snapshot Rules.
  11. Under Snapshot Rules, in the Snapshot Expiry field for each cluster, specify the number of days after which the snapshot should expire.
    Note: The storage cost of a snapshot depends on the expiration period you specify for the cluster. The longer the expiration period, the higher the storage cost. The default value is zero days, which indicates that the snapshot never expires.
    If you do not want to include an allowed cluster in the policy, click the Delete icon next to the cluster. You can use the + Add Rule option to include a deleted cluster to the policy.
  12. To enable remote snapshots in your policy, do the following:
    Figure. Remote Snapshots

    1. Enable the toggle button next to Remote Snapshots .
      You can enable remote snapshots if your target environment and the associated account have multiple allowed clusters, and you want to use one of the clusters to store snapshots. Remote snapshots are particularly useful when your Prism Central has a compute-intensive cluster managing workloads and a storage-intensive cluster managing your data, snapshots, and so on.
      You can use the toggle button at any time to enable or disable remote snapshots in your policy.
    2. Click + Add Rule .
    3. From the Primary Cluster list, select the primary cluster where the snapshots of the VMs are taken.
    4. From the Target Cluster list, select the target cluster where you want to store the snapshots.
    5. From the VM Categories list, select the VM category.
    6. In the Snapshot Expiry field for each cluster, specify the number of days after which the snapshot should expire.
    7. Click + Add Rule and repeat the steps to add more primary and target clusters to the rule.
  13. Click Save Snapshot Policy .
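The expiry rule described in the procedure (zero days means the snapshot never expires) can be sketched as follows; the function name and sample dates are illustrative only.

```python
from datetime import date, timedelta

def snapshot_expiry(created_on, expiry_days):
    """Return the expiry date for a snapshot, or None when expiry_days is 0,
    which means the snapshot never expires (sketch of the rule above)."""
    if expiry_days == 0:
        return None
    return created_on + timedelta(days=expiry_days)

print(snapshot_expiry(date(2022, 7, 25), 7))   # 2022-08-01
print(snapshot_expiry(date(2022, 7, 25), 0))   # None (never expires)
```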

Calm Blueprints Overview

A blueprint is the framework for every application that you model by using Calm. Blueprints are templates that describe all the steps that are required to provision, configure, and execute tasks on the services and applications that you create.

You create a blueprint to represent the architecture of your application and then run the blueprint repeatedly to create an instance, provision, and launch applications.

A blueprint also defines the lifecycle of an application and its underlying infrastructure; starting from the creation of the application to the actions that are carried out on a blueprint until the termination of the application.

You can use blueprints to model the applications of various complexities; from simply provisioning a single virtual machine to provisioning and managing a multi-node, multi-tier application.

Building Blocks of a Blueprint

Calm uses services, application profiles, packages, substrates, and actions as building blocks for a blueprint to define applications.

  • Services

    An application is made up of multiple components (or services) working together. The architecture of an application is composed of compute, storage, network, and their connections and dependencies. Services are logical entities that are exposed by an IP address. End users and services communicate with each other over a network through their exposed IP addresses and ports. For more information, see Services Overview.

  • Application Profiles

    Any useful blueprint requires infrastructure for instantiation. A blueprint can specify the exact infrastructure, or the choice can be left entirely to the blueprint user at the time of instantiation.

    An application profile provides different combinations of the service, package, and VM (infrastructure choices) while configuring a blueprint. The application profile allows you to use the same set of services and packages on the different platforms. You select an application profile while launching your blueprint.

    Application profiles determine where an application should run, for example, on a Nutanix provider account or on an Azure account. Application profiles also control the T-shirt sizing of an application. T-shirt sizing means that the value of a variable might change based on the selection of a small or a large instance of an application.

    If the Showback feature is enabled, the application profile also displays the service cost of the resources used for an application.

    Figure. Application Profile
  • Package (Install and Uninstall)

    Package Install and Uninstall are operations that are run when you first launch a blueprint or when you finally delete the entire application. In other words, these operations are run during the Create or Delete profile actions. Package Install and Uninstall are unique to each application profile, which means that the tasks or the task contents can vary depending upon the underlying cloud or the size.

    Package install is commonly used for installing software packages. For example, installing PostgreSQL with sudo yum -y install postgresql-server postgresql-contrib .

  • Substrates

    Substrates are a combination of the underlying cloud and the virtual machine instance. When you select the desired cloud, Calm displays all of the fields required for creating a virtual machine instance on that particular cloud. The combination of all these fields constitutes a substrate. Substrates are the infrastructure abstraction layer for Calm. Calm can quickly change where or how applications are deployed by simply changing the substrate.

  • Actions

    Actions are runbooks to accomplish a particular task on your application. You can use actions to automate any process such as backup, upgrade, new user creation, or clean-up, and enforce an order of operations across services. For more information, see Actions Overview.

Other Configurational Components

Calm also has a few other components that you can use while configuring your blueprints.

  • Macros

    Calm macros are part of a templating language for Calm scripts. These are evaluated by Calm's execution engine before the script is run. Macros help in making scripts generic and creating reusable workflows. For more information, see Macros Overview.

  • Variables

    Variables are either user defined or added to the entities by Calm. Variables are always present within the context of a Calm entity and are accessible directly in scripts running on that entity or any of its child entities. For more information, see Variables Overview.

  • Categories

    Categories (or tags) are metadata labels that you assign to your cloud resources to categorize them for cost allocation, reporting, compliance, security, and so on. Each category is a combination of key and values. For more information, see Categories Overview.

  • Dependencies

    Dependencies are used to define the dependence of one service in your application on another service or multiple other services for properties such as IP addresses and DNS names. For example, if service 2 is dependent on service 1, then service 1 starts first and stops after service 2.

    For information about how to define dependencies between services, see Setting up the Service Dependencies.

    Figure. Dependencies
    Note: If there are no dependencies between tasks in a service, Calm runs the tasks in any order or even in parallel.

Blueprint Types

You can configure the following blueprint types in Calm.

  • Single-VM Blueprint

    A single-VM blueprint is a framework that you can use to create and provision an instance and launch applications that require only one virtual machine. Single-VM blueprints enable you to quickly provide Infrastructure-as-a-Service (IaaS) to your end users. For more information, see Creating a Single-VM Blueprint.

  • Multi-VM Blueprint

    A multi-VM blueprint is a framework that you can use to create an instance, provision, and launch applications requiring multiple VMs. You can define the underlying infrastructure of the VMs, application details, and actions that are carried out on a blueprint until the termination of the application. For more information, see Creating a Multi-VM Blueprint.

Blueprint Editor

The blueprint editor provides a graphical representation of various components that allow you to visualize and configure the components and their dependencies in your environment.

Figure. Blueprint Editor

Use the Blueprints tab to perform actions, such as:

  • Create application blueprints for single-VM or multiple-VM architectures. For more information, see Creating a Single-VM Blueprint and Creating a Multi-VM Blueprint.
  • Add update configuration. For more information, see Update Configuration for VM.
  • Add configuration for snapshots and restore. For more information, see Blueprint Configuration for Snapshots and Restore.
  • Publish blueprints. For more information, see Submitting a Blueprint for Approval.
  • Launch blueprints. For more information, see Launching a Blueprint.
  • Upload existing blueprints from your local machine. For more information, see Uploading a Blueprint.
  • View details of your blueprints. For more information, see Viewing a Blueprint.
  • Edit details of an existing blueprint. For more information, see Editing a Blueprint.

Services Overview

Services are the virtual machine instances, existing machines, or bare-metal machines that you can provision and configure by using Calm. You can provision either a single service instance or multiple services based on the topology of your application. A service exposes only an IP address and the ports on which requests are received. After a service is configured, you can clone or edit the service as required.

A service includes the following entities:

VM

A VM defines the configuration of the virtual machine instance, the platform on which the VM will be installed, and the connection information of the machine. For example, as shown in the following figure, you need to define the name, cloud, operating system, IP address, and the connection information for an existing machine.

Figure. VM Tab

Package

A package enables you to install and uninstall software on an existing machine or bare-metal machine by using a script. You need to provide the credentials of the VM on which you want to run the script. A sample script is shown in the following figure. A package also defines the port number and the protocol that is used to access the service.

Figure. Package Tab

Service

A service enables you to create the variables that are used to define the service-level tasks and service-level actions. As part of the service, you can also define the number of replicas that you want to create of a service. The maximum number of replicas allowed is 300.

Figure. Service Tab

For information about how to configure a service, see Configuring Nutanix and Existing Machine VM, Package, and Service.

Macros Overview

Calm macros are part of a templating language for Calm scripts. These are evaluated by Calm's execution engine before the script is run.

Macros enable you to access the value of variables and properties that are set on entities. The variables can be user defined or system generated. For more information, see Variables Overview.

Macro Usage

Macros help in making scripts generic and creating reusable workflows. You can use macros in tasks within the blueprints or in the configuration of Calm entities, such as the VM name.

Macro Syntax

Macros require a set of delimiters for evaluation. These are @@{ and }@@ . Everything within these delimiters is parsed and evaluated. For example,

  • To concatenate the value of a path and a string variable, you can use cd "@@{path + '/data'}@@" in your script.
  • To access credentials, you can use the @@{cred_name.username}@@ and @@{cred_name.secret}@@ formats, where cred_name is the name of the credential.
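The delimiter behavior can be illustrated with a minimal macro expander: find each @@{ ... }@@ span, evaluate the expression inside against a set of variable values, and substitute the result. This is only a sketch of the idea; Calm's actual execution engine is richer and does not work this way internally.

```python
import re

def expand_macros(text, variables):
    """Replace each @@{ ... }@@ span with the evaluated expression, using
    `variables` as the only names in scope (illustrative sketch only)."""
    def evaluate(match):
        expression = match.group(1)
        return str(eval(expression, {"__builtins__": {}}, variables))
    return re.sub(r"@@\{(.*?)\}@@", evaluate, text)

# The concatenation example from the text:
script = 'cd "@@{path + \'/data\'}@@"'
print(expand_macros(script, {"path": "/opt/app"}))  # cd "/opt/app/data"
```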

Supported Entities

Macros support the following entities.

  • Application
  • Deployment
  • Service
  • Package
  • Virtual machine
  • Runbooks

Supported Data Types

Macros support the following data types.

Table 1. Supported Data Types
Data Type Usage
String @@{"some string"}@@ or @@{'some string'}@@
Note: Newline or other such special characters are not supported. You can use \ to escape quotes.
Numbers Supports integer and float. For example, @@{ 10 + 20.63 }@@
Note: All variables are treated as strings.

Supported Operations

Macros support the following operations.

  • Supports basic binary operations on numbers. For example, @@{(2 * calm_int(variable1) + 10 ) / 32 }@@.
  • Supports string concatenation. For example, @@{ foo + bar }@@.
  • Supports slicing for strings. For example, @@{foo[3:6]}@@.
    Note: For a comma separated value, slicing splits the string on comma (,). For example, @@{"x,y,z"[1]}@@ results in y.
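The comma-aware indexing described in the note can be sketched as follows: when the value contains commas, indexing selects a comma-separated field rather than a character. The helper below is a hypothetical illustration of that behavior, not a Calm API.

```python
def calm_slice(value, index):
    """Sketch of the slicing note above: for a comma-separated value,
    indexing splits on commas first; otherwise it indexes characters."""
    parts = value.split(",")
    if len(parts) > 1:
        return parts[index]
    return value[index]

print(calm_slice("x,y,z", 1))   # y  (comma-separated: split, then index)
print(calm_slice("foobar", 3))  # b  (plain string: character index)
```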

Macros of an Array Service

Calm allows you to access macros of an array service using a special macro which starts with calm_array . You can configure a VM with replicas and access the common macros of all the replicas. For example, you can:

  • Use the following syntax to retrieve the name of all the instances of VM separated by commas.

    @@{calm_array_name}@@

  • Use the following syntax to retrieve the IP address of all the instances of VM separated by commas.

    @@{calm_array_address}@@

  • Use the following syntax to retrieve the ID of all the instances of VM separated by commas.

    @@{calm_array_id}@@

Built-in Macros

The following table lists the built-in macros that you can use to retrieve and display the entities.

Table 1. Built-in Macros
Macro Usage
@@{calm_array_index}@@ Index of the entity within an array
@@{calm_blueprint_name}@@ Name of the blueprint from which the application was created
@@{calm_blueprint_uuid}@@ Universally unique identifier (UUID) of the blueprint from which the application was created
@@{calm_application_name}@@ Name of the application
@@{calm_application_uuid}@@ UUID of the application
@@{calm_uuid}@@ UUID of the entity within the application on which the current task is running
@@{calm_random}@@ A random number is generated each time this is used. This will be evaluated each time and should not be used in fields such as VM name.
@@{calm_unique}@@ A random number that is unique to this replica. This will be evaluated to the same value across runs.
@@{calm_jwt}@@ JWT for the currently logged in user for API authentication.
@@{calm_now}@@

@@{calm_today}@@
The current time stamp
@@{calm_time("<format>")}@@ The current time in the specified format
@@{calm_year("YYYY")}@@

@@{calm_year("YY")}@@
The current year in YYYY or YY format
@@{calm_month("short")}@@

@@{calm_month("long")}@@
Name of the current month in long or short format
@@{calm_day("month")}@@

@@{calm_day("year")}@@
Numeric day of the month or year
@@{calm_weeknumber}@@

@@{calm_weeknumber("iso")}@@
ISO numeric week of the year
@@{calm_weekday("number")}@@

@@{calm_weekday("name_short")}@@

@@{calm_weekday("name_long")}@@
Day of the week as a number, short name, or long name
@@{calm_hour("12")}@@

@@{calm_hour("24")}@@

@@{calm_hour("am_pm")}@@
Numeric hour of the day in 12-hour or 24-hour format, or the AM/PM indicator
@@{calm_minute}@@ Numeric minute
@@{calm_second}@@ Numeric second
@@{calm_is_weekday}@@ Displays 1 if the current day is a weekday
@@{calm_is_long_weekday}@@ Displays 1 if the current day is a weekday from Monday to Saturday
@@{calm_is_within("time1", "time2")}@@ Displays 1 if the current time is within the time1 and time2 range
@@{calm_project_name}@@ Displays the project name
@@{calm_username + "@nutanix.com"}@@ Displays the username (here concatenated with a domain string)
@@{calm_float("32.65") * 2}@@

@@{calm_int(calm_array_index) + 1}@@
Typecast to integer. This is useful for binary operations.
@@{calm_string(256) + "-bit"}@@

@@{"xyz" + calm_string(42)}@@
Typecast to string. This is useful for string concatenation.
@@{calm_b64encode(api_response)}@@

@@{calm_b64encode("a,b,c")}@@
Base64 encode the data passed to this macro.
@@{calm_b64decode(b64_encoded_data)}@@

@@{calm_b64decode("YSxiLGM=")}@@
Base64 decode the data passed to this macro.

Platform Macros

You can access the properties of a VM by using the platform macros. The following section describes the macros to access the VM properties for different providers.

Table 1. AHV platform Macros
Macro Usage
@@{platform}@@ To access all the properties of a VM.
@@{platform.status.cluster_reference.uuid}@@ To access the UUID of the cluster or the Prism Element.
@@{platform.status.resources.nic_list[0].mac_address}@@ To access the MAC address.
Note: Use the nic_list index to access the MAC address of a specific NIC.
@@{platform.status.resources.nic_list[0].subnet_reference.name}@@ To access the NIC name.
@@{platform.status.resources.power_state}@@ To get the state of the VM.
@@{platform.status.num_sockets}@@ To access the number of sockets of the VM.
Note: The @@{platform}@@ macro stores the GET response of the VM. You can access any VM information that is available through the GET API response.
Table 2. VMware platform Macros
Macro Usage
@@{platform}@@ To access all the properties of a VM.
@@{platform.datastore[0].Name}@@ To access the datastore name.
@@{platform.num_sockets}@@ To access the number of sockets of the VM.
Note: The @@{platform}@@ macro stores the GET response of the VM. You can access any VM information that is available through the GET API response.
Table 3. GCP platform Macros
Macro Usage
@@{platform}@@ To access all the properties of a VM.
@@{platform.creationTimestamp}@@ To get the VM creation time stamp.
@@{platform.selfLink}@@ To access the self link of the VM.
@@{platform.networkInterfaces[0].subnetwork}@@ To access the network details of the VM.
Note: The @@{platform}@@ macro stores the GET response of the VM. You can access any VM information that is available through the GET API response.
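Since the @@{platform}@@ macro stores the GET response of the VM, a platform macro path is essentially a dotted walk through that JSON document, with bracketed indexes for lists such as nic_list. The following sketch resolves such a path against a response dict; the sample response is made up for illustration.

```python
import re

def resolve(response, path):
    """Resolve a dotted platform-macro path such as
    'status.resources.nic_list[0].mac_address' against a GET response dict."""
    value = response
    for part in path.split("."):
        match = re.fullmatch(r"(\w+)\[(\d+)\]", part)
        if match:
            value = value[match.group(1)][int(match.group(2))]  # list access, e.g. nic_list[0]
        else:
            value = value[part]                                  # plain key access
    return value

# Hypothetical (abbreviated) GET response for an AHV VM:
vm = {"status": {"resources": {"nic_list": [{"mac_address": "50:6b:8d:00:00:01"}],
                               "power_state": "ON"}}}
print(resolve(vm, "status.resources.nic_list[0].mac_address"))  # 50:6b:8d:00:00:01
print(resolve(vm, "status.resources.power_state"))              # ON
```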

Endpoint Macros

The following table lists the endpoint macros for HTTP, Linux, and Windows endpoint types.

Table 1. HTTP
Macro Usage
@@{endpoint.name}@@ Name of the endpoint
@@{endpoint.type}@@ Type of the endpoint
@@{endpoint.length}@@ Number of IP Addresses in the endpoint
@@{endpoint.index}@@ Index of the IP address or VM in a given endpoint
@@{endpoint.base_url}@@ Base URL of the HTTP endpoint
@@{endpoint.connection_timeout}@@ Time interval in seconds after which the connection attempt to the endpoint stops
@@{endpoint.retry_count}@@ Number of attempts the system performs to create a task after each failure
@@{endpoint.retry_interval}@@ Time interval in seconds for each retry if the task fails
@@{endpoint.tls_verify}@@ Verification for the URL of the HTTP endpoint with a TLS certificate
@@{endpoint.proxy_type}@@ HTTP(s) proxy/SOCKS5 proxy to use
@@{endpoint.base_urls}@@ Base URLs of HTTP endpoints
@@{endpoint.authentication_type}@@ Authentication method to connect to an HTTP endpoint: Basic or None
@@{endpoint.credential.username}@@ User name in the credential to access the endpoint
@@{endpoint.credential.secret}@@ Credential secret type to access the endpoint: Passphrase or SSH Private Key
Table 2. Linux
Macro Usage
@@{endpoint.name}@@ Name of the endpoint
@@{endpoint.type}@@ Type of the endpoint
@@{endpoint.length}@@ Number of IP Addresses in the endpoint
@@{endpoint.index}@@ Index of the IP address or VM in a given endpoint
@@{endpoint.address}@@ IP address to access the endpoint device
@@{endpoint.port}@@ Port number to access the endpoint
@@{endpoint.value_type}@@ Target type of the endpoint: IP address or VM
@@{endpoint.addresses}@@ IP addresses to access endpoint devices
@@{endpoint.credential.secret}@@ Credential secret type to access the endpoint: Passphrase or SSH Private Key
@@{endpoint.credential.username}@@ User name in the credential to access the endpoint
Table 3. Windows
Macro Usage
@@{endpoint.name}@@ Name of the endpoint
@@{endpoint.type}@@ Type of the endpoint
@@{endpoint.length}@@ Number of IP Addresses in the endpoint
@@{endpoint.index}@@ Index of the IP address or VM in a given endpoint
@@{endpoint.address}@@ IP address to access the endpoint device
@@{endpoint.port}@@ Port number to access the endpoint
@@{endpoint.value_type}@@ Target type of the endpoint: IP address or VM
@@{endpoint.connection_protocol}@@ Connection protocol to access the endpoint: HTTP or HTTPS
@@{endpoint.addresses}@@ IP addresses to access endpoint devices
@@{endpoint.credential.secret}@@ Credential secret type to access the endpoint: Passphrase or SSH Private Key
@@{endpoint.credential.username}@@ User name in the credential to access the endpoint
Note: To call an endpoint variable from another object, replace endpoint with the other endpoint name.
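The substitution behavior of these macros can be illustrated with a small sketch. The following plain-Python snippet only mimics the result of Calm's macro expansion for a multi-address endpoint; it is not Calm's implementation, and the endpoint name, addresses, and port are hypothetical examples.

```python
# Illustrative sketch of how Calm substitutes endpoint macros into a task
# script (mimics the substitution result only; not Calm's implementation).
# The endpoint name, addresses, and port below are hypothetical examples.
endpoint = {
    "name": "web_servers",
    "addresses": ["10.0.0.3", "10.0.0.4", "10.0.0.5"],
    "port": 443,
}

def expand(template):
    """Replace a few endpoint macros with their values."""
    out = template.replace("@@{endpoint.name}@@", endpoint["name"])
    out = out.replace("@@{endpoint.length}@@", str(len(endpoint["addresses"])))
    out = out.replace("@@{endpoint.addresses}@@", ",".join(endpoint["addresses"]))
    out = out.replace("@@{endpoint.port}@@", str(endpoint["port"]))
    return out

print(expand("@@{endpoint.name}@@ resolves @@{endpoint.length}@@ targets"))
# web_servers resolves 3 targets
print(expand("@@{endpoint.addresses}@@:@@{endpoint.port}@@"))
# 10.0.0.3,10.0.0.4,10.0.0.5:443
```

Note that @@{endpoint.address}@@ (singular) yields only the address of the current target, while @@{endpoint.addresses}@@ yields the comma-separated list.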

Runbook Macros

The following table lists the runbook macros.

Table 1. Runbook Macros
Macro Usage
@@{calm_runbook_name}@@ Name of the runbook
@@{calm_runbook_uuid}@@ Universally unique identifier (UUID) of the runbook

Virtual Machine Common Properties

The following table lists the common properties of the virtual machine that are available for usage.

Table 1. Virtual Machine Common Properties
Properties Usage
@@{address}@@ IP address of the instance that is used by Calm to access the VM
@@{id}@@ ID of the platform identifier
@@{name}@@ Name of the VM or container
@@{mac_address}@@ MAC address of the VM
@@{platform}@@ Platform response for a GET query. This is the response in JSON format from the provider.
Note: For an existing machine, only the address property is applicable.

Variables Overview

Macros provide a way to access the values of variables that you set on entities. Variables are either user defined or added to the entities by Calm. Variables are always present within the context of a Calm entity and are accessible directly in scripts running on that entity or any of its child entities.

Note: User defined variables in Calm cannot have macros in their values.

Variable Value Inheritance

A child entity can access the variable value of its parent entity unless another entity overrides the properties or variables.

For example, if Variable1 is a variable that you defined on the application profile, then every child entity of the application profile can directly access the value of Variable1 in any task or script running on it as @@{Variable1}@@ unless another entity overrides it.

Figure. Variable Value Inheritance Click to enlarge
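The lookup behavior described above can be sketched as a simple scope chain. This is an illustrative Python model, not Calm's implementation; the entity and variable names are hypothetical.

```python
# Illustrative sketch of Calm-style variable inheritance (not Calm's code).
# A child entity resolves a variable by walking up its parent chain until a
# definition is found; a value set on the child overrides the parent's value.

class Entity:
    def __init__(self, name, parent=None, **variables):
        self.name = name
        self.parent = parent
        self.variables = variables

    def resolve(self, var_name):
        entity = self
        while entity is not None:
            if var_name in entity.variables:
                return entity.variables[var_name]
            entity = entity.parent
        raise KeyError(var_name)

profile = Entity("AppProfile", Variable1="profile-value")
service = Entity("Service1", parent=profile)                   # inherits Variable1
override = Entity("Service2", parent=profile, Variable1="overridden")

print(service.resolve("Variable1"))   # profile-value (inherited from the profile)
print(override.resolve("Variable1"))  # overridden (child definition wins)
```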

Variable Access

Variables are directly accessed as @@{variable_name}@@ within any task on the entity where the variable is defined and on all child entities that inherit the variable. This syntax delivers only the value for the replica in which the task is running. To get comma-separated values across all replicas, use @@{calm_array_variable_name}@@ .

For example, on a service with 2 replicas, if you set a backup_dir variable through a Set Variable eScript task such as:

print "backup_dir=/tmp/backup_@@{calm_array_index}@@"

You get the values /tmp/backup_0 and /tmp/backup_1 for replica 0 and replica 1, respectively.

When a task runs on this service with the echo "@@{backup_dir}@@" script, the script evaluates to the following values in each replica of the service:

  • Replica 0

    /tmp/backup_0

  • Replica 1

    /tmp/backup_1

When you change the script to echo "@@{calm_array_backup_dir}@@" , the script evaluates to the following values in each replica of the service:

  • Replica 0

    /tmp/backup_0,/tmp/backup_1

  • Replica 1

    /tmp/backup_0,/tmp/backup_1

The syntax to access the value of variables or properties of other entities or dependencies is @@{<entity name>.<variable/attribute name>}@@ , where entity name is the name of the other entity or dependency and variable/attribute name is the name of the variable or attribute. For example:

  • Example 1: If a blueprint contains a service by the name of app_container , you can access the IP address of the app_container service in any other service using @@{app_container.address}@@ syntax.
  • Example 2: If you need the addresses of a service (S1) in a task on another service (S2), you can use @@{S1.address}@@ in the script for the task running on S2. The script evaluates to a comma-separated value such as 10.0.0.3,10.0.0.4,10.0.0.5 if S1 has 3 replicas.
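The per-replica versus comma-joined behavior above can be mimicked with a small substitution sketch. This plain-Python snippet only simulates what Calm produces when it expands the two macro forms; it is not Calm's implementation, and the backup_dir values are the ones from the example above.

```python
# Illustrative macro expansion (mimics Calm's behavior; not Calm's code).
# Per-replica values of backup_dir, as set via @@{calm_array_index}@@ above:
replica_values = ["/tmp/backup_0", "/tmp/backup_1"]

def expand(template, replica_index):
    """Expand @@{backup_dir}@@ and @@{calm_array_backup_dir}@@ for one replica."""
    # The array form expands to the same comma-joined string on every replica;
    # the plain form expands to that replica's own value.
    out = template.replace("@@{calm_array_backup_dir}@@", ",".join(replica_values))
    out = out.replace("@@{backup_dir}@@", replica_values[replica_index])
    return out

print(expand("@@{backup_dir}@@", 0))             # /tmp/backup_0
print(expand("@@{backup_dir}@@", 1))             # /tmp/backup_1
print(expand("@@{calm_array_backup_dir}@@", 1))  # /tmp/backup_0,/tmp/backup_1
```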

Action-Level Variables

Action-level variables are variables that are associated with an action and passed as arguments to the runlog when you run the action. Service action variables are unique to each service, while profile action variables are unique to each profile across all services and replicas. For example, if you deploy five replicas, the service action variables are the same across all replicas.

Action variables are used in the context of running an action and are defined at the action level. For example, if you have an action to install or uninstall a package on a particular VM, you can have the following action variables.

  • Type of action (in this case install or uninstall)
  • Name of the package

With multiple runs of this action, you can then install or uninstall multiple packages on the VM.
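A task script behind such an action could branch on the action variables as sketched below. This is an illustrative Python sketch only; in a real Calm task the values would arrive through macros, and the variable names action_type and package_name are hypothetical action variables, not built-ins.

```python
# Illustrative sketch: building a package-management command from action
# variables. In a real Calm task the values would be substituted in via
# macros such as @@{action_type}@@ and @@{package_name}@@ (hypothetical names).

def build_command(action_type, package_name):
    if action_type not in ("install", "uninstall"):
        raise ValueError("unsupported action: %s" % action_type)
    # 'remove' is the yum verb that corresponds to the 'uninstall' action
    verb = "install" if action_type == "install" else "remove"
    return "sudo yum -y %s %s" % (verb, package_name)

print(build_command("install", "nginx"))    # sudo yum -y install nginx
print(build_command("uninstall", "nginx"))  # sudo yum -y remove nginx
```

Running the action repeatedly with different values of these two variables then installs or uninstalls different packages on the same VM.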

Nutanix Variables

The following table lists the Nutanix variables that are available for usage.

Table 1. Nutanix Variables
Variables Usage
@@{address}@@ IP address of the instance that is used by Calm to access the VM
@@{id}@@ ID of the platform identifier
@@{name}@@ Name of the VM or container
@@{mac_address}@@ MAC address of the VM
@@{platform}@@ Platform response for a GET query. This is the response in JSON format from the provider.
Note: For an existing machine, only the address property is applicable.

VMware Variables

The following table lists the built-in VMware macros that you can use to retrieve and display the entities.

Table 1. VMware Macros
Properties Usage
@@{address}@@ IP address of the instance that is used by Calm to access the VM
@@{id}@@ ID of the platform identifier
@@{name}@@ Name of the VM or container
@@{mac_address}@@ MAC address of the VM
@@{platform}@@ Platform response for a GET query. This is the response in JSON format from the provider.
Note: For an existing machine, only the address property is applicable.

AWS Variables

The following table lists the built-in AWS macros that you can use to retrieve and display the entities.

Table 1. AWS Macros
Macros Usage
@@{address}@@ IP address of the instance that is used by Calm to access the VM.
Note: The VM Name field does not support this macro.
@@{id}@@ Internal ID of the instance that is used within Prism.
Note: The VM Name field does not support this macro.
@@{name}@@ Name of the VM.
Note: The VM Name field does not support this macro.
@@{aws_instance_id}@@ Instance ID of AWS
@@{private_ip_address}@@ Private IP address
@@{private_dns_name}@@ Private DNS name
@@{public_ip_address}@@ Public IP address
@@{public_dns_name}@@ Public DNS name
@@{vm_zone}@@ AWS zone of instance
@@{platform}@@ Platform response for a GET query. This is the response in JSON format from the provider.

GCP Variables

The following table lists the built-in GCP macros that you can use to retrieve and display the entities.

Table 1. GCP Macros
Macros Usage
@@{address}@@

@@{ip_address}@@

@@{public_ip_address}@@
IP address of the instance that is used by Calm to access the VM.
Note: The VM Name field does not support this macro.
@@{id}@@ Internal ID of the instance that is used within Prism.
Note: The VM Name field does not support this macro.
@@{name}@@ Name of the VM.
Note: The VM Name field does not support this macro.
@@{zone}@@ Zone in which the VM instance is created.
@@{platform_data}@@ Platform response for a GET query. This is the response in JSON format from the provider.
@@{internal_ips}@@ List of all the private IP addresses.
@@{external_ips}@@ List of all the public IP addresses.

Azure Variables

The following table lists the built-in Azure macros that you can use to retrieve and display the entities.

Table 1. Azure Macros
Macros Usage
@@{address}@@ IP address of the instance that is used by Calm to access the VM.
Note: The VM Name field does not support this macro.
@@{id}@@ Internal ID of the instance that is used within Prism.
Note: The VM Name field does not support this macro.
@@{name}@@ Name of the VM.
Note: The VM Name field does not support this macro.
@@{private_ip_address}@@ Private IP address
@@{public_ip_address}@@ Public IP address
@@{resource_group}@@ Resource group name in which the VM instance is created.
@@{platform_data}@@ Platform response for a GET query. This is the response in JSON format from the provider.

Kubernetes Variables

The following table lists the Kubernetes variables that are available for usage.

Table 1. Kubernetes Variables
Properties Usage
@@{K8sPublishedService.address}@@ IP address of the service.
@@{K8sPublishedService.name}@@ Name of the service.
@@{K8sPublishedService.ingress}@@ Load balancer IP for public service.
@@{K8sPublishedService.platform}@@ Platform data for the service.
@@{K8sDeployement.name}@@ Name of the deployment.
@@{K8sDeployement.platform}@@ Platform data for the deployment.
Note: Do not use deployment macros in publish service specs or publish macros in deployment service specs in the same Calm deployment.

Runtime Variables Overview

Runtime variables are used to mark attributes while creating the blueprint so that those attributes can be modified at the time of launching the application blueprint. This is useful for users, such as consumers, who cannot edit or create a blueprint. For example, if the memory attribute is marked as a runtime variable while creating a blueprint, you can change its value before launching the application blueprint.
Note: Ensure that the attributes marked as runtime variables are not null or empty and that an initial value is configured.
Figure. Runtime Variable Click to enlarge

Categories Overview

Categories (or tags) are metadata labels that you assign to your cloud resources to categorize them for cost allocation, reporting, compliance, security, and so on. Each category is a combination of a key and one or more values.

Your providers impose a limit on the number of tags that you can use for cloud governance. The following table lists the category or tag limit imposed by each provider:

Table 1. Tag or Category Limit
Providers Category or Tag Limit
Nutanix 30
AWS 50
VMware No limit
GCP 15
Azure 15

Calm reserves 6 tags out of the total tags allowed by your provider and populates them automatically when you provision your VMs using Calm. For example, AWS allows a limit of 50 tags. When you provision your VM on AWS using Calm, 6 of the 50 tags are automatically populated with keys and values specific to Calm VM provisioning. You can use the remaining 44 tags to define other key-value pairs.

The following table lists the Calm-specific categories or tags and their availability for different providers:

Table 2. Calm-Specific Categories or Tags
Categories or Tags Nutanix AWS VMware GCP Azure
account_uuid X X X X
CalmApplication X X X X X
CalmService X X X X X
CalmUsername X X X X X
Calm Project X X X X
OSType X X X X X

Single-VM Blueprints in Calm

A single-VM blueprint is a framework that you can use to create and provision an instance and launch applications that require only one virtual machine.

Creating a Single-VM Blueprint

Single-VM blueprints enable you to quickly provide Infrastructure-as-a-Service (IaaS) to your end users.

About this task

You can create single-VM blueprints with your Nutanix, VMware, AWS, GCP, or Azure accounts. Use these steps to create a single-VM blueprint with any of your provider accounts.

Before you begin

Ensure that the Prism web console (also known as Prism Element) is registered with your Prism Central.

Procedure

  1. Set up your single-VM blueprint. In this step, you provide the name and description for the blueprint and select the project and environment for the blueprint. This step is common for all provider accounts. For more information, see Setting up a Single-VM Blueprint.
  2. Add VM details to your blueprint. In this step, you provide a VM name and associate a provider account and an operating system to the blueprint. This step is also common for all provider accounts. For more information, see Adding VM Details to a Blueprint.
  3. Configure the VM in your blueprint. The options available for VM configuration are derived from either the project or the environment that you selected while setting up the blueprint. For more information, see VM Configuration.
  4. Configure advanced options. In this optional step, you add credentials, configure options to check the logon status of the VM after blueprint provisioning, add pre-create and post-delete tasks, or add packages. For more information, see Configuring Advanced Options for a Blueprint.

Setting up a Single-VM Blueprint

Perform the following steps to do the preliminary setup of your single-VM blueprint.

Before you begin

Ensure that you created a project and configured an environment for the provider account that you want to associate to your blueprint. For more information, see Creating a Project and Configuring Environments in Calm.

Procedure

  1. Click the Blueprint icon in the left pane.
  2. Click + Create Blueprint > Single VM Blueprint .
  3. On the Blueprint Settings tab, enter a name and a description for your blueprint.
  4. From the Project list, select a project.
  5. From the Environment list, select an environment to configure your blueprint.
  6. To save your blueprint setup, click Save .

What to do next

Click VM Details to provide a VM name and associate a provider account and an operating system to the blueprint. For more information, see Adding VM Details to a Blueprint.

Adding VM Details to a Blueprint

Perform the following steps to add VM details to your blueprint.

Before you begin

Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.

Procedure

  1. On the VM Details tab, enter a name for the VM.
  2. From the Account list, select the provider account that you want to associate to your blueprint.
    If your provider account does not appear in the Account list, ensure that you selected the correct project on the Blueprint Settings tab. The project must have the required provider account configured.
  3. From the Operating System list, select either Linux or Windows as the operating system for the VM.
  4. To save the configurations, click Save .

What to do next

Click VM Configuration and configure the VM in your single-VM blueprint. For more information, see VM Configuration.

VM Configuration

Configuring the VM in your blueprint is specific to the provider account and the operating system you select for your blueprint. You can configure the VM in a blueprint with Nutanix, VMware, AWS, GCP, or Azure accounts.

Configuring VM for Nutanix Account

Perform the following steps to configure the VM in a single-VM blueprint for your Nutanix account.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.

Procedure

  1. (Optional) If you configured an environment during project creation, then on the VM Configuration tab, click the Clone from environment button to autofill the VM configuration details.
    The Clone from environment button appears only when you select a specific environment you configured for the account from the Environment list on the Blueprint Settings tab. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  2. In the Cluster list, select the cluster that you want to associate to the blueprint.
    The Cluster list displays the clusters that you allowed in the project.
    The VLAN subnets have a direct association with the cluster. When you select a VLAN subnet under the Network Adapters (NICs) section, the associated cluster is auto-populated in the Cluster list. However, if you intend to use overlay subnets, you must select the cluster in the list.
    If you mark the cluster as runtime editable, the selected subnets also become runtime editable.
    Figure. General Configuration Click to enlarge

  3. Edit the VM name in the VM Name field.
    You can use Calm macros to provide a unique name to the VM. For example, vm-@@{calm_time}@@ . For more information on Calm macros, see Macros Overview.
  4. Configure the compute resources of the VM by entering the number of vCPUs, cores per vCPU, and total memory in GiB in the vCPU , cores per vCPU , and Memory (GiB) fields.
  5. (Optional) If you want to customize the default OS properties of the VM, select the Guest Customization check box.
    Guest customization allows you to modify the properties of the VM operating system. You can prevent conflicts that might result due to the deployment of virtual machines with identical settings, such as duplicate VM names or same SID. You can also change the computer name or network settings by using a custom script.
    1. Select Cloud-init for Linux or SysPrep for Windows, and enter or upload the script in the Script panel.
      For Sysprep, you must use a double backslash for all escape characters. For example, \\v.
    2. For a Sysprep script, select the Join a Domain check box and configure the following fields.
      • Enter the domain name of the Windows server in the Domain Name field.
      • Select a credential for the Windows VM in the Credentials list. You can also add new credentials.
      • Enter the IP address of the DNS server in the DNS IP field.
      • Enter the DNS search path for the domain in the DNS Search Path field.
  6. Under the Disks section, do the following:
    1. To add a disk, click the + icon next to Disks .
    2. Select the device from the Device Type list.
      You can select CD-ROM or DISK .
    3. Select the device bus from the Device Bus list.
      You can select IDE or SATA for CD-ROM and SCSI , IDE , PCI , or SATA for DISK.
    4. From the Operation list, select one of the following:
      • To allocate disk space from the storage container, select Allocate on Storage Container .
      • To clone an image from the disk, select Clone from Image Service .
    5. If you selected Allocate on Storage Container , enter the disk size in GiB in the Size (GiB) field.
    6. If you selected Clone from Image Service , select the image you want to add to the disk in the Image field.
      All the images that you uploaded to Prism Central are available for selection. For more information about image configuration, see Image Management section in the Prism Central guide.
    7. Select the Bootable check box for the image that you want to use to start the VM.
    Note: You can add more than one disk and select the disk with which you want to boot up the VM.
  7. Under the Boot Configuration section, select a firmware type to boot the VM.
    • To boot the VM with legacy BIOS firmware, select Legacy BIOS .
    • To boot the VM with UEFI firmware, select UEFI . UEFI firmware supports larger hard drives, faster boot time, and provides more security features.
    • To boot the VM with the Secure Boot feature of UEFI, select Secure Boot . Secure Boot ensures a safe and secure start by preventing unauthorized software, such as malware, from taking control during the VM bootup.
  8. (For GPU-enabled clusters only) To configure a vGPU, click the + icon under the vGPUs section and do the following:
    1. From the Vendor list, select the GPU vendor.
    2. From the Device ID list, select the device ID of the GPU.
    3. From the Mode list, select the GPU mode.
  9. Under the Categories section, select a category in the Key: Value list.
    Use this option to tag your VM to a defined category in Prism Central. The list options are available based on your Prism Central configuration. If you want to protect your application by a protection policy, select the category defined for the policy in your Prism Central.
  10. To add a network adapter, click the + icon next to the Network Adapters (NICS) field.
    Figure. Network Adapters Click to enlarge

    The NIC list shows all the VLAN and overlay subnets. The VLAN subnets have direct association with the cluster. Therefore, when you select a VLAN subnet, the associated cluster is auto-populated in the Cluster list.
    The NICs of a VM can either use VLAN subnets or overlay subnets. For example, if you select an overlay subnet in NIC 1 and then add NIC 2, the NIC 2 list displays only the overlay subnets.
    If you select a VLAN subnet in NIC 1, all subsequent VLAN subnets belong to the same cluster. Similarly, if you select an overlay subnet, all subsequent overlay subnets belong to the same VPC.
  11. To add a serial port to the VM, click the + icon next to the Serial Ports field.
    You can use serial ports to connect the VM to a physical port or a file.
  12. To save the blueprint, click Save .

What to do next

  • You can optionally configure the advanced options in the blueprint. For more information, see Configuring Advanced Options for a Blueprint.
  • You can view the blueprint on the Blueprint tab. You can use the blueprint to model your application. For more information, see Blueprints Management in Calm.

Configuring VM for VMware Account

Perform the following steps to configure the VM in a single-VM blueprint for your VMware account.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.

Procedure

  1. (Optional) If you have configured an environment during the project creation, then on the VM Configuration tab, click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list on the Blueprint Settings tab. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  2. Select the Compute DRS Mode check box to enable load sharing and automatic VM placement.
    Distributed Resource Scheduler (DRS) is a utility that balances computing workloads with available resources in a virtualized environment. For more information about DRS mode, see the VMware documentation .
    • If you selected Compute DRS Mode , then select the cluster where you want to host your VM from the Cluster list.
    • If you have not selected Compute DRS Mode , then select the host name of the VM from the Host list.
  3. Do one of the following:
    • Select the VM Templates radio button and then select a template from the Template list.

      Templates allow you to create multiple virtual machines with the same characteristics, such as resources allocated to CPU and memory or the type of virtual hardware. Templates save time and avoid errors when configuring settings and other parameters to create VMs. The VM template retrieves the list options from the configured vCenter.

      Note:
      • Install the VMware Tools on the Windows templates. For Linux VMs, install Open-vm-tools or VMware-tools and configure the Vmtoolsd service for automatic start-up.
      • Support for Open-vm-tools is available. When using Open-vm-tools , install Perl for the template.
      • Do not use a SysPrepped image as the Windows template.
      • If you select a template that has an unsupported version of VMware Tools, a warning appears stating VMware tool or version is unsupported and could lead to VM issues .
      • You can also edit the NIC type when you use a template.

      For more information, refer to VMware KB articles.

    • Select the Content Library radio button, a content library in the Content Library list, and then select an OVF template or VM template from the content library.

      A content library stores and manages content (VMs, vApp templates, and other types of files) in the form of library items. A single library item can consist of one file or multiple files. For more information about the vCenter content library, see the VMware Documentation .

      Caution: Content Library support is currently a technical preview feature in Calm. Do not use any technical preview features in a production environment.
  4. If you want to use the storage DRS mode, then select the Storage DRS Mode check box and a datastore cluster from the Datastore Cluster list.
    Datastore clusters are referred to as storage pods in vCenter. A datastore cluster is a collection of datastores with shared resources and a shared management interface.
  5. If you do not want to use storage DRS mode, then do not select the Storage DRS Mode check box, and select a datastore from the Datastore list.
  6. In the VM Location field, specify the location of the folder in which the VM must be created when you deploy the blueprint. Ensure that you specify a valid folder name already created in your VMware account.
    To create a subfolder in the location you specified, select the Create a folder/directory structure here check box and specify a folder name in the Folder/Directory Name field.
    Note: Calm gives preference to the VM location specified in the environment you select while launching an application. For example, you specify a subfolder structure as the VM location in the blueprint and the top-level folder in the environment. When you select this environment while launching your application, Calm considers the VM location you specified in the environment and creates the VM at the top-level folder.
    Select the Delete empty folder check box to delete the subfolder created within the specified location, in case the folder does not contain any VM resources. This option helps you to keep a clean folder structure.
  7. Enter the instance name of the VM in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  8. Under Controllers , click the + icon to add the type of controller.
    You can select either SCSI or SATA controller. You can add up to three SCSI and four SATA controllers.
  9. Under the Disks section, click the + icon to add vDisks and do the following:
    1. Select the device type from the Device Type list.
      You can either select CD-ROM or DISK .
    2. Select the adapter type from the Adapter Type list.
      You can select IDE for CD-ROM.
      You can select SCSI , IDE , or SATA for DISK.
    3. Enter the size of the disk in GiB.
    4. In the Location field, select the disk location.
    5. If you want to add a controller to the vDisk, select the type of controller in the Controller list to attach to the disk.
      Note: You can add either SCSI or SATA controllers. The available options depend on the adapter type.
    6. In the Disk mode list, select the type of the disk mode. Your options are:
      • Dependent : Dependent disk mode is the default disk mode for the vDisk.
      • Independent - Persistent : Disks in persistent mode behave like conventional disks on your physical computer. All data written to a disk in persistent mode is written permanently to the disk.
      • Independent - Nonpersistent : Changes to disks in nonpersistent mode are discarded when you shut down or reset the virtual machine. With nonpersistent mode, you can restart the virtual machine with a virtual disk in the same state every time. Changes to the disk are written to and read from a redo log file that is deleted when you shut down or reset.
    You can also mark the vDisks runtime editable so you can add, delete, or edit the vDisks while launching the blueprint. For more information about runtime editable attributes, see Runtime Variables Overview.
  10. Under the Tags section, select tags from the Category: Tag pairs field.
    You can assign tags to your VMs so you can view the objects associated with your VMs in your VMware account. For example, you can create a tag for a specific environment and assign the tag to multiple VMs. You can then view all the VMs that are associated with the tag.
  11. (Optional) If you want to customize the default OS properties of the VM, then click the Enable check box under VM Guest Customization and select a customization from the Predefined Guest Customization list.
  12. If you do not have any predefined customization available, select None .
  13. Select Cloud-init or Custom Spec .
  14. If you selected Cloud-init , enter or upload the script in the Script field.
  15. If you have selected Custom Spec , enter the details for the VM in the following fields:
    1. Enter the hostname in the Hostname field.
    2. Enter the domain in the Domain field.
    3. Select timezone from the Timezone list.
    4. Select Hardware clock UTC check box to enable hardware clock UTC.
    5. Click the + icon to add network settings.
      To automatically configure DHCP server, enable the Use DHCP check box and then skip to the DNS Setting section.
    6. Enter a name for the network configuration you are adding to the VM in the Setting name field.
      Settings name is the saved configuration of your network that you want to connect to your VM.
    7. Enter values in the IP Address , Subnet Mask , Default Gateway , and Alternative Gateway fields.
    8. Under the DNS Settings section, enter the DNS primary, DNS secondary, DNS tertiary, and DNS search path name.
    Note: You can launch a single-VM blueprint without a NIC or network adapter with your VMware account.
  16. To save the blueprint, click Save .

What to do next

  • You can optionally configure the advanced options in the blueprint. For more information, see Configuring Advanced Options for a Blueprint.
  • You can view the blueprint on the Blueprint tab. You can use the blueprint to model your application. For more information, see Blueprints Management in Calm.

Configuring VM for GCP Account

Perform the following steps to configure the VM in a single-VM blueprint for your GCP account.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.

Procedure

  1. (Optional) If you have configured an environment during the project creation, then on the VM Configuration tab, click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list on the Blueprint Settings tab. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  2. (Optional) Edit the VM name in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  3. Select a zone from the Zone list.
    A zone is a physical location where you can host the VM.
  4. Select the type of machine from the Machine type list.
    The machine types are available based on your zone. A machine type is a set of virtualized hardware resources available to a virtual machine (VM) instance, including the system memory size, virtual CPU (vCPU) count, and persistent disk limits. In Compute Engine, machine types are grouped and curated by families for different workloads.
  5. Under the DISKS section, click the + icon to add a disk.
    You can also mark the added vDisks runtime editable so you can add, delete, or edit the vDisks while launching the blueprint. For more information about runtime editable attributes, see Runtime Variables Overview.
  6. To use an existing disk configuration, select the Use existing disk check box, and then select the persistent disk from the Disk list.
  7. If you have not selected the Use existing disk check box, then do the following:
    1. Select the type of storage from the Storage Type list. The available options are as follows.
      • pd-balanced : Use this option as an alternative to SSD persistent disks; it balances performance and cost.
      • pd-extreme : Use this option to use SSD drives for high-end database workloads. This option has higher maximum IOPS and throughput and allows you to provision IOPS and capacity separately.
      • pd-ssd : Use this option to use SSD drives as your persistent disk.
      • pd-standard : Use this option to use HDD drives as your persistent disk.
      The persistent disk types are durable network storage devices that your instances can access like physical disks in a desktop or a server. The data on each disk is distributed across several physical disks.
    2. Select the image source from the Source Image list.
      The images available for your selection are based on the selected zone.
    3. Enter the size of the disk in GB in the Size in GB field.
    4. To delete the disk configuration after the instance is deleted, select the Delete when instance is deleted check box under the Disks section.
  8. To add a blank disk, click the + icon under the Blank Disks section and configure the blank disk.
  9. To add networking details to the VM, click the + icon under the Networking section.
  10. To configure a public IP address, select the Associate Public IP address check box and configure the following fields.
    1. Select the network from the Network list and the sub network from the Subnetwork list.
    2. Enter a name of the network in the Access configuration Name field and select the access configuration type from the Access configuration type list.
      These fields appear when you select the Associate public IP Address check box.
  11. To configure a private IP address, clear the Associate Public IP address check box and select the network and sub network.
  12. Under the SSH Key section, click the + icon and enter or upload the username key data in the Username field.
  13. Select Block project-wide SSH Keys to enable blocking project-wide SSH keys.
  14. Under the Management section, do the following:
    1. Enter the metadata in the Metadata field.
    2. Select the security group from the Network Tags list.
      Network tags are text attributes you can add to VM instances. These tags allow you to make firewall rules and routes applicable to specific VM instances.
    3. Enter the key-value pair in the Labels field.
      A label is a key-value pair that helps you organize the VMs created with GCP as the provider. You can attach a label to each resource, then filter the resources based on their labels.
  15. Under the API Access section, do the following:
    1. Specify the service account in the Service Account field.
    2. Under Scopes, select Default Access or Full Access .
  16. To save the blueprint, click Save .

What to do next

  • You can optionally configure the advanced options in the blueprint. For more information, see Configuring Advanced Options for a Blueprint.
  • You can view the blueprint on the Blueprint tab. You can use the blueprint to model your application. For more information, see Blueprints Management in Calm.

Configuring VM for AWS Account

Perform the following steps to configure the VM in a single-VM blueprint for your AWS account.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.

Procedure

  1. (Optional) If you configured an environment during project creation, then on the VM Configuration tab, click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list on the Blueprint Settings tab. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  2. Edit the VM name in the Instance Name field.
    This field is pre-populated with a macro suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  3. Select the Associate Public IP Address check box to associate a public IP address with your AWS instance.
    If you do not select the Associate Public IP Address check box, ensure that the AWS account and Calm are on the same network for the scripts to run.
  4. Select an AWS instance type from the Instance Type list.
    Instance types include varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to select the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes that allow you to scale your resources to the requirements of your target workload.
    The list displays the instances that are available in the AWS account. For more information, see AWS documentation.
  5. Select a region from the Region list and do the following:
    Note: The list displays the regions that are selected while configuring the AWS setting.
    1. Select an availability zone from the Availability Zone list.
      An availability zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS region. Availability zones allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.
    2. Select a machine image from the Machine Image list.
      An Amazon Machine Image is a special type of virtual appliance that is used to create a virtual machine within the Amazon Elastic Compute Cloud. It serves as the basic unit of deployment for services delivered using EC2.
    3. Select an IAM role from the IAM Role list.
      An IAM role is an AWS Identity and Access Management entity with permissions to make AWS service requests.
    4. Select a key pair from the Key Pairs list.
      A key pair (consisting of a private key and a public key) is a set of security credentials that you use to prove your identity when connecting to an instance.
    5. Select the VPC from the VPC list and do the following:
      Amazon Virtual Private Cloud (Amazon VPC) allows you to provision a logically isolated section of the AWS cloud where you can launch AWS resources in your defined virtual network.
      • Select the Include Classic Security Group check box to enable security group rules.
      • Select security groups from the Security Groups list.
      • Select a subnet from the Subnet list.
  6. Enter or upload the AWS user data in the User Data field.
  7. Enter AWS tags in the AWS Tags field.
    AWS tags are key-value pairs that help you manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria.
  8. Under the Storage section, configure the following to boot the AWS instance with the selected image.
    1. From the Device list, select the device to boot the AWS instance.
      The available options are based on the image you have selected.
    2. In the Size(GiB) field, enter the required size for the bootable device.
    3. From the Volume Type list, select the volume type. You can select General Purpose SSD , Provisioned IOPS SSD , or EBS Magnetic HDD .
      For more information on the volume types, see AWS documentation.
    4. Optionally, select the Delete on termination check box to delete the storage when the instance is terminated.
    You can also add secondary storage volumes by clicking the + icon next to the Storage section.
  9. To save the blueprint, click Save .

What to do next

  • You can optionally configure the advanced options in the blueprint. For more information, see Configuring Advanced Options for a Blueprint.
  • You can view the blueprint on the Blueprint tab. You can use the blueprint to model your application. For more information, see Blueprints Management in Calm.

Configuring VM for Azure Account

Perform the following steps to configure the VM in a single-VM blueprint for your Azure account.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.

Procedure

  1. (Optional) If you configured an environment during project creation, then on the VM Configuration tab, click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list on the Blueprint Settings tab. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  2. Edit the VM name in the Instance Name field.
    This field is pre-populated with a macro suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  3. Select a resource group from the Resource Group list or select the Create Resource Group check box to create a resource group.
    Each resource in Azure must belong to a resource group. A resource group is simply a logical construct that groups multiple resources together so you can manage the resources as a single entity. For example, you can create or delete resources as a group that share a similar life cycle, such as the resources for an n-tier application.

    The Resource Group list displays the resource groups that are associated with the subscriptions you selected in your Azure account. If you have not selected any subscriptions, Calm considers all the subscriptions available in the Azure service principal to display the resource groups. Each resource group in the list also displays its associated subscription.

  4. If you selected a resource group from the Resource Group list, then do the following:
    1. Select the geographical location of the datacenter from the Location list.
    2. Select Availability Sets or Availability Zones from the Availability Option list.
      You can then select an availability set or availability zone. An availability set is a logical grouping capability to ensure that the VM resources are isolated from each other to provide High Availability if deployed within an Azure datacenter. An availability zone allows you to deploy your VM into different datacenters within the same region.
    3. Select the hardware profile as per your hardware requirements from the Hardware Profile list.
      The number of data disks and NICs depends upon the selected hardware profile. For information about the sizes of Windows and Linux VMs, see Windows and Linux Documentation.
  5. If you selected the Create Resource Group check box to create a resource group, then do the following:
    1. Select a subscription associated with your Azure account in the Subscription field.
    2. Enter a unique name for the resource group in the Name field.
    3. Select the geographical location of the datacenter that you want to add to the resource group in the Location list.
    4. Under Tags , enter a key and value pair in the Key and Value fields respectively.
      Tags are key and value pairs that enable you to categorize resources. You can apply a tag to multiple resource groups.
    5. If you want to automatically delete a resource group that has no resources when you delete an application, select the Delete Empty Resource Group check box.
    6. Specify the location and hardware profile.
  6. Under the Secrets section, click the + icon and do the following:
    1. Enter a unique vault ID in the Vault ID field.
    2. Under Certificates , click the + icon.
    3. Enter the URL of the configuration certificate in the URL field.
      The URL of the certificate is uploaded to the key vault as a secret.
  7. Under the Admin Credentials section, do the following:
    1. Enter the username in the Username field.
    2. Select a secret type from the Secret Type list.
      You can either select Password or SSH Private Key.
    3. Do one of the following.
      • If you selected Password , then enter the password in the Password field.
      • If you selected SSH Private Key , then enter or upload the SSH private key in the SSH Private Key field.
      Note the following:
      • You can use the selected or default credential as the default credential for the VM.
      • You cannot use key-based credentials for Windows VMs.
      • The username and password must adhere to the complexity requirements of Azure.
  8. (For Windows) Select the Provision Windows Guest Agent check box.
    This option indicates whether or not to provision the virtual machine agent on the virtual machine. When this property is not specified in the request body, the default behavior is to set it to true. This ensures that the VM Agent is installed on the VM, and the extensions can be added to the VM later.
  9. (For Windows) To indicate that the VM is enabled for automatic updates, select the Automatic OS Upgrades check box.
  10. Under the Additional Unattended Content section, click the + icon and do the following:
    1. Select a setting from the Setting Name list.
      You can select Auto Logon or First Logon Commands .
      Note: Guest customization is applicable only to images that support guest customization.
    2. Enter or upload the XML content. See Sample Auto Logon and First Logon Scripts.
  11. Under the WinRM Listeners section, click the + icon and do the following:
    1. Select the protocol from the Protocol list.
      You can select HTTP or HTTPS .
    2. If you selected HTTPS, then select the certificate URL from the Certificate URL list.
  12. Under the Storage Profile section, select the Use Custom Image check box to use a custom VM image created in your subscription.
    You can then select a custom image or publisher-offer-SKU-version from the Custom Image list.
  13. Under the VM Image Details section, select an image type in the Source Image Type list.
    You can select Marketplace , Subscription , or Shared Image Gallery .
    • If you selected Marketplace , then specify the publisher, offer, SKU, and version for the image.
    • If you selected Subscription , then select the custom image.
    • If you selected Shared Image Gallery , then select the gallery and the image.
  14. Under the OS Disk Details section, do the following:
    1. Select the storage type from the Storage Type list.
      You can select Standard HDD , Standard SSD , or Premium SSD .
    2. Select a disk storage account from the Disk Storage list.
      This field is available only when the Use Custom Image check box is enabled.
    3. Select disk caching type from the Disk Caching Type list.
      You can select None , Read-only , or Read write .
    4. Select disk create option from the Disk Create Option list.
      You can select Attach , Empty , or From Image .
  15. Under the Data Disk section, do the following:
    1. Select the storage type from the Storage Type list.
      You can select Standard HDD , Standard SSD , or Premium SSD .
    2. Select disk caching type from the Disk Caching Type list.
      You can select None , Read-only , or Read write .
    3. Enter the size in GB in the Size field.
    4. Enter disk logical unit number (LUN) in the Disk LUN field.
      Note: The LUN value must be unique across the data disk list.
  16. Under the Network Profile section, add NICs as per your requirement and do the following for each NIC:
    1. Select a security group from the Security Group list.
    2. Select a virtual network from the Virtual Network list.
    3. Under Public IP Config , enter a name and select an allocation method.
    4. Under Private IP Config , select an allocation method.
      If you selected Static as the allocation method, then enter the private IP address in the IP Address field.
  17. Enter tags in the Tags field.
  18. To save the blueprint, click Save .

What to do next

  • You can optionally configure the advanced options in the blueprint. For more information, see Configuring Advanced Options for a Blueprint.
  • You can view the blueprint on the Blueprint tab. You can use the blueprint to model your application. For more information, see Blueprints Management in Calm.

Configuring VM for Xi Cloud Account

Perform the following steps to configure the VM in a single-VM blueprint for your Xi Cloud account.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.

Procedure

  1. On the VM Configuration tab, edit the VM name in the VM Name field.
    You can use Calm macros to provide a unique name to the VM. For example, vm-@@{calm_time}@@ . For more information on Calm macros, see Macros Overview.
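    Calm resolves macros such as @@{calm_time}@@ when the blueprint is launched. As a rough illustration only (the actual expansion happens inside Calm, and the helper below is hypothetical), a placeholder substitution of this shape can be sketched as:

    ```python
    import re
    import time

    def expand_macros(template, values):
        """Replace each @@{name}@@ placeholder with its value (illustrative sketch)."""
        return re.sub(r"@@\{(\w+)\}@@", lambda m: str(values[m.group(1)]), template)

    # Hypothetical value for the calm_time macro: a launch timestamp.
    vm_name = expand_macros("vm-@@{calm_time}@@", {"calm_time": int(time.time())})
    print(vm_name)  # e.g. vm-1700000000
    ```

    Because the suffix changes on every launch, each VM created from the blueprint receives a distinct name.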
  2. Configure the processing unit of the VM by entering the number of vCPU, cores of each vCPU, and total memory in GB of the VM in the vCPU , cores per vCPU , and Memory fields.
  3. If you want to customize the default OS properties of the VM, select the Guest Customization check box.
    Guest customization allows you to modify the properties of the VM operating system. You can prevent conflicts that might result due to the deployment of virtual machines with identical settings, such as duplicate VM names or same SID. You can also change the computer name or network settings by using a custom script.
  4. Select Cloud-init for Linux or Sysprep for Windows, and enter or upload the script in the Script panel.
    For Sysprep, you must use double backslashes for all escape characters. For example, \\v.
    For a Sysprep script, select the Join a Domain check box and configure the following fields.
    • Enter the domain name of the Windows server in the Domain Name field.
    • Select a credential for the Windows VM in the Credentials list. You can also add new credentials.
    • Enter the IP address of the DNS server in the DNS IP field.
    • Enter the DNS search path for the domain in the DNS Search Path field.
  5. Select the image from the Image list.
    The list displays the images that are available in the cluster. You can add more than one image by clicking the + icon.
    All the images that you uploaded to Prism Central are available for selection. For more information about image configuration, see Image Management section in the Prism Central guide.
  6. Select the device from the Device Type list.
    You can select CD-ROM or Disk .
  7. Select the device bus from the Device Bus list.
    You can select IDE or SATA for CD-ROM, and SCSI , IDE , PCI , or SATA for Disk.
  8. Select the Bootable check box for the image that you want to use to start the VM.
  9. To add a vDisk, click the + icon and specify the device type, device bus, and disk size.
    You can also mark the vDisks runtime editable so you can add, delete, or edit the vDisks while launching the blueprint. For more information about runtime editable attributes, see Runtime Variables Overview.
  10. Under Categories , select a category in the list.
    Use this option to tag your VM to a defined category in Prism Central. The list options are available based on your Prism Central configuration. If you want to protect your application by a protection policy, select the category defined for the policy in your Prism Central.
  11. Under the Network section, select the VPC from the VPC list. For more information about VPCs, see the Xi Infrastructure Service Administration Guide .
  12. To save the blueprint, click Save .

What to do next

  • You can optionally configure the advanced options in the blueprint. For more information, see Configuring Advanced Options for a Blueprint.
  • You can view the blueprint on the Blueprint tab. You can use the blueprint to model your application. For more information, see Blueprints Management in Calm.

Configuring Advanced Options for a Blueprint

Perform the following steps to configure advanced options such as credentials, packages, pre-create and post-delete tasks. Configuring advanced options is optional for a blueprint.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.
  • Ensure that you configured the VM in your blueprint. For more information, see VM Configuration.

Procedure

  1. Add credentials to enable packages and actions. For more information, see Adding Credentials.
  2. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
  3. Configure pre-create or post-delete tasks and packages in the blueprint. For more information, see Configuring Tasks or Packages in a Blueprint.
  4. Add an action. For more information, see Adding an Action to a Single-VM Blueprint.
  5. Click Save .

What to do next

  • You can configure application variables in the blueprint. For more information, see Configuring Application Variables in a Blueprint.
  • You can view the blueprint on the Blueprint tab. You can use the blueprint to model your application. For more information, see Blueprints Management in Calm.

Configuring Tasks or Packages in a Blueprint

Perform the following steps to configure pre-create task, post-delete task, install package, or uninstall package in a single-VM blueprint.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.
  • Ensure that you configured the VM in your blueprint. For more information, see VM Configuration.

Procedure

  1. On the Advanced Options tab, do one of the following:
    • To configure a pre-create or post-delete task, click Edit next to the Pre VM create tasks or Post VM delete tasks field under the PreCreate & PostDelete section.
    • To configure an install or uninstall package, click the Edit button next to the Package Install or Package Uninstall field under the Packages section.
  2. Click + Add Task .
  3. Click the Task button.
  4. Enter the task name in the Task Name field.
  5. Select the type of tasks from the Type list.
    The available options are:
    • Execute : Use this task type to run eScripts on the VM. For more information, see Creating an Execute Task.
    • Set Variable : Use this task to change variables in a blueprint. For more information, see Creating a Set Variable Task.
    • Delay : Use this task type to set a time interval between two tasks or actions. For more information, see Creating a Delay Task.
    • HTTP Task : Use this task type to query REST calls from a URL. An HTTP task supports GET, PUT, POST, and DELETE methods. For more information, see Creating an HTTP Task.
  6. To add another task or package, do one of the following:
    • To add another pre-create or post-delete task, click the Pre create or Post delete button and repeat steps 3 to 5.
    • To add another task for package install or uninstall, click the Package Install or Package Uninstall button.
  7. To establish a connection between tasks, click Add Connector and use the arrow to create the connection.
  8. To delete a task, click the Delete button next to the task.
  9. To add variables, do one of the following:
    • To add a pre-create or post-delete variable, click the Pre create Variables or Post delete Variables tab.
    • To add a package install or uninstall variable, click the Package Install Variables or Package Uninstall Variables tab.
  10. Click the + icon next to Variables .
  11. In the Name field, enter a name for the variable.
  12. From the Data Type list, select one of the base type variables or import a custom library variable type.
    If you selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    If you imported a custom variable type, all the variable parameters are auto filled.
  13. If you want to hide the variable value, select the Secret check box.
  14. Click Done .

Configuring Application Variables in a Blueprint

Perform the following steps to configure application variables in your blueprint.

Procedure

  1. On the blueprint page, click App variables .
  2. Click + Add Variable .
  3. In the Name field, enter a name for the variable.
  4. From the Data Types list, select one of the base type variables or import a custom library variable type. Your options are:
    • String
    • Integer
    • Multi-line string
    • Date
    • Time
    • Date Time
    If you selected a base type variable, then configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    If you have imported a custom variable type, all the variable parameters are auto filled.
  5. Enter a value for the selected data type in the Value field.
    You can select the Secret check box to hide the variable value.
  6. Click Show Additional Options .
  7. In the Input Type field, select one of the following input types:
    • Simple : Use this option for default value.
    • Predefined : Use this option to assign static values.
    • eScript : Use this option to attach a script that runs to retrieve values dynamically at runtime. The script can return a single value or multiple values depending on the selected base data type.
    • HTTP : Use this option to retrieve values dynamically from the defined HTTP endpoint. The result is processed and assigned to the variable based on the selected base data type.
  8. If you selected Simple , then enter the value for the variable in the Value field.
  9. If you selected Predefined , then enter the value for the variable in the Option field.
    To add multiple values for the variable, click + Add Option , and enter values in the Option field.
    Note: To make any value as default, select Default for the option.
  10. If you selected eScript , enter the eScript in the field.
    You can upload the script from the library or from your computer by clicking the upload icon.
    You can also publish the script to the library by clicking the publish button.
    Note:
    • You cannot add macros to eScripts.
    • If you selected the Multiple Input (Array) check box with the eScript input type, ensure that the script returns a comma-separated list of values. For example, CentOS, Ubuntu, Windows.
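    The comma-separated return value maps to list options in a straightforward way. As a minimal sketch (assuming the surrounding whitespace around each value is trimmed, which is an assumption here):

    ```python
    def parse_options(script_output):
        """Split a comma-separated script output into individual option values."""
        return [value.strip() for value in script_output.split(",")]

    print(parse_options("CentOS, Ubuntu, Windows"))
    # → ['CentOS', 'Ubuntu', 'Windows']
    ```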
  11. If you selected HTTP , then configure the following fields.
    1. In the Request URL field, enter the URL of the server that you want to run the methods on.
    2. In the Request Method list, select one of the following request methods.
      • Use the GET method to retrieve data from a specified resource.
      • Use the PUT method to send data to a server to update a resource.
      • Use the POST method to send data to a server to create a resource.
      • Use the DELETE method to send data to a server to delete a resource.
      For the PUT, POST, or DELETE methods, enter the request body in the Request Body field. You can also upload the request by clicking the upload icon.
    3. In the Content Type list, select the type of the output format.
      The available options are XML , JSON , and HTML .
    4. In the Connection Timeout (sec) field, enter the timeout interval in seconds.
    5. (Optional) In the Authentication field, select Basic and do the following:
      • In the Username field, enter the user name.
      • In the Password field, enter the password.
    6. If you want to verify the TLS certificate for the task, select the Verify TLS Certificate check box.
    7. If you want to use a proxy server that you configured in Prism Central, select the Use PC Proxy configuration check box.
      Note: Ensure that the Prism Central has the appropriate HTTP proxy configuration.
    8. In the Retry Count field, enter the number of attempts the system must perform to create a task after each failure.
      By default, the retry count is zero, which means that task creation stops after the first attempt.
    9. In the Retry Interval field, enter the time interval in seconds for each retry if the task fails.
    10. Under the Headers section, enter the HTTP header key and value in the Key and Value fields respectively.
      If you want to publish the HTTP header key and value pair as secret, select the Secrets check box.
    11. Under the Expected Response Options section, enter the details for the following fields:
      • In the Response Code field, enter the response code.
      • From the Response Status list, select either Success or Failure as the response status for the task.
    12. In the Set Response Path for Variable field, enter the response path from which to extract the variable values.
      The path format is $.x.y for JSON and //x/y for XML. For example, suppose the response path for the variable is $.[*].display and the response is as follows.
      [
          {
              "display": "HTML Tutorial",
              "url": "https://www.w3schools.com/html/default.asp"
          },
          {
              "display": "CSS Tutorial",
              "url": "https://www.w3schools.com/css/default.asp"
          },
          {
              "display": "JavaScript Tutorial",
              "url": "https://www.w3schools.com/js/default.asp"
          },
          {
              "display": "jQuery Tutorial",
              "url": "https://www.w3schools.com/jquery/default.asp"
          },
          {
              "display": "SQL Tutorial",
              "url": "https://www.w3schools.com/sql/default.asp"
          },
          {
              "display": "PHP Tutorial",
              "url": "https://www.w3schools.com/php/default.asp"
          },
          {
              "display": "XML Tutorial",
              "url": "https://www.w3schools.com/xml/default.asp"
          }
      ]
      Then, at launch time, the list options are ["HTML Tutorial","CSS Tutorial","JavaScript Tutorial","jQuery Tutorial","SQL Tutorial","PHP Tutorial","XML Tutorial"].
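      The same extraction can be reproduced outside Calm with a short sketch. This uses a plain list comprehension rather than a JSONPath library, on the assumption that $.[*].display simply selects the display field of every array element:

      ```python
      import json

      # A trimmed version of the response shown above.
      response_body = json.loads("""
      [
        {"display": "HTML Tutorial", "url": "https://www.w3schools.com/html/default.asp"},
        {"display": "CSS Tutorial", "url": "https://www.w3schools.com/css/default.asp"}
      ]
      """)

      # Equivalent of the response path $.[*].display: take "display" from each element.
      options = [item["display"] for item in response_body]
      print(options)  # → ['HTML Tutorial', 'CSS Tutorial']
      ```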
  12. (Optional) Enter a label and description for the variable.
  13. (Optional) Set variable options.
    • Select the Mark this variable private check box to make the variable private. Private variables are not shown during the blueprint launch or in the application.
    • Select the Mark this variable mandatory check box to make the variable a requisite for application launch.
    • Select the Validate with Regular Expression check box if you want to validate variable values against a regex. Click Test Regex , provide a value for the regex, and test or save the regex. You can enter regex values in PCRE format. For more details, see http://pcre.org/.
  14. Click Done .
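For the Validate with Regular Expression option above, you can sanity-check a pattern locally before saving it. Python's re module accepts most common PCRE constructs; the pattern and values below are examples only:

```python
import re

# Example pattern: lowercase letters, digits, and hyphens, 1 to 32 characters.
pattern = re.compile(r"^[a-z0-9-]{1,32}$")

print(bool(pattern.fullmatch("web-server-01")))  # → True
print(bool(pattern.fullmatch("Web Server 01")))  # → False
```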

Multi-VM Blueprints in Calm

A multi-VM blueprint is a framework that you can use to create, provision, and launch applications that require multiple VMs.

Creating a Multi-VM Blueprint

In a multi-VM blueprint, you can define the underlying infrastructure of the VMs, the application details, and the actions that are carried out on the blueprint until the termination of the application.

About this task

You can create and configure multi-VM blueprints with your Nutanix, VMware, AWS, GCP, or Azure accounts.

Before you begin

Ensure that you have configured an account and a project for your blueprint.

Procedure

  1. Add a service. For more information, see Adding a Service.
  2. Configure VM, package, and service for your provider account. For more information, see Configure Multi-VM, Package, and Service.
  3. Set the service dependencies. For more information, see Setting up the Service Dependencies.
  4. Add and configure an application profile. For more information, see Adding and Configuring an Application Profile.
  5. (Optional) Add and configure Scale Out and Scale In. For more information, see Adding and Configuring Scale Out and Scale In.
  6. Create an action. For more information, see Adding an Action to a Multi-VM Blueprint.

Adding a Service

Services are the virtual machine instances, existing machines, or bare-metal machines that you can provision and configure by using Calm. A service exposes the IP address and ports on which requests are received. You can provision either a single service instance or multiple services, based on the topology of your application.

About this task

For more information about services in Calm, see Services Overview.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page is displayed.
  2. From the + Create Blueprint list, select Multi VM/Pod Blueprint .
    The Blueprint Setup window appears.
  3. Enter the name of the blueprint in the Name field.
  4. Optionally, provide a description about the blueprint in the Description field.
  5. Select a project from the Project list.
    Note: The available account options depend on the selected project.
  6. Click Proceed .
    The Multi-VM Blueprint Editor page appears.
    Figure. Multi-VM Blueprint Editor

  7. To add a service, click the + icon next to Service in the Overview Panel.
    The service inspector appears in the Blueprint Canvas.
    Figure. Service Inspector

What to do next

Configure the VM, package, and service. For more information, see Configure Multi-VM, Package, and Service.

Configure Multi-VM, Package, and Service

You can define and configure the underlying infrastructure of the VM, application details, and actions that are carried out on a blueprint until the termination of the application for a service provider.

Configuring Nutanix and Existing Machine VM, Package, and Service

You can define the underlying infrastructure of the VM, application details, and actions that are carried out on a blueprint until the termination of the application on a Nutanix platform.

Before you begin

  • Ensure that you have completed the pre-configuration requirements. For more information, see Creating a Multi-VM Blueprint.
  • Ensure that you have created a project and configured an environment for Nutanix. For more information, see Creating a Project and Configuring Nutanix Environment.

Procedure

  1. Perform the basic blueprint setup, and add a service. For more information, see Adding a Service.
    Figure. Blueprint Configuration

  2. Enter a name of the service in the Service Name field.
  3. On the VM tab, in the Name field, enter a name for the VM.
  4. Select the provider account from the Account list.
    You can select Existing Machine or a Nutanix account.
    Note: The account options depend on the project you selected while setting up your blueprint.
  5. If you selected Existing Machine , then do the following:
    1. Select Windows or Linux from the Operating System list.
    2. In the Configuration section, enter the IP address of the existing machine in the IP Address field.
    3. (Optional) In the Select tunnel to connect with list, select a tunnel that you want to use to connect with this VM if the VM is within a VPC.
    4. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
    Figure. Existing Machine

  6. If you selected a Nutanix account, then select Windows or Linux from the Operating System list.
  7. (Optional) If you have configured an environment during the project creation, then under the Preset VM Config section, click the Clone from environment button to autofill the VM configuration details.
    The Clone from environment button appears only when you select a specific environment you configured for the account from the Environment list in the application profile. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  8. In the Cluster list, select the cluster you want to associate to the service.
    The Cluster list displays the clusters that you allowed in the project.
    The VLAN subnets have direct association with the cluster. When you select a VLAN subnet under the Network Adapters (NICs) section, the associated cluster is auto-populated in the Cluster list. However, if you intend to use overlay subnets, you must select the cluster in the list.
    If you mark the cluster as runtime editable, the selected subnets also become runtime editable.
  9. Under the VM Configuration section, enter the name of the VM in the VM Name field.
    You can use Calm macros to provide a unique name to the VM. For example, vm-@@{calm_array_index}@@-@@{calm_time}@@ . For more information on Calm macros, see Macros Overview.
  10. Configure the processing capacity of the VM by entering the number of vCPUs, the cores per vCPU, and the total memory in GiB in the vCPU , cores per vCPU , and Memory (GiB) fields.
  11. (Optional) If you want to customize the default OS properties of the VM, select the Guest Customization check box.
    Guest customization allows you to modify the properties of the VM operating system. You can prevent conflicts that might result from deploying virtual machines with identical settings, such as duplicate VM names or the same SID. You can also change the computer name or network settings by using a custom script.
    1. Select Cloud-init for Linux or SysPrep for Windows, and enter or upload the script in the Script panel.
      For Sysprep, you must use a double backslash for all escape characters. For example, \\v.
    2. For a Sysprep script, select the Join a Domain check box and configure the following fields.
      • Enter the domain name of the Windows server in the Domain Name field.
      • Select a credential for the Windows VM in the Credentials list. You can also add new credentials.
      • Enter the IP address of the DNS server in the DNS IP field.
      • Enter the DNS search path for the domain in the DNS Search Path field.
  12. Under the DISKS section, do the following:
    1. To add a disk, click the + icon next to DISKS .
    2. Select the device from the Device Type list.
      You can select CD-ROM or DISK .
    3. Select the device bus from the Device Bus list.
      You can select IDE or SATA for CD-ROM and SCSI , IDE , PCI , or SATA for DISK.
    4. From the Operation list, select one of the following:
      • To allocate the disk space from the storage container, select Allocate on Storage Container .
      • To clone an image from the disk, select Clone from Image Service .
    5. If you selected Allocate on Storage Container , enter the disk size in GB in the Size (GiB) field.
    6. If you selected Clone from Image Service , select the image you want to add to the disk in the Image field.
      All the images that you uploaded to Prism Central are available for selection. For more information about image configuration, see the Image Management section in the Prism Central guide.
    7. Select the Bootable check box for the image that you want to use to start the VM.
    Note: You can add more than one disk and select the disk with which you want to boot up the VM.
  13. Select one of the following firmware options to boot the VM.
    • Legacy BIOS : Select legacy BIOS to boot the VM with legacy BIOS firmware.
    • UEFI : Select UEFI to boot the VM with UEFI firmware. UEFI firmware supports larger hard drives, faster boot time, and provides more security features.
    • To boot the VM with the Secure Boot feature of UEFI, select Secure Boot . Secure Boot ensures a safe and secure start by preventing unauthorized software, such as malware, from taking control during the VM bootup.
  14. Under the Categories section, select a category in the Key: Value list.
    Use this option to tag your VM to a defined category in Prism Central. The list options are available based on your Prism Central configuration. If you want to protect your application by a protection policy, select the category defined for the policy in your Prism Central.
  15. To add a network adapter, click the + icon next to the Network Adapters (NICS) field and select the subnet from the NIC list.
    The NIC list shows all the VLAN and overlay subnets. The VLAN subnets have direct association with the cluster. Therefore, when you select a VLAN subnet, the associated cluster is auto-populated in the Cluster list.
    Figure. Network Adapter

    The NICs of a VM can either use VLAN subnets or overlay subnets. For example, if you select an overlay subnet in NIC 1 and then add NIC 2, the NIC 2 list displays only the overlay subnets.
    If you select a VLAN subnet in NIC 1, all subsequent VLAN subnets belong to the same cluster. Similarly, if you select an overlay subnet, all subsequent overlay subnets belong to the same VPC.
  16. To add a serial port to the VM, click the + icon next to the Serial Ports field.
    You can use serial ports to connect a physical port or a file on the VM.
  17. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
  18. On the Package tab, enter the package name in the Package Name field.
    1. Click one of the following:
      • To create a task to install a package, click Configure install .
      • To create a task to uninstall a package, click Configure uninstall .
    2. In the Blueprint Canvas, click + Task .
      Figure. Package

    3. Enter the task name in the Task Name field.
  19. To create a task, select the type of task from the Type list.
    The available options are:
    • Execute : To create the Execute task type, see Creating an Execute Task.
    • Set Variable : To create the Set Variable task type, see Creating a Set Variable Task.
    • HTTP : To create the HTTP type, see Creating an HTTP Task.
    • Delay : To create the Delay task type, see Creating a Delay Task.
    For Execute and Set Variable tasks, you can use endpoints as targets for script execution. For more information, see Endpoints Overview.
  20. To reuse a task from the task library, do the following.
    1. Click Browse Library .
    2. Select the task from the task library.
      When you select a task, the task inspector panel displays the selected task details.
    3. Click Select .
    4. Optionally, edit the variable or macro names as per your blueprint.
      The variable or macro names used in the task can be generic; you can update them with the corresponding variable or macro names as per your blueprint.
    5. To update the variable or macro names, click Apply .
    6. To copy the task, click Copy .
  21. On the Service tab, do the following:
    1. Under Deployment Config , enter the number of default, minimum and maximum service replicas that you want to create in the Default , Min , and Max fields respectively.
      The Min and Max fields define the scale-in and scale-out actions. The scale-in and scale-out actions cannot scale beyond the defined minimum and maximum numbers. The default field defines the number of default replicas the service creates.
      For example, if the application requires an array of three VMs, define the minimum number as three.
    2. In the Variables section, click the + icon to add variable types to your blueprint.
    3. In the Name field, enter a name for the variable.
    4. From the Data Type list, select one of the base type variables or import a custom library variable type.
    5. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    6. If you have imported a custom variable type, all the variable parameters are auto filled.
    7. Select the Secret check box if you want to hide the value of the variable.
  22. Add credentials to the blueprint. For more information, see Adding Credentials.
  23. Click Save on the Blueprint Editor page.
    The blueprint is saved and listed on the Blueprints page.
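
The @@{...}@@ macros mentioned in step 9 are expanded by Calm before the VM is created. The following sketch is not Calm's real macro engine, only an illustration of how a template such as vm-@@{calm_array_index}@@-@@{calm_time}@@ resolves to a unique name per replica (the timestamp value is a made-up placeholder):

```python
import re

def expand_macros(template, index, launch_time="1658745600"):
    # Illustrative stand-in for Calm's macro expansion, not the real engine.
    # calm_array_index is the replica's position in the service array;
    # calm_time stands in for the launch timestamp.
    values = {"calm_array_index": str(index), "calm_time": launch_time}
    return re.sub(r"@@\{(\w+)\}@@", lambda m: values[m.group(1)], template)

template = "vm-@@{calm_array_index}@@-@@{calm_time}@@"
for i in range(3):
    print(expand_macros(template, i))
```

Because each replica gets a different array index, the three replicas in this example receive three distinct VM names.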

What to do next

Define the service dependencies. See Setting up the Service Dependencies.

Configuring AWS VM, Package, and Service

You can define the underlying infrastructure of the VM, application details, and actions that are carried out on a blueprint until the termination of the application on an AWS platform.

Before you begin

  • Ensure that you have completed the pre-configuration requirements. See Creating a Multi-VM Blueprint.
  • Ensure that you have created a project and configured an environment for AWS. For more information, see Creating a Project and Configuring AWS Environment.

Procedure

  1. Perform the basic blueprint setup, and add a service. For more information, see Adding a Service.
    Figure. Blueprint Configuration

  2. Enter a name of the service in the Service Name field.
  3. On the VM tab, enter a name for the VM in the Name field.
  4. Select the AWS account from the Account list.
    Note: The account options depend on the project you selected while setting up your blueprint.
  5. Select Windows or Linux from the Operating System list.
  6. (Optional) If you have configured an environment during the project creation, then click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list in the application profile. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  7. Edit the VM name in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  8. Select the Associate Public IP Address check box to associate a public IP address with your AWS instance.
    If you do not select the Associate Public IP Address check box, ensure that the AWS account and Calm are on the same network for the scripts to run.
  9. Select an AWS instance type from the Instance Type list.
    Instance types include varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to select the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes that allow you to scale your resources to the requirements of your target workload.
    The list displays the instances that are available in the AWS account. For more information, see AWS documentation.
  10. Select a region from the Region list and do the following:
    Note: The list displays the regions that are selected while configuring the AWS setting.
    1. Select an availability zone from the Availability Zone list.
      An availability zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS region. Availability zones allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.
    2. Select a machine image from the Machine Image list.
      An Amazon Machine Image is a special type of virtual appliance that is used to create a virtual machine within the Amazon Elastic Compute Cloud. It serves as the basic unit of deployment for services delivered using EC2.
    3. Select an IAM role from the IAM Role list.
      An IAM role is an AWS Identity and Access Management entity with permissions to make AWS service requests.
    4. Select a key pair from the Key Pairs list.
      A key pair (consisting of a private key and a public key) is a set of security credentials that you use to prove your identity when connecting to an instance.
    5. Select the VPC from the VPC list.
      Amazon Virtual Private Cloud (Amazon VPC) allows you to provision a logically isolated section of the AWS cloud where you can launch AWS resources in your defined virtual network.
  11. Enter AWS tags in the AWS Tags field.
    AWS tags are key and value pair to manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria.
  12. Under the Storage section, configure the following to boot the AWS instance with the selected image.
    1. From the Device list, select the device to boot the AWS instance.
      The available options are based on the image you have selected.
    2. In the Size(GiB) field, enter the required size for the bootable device.
    3. From the Volume Type list, select the volume type. You can select General Purpose SSD , Provisioned IOPS SSD , or EBS Magnetic HDD .
      For more information on the volume types, see AWS documentation.
    4. Optionally, select the Delete on termination check box to delete the storage when the instance is terminated.
    You can also add secondary storage by clicking the + icon next to the Storage section.
  13. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
  14. On the Package tab, enter the package name in the Package Name field.
    1. Click one of the following:
      • Configure install : To create a task to install a package.
      • Configure uninstall : To create a task to uninstall a package.
    2. In the Blueprint Canvas, click + Task .
      Figure. Package

    3. Enter the task name in the Task Name field.
  15. To create a task, select the type of task from the Type list.
    The available options are:
    • Execute : To create the execute type of task, see Creating an Execute Task.
    • Set Variable : To create the set variable type of task, see Creating a Set Variable Task.
    • HTTP : To create the HTTP Task type, see Creating an HTTP Task.
    • Delay : To create the Delay task type, see Creating a Delay Task.
    For Execute and Set Variable tasks, you can use endpoints as targets for script execution. For more information, see Endpoints Overview.
  16. To reuse a task from the task library, do the following.
    1. Click Browse Library .
    2. Select the task from the task library.
      When you select a task, the task inspector panel displays the selected task details.
    3. Click Select .
    4. Optionally, edit the variable or macro names as per your blueprint.
      The variable or macro names used in the task can be generic; you can update them with the corresponding variable or macro names as per your blueprint.
    5. To update the variable or macro names, click Apply .
    6. To copy the task, click Copy .
  17. On the Service tab, configure the following.
    1. In the Deployment Config pane, enter the number of default, minimum and maximum service replicas that you want to create in the Default , Min , and Max fields respectively.
      The Min and Max fields define the scale-in and scale-out actions. The scale-in and scale-out actions cannot scale beyond the defined minimum and maximum numbers. The default field defines the number of default replicas the service creates.
      For example, if the application requires an array of three VMs, define the minimum number as three.
    2. In the Variables section, click the + icon to add variable types to your blueprint.
    3. In the Name field, enter a name for the variable.
    4. From the Data Type list, select one of the base type variables or import a custom library variable type.
    5. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    6. If you have imported a custom variable type, all the variable parameters are auto filled.
    7. Select the Secret check box if you want to hide the value of the variable.
  18. Add credentials to the blueprint. For more information, see Adding Credentials.
  19. Click Save on the Blueprint Editor page.
    The blueprint is saved and listed on the Blueprints page.
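
The AWS tags entered in step 11 are ordinary EC2 key/value pairs. For comparison, this is roughly how the same tags would be attached at launch with boto3; the tag names, AMI ID, and instance type below are placeholders, and the actual API call is shown commented out because it requires real AWS credentials:

```python
# Placeholder tags mirroring the AWS Tags field (step 11).
tag_specifications = [{
    "ResourceType": "instance",
    "Tags": [
        {"Key": "environment", "Value": "dev"},
        {"Key": "owner", "Value": "calm"},
    ],
}]

# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(
#     ImageId="ami-0123456789abcdef0",  # placeholder AMI
#     InstanceType="t3.medium",
#     MinCount=1,
#     MaxCount=1,
#     TagSpecifications=tag_specifications,
# )

print(len(tag_specifications[0]["Tags"]))
```

Tags applied this way behave the same as tags entered in the Calm blueprint: they are searchable and filterable in the AWS console.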

What to do next

Define the service dependencies. See Setting up the Service Dependencies.

Configuring VMware VM, Package, and Service

You can define the underlying infrastructure of the VM, application details, and actions that are carried out on a blueprint until the termination of the application on a VMware platform.

Before you begin

  • Ensure that you complete the pre-configuration requirements. See Creating a Multi-VM Blueprint.
  • Ensure that you have created a project and configured an environment for VMware. For more information, see Creating a Project and Configuring VMware Environment.
  • You need licenses for both compute and storage Distributed Resource Scheduler (DRS) to use the VMware DRS mode.
  • Ensure that storage DRS is enabled and set to fully automated in vCenter.

Procedure

  1. Perform the basic blueprint setup, and add a service. For more information, see Adding a Service.
    Figure. Blueprint Configuration

  2. Enter a name of the service in the Service Name field.
  3. On the VM tab, enter the name of the VM in the Name field.
  4. Select VMware from the Account list.
    Note: The account options depend on the project you selected while setting up the blueprint.
  5. Select Windows or Linux from the Operating System list.
  6. (Optional) If you have configured an environment during the project creation, then click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list in the application profile. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  7. Select the Compute DRS Mode check box to enable load sharing and automatic VM placement.
    Distributed Resource Scheduler (DRS) is a utility that balances computing workloads with available resources in a virtualized environment. For more information about DRS mode, see the VMware documentation .
    • If you selected Compute DRS Mode , then select the cluster where you want to host your VM from the Cluster list.
    • If you have not selected Compute DRS Mode , then select the host name of the VM from the Host list.
  8. Do one of the following:
    • Select the VM Templates radio button and then select a template from the Template list.

      Templates allow you to create multiple virtual machines with the same characteristics, such as resources allocated to CPU and memory or the type of virtual hardware. Templates save time and avoid errors when configuring settings and other parameters to create VMs. The VM template retrieves the list options from the configured vCenter.

      Note:
      • Install VMware Tools on the Windows templates. For Linux VMs, install open-vm-tools or VMware Tools and configure the vmtoolsd service to start automatically.
      • Support for open-vm-tools is available. When using open-vm-tools , install Perl on the template.
      • Do not use a SysPrepped image as the Windows template image.
      • If you select a template that has an unsupported version of VMware Tools, a warning appears stating VMware tool or version is unsupported and could lead to VM issues .
      • You can also edit the NIC type when you use a template.

      For more information, refer to VMware KB articles.

    • Select the Content Library radio button, a content library in the Content Library list, and then select an OVF template or VM template from the content library.

      A content library stores and manages content (VMs, vApp templates, and other types of files) in the form of library items. A single library item can consist of one file or multiple files. For more information about the vCenter content library, see the VMware Documentation .

      Caution: Content Library support is currently a technical preview feature in Calm. Do not use any technical preview features in a production environment.
  9. If you want to use the storage DRS mode, then select the Storage DRS Mode check box and a datastore cluster from the Datastore Cluster list.
    The datastore clusters are referred to as storage pods in vCenter. A datastore cluster is a collection of datastores with shared resources and a shared management interface.
  10. If you do not want to use storage DRS mode, then do not select the Storage DRS Mode check box, and select a datastore from the Datastore list.
  11. In the VM Location field, specify the location of the folder in which the VM must be created when you deploy the blueprint. Ensure that you specify a valid folder name already created in your VMware account.
    To create a subfolder in the location you specified, select the Create a folder/directory structure here check box and specify a folder name in the Folder/Directory Name field.
    Note: Calm gives preference to the VM location specified in the environment you select while launching an application. For example, you specify a subfolder structure as the VM location in the blueprint and the top-level folder in the environment. When you select this environment while launching your application, Calm considers the VM location you specified in the environment and creates the VM at the top-level folder.
    Select the Delete empty folder check box to delete the subfolder created within the specified location, in case the folder does not contain any VM resources. This option helps you to keep a clean folder structure.
  12. Enter the instance name of the VM in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  13. Under Controllers , click the + icon to add the type of controller.
    You can select either a SCSI or a SATA controller. You can add up to three SCSI and four SATA controllers.
  14. Under the Disks section, click the + icon to add vDisks and do the following:
    1. Select the device type from the Device Type list.
      You can select either CD-ROM or DISK .
    2. Select the adapter type from the Adapter Type list.
      You can select IDE for CD-ROM.
      You can select SCSI , IDE , or SATA for DISK.
    3. Enter the size of the disk in GiB.
    4. In the Location field, select the disk location.
    5. If you want to add a controller to the vDisk, select the type of controller in the Controller list to attach to the disk.
      Note: You can add either SCSI or SATA controllers. The available options depend on the adapter type.
    6. In the Disk mode list, select the type of the disk mode. Your options are:
      • Dependent : Dependent disk mode is the default disk mode for the vDisk.
      • Independent - Persistent : Disks in persistent mode behave like conventional disks on your physical computer. All data written to a disk in persistent mode is written permanently to the disk.
      • Independent - Nonpersistent : Changes to disks in nonpersistent mode are discarded when you shut down or reset the virtual machine. With nonpersistent mode, you can restart the virtual machine with a virtual disk in the same state every time. Changes to the disk are written to and read from a redo log file that is deleted when you shut down or reset.
    You can also mark the vDisks runtime editable so you can add, delete, or edit the vDisks while launching the blueprint. For more information about runtime editable attributes, see Runtime Variables Overview.
  15. Under the Tags section, select tags from the Category: Tag pairs field.
    You can assign tags to your VMs so you can view the objects associated with your VMs in your VMware account. For example, you can create a tag for a specific environment and assign the tag to multiple VMs. You can then view all the VMs that are associated with the tag.
  16. (Optional) If you want to customize the default OS properties of the VM, then click the Enable check box under VM Guest Customization and select a customization from the Predefined Guest Customization list.
  17. If you do not have any predefined customization available, select None .
  18. Select Cloud-init or Custom Spec .
  19. If you selected Cloud-init , enter or upload the script in the Script field.
  20. If you have selected Custom Spec , enter the details for the VM in the following fields:
    1. Enter the hostname in the Hostname field.
    2. Enter the domain in the Domain field.
    3. Select timezone from the Timezone list.
    4. Select the Hardware clock UTC check box to enable hardware clock UTC.
    5. Click the + icon to add network settings.
      To configure the network automatically using a DHCP server, select the Use DHCP check box and then skip to the DNS Settings section.
    6. Enter a name for the network configuration you are adding to the VM in the Setting name field.
      The setting name is the saved configuration of the network that you want to connect to your VM.
    7. Enter values in the IP Address , Subnet Mask , Default Gateway , and Alternative Gateway fields.
    8. Under the DNS Settings section, enter the DNS primary, DNS secondary, DNS tertiary, and DNS search path name.
  21. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
  22. On the Package tab, enter the package name in the Package Name field.
    1. Click one of the following:
      • Configure install : To create a task to install a package.
      • Configure uninstall : To create a task to uninstall a package.
    2. In the Blueprint Canvas, click + Task .
      Figure. Package

    3. Enter the task name in the Task Name field.
  23. To create a task, select the type of task from the Type list.
    The available options are:
    • Execute : To create the Execute type of task, see Creating an Execute Task.
    • Set Variable : To create the set variable type of task, see Creating a Set Variable Task.
    • HTTP : To create the HTTP task type, see Creating an HTTP Task.
    • Delay : To create the Delay task type, see Creating a Delay Task.
    For Execute and Set Variable tasks, you can use endpoints as targets for script execution. For more information, see Endpoints Overview.
  24. To reuse a task from the task library, do the following.
    1. Click Browse Library .
    2. Select the task from the task library.
      When you select a task, the task inspector panel displays the selected task details.
    3. Click Select .
    4. Optionally, edit the variable or macro names as per your blueprint.
      The variable or macro names used in the task can be generic; you can update them with the corresponding variable or macro names as per your blueprint.
    5. To update the variable or macro names, click Apply .
    6. To copy the task, click Copy .
  25. On the Service tab, configure the following.
    1. In the Deployment Config pane, enter the number of default, minimum, and maximum service replicas that you want to create in the Default , Min , and Max fields respectively.
      The Min and Max fields define the bounds for the scale-in and scale-out actions, which cannot scale beyond the defined minimum and maximum numbers. The Default field defines the number of replicas that the service creates initially.
      For example, if your service requires an array of three VMs, set the minimum number to three.
    2. In the Variables section, click the + icon to add variable types to your blueprint.
    3. In the Name field, enter a name for the variable.
    4. From the Data Type list, select one of the base variable types or import a custom library variable type.
    5. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    6. If you have imported a custom variable type, all the variable parameters are automatically filled.
    7. Select the Secret check box if you want to hide the value of the variable.
  26. Add credentials to the blueprint. For more information, see Adding Credentials.
  27. Click Save on the Blueprint Editor page.

What to do next

Define the service dependencies. See Setting up the Service Dependencies.
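The scale bounds described in the Deployment Config step can be sketched as a simple clamp. The helper below is a hypothetical illustration (not a Calm API): it shows how a requested replica count is constrained by the configured Min and Max values.

```python
def clamp_replicas(requested: int, minimum: int, maximum: int) -> int:
    """Constrain a requested replica count to the configured Min/Max bounds."""
    return max(minimum, min(maximum, requested))

# With Default=3, Min=3, Max=5: a scale-out to 7 is capped at 5,
# and a scale-in to 1 is raised back to the minimum of 3.
print(clamp_replicas(7, 3, 5))  # 5
print(clamp_replicas(1, 3, 5))  # 3
```

This is why a service that requires three VMs should set Min to three: scale-in can never reduce the replica count below that bound.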

Supported VMware Guest Tools Versions

To know the supported VMware Guest Tools versions, see the VMware Product Interoperability Matrices .

Configuring GCP VM, Package, and Service

You can define the underlying infrastructure of the VM, application details, and actions that are carried out on a blueprint until the termination of the application on a GCP platform.

Before you begin

  • Ensure that you complete the pre-configuration requirements. See Creating a Multi-VM Blueprint.
  • Ensure that you have created a project and configured an environment for GCP. For more information, see Creating a Project and Configuring GCP Environment.

Procedure

  1. Perform the basic blueprint setup, and add a service. For more information, see Adding a Service.
    Figure. Blueprint Configuration

  2. Enter a name of the service in the Service Name field.
  3. On the VM tab, enter the name of the VM in the Name field.
  4. Select a GCP account from the Account list.
    Note: The account options depend on the project you selected while setting up the blueprint.
  5. Select Windows or Linux from the Operating System list.
  6. (Optional) If you have configured an environment during the project creation, then click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list in the application profile. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  7. (Optional) Edit the VM name in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  8. Select a zone from the Zone list.
    A zone is a physical location where you can host the VM.
  9. Select the type of machine from the Machine type list.
    The machine types are available based on your zone. A machine type is a set of virtualized hardware resources available to a virtual machine (VM) instance, including the system memory size, virtual CPU (vCPU) count, and persistent disk limits. In Compute Engine, machine types are grouped and curated by families for different workloads.
  10. Under the DISKS section, click the + icon to add a disk.
    You can also mark the added vDisks runtime editable so you can add, delete, or edit the vDisks while launching the blueprint. For more information about runtime editable attributes, see Runtime Variables Overview.
  11. To use an existing disk configuration, select the Use existing disk check box, and then select the persistent disk from the Disk list.
  12. If you have not selected the Use existing disk check box, then do the following:
    1. Select the type of storage from the Storage Type list. The available options are as follows.
      • pd-balanced : Use this option as an alternative to SSD persistent disks with a balanced performance and cost.
      • pd-extreme : Use this option to use SSD drives for high-end database workloads. This option has higher maximum IOPS and throughput and allows you to provision IOPS and capacity separately.
      • pd-ssd : Use this option to use SSD drives as your persistent disk.
      • pd-standard : Use this option to use HDD drives as your persistent disk.
      The persistent disk types are durable network storage devices that your instances can access like physical disks in a desktop or a server. The data on each disk is distributed across several physical disks.
    2. Select the image source from the Source Image list.
      The images available for your selection are based on the selected zone.
    3. Enter the size of the disk in GB in the Size in GB field.
    4. To delete the disk configuration after the instance is deleted, select the Delete when instance is deleted check box under the Disks section.
  13. To add a blank disk, click the + icon under the Blank Disks section and configure the blank disk.
  14. To add networking details to the VM, click the + icon under the Networking section.
  15. To configure a public IP address, select the Associate Public IP Address check box and configure the following fields.
    1. Select the network from the Network list and the subnetwork from the Subnetwork list.
    2. Enter a name of the network in the Access configuration Name field and select the access configuration type from the Access configuration type list.
      These fields appear when you select the Associate Public IP Address check box.
  16. To configure a private IP address, clear the Associate Public IP Address check box and select the network and subnetwork.
  17. Under the SSH Key section, click the + icon and enter or upload the username key data in the Username field.
  18. Select Block project-wide SSH Keys to block the use of project-wide SSH keys on the VM.
  19. Under the Management section, do the following:
    1. Enter the metadata in the Metadata field.
    2. Select the network tags from the Network Tags list.
      Network tags are text attributes you can add to VM instances. These tags allow you to make firewall rules and routes applicable to specific VM instances.
    3. Enter the key-value pair in the Labels field.
      A label is a key-value pair that helps you organize the VMs created with GCP as the provider. You can attach a label to each resource, then filter the resources based on their labels.
  20. Under the API Access section, do the following:
    1. Specify the service account in the Service Account field.
    2. Under Scopes, select Default Access or Full Access .
  21. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
  22. On the Package tab, enter the package name in the Name field.
    1. Click one of the following:
      • Configure install : To create a task to install a package.
      • Configure uninstall : To create a task to uninstall a package.
    2. In the Blueprint Canvas, click + Task .
      Figure. Package

    3. Enter the task name in the Task Name field.
  23. To create a task, select the type of task from the Type list.
    The available options are:
    • Execute : To create the Execute task type, see Creating an Execute Task.
    • Set Variable : To create the Set Variable task type, see Creating a Set Variable Task.
    • HTTP : To create the HTTP Task type, see Creating an HTTP Task.
    • Delay : To create the Delay task type, see Creating a Delay Task.
    For Execute and Set Variable tasks, you can use endpoints as targets for script execution. For more information, see Endpoints Overview.
  24. To reuse a task from the task library, do the following.
    1. Click Browse Library .
    2. Select the task from the task library.
      When you select a task, the task inspector panel displays the selected task details.
    3. Click Select .
    4. Optionally, edit the variable or macro names as per your blueprint.
      The variable or macro names used in the task can be generic; update them with the corresponding variable or macro names as per your blueprint.
    5. To update the variable or macro names, click Apply .
    6. To copy the task, click Copy .
  25. On the Service tab, configure the following.
    1. In the Deployment Config pane, enter the number of default, minimum, and maximum service replicas that you want to create in the Default , Min , and Max fields respectively.
      The Min and Max fields define the bounds for the scale-in and scale-out actions, which cannot scale beyond the defined minimum and maximum numbers. The Default field defines the number of replicas that the service creates initially.
      For example, if your service requires an array of three VMs, set the minimum number to three.
    2. In the Variables section, click the + icon to add variable types to your blueprint.
    3. In the Name field, enter a name for the variable.
    4. From the Data Type list, select one of the base variable types or import a custom library variable type.
    5. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    6. If you have imported a custom variable type, all the variable parameters are automatically filled.
    7. Select the Secret check box if you want to hide the value of the variable.
  26. Add credentials to the blueprint. For more information, see Adding Credentials.
  27. Click Save on the Blueprint Editor page.
    The blueprint is saved and listed under the Blueprints tab.

What to do next

Define the service dependencies. See Setting up the Service Dependencies.
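Step 19 describes GCP labels as key-value pairs used to organize and filter VMs. The sketch below illustrates that filtering idea with plain Python dictionaries; the resource records and field names are hypothetical, not the GCP API.

```python
def filter_by_labels(resources, **labels):
    """Return resources whose labels include every given key-value pair."""
    return [r for r in resources
            if all(r.get("labels", {}).get(k) == v for k, v in labels.items())]

# Hypothetical inventory of VMs tagged with labels.
vms = [
    {"name": "web-1", "labels": {"env": "prod", "team": "frontend"}},
    {"name": "db-1",  "labels": {"env": "prod", "team": "data"}},
    {"name": "web-2", "labels": {"env": "dev",  "team": "frontend"}},
]
print([v["name"] for v in filter_by_labels(vms, env="prod")])  # ['web-1', 'db-1']
```

Attaching a consistent label such as env or team to every resource is what makes this kind of filtering possible later.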

Configuring Azure VM, Package, and Service

You can define the underlying infrastructure of the VM, application details, and actions that are carried out on a blueprint until the termination of the application on an Azure platform.

Before you begin

  • Ensure that you have configured the following entities in the Azure account.
    • Resource Group
    • Availability set
    • Network Security Group
    • Virtual Network
    • Vault Certificates
  • Ensure that you complete the pre-configuration requirements. See Creating a Multi-VM Blueprint.
  • Ensure that you have created a project and configured an environment for Azure. For more information, see Creating a Project and Configuring Azure Environment.

Procedure

  1. Perform the basic blueprint setup, and add a service. For more information, see Adding a Service.
    Figure. Blueprint Configuration

  2. Enter a name of the service in the Service Name field.
  3. Under the VM tab, enter the name of the VM in the Name field.
  4. Select Azure from the Account list.
    Note: The account options depend on the project you selected while creating the blueprint.
  5. Select Windows or Linux from the Operating System list.
  6. (Optional) If you have configured an environment during the project creation, then click Clone from environment to autofill the VM configuration details.
    The Clone from environment option appears only when you select a specific environment you configured for the account from the Environment list in the application profile. The option does not appear if you select All Project Accounts as your environment.
    You can also click the View Configuration option to review the configuration details before cloning the environment.
  7. Edit the VM name in the Instance Name field.
    This field is pre-populated with a macro as a suffix to ensure name uniqueness. The service provider uses this name as the VM name.
  8. Select a resource group from the Resource Group list or select the Create Resource Group check box to create a resource group.
    Each resource in Azure must belong to a resource group. A resource group is a logical construct that groups multiple resources so that you can manage them as a single entity. For example, you can create or delete, as a group, resources that share a similar life cycle, such as the resources for an n-tier application.

    The Resource Group list displays the resource groups that are associated with the subscriptions you selected in your Azure account. If you have not selected any subscriptions, Calm considers all the subscriptions available in the Azure service principal to display the resource groups. Each resource group in the list also displays the associated subscription.

  9. If you selected a resource group from the Resource Group list, then do the following:
    1. Select the geographical location of the datacenter from the Location list.
    2. Select Availability Sets or Availability Zones from the Availability Option list.
      You can then select an availability set or availability zone. An availability set is a logical grouping that keeps VM resources isolated from each other to provide high availability within an Azure datacenter. An availability zone lets you deploy your VM into a different datacenter within the same region.
    3. Select the hardware profile as per your hardware requirements from the Hardware Profile list.
      The number of data disks and NICs depends upon the selected hardware profile. For information about the sizes of Windows and Linux VMs, see Windows and Linux Documentation.
  10. If you selected the Create Resource Group check box to create a resource group, then do the following:
    1. Select a subscription associated with your Azure account in the Subscription field.
    2. Enter a unique name for the resource group in the Name field.
    3. Select the geographical location of the datacenter that you want to add to the resource group in the Location list.
    4. Under Tags , enter a key and value pair in the Key and Value fields respectively.
      Tags are key and value pairs that enable you to categorize resources. You can apply a tag to multiple resource groups.
    5. To automatically delete a resource group that has empty resources when you delete an application, select the Delete Empty Resource Group check box.
    6. Specify the location and hardware profile.
  11. Under the Secrets section, click the + icon and do the following:
    1. Enter a unique vault ID in the Vault ID field.
    2. Under Certificates , click the + icon.
    3. Enter the URL of the configuration certificate in the URL field.
      The URL of the certificate is uploaded to the key vault as a secret.
    4. Enter the certificate store in the Store field.
      • For Windows VMs, the Store field specifies the certificate store on the virtual machine to which the certificate is added. The specified certificate store is implicitly created in the LocalMachine account.

      • For Linux VMs, the certificate file is placed under the /var/lib/waagent directory. The file name format is <UppercaseThumbprint>.crt for the X509 certificate and <UppercaseThumbprint>.prv for the private key. Both of these files are .pem formatted.

  12. (For Windows) Select the Provision Windows Guest Agent check box.
    This option indicates whether to provision the virtual machine agent on the virtual machine. When this property is not specified in the request body, it defaults to true. This ensures that the VM agent is installed on the VM so that extensions can be added to the VM later.
  13. (For Windows) To indicate that the VM is enabled for automatic updates, select the Automatic OS Upgrades check box.
  14. Under the Additional Unattended Content section, click the + icon and do the following:
    1. Select a setting from the Setting Name list.
      You can select Auto Logon or First Logon Commands .
      Note: Guest customization is applicable only to images that allow or support guest customization.
    2. Enter or upload the xml content. See Sample Auto Logon and First Logon Scripts.
  15. Under the WinRM Listeners section, click the + icon and do the following:
    1. Select the protocol from the Protocol list.
      You can select HTTP or HTTPS .
    2. If you selected HTTPS, then select the certificate URL from the Certificate URL list.
  16. Under the Storage Profile section, select the Use Custom Image check box to use a custom VM image created in your subscription.
    You can then select a custom image or publisher-offer-SKU-version from the Custom Image list.
  17. Under the VM Image Details section, select an image type in the Source Image Type list.
    You can select Marketplace , Subscription , or Shared Image Gallery .
    • If you selected Marketplace , then specify the publisher, offer, SKU, and version for the image.
    • If you selected Subscription , then select the custom image.
    • If you selected Shared Image Gallery , then select the gallery and the image.
  18. Under the OS Disk Details section, do the following:
    1. Select the storage type from the Storage Type list.
      You can select Standard HDD , Standard SSD , or Premium SSD .
    2. Select a disk storage account from the Disk Storage list.
      This field is available only when the Use Custom Image check box is enabled.
    3. Select disk caching type from the Disk Caching Type list.
      You can select None , Read-only , or Read-write .
    4. Select disk create option from the Disk Create Option list.
      You can select Attach , Empty , or From Image .
  19. Under the Data Disk section, do the following:
    1. Select the storage type from the Storage Type list.
      You can select Standard HDD , Standard SSD , or Premium SSD .
    2. Select disk caching type from the Disk Caching Type list.
      You can select None , Read-only , or Read-write .
    3. Enter the size in GB in the Size field.
    4. Enter disk logical unit number (LUN) in the Disk LUN field.
      Note: The LUN value must be unique across the data disk list.
  20. Under the Network Profile section, add NICs as per your requirement and do the following for each NIC:
    1. Select a security group from the Security Group list.
    2. Select a virtual network from the Virtual Network list.
    3. Under Public IP Config , enter a name and select an allocation method.
    4. Under Private IP Config , select an allocation method.
      If you selected Static as the allocation method, then enter the private IP address in the IP Address field.
  21. Optionally, enter tags in the Tags field.
  22. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
  23. On the Package tab, enter the package name in the Name field.
    1. Click one of the following:
      • Configure install : To create a task to install a package.
      • Configure uninstall : To create a task to uninstall a package.
    2. In the Blueprint Canvas, click + Task .
      Figure. Package

    3. Enter the task name in the Task Name field.
  24. To create a task, select the type of task from the Type list.
    The available options are:
    • Execute : To create the Execute task type, see Creating an Execute Task.
    • Set Variable : To create the Set Variable task type, see Creating a Set Variable Task.
    • HTTP : To create the HTTP Task type, see Creating an HTTP Task.
    • Delay : To create the Delay task type, see Creating a Delay Task.
    For Execute and Set Variable tasks, you can use endpoints as targets for script execution. For more information, see Endpoints Overview.
  25. Select the script from the Script Type list.
    For shell, PowerShell, and eScript scripts, you can access the list of available macros by typing @@{ .
    Note: Azure library SDK support is available for eScripts.
  26. To reuse a task from the library, do the following.
    1. Click Browse Library .
    2. Select the task from the library.
      When you select a task, the task inspector panel displays the selected task details.
    3. Click Select .
    4. Optionally, edit the variable or macro names as per your blueprint.
      The variable or macro names used in the task can be generic; update them with the corresponding variable or macro names as per your blueprint.
    5. Click Apply to update the variable or macro names.
    6. Click Copy to copy the task.
  27. On the Service tab, configure the following.
    1. In the Deployment Config pane, enter the number of default, minimum, and maximum service replicas that you want to create in the Default , Min , and Max fields respectively.
      The Min and Max fields define the bounds for the scale-in and scale-out actions, which cannot scale beyond the defined minimum and maximum numbers. The Default field defines the number of replicas that the service creates initially.
      For example, if your service requires an array of three VMs, set the minimum number to three.
    2. In the Variables section, click the + icon to add variable types to your blueprint.
    3. In the Name field, enter a name for the variable.
    4. From the Data Type list, select one of the base variable types or import a custom library variable type.
    5. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    6. If you have imported a custom variable type, all the variable parameters are automatically filled.
    7. Select the Secret check box if you want to hide the value of the variable.
    8. In the Port List pane, enter the name, protocol, and port number in the Name , Protocol , and Port fields.
  28. Add credentials to the blueprint. For more information, see Adding Credentials.
  29. Click Save on the Blueprint Editor page.
    The blueprint is saved and listed under the Blueprints tab.

What to do next

Define the service dependencies. See Setting up the Service Dependencies.
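Step 25 notes that scripts can reference macros via the @@{ syntax. The sketch below illustrates how a macro reference such as @@{app_name}@@ could be expanded with runtime values; it is an illustrative stand-in, not Calm's actual substitution engine, and the task string and variable names are hypothetical.

```python
import re

def expand_macros(script: str, values: dict) -> str:
    """Replace @@{name}@@ references with runtime values; leave unknown macros intact."""
    return re.sub(r"@@\{(\w+)\}@@",
                  lambda m: str(values.get(m.group(1), m.group(0))),
                  script)

# A hypothetical shell task body using a blueprint variable.
task = "mkdir -p /opt/@@{app_name}@@ && echo deployed @@{app_name}@@"
print(expand_macros(task, {"app_name": "calmdemo"}))
# mkdir -p /opt/calmdemo && echo deployed calmdemo
```

This is also why the task-library steps ask you to rename generic variables to match your blueprint: a macro that has no matching variable is never expanded.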

Azure Troubleshooting

The following information helps you troubleshoot Azure failures.

  • If saving or verifying the settings fails, check the logs at the following location.

    /home/calm/log/styx.log

  • If saving an application blueprint fails, check the logs at the following locations.
    • /home/calm/log/hercules_*.log
    • /home/calm/log/styx.log
  • If provisioning fails, check the logs at the following locations.
    • Task logs on UI
    • /home/epsilon/log/indra_*.log [Signature: Encountered ERROR]
    • /home/epsilon/log/durga_*.log
    • /home/epsilon/log/arjun_*.log
    • /home/calm/log/hercules_*.log
    • /home/calm/log/styx.log
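The indra logs above are flagged with the signature "Encountered ERROR". A small helper like the following (a hypothetical sketch, not a Nutanix tool) shows how you might pull the matching lines out of a log you have already read:

```python
def find_signature_lines(log_text: str, signature: str = "Encountered ERROR"):
    """Return (line_number, line) pairs for lines containing the signature."""
    return [(n, line) for n, line in enumerate(log_text.splitlines(), start=1)
            if signature in line]

# Sample log content standing in for /home/epsilon/log/indra_*.log output.
sample = "INFO starting deploy\nERROR Encountered ERROR in provisioning\nINFO retry"
print(find_signature_lines(sample))  # [(2, 'ERROR Encountered ERROR in provisioning')]
```

In practice you would read each indra_*.log file (for example with glob on the path listed above) and pass its contents to the helper.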

Configuring Xi VM, Package, and Service

You can define the underlying infrastructure of the VM, application details, and actions that are carried out on a blueprint until the termination of the application on Xi cloud provider.

Before you begin

Ensure that you have configured DNS in the VPC section of the Xi Cloud dashboard in Prism Central.

Procedure

  1. Perform the basic blueprint setup, and add a service. For more information, see Adding a Service.
  2. Enter a name of the service in the Service Name field.
  3. On the VM tab, enter the name of the VM in the Name field.
  4. Select Xi from the Account list.
    Note: The account options depend on the project you selected while setting up the blueprint.
    The Availability Zone field is automatically filled.
  5. Select Windows or Linux from the Operating System list.
  6. Under VM Configuration , enter the instance name of the VM in the VM Name field. This field is pre-populated with a macro as a suffix to ensure name uniqueness.
    The service provider uses this name as the VM name.
  7. In the vCPUs field, enter the required number of vCPUs for the VM.
  8. In the Cores per vCPU field, enter the number of cores per vCPU for the VM.
  9. In the Memory field, enter the required memory in GiB for the VM.
  10. (Optional) If you want to customize the default OS properties of the VM, select the Guest Customization check box and do the following.
    Guest customization allows you to upload custom scripts to modify the properties of the OS of the VM.
    1. Select Cloud-init or SysPrep type and enter the script in the Script panel.
      Note:
      • Select Cloud-init for Linux and Sysprep for Windows. For Sysprep, you must use a double backslash for all escape characters. For example, \\v.
      • You can also upload the script by clicking the upload icon.
    2. For a Sysprep script, select the Join a Domain check box and configure the following fields.
      • Domain Name : Enter the domain name of the Windows server.
      • Credentials : Select a credential for the Windows VM from the Credentials list. You can also create new credentials. For more information, see step 22.
      • DNS IP : Enter the IP address of the DNS server.
      • DNS Search Path : Enter the DNS search path for the domain.
  11. To add a vDisk, click + vDisks and do the following.
    1. Select the device type from the Device Type list.
      You can select CD-ROM or Disk .
    2. Select the device bus from the Device Bus list.
      You can select IDE or SATA for CD-ROM .
      You can select SCSI , IDE , PCI , or SATA for Disk .
    3. Enter the size of the vDisk in GiB.
    You can also mark the vDisks as runtime editable. If you have marked the vDisk attribute as runtime editable, you can add, delete, or edit vDisks while launching the blueprint. For more information about runtime editable attributes, see Runtime Variables Overview.
  12. Select categories from the Categories list.
    Note: The Categories field allows you to tag your VM with a category defined in Prism Central. The list options depend on the Prism Central configuration.
  13. Under Network , select the VPC from the VPC list. For more information about VPC, see Xi Infrastructure Service Administration Guide.
  14. Configure the connection in your blueprint. For more information, see Configuring Check Log-In.
  15. Under the Package tab, enter the package name in the Name field.
    1. Click one of the following:
      • Configure install : To create a task to install a package.
      • Configure uninstall : To create a task to uninstall a package.
    2. In the Blueprint Canvas, click + Task .
      Figure. Package

    3. Enter the task name in the Task Name field.
  16. If you want to create a task, select the type of task from the Type list.
    The available options are:
    • Execute : To create the Execute task type, see Creating an Execute Task.
    • Set Variable : To create the Set Variable task type, see Creating a Set Variable Task.
    • HTTP : To create the HTTP Task type, see Creating an HTTP Task.
    • Delay : To create the Delay task type, see Creating a Delay Task.
  17. (Optional) To reuse a task from the task library, do the following.
    1. Click Browse Library .
    2. Select the task from the task library.
      When you select a task, the task inspector panel displays the selected task details.
    3. Click Select .
    4. (Optional) Edit the variable or macro names as per your blueprint.
      The variable or macro names used in the task can be generic; update them with the corresponding variable or macro names as per your blueprint.
    5. To update the variable or macro names, click Apply .
    6. To copy the task, click Copy .
  18. On the Service tab, configure the following.
    1. In the Deployment Config pane, enter the number of default, minimum, and maximum service replicas that you want to create in the Default , Min , and Max fields respectively.
      The Min and Max fields define the bounds for the scale-in and scale-out actions, which cannot scale beyond the defined minimum and maximum numbers. The Default field defines the number of replicas that the service creates initially.
      For example, if your service requires an array of three VMs, set the minimum number to three.
    2. In the Variables section, click the + icon to add variable types to your blueprint.
    3. In the Name field, enter a name for the variable.
    4. From the Data Type list, select one of the base variable types or import a custom library variable type.
    5. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    6. If you have imported a custom variable type, all the variable parameters are automatically filled.
    7. Select the Secret check box if you want to hide the value of the variable.
  19. Click Save on the Blueprint Editor page.
    The blueprint is saved and listed under the Blueprints tab.
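Step 10 above notes that Sysprep scripts must use a double backslash for every escape character (for example, \\v). A pre-processing helper like the following (a hypothetical sketch, not part of Calm) shows what that doubling looks like before pasting a script into the Script panel:

```python
def escape_for_sysprep(script: str) -> str:
    """Double every backslash so sequences like \\v reach Sysprep intact.

    Hypothetical helper: assumes every backslash in the input should be
    doubled, per the guest-customization note in this guide.
    """
    return script.replace("\\", "\\\\")

raw = r"C:\Users\Admin \v"       # raw string: single backslashes
print(escape_for_sysprep(raw))   # C:\\Users\\Admin \\v
```

Run your Sysprep script through such a step (or edit it by hand) before uploading it in the Guest Customization panel.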

Configuring Kubernetes Deployment, Containers, and Service

Perform the following procedure to configure Kubernetes Deployment, Containers, and Service.

Before you begin

  • Ensure that you have completed the pre-configuration requirements. See Creating a Multi-VM Blueprint.
  • Ensure that the selected project includes Kubernetes, GCP with GKE enabled, or both.
  • See the Kubernetes Documentation for detailed information about the Kubernetes attributes and configuration.

Procedure

  1. To add a service to the blueprint, see Adding a Service.
  2. To add a Pod, click the + icon next to Pod .

    A Pod is the basic execution unit of a Kubernetes application and the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your cluster.

    The Pod service inspector panel appears.
  3. Enter a name of the pod in the Pod Name field.
  4. Under the Deployment tab, select the account from the Account list. All the accounts added to the project are available for selection.
  5. Optionally, edit the Calm deployment name in the Calm Deployment Name field.
    This field is auto-populated.
  6. Optionally, edit the K8s deployment name in the K8s Deployment Name field.
    This field is automatically populated.
  7. Enter namespace in the Namespace field.
    A namespace is a Kubernetes construct for environments with many users spread across multiple teams or projects.
  8. Enter the number of replicas in the Replica field.
  9. Optionally, enter annotations in the Annotations field.
    You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects.
  10. Enter the selector in the Selectors field.
    The selector field defines how the Deployment finds the pods to manage.
  11. Enter the label in the Label field.

    Labels are key/value pairs that are attached to objects, such as pods. You can use Labels to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. You can also use Labels to organize and to select subsets of objects. You can attach Labels to objects either at the creation time or later. Each object can have a set of key/value labels defined. Each key must be unique for a given object.

  12. Optionally, you can edit the pod name in the K8s Pod Name field.
    This field is auto-populated.
  13. Enter the image pull secrets in the Image Pull Secrets field.
    You can provide the list of secret names (pre-configured in a Kubernetes cluster by using the Kubernetes Docker secret object) to be used by the Kubernetes cluster to pull container images from registries that require authentication.
  14. Select DNS policy from the DNS Policy list.
  15. Under the Containers tab, optionally edit the Calm service name in the Calm Service Name field.
  16. Optionally, you can edit the K8s service name in the K8s Service Name field.
    This field is auto-populated.
  17. Enter arguments for the container in the Args field.
  18. Enter Docker image in the Image field.
  19. Select a value from the Image Pull Policy list.
    You can select Never , Always , or IfNotPresent (default).
  20. Under Pre Stop Lifecycle , select an action.
    You can select None (default), Exec , HTTP Get Action , or TCP Socket .
    This hook is called immediately before a container is terminated. It is blocking (synchronous), so it must complete before the call to delete the container can be sent. No parameters are passed to the handler.
  21. Under Post Start Lifecycle , select an action.
    You can select None (default) or Exec or HTTP Get Action or TCP Socket .
    This hook runs immediately after a container is created. However, there is no guarantee that the hook runs before the container ENTRYPOINT. No parameters are passed to the handler.
  22. Under Container Port , enter the port number in the Port field.
    1. Enter name of the port in the Name field.
    2. Select protocol from the Protocol list.
      You can either select TCP or UDP .
  23. Under Readiness Probe , enter command in the Command field.
    The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A pod is ready after all of its Containers are ready. One use of this signal is to control the pods used as backends for Services. When a pod is not ready, it is removed from Service load balancers.
  24. Under Resource Limit , enter the cores per CPU in the CPU field.
    When you specify a pod, you can optionally specify how much CPU and memory (RAM) each Container needs. CPU and memory are each a resource type. A resource type has a base unit. You can specify CPU in units of cores and memory in bytes.
    1. Enter the bytes of memory in the Memory field.
  25. Under Resource Request , enter the termination message path in the Termination Message Path field.
    Termination messages provide a way for containers to write information about fatal events to a location where you can easily retrieve and surface these events by tools like dashboards and monitoring software.
  26. Under Service tab, optionally you can edit the Calm Published Service Name field.
  27. Optionally, you can edit the K8s Service Name field.
  28. Select a service type from the Service Type list. You can select one of the following.
    • ClusterIP : Exposes the service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.
    • NodePort : Exposes the service on each node's IP at a static port (the NodePort ). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You can contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort> .
    • LoadBalancer : Exposes the service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
  29. Enter namespace in the Namespace field.
    You can use Namespaces in environments with many users spread across multiple teams, or projects.
  30. Enter label in the Label field.

    Labels are key/value pairs that are attached to objects, such as pods. You can use Labels to specify identifying attributes of objects that are meaningful and relevant, but do not directly imply semantics to the core system. You can also use Labels to organize and select subsets of objects. You can attach Labels to objects at creation time and add or modify at any time. Each object can have a set of key/value labels defined. Each key must be unique for a given object.

  31. Enter selector in the Selectors field.
    The selector field defines how the Deployment finds which pods to manage.
  32. Under Port List , enter the port name in the Port Name field.
    1. Enter node port in the Node Port field.
    2. Enter port number in the Port field.
    3. Select protocol from the Protocol list.
      You can either select TCP or UDP .
    4. Enter the target port in the Target Port field.
  33. To upload the POD specification file in .JSON or .YAML format from your local machine, click the icon next to the Pod Name field and upload the specification file.
    You can also download the POD specification file in .JSON or .YAML format.
  34. To edit the uploaded POD specification, click the Spec Editor toggle button and click Edit .
    The Script Editor page is displayed. You can edit the specification file in .YAML or .JSON format.
  35. To save the edited POD specification file, click Done .
  36. Click Save .
    The blueprint is saved and listed on the Blueprints tab.
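The Deployment and Service fields in the steps above correspond directly to entries in a Kubernetes manifest. The following Python sketch shows how the Label, Selectors, and Port List fields fit together, and why a Service routes only to pods whose labels satisfy every selector key/value pair. All names and port numbers here are hypothetical placeholders, not values from this guide.

```python
# Minimal sketch of a Kubernetes Service manifest built from the fields above.
# The app name and port numbers are hypothetical placeholders.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "namespace": "default",                 # Namespace field
        "labels": {"app": "web-frontend"},      # Label field
        "annotations": {"owner": "team-a"},     # Annotations field
    },
    "spec": {
        "type": "NodePort",                     # Service Type list
        "selector": {"app": "web-frontend"},    # Selectors field
        "ports": [{
            "name": "http",                     # Port Name field
            "port": 80,                         # Port field
            "targetPort": 8080,                 # Target Port field
            "nodePort": 30080,                  # Node Port field
            "protocol": "TCP",                  # Protocol list
        }],
    },
}

def selector_matches(selector, pod_labels):
    """A Service routes to a pod only if every selector key/value pair
    is present in the pod's labels (extra pod labels are allowed)."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

print(selector_matches(service["spec"]["selector"],
                       {"app": "web-frontend", "tier": "ui"}))  # True
print(selector_matches(service["spec"]["selector"],
                       {"app": "db"}))                          # False
```

The same matching rule applies to the Deployment's Selectors field in step 10: the selector decides which pods the Deployment manages.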

What to do next

Define the service dependencies. See Setting up the Service Dependencies.

Setting up the Service Dependencies

Dependencies are used to define the order in which tasks must get executed. Perform the following procedure to set up the service dependency.

Before you begin

  • Ensure that more than one service is available. See Adding a Service.
  • Ensure that you have completed the pre-configuration requirements. See Creating a Multi-VM Blueprint.

Procedure

  1. Select the service.
  2. Select the dependency icon and drag to the service on which you want to create the dependency.
    Figure. Create Dependency

What to do next

Configure the application profile. See Adding and Configuring an Application Profile.

Adding and Configuring an Application Profile

An application profile provides different combinations of the service, package, and VM while configuring a blueprint. You configure application profiles and use them while launching a blueprint.

Before you begin

Ensure that you have completed the pre-configuration requirements. See Creating a Multi-VM Blueprint.

Procedure

  1. To create an application profile, click the + icon next to Application Profile in the Overview Panel.
    Figure. Application Profile

  2. In the Inspector Panel, enter the name of the application profile in the Application Profile Name field.
  3. Optionally, select an environment for the application profile from the Environment list.
    The environment available for the selection in the Environment list depends on the project you selected in the Blueprint Setup window. If you selected a default environment while configuring your environments for the project, the default environment automatically appears in the Environment list. You can select a different environment if required.
  4. Click the + icon next to Variables .
  5. Enter a name for the variable in the Name field.
  6. Select a data type from the Data Type list. You can select one of the following data types:
    • String
    • Integer
    • Multi-line string
    • Date
    • Time
    • Date Time
  7. Enter a value for the selected data type in the Value field.
    You can select the Secret check box to hide the variable value.
  8. Click Show Additional Options .
  9. In the Input Type field, select one of the following input types:
    • Simple: Use this option for default value.
    • Predefined: Use this option to assign static values.
    • eScript: Use this option to attach a script that runs to retrieve values dynamically at runtime. The script can return a single value or multiple values depending on the selected base data type.
    • HTTP: Use this option to retrieve values dynamically from the defined HTTP end point. Result is processed and assigned to the variable based on the selected base data type.
  10. If you have selected Simple , enter the value for the variable in the Value field.
  11. If you have selected Predefined , enter the value for the variable in the Option field.
    1. To add multiple values for the variable, click + Add Option , and enter values in the Option field.
      Note: To make any value as default, select the Default radio button for the option.
  12. If you have selected eScript , enter the eScript in the field.
    You can also upload the script from the library or from the computer by clicking the upload icon.
    You can also publish the script to the library by clicking the publish button.
    Note:
    • You cannot add macros to eScripts.
    • If you have selected the Multiple Input (Array) check box with input type as eScript, ensure that the script returns a comma-separated list of values. For example, CentOS, Ubuntu, Windows.
  13. If you have selected HTTP , configure the following fields.
    • Request URL : In the Request URL field, enter the URL of the server that you want to run the methods on.
    • Request Method : In the Request Method list, select one of the following request methods. The available options are GET, PUT, POST, and DELETE.
    • Request Body : In the Request Body field, enter the PUT request. You can also upload the PUT request by clicking the upload icon.
    • Content Type : In the Content Type list, select the type of the output format. The available options are XML , JSON, and HTML.
    • Connection Timeout (sec) : In the Connection Timeout (sec) field, enter the timeout interval in seconds.
    • Authentication : Optionally, if you have selected authentication type as BASIC, enter the user name and the password in the User name and Password fields respectively.
    • SSL Certificate Verification : If you want to verify the SSL certificate for the task, click the SSL Certificate Verification field.
    • Retry Count : Enter the number of attempts the system performs to create a task after each failure. By default, the retry count is zero. It implies that the task creation procedure stops after the first attempt.
    • Retry Interval : Enter the time interval in seconds for each retry if the task fails.
    • Headers : In the Header area, enter the HTTP header key and value in the Key and Value fields respectively. If you want to publish the HTTP header key and value pair as secret, click the Secrets fields.
    • Response Code : Enter the response code for the selected response status.
    • Response Status : Select either Success or Failure as the response status for the task.
    • Set Response Path for Variable : Enter the variables from the specified response path. For JSON, the path format is $.x.y ; for XML, it is //x/y . For example, suppose the response path for the variable is $.[*].display and the response is the following:
      [
          {
              "display": "HTML Tutorial",
              "url": "https://www.w3schools.com/html/default.asp"
          },
          {
              "display": "CSS Tutorial",
              "url": "https://www.w3schools.com/css/default.asp"
          },
          {
              "display": "JavaScript Tutorial",
              "url": "https://www.w3schools.com/js/default.asp"
          },
          {
              "display": "jQuery Tutorial",
              "url": "https://www.w3schools.com/jquery/default.asp"
          },
          {
              "display": "SQL Tutorial",
              "url": "https://www.w3schools.com/sql/default.asp"
          },
          {
              "display": "PHP Tutorial",
              "url": "https://www.w3schools.com/php/default.asp"
          },
          {
              "display": "XML Tutorial",
              "url": "https://www.w3schools.com/xml/default.asp"
          }
      ]
      Then, at launch time, the list options are ["HTML Tutorial","CSS Tutorial","JavaScript Tutorial","jQuery Tutorial","SQL Tutorial","PHP Tutorial","XML Tutorial"].
  14. Optionally, enter a label and description for the variable.
  15. Optionally, do the following:
    • Mark this variable private : Select this to make the variable private. Private variables are not shown at launch or in the application.
    • Mark this variable mandatory : Select this to make the variable a requisite for application launch.
    • Validate with Regular Expression : Select this if you want to test the Regex values. Click Test Regex , provide the value for the Regex, and test or save the Regex. You can enter Regex values in PCRE format. For more details, see http://pcre.org/.
  16. Click Save on the Blueprint Editor page.
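The response-path behavior in step 13 can be illustrated in plain Python. This is only an illustration of what a path like $.[*].display evaluates to against the sample response above; it is not Calm's actual path evaluator.

```python
# Illustration of the response path $.[*].display: it collects the
# "display" value of every element in a JSON array response.
import json

response_body = json.dumps([
    {"display": "HTML Tutorial", "url": "https://www.w3schools.com/html/default.asp"},
    {"display": "CSS Tutorial", "url": "https://www.w3schools.com/css/default.asp"},
])

def eval_display_path(body):
    """Evaluate the specific path $.[*].display: each array element's
    'display' field becomes one option in the launch-time list."""
    return [item["display"] for item in json.loads(body)]

print(eval_display_path(response_body))  # ['HTML Tutorial', 'CSS Tutorial']
```

Applied to the full seven-element response shown in step 13, this yields the seven launch-time list options.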

What to do next

After you add the application profile and its variables, you can create actions in the profile. For more information, see Adding an Action to a Multi-VM Blueprint.

Blueprint Configurations in Calm

Blueprint configuration involves adding tasks, actions, snapshot and restore configurations, and VM update configurations.

Configuring a Blueprint

Perform the following procedure to configure a blueprint.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page displays the list of all the available blueprints.
  2. Click the blueprint that you want to configure.
    The blueprint editor page is displayed.
  3. Click Configuration .
    The Blueprint Configuration window is displayed.
  4. In the Blueprint Name field, enter a name for the blueprint.
  5. In the Blueprint Description field, enter a brief description of the blueprint.
  6. Click + next to the Downloadable Image Configuration field and configure the following:
    1. In the Package Name field, enter the name of the package.
    2. In the Description field, enter a brief description of the package.
    3. In the Image Name field, enter the name of the image.
    4. In the Image Type list, select the type of image.
    5. In the Architecture list, select the architecture.
    6. In the Source URI field, enter the source URI to download the image.
  7. In the Product Name field, enter the name of the product.
  8. In the Product Version field, enter the version of the product.
  9. Click one of the following:
    • To save the configuration, click Save .
    • To go back to the previous screen, click Back .

Adding Credentials

Credentials are used to authenticate a user to access various services in Calm. Calm supports static and dynamic credentials with key-based and password-based authentication methods.

Procedure

  1. To add a credential, do one of the following:
    • To add a credential in a single-VM blueprint, click Add/Edit Credentials on the Advanced Options tab and then click + Add Credentials .
    • To add a credential in a multi-VM blueprint or a brownfield application, click Credentials on the Blueprint Editor page and then click Credentials + .
    • To add a credential in a task, click Add New Credential in the Credential list.
  2. In the Name field, type a name for the credential.
  3. Under the Type section, select the type of credential that you want to add.
    • Static : Credentials store keys and passwords in the credential objects that are contained in the blueprints.
    • Dynamic : Credentials fetch keys and passwords from an external credential store that you integrate with Calm as the credential provider.
  4. In the Username field, type the user name.
    For dynamic credentials, specify the @@{username}@@ that you defined while configuring the credential provider.
    Note: A dynamic credential provider definition requires username and secret. The secret variable is defined by default when you configure your credential provider. However, you must configure a runbook in the dynamic credential provider definition for the username variable before you use the variable in different entities.
  5. Select either Password or SSH Private Key as the secret type.
  6. Do one of the following to configure the secret type.
    • If you selected Static as the credential type and Password as the secret type, then type the password in the Password field.
    • If you selected Static as the credential type and SSH Private Key as the secret type, then enter or upload the key in the SSH Private Key field.
    • If you selected Dynamic as the credential type and Password or SSH Private Key as the secret type, then select a credential provider in the Provider field. After you select the provider, verify or edit the attributes defined for the credential provider.
    If the private key is password protected, click +Add Passphrase to provide the passphrase. For dynamic credentials, you must configure a runbook in the dynamic credential provider definition for the passphrase variable and then use the @@{passphrase}@@ variable.
    The type of SSH key supported is RSA. For information on how to generate a private key, see Generating SSH Key on a Linux VM or Generating SSH Key on a Windows VM.
  7. If you want this credential as your default credential, select the Use as default check box.
  8. Click Done to add the credential.

Configuring Check Log-In

You configure a check log-in task to check whether you are able to SSH into the VM you create. Perform the following steps to configure check log-in.

Procedure

  1. Under Connection , select the Check log-in upon create check box to check the log on status after creating the VM.
  2. In the Credential list, select Add New Credential to add a new credential and do the following:
    1. Enter a name for the credential in the Name field.
    2. Select the type of credential you want to add under the Type section. Your options are:
      • Static : Credentials store keys and passwords in the credential objects that are contained in the blueprints.
      • Dynamic : Credentials fetch keys and passwords from an external credential store that you integrate with Calm as the credential provider.
    3. Enter the user name in the Username field.
      For dynamic credentials, specify the @@{username}@@ that you defined while configuring the credential provider.
      Note: A dynamic credential provider definition requires username and secret. The secret variable is defined by default when you configure your credential provider. However, you must configure a runbook in the dynamic credential provider definition for the username variable to use in different entities.
    4. Select either Password or SSH Private Key as the secret type.
    5. Do one of the following to configure the secret type.
      • If you selected Static as the credential type and Password as the secret type, then type the password in the Password field.
      • If you selected Static as the credential type and SSH Private Key as the secret type, then enter or upload the key in the SSH Private Key field.
      • If you selected Dynamic as the credential type and Password or SSH Private Key as the secret type, then select a credential provider in the Provider field. After you select the provider, verify or edit the attributes defined for the credential provider.
      If the private key is password protected, click +Add Passphrase to provide the passphrase. For dynamic credentials, you must configure a runbook in the dynamic credential provider definition for the passphrase variable and then use the @@{passphrase}@@ variable.
      The type of SSH key supported is RSA. For information on how to generate a private key, see Generating SSH Key on a Linux VM or Generating SSH Key on a Windows VM.
    6. If you want this credential as your default credential, select the Use as default check box.
    7. Click Done .
  3. Select address from the Address list.

    You can either select the public IP address or private IP address of a NIC.

  4. Select the connection from the Connection Type list.
    Select SSH for Linux or Windows (Powershell) for Windows.
    If you selected Windows (Powershell) , then select the protocol from the Connection Protocol list. You can select HTTP or HTTPS .
    The Connection Port field is automatically populated depending upon the selected Connection Type . For SSH, the connection port is 22 and for PowerShell the connection port is 5985 for HTTP and 5986 for HTTPS.
  5. Enter the delay in seconds in the Delay field.

    The delay timer defines how long the system waits after the VM starts before running the check log-in script. Configuring this delay allows the guest customization script, IP assignment, and other services to come up before the check log-in script runs.

  6. In the Retries field, enter the number of log-on attempts the system must perform after each log on failure.
  7. To save the blueprint, click Save .
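The Delay and Retries semantics described above can be sketched as follows. This is an illustrative sketch, not Calm's implementation; the `probe` parameter is a hypothetical hook added so the connection attempt can be stubbed out for testing.

```python
# Sketch of check log-in: wait for the configured delay, then try the
# connection port (22 for SSH, 5985/5986 for PowerShell) until it
# succeeds or the retry budget is exhausted.
import socket
import time

def check_login(host, port=22, delay=0, retries=3, probe=None):
    """Return True once a TCP connection to host:port succeeds, False
    after all attempts fail. `probe` is a hypothetical test hook; by
    default a real socket connection is attempted."""
    if probe is None:
        def probe(h, p):
            try:
                with socket.create_connection((h, p), timeout=5):
                    return True
            except OSError:
                return False
    time.sleep(delay)                   # Delay field: let guest services come up
    for _attempt in range(retries + 1): # Retries field: extra attempts after failure
        if probe(host, port):
            return True
    return False
```

With retries=3, up to four connection attempts are made in total: the initial attempt plus three retries.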

Tasks Overview

Tasks are part of your deployment creation process and are run one after the other. The tasks are used to perform a variety of operations such as setting up your environment, installing a set of software on your service, and so on.

You have the following basic types of tasks.

  • Execute: Used to run scripts on a VM. For more information, see Creating an Execute Task.
  • Set variable: Used to change variables in a task. For more information, see Creating a Set Variable Task.
  • HTTP: Used to query REST calls from a URL. For more information, see Creating an HTTP Task.
  • Delay: Used to set a time interval between two tasks or actions. For more information, see Creating a Delay Task.

Pre-create and Post-delete Tasks

Pre-create tasks are actions that are performed before a service is provisioned in a blueprint. For example, if you want to assign static IP addresses to your VMs by using IPAM service, you can create and run a pre-create task to receive the IP addresses before the service is provisioned. The pre-create task helps to restrict the broadcast traffic to receive the IP addresses for those VMs during the service provision.

Post-delete tasks are actions that are performed after you delete a service in a blueprint. For example, if you want to delete the assigned IP addresses from your VMs, you can add a post-delete task to delete the IP addresses after the service is deleted. The post-delete task helps to restrict the broadcast traffic when the IP addresses are released after the service is deleted.

Creating an Execute Task

You can create the Execute task type to run scripts on the VM.

About this task

Use this procedure to create an Execute task.

Procedure

  1. In the Script Type list, select one of the following:
    • Shell
    • EScript
    • Powershell
    For Shell, PowerShell, and eScript scripts, you can access the available list of macros by using @@{ .
    Note: You can use macro expansions for variables used for eScripts.
    For sample eScripts , see Supported eScript Modules and Functions. For sample Powershell scripts, see Sample Powershell Script.
  2. If you have selected the script type as Shell or Powershell , do the following:
    1. In the Endpoint list, select an endpoint for the task or click Add New Endpoint to create an endpoint. For more information about creating an endpoint, see Creating an Endpoint.
    2. In the Credential list, select an existing credential or click Add New Credential to add a credential. For more information about adding a credential, see Adding Credentials.
    3. Enter the install or uninstall script in the Script panel.
      For example, see Sample Scripts for Installing and Uninstalling Services.
      You can also upload a script by clicking the upload icon.
    4. If you want to test the script in Calm playground, click Test Script .
      Calm playground allows you to test a script by running and reviewing the output and making required changes.
      The Test Script page is displayed.
    5. On the Authorization tab, enter the following fields:
      • IP Address : Enter the IP address of the test machine.
      • Port : Enter the port number of the test machine.
      • Select tunnel to connect with (Optional) : Select the tunnel to get access to the VMs within the VPC.
      • Credential : Select the credential from the list.
      • User name : Enter a user name.
      • Password : Enter a password.
    6. Click Login and Test .
      The Test script page is displayed.
      You can also view your script in the Source Script field.
    7. (Optional) You can edit your script in the Source Script field.
    8. If you are using macros in your script, provide the variable values in the macro inspector panel and click Assign and Test .
    9. Click Test .
      The test result is displayed in the Output field.
    10. To go back to the previous screen, click Done .
  3. If you have selected the script type as EScript , do the following:
    1. In the Select tunnel to connect with list, select a tunnel to access VMs in the VPC. This step is optional.
    2. In the Script field, enter the script.
      You can also upload a script by clicking the upload icon.
    3. Click Test Script .
      The Test EScript page is displayed.
      You can also view your script in the Source Script field.
    4. If you are using macros in your script, provide the variable values in the macro inspector panel and click Assign and Test .
    5. To test the script, click Test .
      The test result is displayed in the Output field.
    6. To go back to the previous screen, click Done .
  4. To publish this task to the task library, click Publish to Library .
    The task is published to the Library and you can browse and use the task while creating a blueprint.
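The @@{...}@@ macro access mentioned in step 1 works by substitution: Calm replaces each macro with the variable's value before the script runs. The sketch below re-implements that idea in plain Python for illustration only; it is not Calm's actual macro engine, and the `address` and `port` variable names are hypothetical examples.

```python
# Illustration of @@{name}@@ macro expansion: every macro in the script
# text is replaced with the corresponding variable value before the
# script is executed on the VM.
import re

def expand_macros(script, variables):
    """Replace every @@{name}@@ with its value from `variables`."""
    return re.sub(r"@@\{(\w+)\}@@",
                  lambda m: str(variables[m.group(1)]), script)

script = 'echo "Deploying to @@{address}@@ on port @@{port}@@"'
print(expand_macros(script, {"address": "10.0.0.5", "port": 8080}))
# echo "Deploying to 10.0.0.5 on port 8080"
```

This is why typing @@{ in the script editor can offer a completion list: the macro names are known before the script ever runs.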

Creating a Set Variable Task

You can create a Set Variable task type to change variables in a blueprint.

About this task

Use this procedure to create a Set Variable task.

Procedure

  1. In the Script Type list, select one of the following:
    • Shell
    • Powershell
    • EScript
    For Shell, Powershell, and EScript scripts, you can access the available list of macros by using @@{ .
    For sample eScripts , see Supported eScript Modules and Functions. For sample Powershell scripts, see Sample Powershell Script.
  2. If you have selected the script type as Shell or Powershell , do the following:
    1. In the Endpoint list, select an endpoint for the task or click Add New Endpoint to create an endpoint. For more information about creating an endpoint, see Creating an Endpoint.
    2. In the Credential list, select an existing credential or click Add New Credential to add a credential. For more information about adding a credential, see Adding Credentials.
  3. If you have selected the script type as EScript , then select a tunnel to access VMs in the VPC in the Select tunnel to connect with list. This step is optional.
  4. Enter the install or uninstall script in the Script panel.
    For example, see Sample Scripts for Installing and Uninstalling Services.
  5. In the Output field, enter the name of the variable that you have defined through the set variable task.
    If you are setting multiple variables, enter the variable name for each of the variables by clicking the Output field.
  6. To publish this task to the task library, click Publish to Library .
    The task is published to the Library and you can browse and use the task while creating a blueprint.
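Conceptually, a Set Variable task runs a script and stores its result under the name declared in the Output field so that later tasks can reference it. The sketch below illustrates only that capture-under-a-name idea; the function and context names are hypothetical, and it does not show how Calm actually parses script output.

```python
# Illustration of the Set Variable concept: the script's result is
# stored in a shared context under the declared output variable name.
def run_set_variable_task(script_fn, output_name, context):
    """Run the script and record its result under `output_name` so
    subsequent tasks can read it from the shared context."""
    context[output_name] = script_fn()
    return context

ctx = run_set_variable_task(lambda: "db-01.example.com", "db_host", {})
print(ctx)  # {'db_host': 'db-01.example.com'}
```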

Creating an HTTP Task

You can create an HTTP task type to query REST calls from a URL. An HTTP task supports GET, PUT, POST, and DELETE methods.

About this task

Note: You can use macro expansions for variables used in an HTTP task.

Procedure

  1. In the Request URL field, enter the URL of the server that you want to run the methods on.
  2. In the Select tunnel to connect with list, select a tunnel to access VMs in the Virtual Private Cloud (VPC). This step is optional.
  3. In the Request Method list, select one of the following request methods.
    • GET : Use this method to retrieve data from a specified resource.
    • PUT : Use this method to send data to a server to update a resource. In the Request Body field, enter the PUT request. You can also upload the PUT request by clicking the upload icon.
    • POST : Use this method to send data to a server to create a resource. In the Request Body field, enter the POST request. You can also upload the POST request by clicking the upload icon.
    • DELETE : Use this method to send data to a server to delete a resource. In the Request Body field, enter the DELETE request. You can also upload the DELETE request by clicking the upload icon.
  4. In the Content Type list, select the type of the output format.
    The available options are XML , JSON , and HTML .
  5. In the Header area, enter the HTTP header key and value in the Key and Value fields respectively.
  6. If you want to publish the HTTP header key and value pair as secret, click the Secrets fields.
  7. In the Connection Time Out field, enter the timeout interval in seconds.
  8. Optionally, in the Authentication field, select Basic and do the following:
    1. In the Username field, enter the user name.
    2. In the Password field, enter the password.
  9. If you want to verify the SSL certificate for the task, click the SSL Certificate Verification field.
  10. If you want to use a proxy server as configured in the Prism Central, click the Use PC Proxy configuration .
    Note: Ensure that the Prism Central has the appropriate HTTP proxy configuration.
  11. In the Retry Count field, enter the number of attempts the system performs to create a task after each failure.
    By default, the retry count is zero. It implies that the task creation procedure stops after the first attempt.
  12. In the Retry Interval field, enter the time interval in seconds for each retry if the task fails.
  13. In the Expected Response Options area, enter the details for the following fields:
    • Response Status : Select either Success or Failure as the response status for the task.
    • Response Code : Enter the response code for the selected response status.
    Note: If the response code is not defined, then by default all the 2xx response codes are marked as success and any other response codes are marked as failure.
    • Set Variables from response : Enter the variables from the specified response path. For JSON, the path format is $.x.y ; for XML, it is //x/y . For more information about JSON path syntax, see http://jsonpath.com.
      Note: To retrieve the output format in HTML format, add a * in the syntax.
  14. If you want to test the script in Calm playground, click Test script .
    Calm playground allows you to test a script by running and reviewing the output and making required changes.
    The Test Script page is displayed. You can also edit the fields described from step 1–11.
  15. Click Test .
    The test result is displayed in the Output field .
  16. To publish this task to the task library, click Publish to Library .
    The task is published to the Library and you can browse and use the task while creating a blueprint.
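The retry and response-code behavior described in steps 7 and 11-13 can be modeled as follows. This is an illustrative sketch, not Calm's engine; `do_request` is a hypothetical callable standing in for the configured HTTP call.

```python
# Sketch of HTTP task retry semantics: attempt the call once, then retry
# up to Retry Count times with Retry Interval seconds between attempts.
# When no expected response code is defined, any 2xx counts as success.
import time

def run_http_task(do_request, retry_count=0, retry_interval=0,
                  expected_codes=None):
    """`do_request` performs the HTTP call and returns a status code."""
    for attempt in range(retry_count + 1):
        status = do_request()
        if expected_codes is None:
            if 200 <= status < 300:      # default: all 2xx are success
                return status
        elif status in expected_codes:   # user-defined Response Code
            return status
        if attempt < retry_count:
            time.sleep(retry_interval)   # Retry Interval field
    raise RuntimeError(
        "HTTP task failed after %d attempt(s)" % (retry_count + 1))
```

With the default retry count of zero, the task stops after the first failed attempt, matching the behavior described in step 11.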

Creating a Delay Task

You can create a Delay task type to set a time interval between two tasks or actions.

Procedure

In the Sleep Interval field, enter the sleep time interval in seconds for the task.
The delay task type is created. You can use the task type to set a time interval between two tasks or actions.

Adding a Pre-create or Post-delete Task

Pre-create tasks are actions that are performed before a service is provisioned in a blueprint. Post-delete tasks are actions that are performed after you delete a service in a blueprint.

Procedure

  1. In the Overview Panel, under the service in which you want to add the task, expand VM , and then click Pre-create or Post-delete .
    Figure. Pre-create or Post-delete Task

  2. In the Blueprint Canvas, click + Task for the pre-create or post-delete.
  3. In the Inspector Panel, do the following:
    1. Enter the task name in the Task Name field.
    2. Select the type of tasks from the Type list.
      • Execute : Use this task type to run scripts on the VM. To create the Execute task type, see Creating an Execute Task.
      • Set Variable : Use this task to change variables in a blueprint. To create the Set Variable task type, see Creating a Set Variable Task.
      • HTTP : Use this task type to query REST calls from a URL. An HTTP task supports GET, PUT, POST, and DELETE methods. To create the HTTP type, see Creating an HTTP Task.
      • Delay : Use this task type to set a time interval between two tasks or actions. To create the Delay task type, see Creating a Delay Task.
    3. To use tasks from the library, click Browse Library , select the task in the Browse Task from Library page, and click Select . This step is optional.
    4. Click Publish to Library to publish the task you configured to your task library. This step is optional.
  4. Click Save .

Actions Overview

Actions are flows to accomplish a particular task on your application. You can use actions to automate any process such as backup, upgrade, new user creation, or clean-up and enforce an order of operations across services.

You can categorize actions into the following types.

Table 1. Action Types
Type Description
Profile Actions Application Profile Actions are a set of operations that you can run on your application. For example, when you launch a blueprint, the Create action is run. When you do not need the application for a period of time, you can run the Stop action to gracefully stop your application. When you are ready to resume your work, you can run Start action to bring the application back to the running state.

You have the following types of profile actions.

  • System-defined Profile Actions

    These actions are automatically created by Calm in every blueprint and the underlying application. Because these actions are system-defined, a blueprint developer cannot directly edit the tasks or the order of tasks within the action.

  • Custom Profile Actions

    These actions are created by the blueprint developer and are added whenever the developer needs to expose a set of operations to the application user. Common custom profile actions are Upgrade, Scale In, and Scale Out. In these actions, individual tasks can be manually added in the desired order by the developer.

Service Actions Service Actions are a set of operations that are run on an individual service. These actions cannot be run directly by the application user but can be run indirectly through either a profile action or a package install or uninstall operation.

Services span application profiles. For example, if you create a service action in the AHV profile, the same service action is available in the AWS profile as well.

You have the following types of service actions.

  • System-defined Service Actions

    These actions are automatically created by Calm in every blueprint and the underlying application. These actions cannot be run individually and are run only when the corresponding profile action is run. For example, any operations within the Stop service action are run when an application user runs the Stop profile action.

  • Custom Service Actions

    These actions are created by the blueprint developer for any repeatable operations within the blueprint. For example, if the App service should fetch new code from git during both the Create and Upgrade profile actions, the blueprint developer can create a single custom service action. The developer can then reference the action in both the Create and Upgrade actions rather than maintaining two separate tasks that perform the same set of operations.

Custom Actions

The following are the most common custom actions that developers add to their blueprints:

Table 2. Custom Actions
Custom Action Description
Scale In The scale-in functionality enables you to decrease the number of replicas of a service deployment. The number of instances to be removed from a service for each scale-in action is defined in the blueprint while configuring the task in the profile level action.

The scale count must be less than or equal to the minimum number of replicas defined for the service. The VM that is created last is deleted first.

For information on how to configure scale in, see Adding and Configuring Scale Out and Scale In.

Scale Out The scale-out functionality enables you to increase the number of replicas of a service deployment. The number of instances to be added to a service for each scale-out action is defined in the blueprint while configuring the task in the profile level action.

The scale count must be less than or equal to the maximum number of replicas defined for the service.

For information on how to configure scale out, see Adding and Configuring Scale Out and Scale In.

For information about how to create an action, see Adding an Action to a Multi-VM Blueprint and Adding an Action to a Single-VM Blueprint.
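The replica bounds described above can be sketched in code. The following is an illustrative Python model only, not part of any Calm API: the class and method names are hypothetical. It shows the scale count being capped by the service's replica limits and the most recently created VM being removed first during scale-in.

```python
# Hypothetical model of Calm scale-in/scale-out semantics (illustration only).
class ServiceDeployment:
    def __init__(self, min_replicas, max_replicas, replicas=None):
        self.min_replicas = min_replicas
        self.max_replicas = max_replicas
        self.replicas = list(replicas or [])

    def scale_out(self, count):
        # The replica count cannot exceed the maximum defined for the service.
        count = min(count, self.max_replicas - len(self.replicas))
        start = len(self.replicas)
        self.replicas += [f"vm-{i}" for i in range(start, start + count)]

    def scale_in(self, count):
        # The replica count cannot drop below the minimum defined for the
        # service; the VM that is created last is deleted first.
        count = min(count, len(self.replicas) - self.min_replicas)
        for _ in range(count):
            self.replicas.pop()

svc = ServiceDeployment(min_replicas=1, max_replicas=4, replicas=["vm-0", "vm-1"])
svc.scale_out(3)     # capped so the total stays at max_replicas=4
print(svc.replicas)  # ['vm-0', 'vm-1', 'vm-2', 'vm-3']
svc.scale_in(10)     # capped so min_replicas=1 replica remains
print(svc.replicas)  # ['vm-0']
```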

Adding an Action to a Single-VM Blueprint

An action is a set of operations that you can run on the application that is created as a result of launching a blueprint.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.
  • Ensure that you configured the VM in your blueprint. For more information, see VM Configuration.
  • Ensure that you added credentials to enable packages and actions. For more information, see Adding Credentials.

Procedure

  1. To add an action, click + Add Action next to Actions .
    Figure. Blueprint Action

  2. To change the action name, click the edit icon on the Tasks tab.
    Figure. Add Action

  3. Click the + Add Task button.
  4. In the Blueprint Canvas, select the task and do the following in the Inspector Panel.
    1. Enter the name of the task in the Task Name field.
    2. Select the type of the task from the Type list.
      • Execute : Use this task type to run eScripts on the VM. To create the Execute task type, see Creating an Execute Task.
      • Set Variable : Use this task to change variables in a blueprint. To create the Set Variable task type, see Creating a Set Variable Task.
      • HTTP : Use this task type to query REST calls from a URL. An HTTP task supports GET, PUT, POST, and DELETE methods. To create the HTTP type, see Creating an HTTP Task.
      • Delay : Use this task type to set a time interval between two tasks or actions. To create the Delay task type, see Creating a Delay Task.
    3. (Optional) To use tasks from the library, click Browse Library , select the task in the Browse Task from Library page, and click Select .
    4. (Optional) Click Publish to Library to publish the task you configured to your task library.
  5. Click Done .

Adding an Action to a Multi-VM Blueprint

An action is a set of operations that you can run on the application that is created as a result of launching a blueprint.

Before you begin

  • Ensure that you have at least one service available. See Adding a Service.
  • Ensure that you have completed the pre-configuration requirements. See Creating a Multi-VM Blueprint.

Procedure

  1. To add an action, click the + icon next to Actions in the Overview Panel.
    Figure. Blueprint Action

  2. In the Blueprint Canvas, select + Action for the service, and do the following in the Inspector Panel.
    1. Enter the task name in the Task Name field.
    2. Select the action from the Service Actions list.
    3. Click Save .
  3. In the Blueprint Canvas, select + Task and do the following in the Inspector Panel.
    Figure. Task

    1. Enter the name of the task in the Task Name field.
    2. Select the type of the task from the Type list.
      You can select Execute or Set Variable .
    3. Select the script from the Script Type list and enter the script in the Script field.
      You can select Shell , EScript , or Powershell .
      Note: To view the supported list of eScript modules and functions, refer to Supported eScript Modules and Functions.
      For the Shell and eScript scripts, you can access the available list of macros by using @@{ .
      When you select a Shell or Powershell script, you can optionally select or add an endpoint. You can also select or add credentials.
      You can click Publish to Library to publish the task you configured to your task library.
    4. Enter the output of the script in the Output field.
    5. Click Save on the Blueprint Editor page.
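The Output field of a Set Variable task can be illustrated with a short sketch. A common pattern (an assumption here, not Calm's documented implementation) is that the script prints name=value lines and the variables declared in the Output field are captured from that output. The parser below is a local approximation of that behavior; the function name is hypothetical.

```python
# Hypothetical approximation of how a Set Variable task could capture
# declared output variables from a script's printed output.
def capture_output_variables(stdout, declared):
    """Collect declared output variables from name=value lines in stdout."""
    captured = {}
    for line in stdout.splitlines():
        if "=" in line:
            name, _, value = line.partition("=")
            if name.strip() in declared:
                captured[name.strip()] = value.strip()
    return captured

stdout = "db_host=10.0.0.5\nsome unrelated log line\ndb_port=5432"
vars_ = capture_output_variables(stdout, declared={"db_host", "db_port"})
print(vars_)  # {'db_host': '10.0.0.5', 'db_port': '5432'}
```

Once captured this way, such variables would be referenced downstream as macros, just like other blueprint variables.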

Adding and Configuring Scale Out and Scale In

Perform the following procedure to add and configure the Scale Out and Scale In task.

Procedure

  1. Add a service. See Adding a Service.
  2. Configure VM, Package and Service. See Configuring Nutanix and Existing Machine VM, Package, and Service.
  3. Add an Application profile. See Adding and Configuring an Application Profile.
  4. In the Overview Panel, under Application Profile , click the + icon next to Actions.
    Figure. Scale In and Scale Out

  5. In the Blueprint Canvas, below the service inspector, click + Task .
  6. In the Inspector Panel, enter the name of the task in the Task Name field.
  7. Select Scale In or Scale Out from the Scaling Type list.
  8. Enter the number in the Scaling Count field.
    The scale-out and scale-in counts are bounded by the maximum and minimum number of replicas defined for the service.
  9. Click Save .

What to do next

You can run the scale-in or scale-out tasks from the Applications page. To do so, launch the blueprint, click the Applications icon to view the created application on the Applications page, and then run scale in or scale out from the Manage tab of the application.

Blueprint Configuration for Snapshots and Restore

The snapshot and restore feature allows you to create a snapshot of a virtual machine at a particular point in time and restore from the snapshot to recreate the application VM from that time. You can configure snapshot and restore for both single-VM and multi-VM applications on a Nutanix platform. You only need to add the snapshot/restore configuration to the blueprint. Adding the configuration generates separate profile actions for snapshot and restore to which you can add further tasks and actions.

For VMware, AWS, and Azure platforms, the snapshot and restore feature is available by default only to the single-VM applications.

For more information on blueprint configuration for snapshots, see Configuring Single-VM Blueprints with Nutanix for Snapshots and Configuring Multi-VM Blueprints on Nutanix for Snapshots.

Configuring Single-VM Blueprints with Nutanix for Snapshots

The snapshot/restore action for single-VM applications with Nutanix is no longer available by default. To enable snapshots, you must add a snapshot/restore configuration to the single-VM blueprint. You can configure the blueprint to create snapshots locally or on a remote cluster. Snapshot and restore are paired actions in a blueprint and are always managed together.

About this task

The snapshot/restore configuration generates separate application profile actions for snapshot and restore. These actions also allow you to add more tasks and actions as part of the snapshot and restore configuration. For example, shutting down the application and the VM before creating the snapshot or restarting the VM before a restore. You can access these actions from the Manage tab of the Applications page.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.
  • Ensure that you configured the VM in your blueprint. For more information, see VM Configuration.
  • Ensure that you added credentials to enable packages and actions. For more information, see Adding Credentials.
  • Ensure that you created the required snapshot policy. You associate snapshot policies when you launch the blueprint configured for snapshot and restore. For more information about creating snapshot policy, see Creating a Snapshot Policy.

Procedure

  1. On the Advanced Options tab, next to Snapshot/Restore, click the + Add Snapshot/ Restore Config option.
    Figure. Snapshot Config

  2. In the Add Snapshot and Restore window, do the following:
    1. Enter the suffix that you want to associate to the snapshot/restore profile action.
    2. Enter a name for the snapshot in the Snapshot Name field.
      You can use Calm macros to provide a unique name to each snapshot that is generated. For example, snapshot-@@{calm_array_index}@@-@@{calm_time}@@ .
    3. Under the Snapshot Location section, select Local or Remote Cluster to specify whether this configuration should manage your snapshots locally or on a remote cluster.
    4. Select the Delete older VM after restore check box to delete the older VM after the service is restored from the snapshot.
    5. Click Save .
      Saving the configuration generates separate profile actions for snapshot and restore.
  3. On the Advanced Options tab, click Edit next to the snapshot or restore action to edit the configuration or add tasks. For more information about adding a task, see Configuring Tasks or Packages in a Blueprint.

What to do next

  • You can launch the blueprint after associating snapshot policies and rules. For more information, see Launching a Blueprint.
  • You can access the create snapshots from the Manage tab on the Applications page. For more information, see Creating Snapshots on a Nutanix Platform.
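The macro-based snapshot name from the procedure above can be illustrated with a small expansion sketch. The macro names come from the example in the procedure; the expansion function and the runtime values are hypothetical, since Calm supplies the actual values when the snapshot is created.

```python
# Illustrative expansion of Calm-style @@{name}@@ macros in a snapshot
# name template. The runtime values below are made up for the example.
import re

def expand_macros(template, values):
    # Replace each @@{name}@@ token with its runtime value.
    return re.sub(r"@@\{(\w+)\}@@", lambda m: str(values[m.group(1)]), template)

name = expand_macros(
    "snapshot-@@{calm_array_index}@@-@@{calm_time}@@",
    {"calm_array_index": 0, "calm_time": 1658745600},
)
print(name)  # snapshot-0-1658745600
```

Because the time macro differs on every run, each generated snapshot receives a unique name.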

Configuring Multi-VM Blueprints on Nutanix for Snapshots

You can configure the snapshot/restore action in a blueprint on a Nutanix account to create snapshots locally or on a remote cluster. Snapshot and restore are paired actions for a particular service in a blueprint and are always managed together.

About this task

The snapshot/restore definition of a service generates snapshot configuration and its corresponding restore configuration. You can use these configurations to modify your snapshot and restore setup.

The snapshot/restore configuration generates separate application profile actions for snapshot and restore. These actions allow you to add more tasks and actions as part of the snapshot and restore configuration. For example, shutting down the application and the VM before creating the snapshot or restarting the VM or services before a restore. You can access these actions from the Manage tab of the Applications page to create or restore snapshots.

Before you begin

  • Ensure that you have completed the pre-configuration requirements. For more information, see Creating a Multi-VM Blueprint.
  • Ensure that you have at least one service available. For more information, see Adding a Service.
  • Ensure that you create the required snapshot policy. You associate snapshot policies when you launch the blueprint configured for snapshot and restore. For more information about creating snapshot policy, see Creating a Snapshot Policy.

Procedure

  1. In the Overview Panel, expand the service to which you want to add the snapshot and restore action.
  2. Click the + icon next to Snapshot/Restore .
    Figure. Multi-VM Snapshot

  3. In the Add Snapshot and Restore window, do the following:
    Figure. Multi-VM Snapshot Options

    1. Enter the suffix that you want to associate to the snapshot/restore profile action.
    2. Enter a name for the snapshot in the Snapshot Name field.
      You can use Calm macros to provide a unique name to each snapshot that is generated. For example, snapshot-@@{calm_array_index}@@-@@{calm_time}@@ .
    3. Under the Snapshot Location section, select Local or Remote Cluster to specify whether this configuration should manage your snapshots locally or on a remote cluster.
    4. In case of multiple replicas of the service, do one of the following:
      • Select Take snapshot of the first replica only to take a snapshot of only the first replica.
      • Select Take snapshot of the entire replica set to take a snapshot of the entire replica set.
    5. Select the Delete older VM after restore check box to delete the older VM after the service is restored from the snapshot.
    6. Click Save .
      Saving the configuration generates separate profile actions for snapshot and restore.
  4. To view and edit the snapshot and restore configurations, expand the corresponding Snapshot/Restore under the service.
    You can click the configuration to view the details in the Inspector Panel or make any changes to the configuration.
  5. To view the profile actions for snapshot and restore or add more tasks and actions as part of snapshot and restore configuration, expand the Application Profile section.

What to do next

  • You can add more tasks and actions to the snapshot and restore application profiles. For more information, see Adding an Action to a Multi-VM Blueprint.
  • You can launch the blueprint after associating snapshot policies and rules. For more information, see Launching a Blueprint.
  • You can create snapshots from the Manage tab on the Applications page. For more information, see Creating Snapshots on a Nutanix Platform.

Update Configuration for VM

The update configuration feature allows you to update virtual machines of running applications on Nutanix to a higher or lower configuration. Using this feature, you can modify VM specifications such as the vCPU, memory, disks, networking, or categories (tags) of a running application with minimal downtime. You no longer have to create new blueprints or approach your IT administrator to modify VM resources.

To update configurations of a running application VM, you need to perform the following actions:

  • Add an update configuration to the application blueprint.
  • Launch the update configuration.
Figure. Update Configurations

Add Update Configuration in the Blueprint

As a blueprint developer, you can add update configurations for a service in the blueprint. These update configurations exist at the same level as application profile actions and can be run individually for a particular service. As part of the configuration, you can do the following:

  • Specify the change factor (increase, decrease, or provide a definitive value) for VM configurations (vCPU, cores per vCPU, and memory). You can provide a minimum or maximum limit for each component based on the change factor you select and allow blueprint consumers to edit components during updates.

    For example, consider a case where the original vCPU value in the blueprint is 4. You then add a change factor to the update configuration to increase the vCPU by 1 with a maximum limit of 5. When this update is launched, you can run the action only once to increase the vCPU to 5. Once the VM is upgraded to 5 vCPUs, you cannot add any more vCPUs to the VM.

  • Add disks with a minimum and maximum limit for each disk. You can allow blueprint users to edit the disk size during updates until the value reaches the maximum or minimum limit. You can also allow blueprint users to remove existing vdisks from the VM.
  • Add categories (tags) to the running VM.
  • Add NICs to the VM or allow blueprint consumers to remove NICs. You can add only NICs that belong to the same cluster, and remove only NICs that are not used to provide the address. You can also allow consumers to choose the desired subnet during updates.

The update configuration generates the corresponding action where you can add tasks to define how you want to execute the update.

For more information about adding update configuration to a blueprint, see Adding an Update Configuration to Single-VM Blueprints and Adding an Update Configuration to Multi-VM Blueprints.
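The change-factor semantics described above can be sketched as follows. This is an illustrative model only, not Calm code: Increase and Decrease are relative to the current value, Equal is absolute, and the result is clamped to the configured Min/Max limits. The function name and signature are hypothetical.

```python
# Hypothetical model of update-configuration change factors (illustration only).
def apply_update(current, operation, value, min_value=None, max_value=None):
    if operation == "Increase":
        new = current + value      # relative to the current value
    elif operation == "Decrease":
        new = current - value      # relative to the current value
    elif operation == "Equal":
        new = value                # absolute value
    else:
        raise ValueError(f"unknown operation: {operation}")
    # Clamp the result to the configured limits, if any.
    if min_value is not None:
        new = max(new, min_value)
    if max_value is not None:
        new = min(new, max_value)
    return new

vcpus = 4
vcpus = apply_update(vcpus, "Increase", 1, max_value=5)
print(vcpus)  # 5
vcpus = apply_update(vcpus, "Increase", 1, max_value=5)
print(vcpus)  # still 5: the limit prevents further increases
```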

Launch Update Configuration

You can update VM specifications from the Manage tab of applications on Nutanix. For more information, see Update VM Configurations of Running Applications.

Adding an Update Configuration to Single-VM Blueprints

As a blueprint developer, you can add an update configuration to a single-VM application blueprint.

About this task

The update configuration feature allows you to update the virtual machine of a running single-VM application to a higher or lower configuration. For more information, see Update Configuration for VM.

Before you begin

  • Ensure that you set up your single-VM blueprint. For more information, see Setting up a Single-VM Blueprint.
  • Ensure that you added the VM details to your blueprint. For more information, see Adding VM Details to a Blueprint.
  • Ensure that you configured the VM in your blueprint. For more information, see VM Configuration.

Procedure

  1. On the Advanced Options tab, next to Update Configs, click the + Add Config option.
  2. On the Update Configs page, click the edit icon next to the update config name to change the name of the configuration.
  3. Under the VM Configuration section, select a change factor for the attributes and specify the value for the selected factor. To do that:
    1. View the current value of vCPUs, No. of Cores, and Memory (GiB) in the Current Value column.
    2. Click Update for the attribute that you want to update.
      Figure. Single-VM Update Config Options

    3. Select a value in the Operation list for the attribute. You can select Increase , Decrease , or Equal .
      For example, if the original vCPU value in the blueprint is 4 and you want to increase the vCPU by 1, then select Increase in the Operation list.
      Note: The update value is relative to the current value when you select Increase or Decrease . The update value is absolute when you select Equal .
    4. Specify an update value in the Update column based on the Operation value you selected.
      For example, if the original vCPU value in the blueprint is 4 and you want to increase the vCPU by 1, then enter 1 in the Update field.
    5. Specify the limit value to which the configuration of an attribute can be updated in the Min Value or Max Value column.
      For example, if the original vCPU value in the blueprint is 4 and you want to increase the vCPU to a maximum limit of 6, then specify 2 in the Max Value column. The vCPU count of the VM can be updated until it reaches 6 vCPUs. After the VM reaches 6 vCPUs, more vCPUs cannot be added to the VM.
      You can also enable the Editable toggle button of an attribute to allow your users to change its value within the limits you specify in the Min Value or Max Value column during the launch of the update.
  4. Under the Disks section, do the following:
    1. To add vdisk to the update configuration, click the + icon next to Add/Edit vDisks to this VM .
    2. Select the device type and the device bus.
    3. To define the disk size, specify the value for the vdisk size in the Value field.
      You can enable the Editable toggle button and specify the Min Value and Max Value for the vdisk.
    4. To allow your users to remove existing vdisks from the VM during the launch of the update, click Advanced Settings and select the Allow users to remove existing vDisks check box.
  5. Under the Categories section, do the following:
    1. To add categories to the update configuration, select the categories in the Key: Value list.
      Note: The categories you select must have the default SSH port (port 22) open in the security policies.
    2. To allow your users to remove existing categories during the launch of the update, click Advanced Settings and select the Allow users to remove existing categories check box.
    3. To allow your users to add new categories during the launch of the update, select the Allow users to add new categories check box.
  6. Under the Network Adapters section, do the following:
    1. To add more NICs to the update configuration, click the + icon next to Add/Edit NICs to this VM and select the NIC from the list.
      The NICs of a VM can either use VLAN subnets or overlay subnets. For example, if an overlay subnet is selected for NIC 1 and you want to add NIC 2, the NIC 2 list displays only the overlay subnets.
      If you selected a VLAN subnet in NIC 1, any subsequent VLAN subnets belong to the same cluster. Similarly, if you select an overlay subnet, all subsequent overlay subnets belong to the same VPC.
      You can enable the Editable toggle button to allow your users to choose the desired subnet during the launch of the update.
    2. To allow your users to remove existing NICs during the launch of the update, click Advanced Settings and select the Allow users to remove existing NICs check box.
  7. To add variables to the update configuration, click the Variables tab and then the + icon next to Variables .
  8. Click Done to save the update configuration.
    Saving the update config generates the Config component. The Config component lets you open the Update Config window to edit the update configuration.
  9. On the Advanced Options tab, click Save to save the blueprint and to generate the corresponding action for the update configuration.
  10. Click Edit next to the configuration to add more tasks to the update configuration.

Adding an Update Configuration to Multi-VM Blueprints

As a blueprint developer, you can add an update configuration for a service to a multi-VM application blueprint.

About this task

The update configuration feature allows you to update virtual machines of running multi-VM applications to a higher or lower configuration. For more information, see Update Configuration for VM.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page appears.
  2. Do one of the following:
    • To add an update configuration to a new blueprint, select Multi VM/Pod Blueprint from the + Create Blueprint list, and create a blueprint. For more information, see Creating a Multi-VM Blueprint.
    • To add an update configuration to an existing blueprint, click the blueprint name to open the blueprint editor.
  3. Ensure that you have added the service to which you want to add the update configuration. For more information about adding a service, see Adding a Service.
  4. In the Overview Panel, click the + icon next to Update Config .
    Figure. Multi-VM Blueprint Update Config

    The Update Config window appears.
  5. From the Select Service to Update list, select the service to which you want to add the update configuration.
    Figure. Update Config Options

  6. In the Name the update configuration field, specify a name for the configuration.
  7. Under the VM Configuration section, select a change factor for the attributes and specify the value for the selected factor. To do that:
    1. View the current value of vCPUs, No. of Cores, and Memory (GiB) in the Current Value column.
    2. Click Update for the attribute that you want to update.
    3. Select a value in the Operation list for the attribute. You can select Increase , Decrease , or Equal .
      For example, if the original vCPU value in the blueprint is 4 and you want to increase the vCPU by 1, then select Increase in the Operation list.
      Note: The update value is relative to the current value when you select Increase or Decrease . The update value is absolute when you select Equal .
    4. Specify an update value in the Update column based on the Operation value you selected.
      For example, if the original vCPU value in the blueprint is 4 and you want to increase the vCPU by 1, then enter 1 in the Update field.
    5. Specify the limit value to which the configuration of an attribute can be updated in the Min Value or Max Value column.
      For example, if the original vCPU value in the blueprint is 4 and you want to increase the vCPU to a maximum limit of 6, then specify 2 in the Max Value column. The vCPU count of the VM can be updated until it reaches 6 vCPUs. After the VM reaches 6 vCPUs, more vCPUs cannot be added to the VM.
      You can also enable the Editable toggle button of an attribute to allow your users to change its value within the limits you specify in the Min Value or Max Value column during the launch of the update.
  8. Under the Disks section, do the following:
    1. To add vdisk to the update configuration, click the + icon next to Add/Edit vDisks to this VM .
    2. Select the device type and the device bus.
    3. To define the disk size, specify the value for the vdisk size in the Value field.
      You can enable the Editable toggle button and specify the Min Value and Max Value for the vdisk.
    4. To allow your users to remove existing vdisks from the VM during the launch of the update, click Advanced Settings and select the Allow users to remove existing vDisks check box.
  9. Under the Categories section, do the following:
    1. To add categories to the update configuration, select the categories in the Key: Value list.
      Note: The categories you select must have the default SSH port (port 22) open in the security policies.
    2. To allow your users to remove existing categories during the launch of the update, click Advanced Settings and select the Allow users to remove existing categories check box.
    3. To allow your users to add new categories during the launch of the update, select the Allow users to add new categories check box.
  10. Under the Network Adapters section, do the following:
    1. To add more NICs to the update configuration, click the + icon next to Add/Edit NICs to this VM and select the NIC from the list.
      You can enable the Editable toggle button to allow your users to choose the desired subnet during the launch of the update.
    2. To allow your users to remove existing NICs during the launch of the update, click Advanced Settings and select the Allow users to remove existing NICs check box.
  11. Click Done to save the update configuration.
    Saving the update config generates the Config component. The Config component lets you open the Update Config window to edit the update configuration.
  12. On the Blueprint Editor page, click Save to save the blueprint and generate the corresponding action for the update configuration.
    Saving the blueprint generates the Action component. The auto-generated Action component performs the start and stop of the service. You can also add tasks and actions to the component to define how you want your users to launch the update.

Blueprint Management in Calm

After you configure a blueprint, you can publish, unpublish, launch, or delete a blueprint.

Blueprint Publishing

Publishing a blueprint makes it available in the Marketplace so that other users can use it. Unpublishing a blueprint removes it from the Marketplace. For more information, see Submitting a Blueprint for Approval.

Blueprint Launching

Launching a blueprint allows you to deploy your application on the blueprint and start using it.

The blueprint launch page provides the following views:

Figure. Blueprint Launch Views

  • View as Consumer : This view of the blueprint launch page displays only the editable fields that consumers require to launch a blueprint. When you design your blueprint for consumers with minimum configurations at runtime, use this view to get an idea about the blueprint launching experience of your consumers.

    Blueprints that are launched from the marketplace display only the fields that require inputs from consumers. Displaying only editable fields offers a simpler and easier launching experience for your consumers.

  • View as Developer : This view of the blueprint launch page displays all editable and noneditable fields that you configure for the blueprint. As a blueprint developer, you can switch between View as Consumer and View as Developer on the blueprint launch page.

    You can switch to View as Developer after you develop your blueprints to verify how you configured different fields and the launching experience the configuration will provide to your consumers.

For more information, see Launching a Blueprint.

Submitting a Blueprint for Approval

After you configure a blueprint, you can submit the blueprint for approval by the administrator. The administrator approves the blueprint and then publishes it in the marketplace for consumption.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page displays the list of all the available blueprints.
  2. Click the blueprint that you want to publish.
    The blueprint editor page is displayed.
  3. Click Publish .
    Figure. Publish Blueprint Window

    The Publish Blueprint window is displayed.
  4. If the blueprint is getting published for the first time, select New marketplace item and do the following.
    1. To publish the blueprint with secret variables, click the Publish with Secrets toggle button.
    2. Enter the version number in the Initial Version field.
      Note: Ensure that the version number is in the x.x.x format.
    3. Enter the blueprint description in the Description field.
  5. If you want to revise a published blueprint version, select New version of an existing marketplace item and do the following.
    1. To publish the blueprint with secret variables, enable the Publish with Secrets button.
    2. Select the already published blueprint from the Marketplace Item list.
    3. Enter the version number in the Initial Version field.
      Note: Ensure that the version number is in the x.x.x format.
    4. Enter the blueprint description in the Description field.
    5. Enter the log changes in the Change Log field.
  6. If you want to upload an icon for the blueprint, click Change .
    1. Click Upload from computer to browse and select an image from your local machine.
    2. Click Open .
    3. Provide a name to the image in the Name of the Icon field.
    4. Click the right icon.
    5. Click Select & Continue .
    Note: Only a user with the administrator role can upload an icon.
  7. Optionally, to select an icon that is already available in a blueprint, click the icon that you want.
  8. Optionally, to delete an icon, click the delete icon.
  9. Click Submit for Approval .
    The blueprint is submitted to the marketplace manager for approval. Your administrator can find the submitted blueprint on the Approval Pending tab of the Marketplace Manager page.
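The x.x.x version format required in the steps above can be checked with a small sketch like the following. The function name and regex are illustrative assumptions for this guide, not part of Calm itself:

```python
import re

# Hypothetical helper: checks the x.x.x version format required by the
# Publish Blueprint window (for example, 1.0.0 or 2.10.3).
VERSION_PATTERN = re.compile(r"^\d+\.\d+\.\d+$")

def is_valid_marketplace_version(version: str) -> bool:
    """Return True if the version string matches the x.x.x format."""
    return bool(VERSION_PATTERN.match(version))
```

For example, `1.0.0` is accepted, while `1.0` or `v1.0.0` would be rejected by the Publish Blueprint window.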

What to do next

You can request your administrator to approve and publish the blueprint to the marketplace. For more information about blueprint approval and publishing, see Approving and Publishing a Blueprint or Runbook.

Launching a Blueprint

You launch a blueprint to deploy an application on the blueprint and start using the application.

Before you begin

For blueprints on a Nutanix platform, ensure that you have created the snapshot policy. For more information about snapshot policy creation, see Creating a Snapshot Policy.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page is displayed.
  2. Click the blueprint that you want to launch.
    The blueprint details page is displayed.
  3. Click Launch .
    The blueprint launch page is displayed.
    Figure. Launch Blueprint

  4. Enter a name for the application in the Application Name field.
  5. Enter a description for the application in the Application Description field.
  6. Select the environment from the Environment list.
    If you select an environment that is different from the account that you used for blueprint configuration, Calm updates all platform-dependent fields to match the selected environment configuration.
    For example, if you created the application blueprint using an account with an environment (ENV1), the platform-dependent fields are similar to ENV1. While launching the application blueprint, if you select a different environment (ENV2), Calm updates all platform-dependent fields to match the ENV2 configuration.
  7. Select the application profile from the App Profile field.
    If any fields were marked as runtime while creating the blueprint, those fields are editable and displayed here. To view the runtime variables, expand the service under VM Configurations .
  8. In the sections for the service configuration and credentials configuration, verify and edit the configuration requirements for your application services and credentials.
    Figure. Blueprint Launch - Service Configuration

    Use the View as Developer option at the top of the blueprint launch page to view all configuration fields.
    Note: The View as Consumer view displays only the editable fields, while the View as Developer view displays all configuration fields for your services and credentials.
  9. If the blueprint is configured with a Nutanix account, do the following:
    1. Under Snapshot Configurations, select a snapshot policy in the Snapshot Policy list.
    2. Based on the policy you select, select a rule in the Select Local Rule or Select Remote Rule list.
      The Select Local Rule or Select Remote Rule list appears based on the Snapshot Location you defined in your blueprint. For more information, see Blueprint Configuration for Snapshots and Restore. The values in the list appear based on the snapshot policy you defined in the project and selected in the Snapshot Policy list. For more information, see Creating a Snapshot Policy. The values also depend on the VM categories you configured in your blueprint.
    The Snapshot Configuration section appears depending on the environment you select while launching the blueprint. If you select a specific environment, you must provide the snapshot policy and snapshot rule to launch the blueprint. The Snapshot Configuration section does not appear if you select the environment with all project accounts for the launch.
    Note: Ensure that you have a valid NIC in the blueprint.
  10. Click Deploy .
    The system validates the provided platform-specific data against the selected provider and if the validation fails, an error message appears. To know more about the validation error, see Platform Validation Errors.

    If the validation is successful, the application is available under the Application tab.

Platform Validation Errors

When you enter platform data that is invalid for a provider while creating a blueprint, you get a validation error. The following table lists the invalid platform data for each provider.

Providers Invalid Platform Data
Nutanix Image, NIC List, and Categories.
GCP Machine Type, Disk Type, Network, SubNetwork, Source, Image, Zone, and Blank Disk.
AWS Vpc, Security Groups, and Subnets.
VMware Network Name, NIC Type, NIC settings mismatch, Host, Template, Datastore, Datacenter, Storage Pod, and Cluster.
Azure Image details (publisher, offer, sku, version), Custom image, Resource group, Availability Set Id, NIC List, Network Security group, Virtual Network Name, and Subnet Name.

The platform validation error message appears as displayed in the following image.

Figure. Platform validation error message

Uploading a Blueprint

You can also upload configured blueprints to the Blueprints tab. Perform the following procedure to upload a blueprint.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page displays the list of all the available blueprints.
  2. Click Upload Blueprint .
    The browser window is displayed.
  3. Browse to the location of the saved blueprint and select the blueprint.
  4. Do one of the following.
    • Double-click the selected blueprint.
    • Select the blueprint and click Open .
    Figure. Upload Blueprint

    The Upload Blueprint window is displayed.
  5. Enter the name of the blueprint in the Blueprint Name field.
  6. Select the project from the Project list.
  7. Click Upload .
    The blueprint is uploaded and available for use.
    Note: You must provide the credential password or key for the blueprint.

Downloading a Blueprint

You can also download a configured blueprint to your local machine and use it later. Perform the following procedure to download a blueprint.

Before you begin

Ensure that at least one blueprint is available.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page displays the list of all the available blueprints.
  2. Do one of the following.
    • Click the blueprint that you want to download and click Download .
    • Select the blueprint that you want to download and click Action > Download .
    The Download Blueprint dialog box appears.
  3. Optionally, to download the blueprint with the credentials and secrets used in the blueprint, select the check box in the Download Blueprint dialog box.
  4. In the Enter Passphrase field, type a password.
    The Enter Passphrase field is mandatory and is activated only after you select the check box to download the blueprint with credentials and secrets.
  5. Click Continue .
    The blueprint is downloaded to your local machine.

Viewing a Blueprint

Perform the following procedure to view a blueprint.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page displays the list of all the available blueprints.
  2. Click the blueprint whose details you want to view.
    The details of the selected blueprint are displayed.

Editing a Blueprint

You can edit a configured blueprint from the Blueprints tab. Perform the following procedure to edit a blueprint.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page is displayed.
  2. Click the blueprint that you want to edit.
    The blueprint details page is displayed.
  3. Make the necessary edits in the layers ( Services , Actions , and Application Profiles ).
    Note: You cannot delete System level actions.
  4. Click Save .
    The updated blueprint is saved and listed on the Blueprints tab.

Deleting a Blueprint

Perform the following procedure to delete a blueprint.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page is displayed.
  2. Select the listed blueprint that you want to delete.
  3. Click Actions > Delete .
  4. Click Yes to confirm.
    The blueprint is deleted.

Viewing Blueprint Error

If you configured incorrect details in your blueprint, an error message appears when you save or publish the blueprint. Perform the following procedure to view the blueprint error messages.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page displays the list of all the available blueprints.
  2. Click the blueprint whose details you want to view.
    The details of the selected blueprint are displayed. If there is any error in the blueprint, the error is denoted by ! .
  3. Click ! .
    Figure. Blueprint Error
    The blueprint errors are displayed.

Recovering Deleted Blueprints

You can recover deleted application blueprints within 90 days of deletion. This section describes the procedure to recover a deleted blueprint.

Procedure

  1. Click the Blueprint icon in the left pane.
    The Blueprint page displays the list of all the available blueprints.
  2. In the search filter field, enter State:Deleted and press Enter.
    You can view the list of all blueprints deleted within the 90-day retention period.
  3. Click the blueprint that you want to recover.
  4. From the Action list, select Clone .
    The Clone Blueprint page appears.
  5. In the Blueprint Name field, enter a name for the blueprint and click Clone .
    The name is used as the blueprint name after recovery.
    A clone of the deleted blueprint is created and you can view the recovered blueprint in the blueprints page with the new name.

Marketplace in Calm

Marketplace Overview

The marketplace provides preconfigured application blueprints and runbooks for instant consumption. The marketplace is a common platform for both publishers and consumers.

Figure. Marketplace

The marketplace has banners to display featured applications. All listed applications display the icon of the platform that supports the application.

You can filter applications or runbooks based on their category and source. You can also search for an application or runbook in the marketplace.

Note: The marketplace displays the application blueprints or runbooks that are approved and published using the Marketplace Manager. For more information on approving and publishing blueprints and runbooks, see Approving and Publishing a Blueprint or Runbook.

Before provisioning an application, you can view details such as application overview, changes made in different versions, and application-level actions.

Figure. Marketplace - Application Details

Viewing Application Details

You can view application details such as licensing, installed resources, hardware requirements, operating systems, platforms, and limitations before you provision the application. You can also view the changes made in different versions and application-level actions.


Procedure

  1. Click the Marketplace icon in the left pane.
    The Marketplace page is displayed.
  2. To view the details of an application, click the Get button on the application blueprint.
    Figure. Application Details

    The Application Details page is displayed.

Filtering Application Blueprints or Runbooks

Perform the following procedure to filter application blueprints or runbooks in the marketplace.

Procedure

  1. Click the Marketplace icon in the left pane.
    The Marketplace page is displayed.
  2. Click the Filters button.
    The Filters pane appears.
    Figure. Marketplace Filters

  3. Select a category, type, or source value to filter applications and runbooks.
    The Marketplace page displays all available applications and runbooks based on the selected category, type, and source value.

Searching an Application Blueprint or Runbook

Perform the following procedure to search for an application blueprint or runbook.

Procedure

  1. Click the Marketplace icon in the left pane.
    The Marketplace page is displayed.
  2. Enter the name of the application or runbook that you want to find in the Search marketplace field.
    The Marketplace page shows the search results as you enter the name in the Search marketplace field.

Launching a Blueprint from the Marketplace

You can use the Marketplace tab to launch an application blueprint that is approved and published to the marketplace. The application launch page displays the fields that are editable by the consumer.

Procedure

  1. Click the Marketplace icon in the left pane.
    The Marketplace page is displayed.
  2. Click the Get button for the application that you want to launch.
    The Application Details page is displayed.
  3. Click Launch .
    The Launch page is displayed.
    Figure. Launch Blueprint

  4. Enter a name for the application in the Application Name field.

    The following naming rules apply.

    • The name of the blueprint can start with an alphanumeric character or an underscore.
    • The name must have at least one character.
    • Use only space, underscore, and dash as special characters.
    • Do not end the name with a dash.
  5. Enter a description for the application in the Application Description field.
  6. Select the project from the Project list.
  7. Select the environment from the Environment list.
    If you select an environment that is different from the account that you used for blueprint configuration, Calm updates all platform-dependent fields to match the selected environment configuration.
    For example, if you created the application blueprint using an account with an environment (ENV1), the platform-dependent fields are similar to ENV1. While launching the application blueprint, if you select a different environment (ENV2), Calm updates all platform-dependent fields to match the ENV2 configuration.
  8. Select an application profile in the App Profile field.
    An application profile provides different combinations of the service, package, and VM configurations in a blueprint.
  9. In the section for the service configuration, verify the VM, disk, boot configuration, and network configuration. You can edit the fields based on your application requirements.
    Figure. Application Launch - Service Configuration

  10. If the blueprint is configured with a Nutanix account, do the following:
    1. Under Snapshot Configurations, select a snapshot policy in the Snapshot Policy list.
    2. Based on the policy you select, select a rule in the Select Local Rule or Select Remote Rule list.
      The Select Local Rule or Select Remote Rule list appears based on the Snapshot Location you defined in your blueprint. For more information, see Blueprint Configuration for Snapshots and Restore. The values in the list appear based on the snapshot policy you defined in the project and selected in the Snapshot Policy list. For more information, see Creating a Snapshot Policy. The values also depend on the VM categories you configured in your blueprint.
    The Snapshot Configuration section appears depending on the environment you select while launching the blueprint. If you select a specific environment, you must provide the snapshot policy and snapshot rule to launch the blueprint. The Snapshot Configuration section does not appear if you select the environment with all project accounts for the launch.
    Note: Ensure that you have a valid NIC in the blueprint.
  11. Click Deploy .
    The application blueprint is displayed under the Application tab.
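The application naming rules listed in step 4 above can be expressed as a small validation sketch. The function name and regex are illustrative assumptions, not part of Calm:

```python
import re

# Illustrative check of the application naming rules:
# starts with an alphanumeric character or underscore, has at least one
# character, allows only space, underscore, and dash as special characters,
# and must not end with a dash.
def is_valid_application_name(name: str) -> bool:
    if not name:                                     # at least one character
        return False
    if not re.match(r"^[A-Za-z0-9_]", name):         # valid first character
        return False
    if not re.fullmatch(r"[A-Za-z0-9 _-]+", name):   # allowed characters only
        return False
    if name.endswith("-"):                           # must not end with a dash
        return False
    return True
```

For example, `web_app-1` follows the rules, while `-app` (starts with a dash), `app-` (ends with a dash), and `app!` (disallowed character) do not.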

Environment Patching Behavior

VM configurations in blueprints and environments are associated with accounts. The environment patching depends on the account that you associate with the marketplace blueprint and the environment you configured.

To patch a cloud provider VM that has a specific OS type, Calm finds the corresponding match in the environment. If no match is available, Calm displays a notification.

The following table lists the environment patching behavior for platform-dependent and platform-independent fields:

Table 1. Environment Patching
Fields Condition Patching Behavior
Platform-Dependent Fields When different accounts are associated with the blueprint and environment Values from the environment get preference for patching, irrespective of the values in the blueprint.
Platform-Dependent Fields When the blueprint and the environment have the same account Values from the environment are patched only when the fields do not have any value in the blueprint.
Platform-Independent Fields When different accounts are associated with the blueprint and environment Values from the environment are patched only when the fields do not have any value in the blueprint.
Platform-Independent Fields When the blueprint and the environment have the same account Values from the environment are patched only when the fields do not have any value in the blueprint.
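The decision logic in Table 1 can be summarized in a short sketch. The function and parameter names are illustrative, not part of Calm:

```python
# Sketch of the environment patching decision in Table 1 (names are
# assumptions for illustration).
def should_patch_from_environment(platform_dependent: bool,
                                  same_account: bool,
                                  blueprint_has_value: bool) -> bool:
    """Return True if the environment value should replace the blueprint value."""
    if platform_dependent and not same_account:
        # Environment values take preference, irrespective of blueprint values.
        return True
    # In all other cases, patch only when the blueprint field has no value.
    return not blueprint_has_value
```

The only case where an existing blueprint value is overwritten is a platform-dependent field combined with different accounts; every other combination preserves values already set in the blueprint.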

The following table lists the platform-dependent fields for different platforms.

Table 2. Platform-Dependent Fields
Platform Platform-Dependent Fields
Nutanix Image, Categories, Cluster, and NIC
AWS Machine Image, Key, Instance Profile Name, VPC ID, Subnet ID, and Security Group List
GCP Machine Type, Zone, Network, Disk Type, Source Image, and Email
VMware Host, Template, Datastore, Cluster, Storage Pod, Network Name, NIC Type, Disk Location, Disk ISO Path, Folder, and Tag List
Azure Resource Group, Location, Availability Set ID, Resource Group Details, Resource Group Operation, Network Security Group Name, Network Name, Subnet Name, Network Security Group ID, Virtual Network ID, Subnet ID, Publisher, Offer, SKU, Version, Source Image Type, and Source Image ID

Environment Patching Behavior with Nutanix – Example-1

Assume that you have two Nutanix Prism Central accounts PC1 and PC2, and you added these accounts to your project (Project1). You then create two environments in the project with the following VM configuration:

Table 3. Environments
ENV1 ENV2
  • Account: PC1
  • NIC: PC1_Net1
  • Image: PC1_Image1
  • Categories: PC1_category1 and PC1_category2
  • Cluster: PC1_Cluster1
  • Operating System: Linux
  • Account: PC2
  • NIC: PC2_Net1
  • Image: PC2_Image1
  • Categories: PC2_category1 and PC2_category2
  • Cluster: PC2_Cluster1
  • Operating System: Linux

You then create a blueprint with a Nutanix service under Project1 having the following configuration:

  • Account: PC1
  • Image: PC1_Image2
  • Categories: PC1_category3
  • Cluster: PC1_Cluster2
  • NIC: PC1_Net2

When you publish this blueprint in the marketplace and launch the blueprint with a different environment, the environment patching happens as follows:

  • When you select Project1 and ENV2 for launching, the account in the blueprint is PC1, and the account in ENV2 is PC2.

    Because different accounts are associated with the blueprint and environment, all platform-dependent field values are patched from the environment to the blueprint, irrespective of the values already available in the blueprint. The blueprint is launched with the following configuration.

    • Image: PC2_Image1
    • Categories: PC2_category1 and PC2_category2
    • Cluster: PC2_Cluster1
    • NIC: PC2_Net1
  • When you select Project1 and ENV1 for launching, the account in both the blueprint and ENV1 is PC1.

    Because the account is the same for both the blueprint and environment and all the platform-dependent fields already have values, patching does not happen. The blueprint is launched with the following configuration.

    • Image: PC1_Image2
    • Categories: PC1_category3
    • Cluster: PC1_Cluster2
    • NIC: PC1_Net2

Environment Patching Behavior with Nutanix – Example-2

Assume that you have a Prism Central account PC1 that is associated with two Prism Elements PE1 and PE2, and you add PC1 to your project (Project1).

Assume that the associated Prism Elements have the following networks.

  • PE1: PE1_Net1 and PE1_Net2
  • PE2: PE2_Net1 and PE2_Net2

You then create two environments with the following VM configuration:

Table 4. Environments
ENV1 ENV2
  • NIC: PE1_Net1
  • Image: PC1_Image1
  • Categories: PC1_category1 and PC1_category2
  • Operating System: Linux
  • NIC: PE2_Net1
  • Image: PC1_Image2
  • Categories: PC1_category3 and PC1_category4
  • Operating System: Linux

You then create a blueprint with a Nutanix service under Project1 having the following configuration:

  • NIC: PE1_Net2
  • Image: PC1_Image3
  • Categories: PC1_category5 and PC1_category6

When you publish this blueprint in the marketplace and launch the blueprint with a different environment, the environment patching happens as follows:

  • When you select Project1 and ENV2 for launching:

    Prism Element accounts are derived from the NIC or subnet. The PE1_Net2 network used in the blueprint associates the blueprint to Prism Element PE1, and the PE2_Net1 network used in ENV2 associates the environment to Prism Element PE2.

    Because these two networks are connected to two different Prism Element account_uuid values, Calm considers this case as two different accounts associated with the blueprint and environment. All platform-dependent field values are, therefore, patched from the environment to the blueprint, irrespective of the values already available in the blueprint. The blueprint is launched with the following configuration.

    • NIC: PE2_Net1
    • Image: PC1_Image2
    • Categories: PC1_category3 and PC1_category4
  • When you select Project1 and ENV1 for launching:

    The PE1_Net2 network used in the blueprint and the PE1_Net1 network used in ENV1 belong to the same Prism Element account.

    Because these two networks share the same Prism Element account_uuid , Calm considers this case as the same account associated with both the blueprint and environment. Platform-dependent fields in this case already have values, and patching does not happen. The blueprint is launched with the following configuration.

    • NIC: PE1_Net2
    • Image: PC1_Image3
    • Categories: PC1_category5 and PC1_category6

Credentials Patching

Patching of credentials happens only when you publish your blueprints in the marketplace without secrets.

For patching, the credentials of the marketplace blueprint are mapped to the environment using the associated provider account and operating system type. The password or the key value of the corresponding environment is then patched to the blueprint. The credential name and the credential username are never patched from the environment.

For example, if the blueprint and the environment have the following configurations:

Table 5. VM Configuration
Blueprint Environment
  • Credential Name: BP_Credentials
  • Username: BP_User1
  • Password: BP_Password
  • Provider Account: Nutanix
  • Operating System: Linux
  • Credential Name: ENV_Credentials
  • Username: ENV_User1
  • Password: ENV_Password
  • Provider Account: Nutanix
  • Operating System: Linux

The credentials patching in the blueprint happens as follows:

Table 6. Credential Patching
When Blueprint is Published with Secrets When Blueprint is Published without Secrets
  • Credential Name: BP_Credentials
  • Username: BP_User1
  • Password: BP_Password
  • Credential Name: BP_Credentials
  • Username: BP_User1
  • Password: ENV_Password
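The credential patching rule above can be sketched as follows: only the secret (password or key) is taken from the environment, and only when the blueprint was published without secrets. The function name and dictionary layout are illustrative assumptions:

```python
# Sketch of credential patching (Tables 5 and 6). The credential name and
# username are never patched; only the secret is, and only when the
# blueprint was published without secrets.
def patch_credential(blueprint_cred: dict, env_cred: dict,
                     published_with_secrets: bool) -> dict:
    patched = dict(blueprint_cred)          # name and username stay as-is
    if not published_with_secrets:
        patched["password"] = env_cred["password"]
    return patched
```

Applied to the example tables above, publishing with secrets leaves BP_Password intact, while publishing without secrets replaces it with ENV_Password but keeps BP_Credentials and BP_User1.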

Patching for Clusters and Subnets

The Cluster field is platform dependent. The environment patching logic of a platform-dependent field depends on the account that you associate with the marketplace item and the VM configuration of the environment.

Table 1. Conditions for Cluster Patching
Condition Patching Behavior
When the cluster reference in the blueprint and in the environment VM configuration is the same. No patching happens. The cluster reference from the blueprint is used for the launch.
When the cluster reference in the blueprint and in the environment VM configuration is different. Patching happens. The cluster value is patched from the environment for the launch.
When the cluster reference in the blueprint is a macro.
Note: Cluster reference can be a macro only when all the subnets are overlay subnets or all the subnets are macros.
No patching happens. The cluster value will remain as a macro.

When the reference is a macro, it is independent of the environment or the account that is being used for launch.

VLAN subnets are platform dependent. The environment patching logic of VLAN subnets depends on the cluster reference of the blueprint and the cluster reference of the associated environment VM configuration.

Overlay subnets are VPC dependent. The environment patching logic of these subnets depends on the VPC reference in the blueprint and the VPC reference of the associated environment VM configuration.

All subnets in the substrate of a blueprint can either have overlay subnets or VLAN subnets. If subnets are overlay subnets, then all the subnets in the substrate must belong to the same VPC.

Table 2. Conditions for Subnet Patching
Condition Patching Behavior
When the VLAN subnets in the blueprint and in the environment VM configuration are the same. No patching happens. VLAN subnets are platform dependent. The VLAN subnet values referred to in the blueprint are used.
When the VLAN subnets in the blueprint and in the environment VM configuration are different. Patching happens. VLAN subnets are platform dependent. The VLAN subnet values are patched from the environment.
When the VPC reference of the subnets (overlay subnets) in the blueprint and the environment VM configuration is the same. No patching happens. The subnet values of the blueprint are used for the launch.

Values from the environment are patched only if they are empty in the blueprint or not allowed in the destination environment.

When the VPC reference of the subnets (overlay subnets) in the blueprint and the environment VM configuration is different. Patching happens. The subnet values are patched directly from the environment.
When the network type in the blueprint and the environment VM configuration are different (for example, overlay subnets in the blueprint and VLAN subnets in the environment). Patching happens. The subnet values are patched directly from the environment.
When the subnet reference of any of the NICs in the blueprint is a macro. Patching follows the usual conditions. However, macros are never patched.
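The subnet patching conditions in Table 2 can be summarized in a sketch. The dictionary keys, type labels, and function name are illustrative assumptions; the empty-value and not-allowed exceptions for same-VPC overlay subnets are omitted for brevity:

```python
# Sketch of the subnet patching conditions in Table 2 (names are assumptions).
def should_patch_subnet(bp_subnet: dict, env_subnet: dict) -> bool:
    """Return True when the environment subnet should replace the blueprint's."""
    if bp_subnet.get("is_macro"):
        return False                           # macros are never patched
    if bp_subnet["type"] != env_subnet["type"]:
        return True                            # mixed network types: patch
    if bp_subnet["type"] == "vlan":
        # VLAN subnets: patch only when the subnets differ
        return bp_subnet["name"] != env_subnet["name"]
    # Overlay subnets: patch only when the VPC references differ
    return bp_subnet["vpc"] != env_subnet["vpc"]
```

This mirrors the table: identical VLAN subnets or a shared VPC keep the blueprint values, while differing subnets, differing VPCs, or mixed network types take the environment values.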

Executing a Runbook from the Marketplace

You can execute an approved and published runbook using the Marketplace tab.

Before you begin

Ensure that the runbook that you want to execute is approved and published in the marketplace. For more information, see Approving and Publishing a Blueprint or Runbook.

Procedure

  1. Click the Marketplace icon in the left pane.
    The Marketplace page is displayed.
  2. Click the Get button for the runbook that you want to execute.
    The runbook overview page appears.
    Figure. Runbooks Overview

  3. Click Execute .
    The Execute Runbook window appears.
    Figure. Execute Runbook

  4. Select the project for the runbook execution from the Project list.
  5. Optionally, if you want to change the default endpoint for the execution, select an endpoint from the Default Endpoint list.
    Note: If you published the runbook without endpoints, you need to select the endpoint from the project in which you are executing the runbook.
  6. Optionally, if you want to update the added variable in the runbook, click the respective variable field and edit the variable.
    Note: You can update the variable only if the variable is marked as runtime editable while adding the variable in the runbook.
  7. Click Execute .

Cloning an Application Blueprint or Runbook

You can clone an application blueprint or runbook from the marketplace.

Procedure

  1. Click the Marketplace icon in the left pane.
    The Marketplace page is displayed.
  2. Click the Get button for the application blueprint or runbook that you want to clone.
    The Overview page for the application or runbook appears.
    Figure. Blueprint Cloning

  3. Click Clone .
    The Clone window appears.
  4. Enter the name for the clone.
  5. Select the project that you want to assign to the cloned application blueprint or runbook from the Project list.
  6. Click Clone .
    The cloned blueprint or runbook appears on their respective Blueprints or Runbooks tabs.

Marketplace Manager in Calm

Marketplace Manager Overview

Use Marketplace Manager to manage the list of custom blueprints, ready-to-use marketplace application blueprints, and runbooks. You can approve, reject, launch, publish, unpublish, assign a category, and select projects for a blueprint. You can also approve, reject, publish, unpublish, and execute runbooks.

The Approved tab on the Marketplace Manager page provides a list of ready-to-use application blueprints and the custom blueprints or runbooks that you approved. The Approval Pending tab provides a list of custom blueprints and runbooks that require your approval to be available in the marketplace for consumption.

Figure. Marketplace Manager

When you select a blueprint or runbook from the list on any tab, the inspector panel displays the operations you can perform on the selected blueprint or runbook. The inspector panel also displays a brief overview of the blueprint or runbook and allows you to assign projects to the blueprint or runbook.

Figure. Inspector Panel

You can perform the following actions on blueprints or runbooks.

  • You can approve blueprints or runbooks and publish them to the marketplace for consumption. You can also publish the ready-to-use application blueprints to the marketplace. For more information, see Approving and Publishing a Blueprint or Runbook.
  • You can unpublish blueprints or runbooks to remove them from the marketplace. For more information, see Unpublishing a Blueprint or Runbook.
  • You can also delete an unpublished blueprint or runbook. For more information, see Deleting an Unpublished Blueprint or Runbook.

Marketplace Version

Marketplace version enables you to define the initial version number of a blueprint or runbook that is being published to the marketplace. It also enables you to revise the version of a blueprint or runbook that is already published to the marketplace. For information about how to define the marketplace version, see Submitting a Blueprint for Approval or Submitting a Runbook for Publishing.

Approving and Publishing a Blueprint or Runbook

You can approve custom blueprints or runbooks that are submitted for approval on the Approval Pending tab. You can also publish the approved blueprints or runbooks to the marketplace after associating them with a project on the Approved tab.

About this task

The Approved tab also displays the ready-to-use application blueprints that are available after enabling the Nutanix Marketplace Apps toggle button on the Settings page. These application blueprints do not require approval and can be published directly to the marketplace after associating them with a project. For more information about enabling the ready-to-use applications, see Enabling Nutanix Marketplace Applications.

Before you begin

  • To publish a blueprint, ensure that you have configured a blueprint and submitted the blueprint for approval. For more information, see Calm Blueprints Overview and Submitting a Blueprint for Approval.
  • To publish a runbook, ensure that you have submitted a runbook for publishing. For more information, see Submitting a Runbook for Publishing.

Procedure

  1. Click the Marketplace Manager tab.
    The Marketplace Manager page is displayed.
  2. Click the Approval Pending tab to get the list of all unpublished blueprints and runbook requests.
    A list of all the unpublished blueprint and runbook requests is displayed.
    Figure. Marketplace Manager Approval Pending

  3. Select the blueprint or runbook that you want to approve and publish.
    The inspector panel appears.
  4. Click the check mark button to approve.
  5. Click the Approved tab to get the list of all approved blueprints and runbooks.
    A list of all the approved blueprints and runbooks is displayed.
  6. Select the approved blueprint or runbook that you want to publish.
    Note: You can also select a ready-to-use marketplace application blueprint on the Approved tab for publishing.
  7. In the Inspector Panel, select the category from the Category list.
    You can also add a new application category value and select the value for publishing. To add a new category value for applications, you need to add the value to the AppFamily category. To know how to add a value to a category, see the Category Management section in the Virtual Infrastructure (Cluster) Administration chapter of the Prism Central Guide .
  8. Select one or more projects from the Projects Shared With list.
  9. Click Apply .
  10. Click Publish .
    The blueprint or runbook is published in the marketplace.

What to do next

  • Launch the published blueprint from the Marketplace tab. For more information, see Launching a Blueprint from the Marketplace.
  • Execute the published runbook from the Marketplace tab. For more information, see Executing a Runbook from the Marketplace.

Unpublishing a Blueprint or Runbook

You can unpublish a blueprint or runbook if you do not want to list it in the Marketplace. You can publish the blueprint or runbook again if required.

Procedure

  1. Click the Marketplace Manager tab.
    The Approved tab lists all the published blueprints and runbooks.
  2. Select the blueprint or runbook that you want to unpublish.
    The inspector panel is displayed.
  3. Click Unpublish .
    The blueprint or runbook is unpublished and does not appear in the marketplace.

Deleting an Unpublished Blueprint or Runbook

You can delete a blueprint or runbook that is not published in the marketplace. If you want to delete a published blueprint or runbook, you first have to unpublish it and then delete it.

Procedure

  1. Click the Marketplace Manager tab.
  2. Do one of the following:
    • To delete a blueprint or runbook that is not yet approved, click the Approval Pending tab.
    • To delete a blueprint or runbook that is approved and unpublished, click the Approved tab.
  3. Select the blueprint or runbook that you want to delete.
    The inspector panel appears.
  4. Click the Delete icon.
    The blueprint or runbook is deleted from the marketplace manager.

Calm Applications

Applications Overview

You create applications in Calm by creating and launching blueprints.

The Applications page displays the list of all published applications under the Applications tab and the list of brownfield applications under the Brownfield Applications tab.

Figure. Applications Page

The Applications page provides the following details about an application.

  • Name of the application.
  • Source blueprint of the application.
  • State of the application, for example whether it is running or in an error state.
  • Time when the application was created.
  • Name of the application owner.
  • Time elapsed since the application was created.
  • Date of the last update of the application.
  • Cost of the application for the last 30 days.

Application-Level Actions

You have the following application-level actions.

  • Create
  • Start
  • Restart
  • Stop
  • Delete
  • Soft delete
  • Install NGT applications
  • Manage NGT applications
  • Uninstall NGT applications
  • Create, restore, and delete snapshots
  • Clone applications

You cannot perform the Create action after the blueprint is launched and the application is created. You can perform all other application-level actions according to the application state.

You can also perform advanced application actions such as creating or restoring snapshots, updating VM configuration, or cloning an application. See the Advanced Application Actions chapter in this guide for details.

Application State

The Applications page displays the state of the application based on the actions you perform on the Manage tab.

Table 1. Application State
Application State Description
Provisioning When you start an application.
Running When the application is deployed and running after the provisioning state.
Stopping When you have initiated an operation to stop the application.
Stopped When the application is stopped.
Restarting When you have initiated an operation to restart the application after the application is stopped.
Deleting When you have initiated an operation to delete the application.
Deleted When the application is deleted.
Busy When you have installed the NGT services on the VMs of an application.
Updating When you are editing an application.
Error When the application goes to error state due to any action you have performed in the Manage tab.
Failover-in-progress When you have initiated a failover operation on Prism Central for the protected VMs of an application.
Failover-failed When the failover operation for the VMs has failed. The failure state mainly occurs in the following conditions.
  • If there is any error from the Prism Central side.
  • If there is no NIC attached to the VM when you configure the recovery plan for the protected VM.
Note: The Failover-in-progress and Failover-failed states are only applicable for the applications that are running on the Nutanix platform.
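The lifecycle in Table 1 can be pictured as a simple state machine. The following Python sketch is illustrative only: the state names come from the table above, but the transition rules are a simplification, not Calm's engine or API.

```python
# Illustrative state model based on Table 1. Each action lists the states it
# may run from, the transient state while it runs, and the final state.
TRANSITIONS = {
    "start":   ({"Stopped"}, "Provisioning", "Running"),
    "stop":    ({"Running"}, "Stopping", "Stopped"),
    "restart": ({"Stopped"}, "Restarting", "Running"),
    "delete":  ({"Running", "Stopped", "Error"}, "Deleting", "Deleted"),
}

def next_states(current: str, action: str):
    """Return (transient_state, final_state) for an action, or raise."""
    allowed, transient, final = TRANSITIONS[action]
    if current not in allowed:
        raise ValueError(f"Cannot {action} an application in state {current}")
    return transient, final

print(next_states("Running", "stop"))  # → ('Stopping', 'Stopped')
```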

Application Details

You can click an application name to get details about the application as shown in the following figure.

Figure. Application Details

The application page consists of the following tabs.

Overview Tab

The Overview tab consists of the following panels.

  • Application Description
  • Variables
  • Cost Summary
  • App Summary
  • App Status
  • VM Info
Table 1. Overview Tab
Panel Description
Application Description Displays the application description.
Variables Displays the variable list used to create the blueprint. You can click the copy icon next to the variable to copy the variable.
Cost Summary Displays the total cost, current cost for each hour, and the cost incurred in a month for the resources that are running in the blueprint. The cost summary panel also displays a graphical representation of the incurred cost.
Note: The Cost Summary panel is applicable for Nutanix and VMware providers.
App Summary Displays the following application details.
  • Application UUID : Displays a unique identification code for the application. The UUID is automatically generated after the application is created and is in a running state.
  • Blueprint : Displays the blueprint from which the application is created.
  • Cloud : Displays the cloud provider icon that hosts the application.
  • Project : Displays the project that is added to the application.
  • Owner : Displays the name of the application owner.
  • Created On : Displays the date and time when the application was created.
  • Last Updated On : Displays the date and time when the application was last updated.
App Status Displays the summary of virtual machines (VMs). The panel displays the number of VMs that are in the following state.
  • On
  • Busy
  • Error
  • Off
VM info Displays the following VM details of the application.
  • Name : Displays the VM name.
  • IP Address : Displays the IP address of the VM.
  • Image : Displays the image from which the VM is created.
  • vCPUs : Displays the number of vCPUs allocated to the VM.
  • Cores : Displays the number of cores allocated to the VM.
  • Memory : Displays the total memory allocated to the VM.
  • Network Adapters : Displays the network adapters used in the VM. You can use the down arrow key to view the details of the network adapter.
  • VPC : Displays the associated VPC and the connection status.
  • Categories : Displays the categories added to the VMs. You can use the down arrow key to view the details of the categories.

Manage Tab

The Manage tab lists the system-generated and user-created actions that you can perform on the application. When you click any of the listed actions, the editor displays the action dependencies.

Figure. Manage Tab

You can perform the following system-generated actions on an application.

  • Create : Creates an application. You cannot perform this action once the blueprint is launched and application is created.
  • Start : Starts an application.
  • Restart : Restarts an application.
  • Stop : Stops an application.
  • Delete : Deletes an application including the underlying VMs on the provider side.
  • Soft Delete : Deletes the application from the Calm environment but does not delete the VMs on the provider side.
  • Install NGT : Installs NGT service on your VM. To install NGT on your VM, see Installing NGT Apps.
  • Manage NGT : Manages NGT services for your application. You can enable or disable the SSR or VSS service. Self-service restore (SSR) allows virtual machine administrators to recover files from Nutanix data protection snapshots with minimal administrator intervention. For more information, see the Self-Service Restore section in the Prism Web Console Guide. Volume Shadow Copy Service (VSS; also known as Shadow Copy or Volume Snapshot Service) creates an application-consistent snapshot for a VM and is limited to consistency groups consisting of a single VM.
  • Uninstall NGT : Uninstalls NGT services from the VM. For more information, see Uninstalling NGT Apps.

Nutanix guest tools (NGT) is a software bundle that you can install in a guest virtual machine (Microsoft Windows or Linux) to enable the advanced functionalities provided by Nutanix. For more information on NGT, see the Nutanix Guest Tool section in the Prism Web Console Guide .

Note:
  • NGT services apply only to single-VM applications running with Nutanix as the provider.
  • For Kubernetes, the start, stop, and restart actions are disabled.

The inspector panel also displays the action you perform on an application. To view the detailed course of the action, click Action .

Metrics Tab

The Metrics tab allows you to view performance metrics of the VM. The Metrics tab displays a section on the left with a list of metrics.

Note:
  • The Metrics tab applies only to single VM blueprint running with Nutanix as the provider.
  • The identified anomalies are based on VM behavioral machine-learning capabilities.
  • Clicking a metric displays a graph on the right. (Some metrics have multiple graphs.) The graph is a rolling time interval performance or usage monitor. The baseline range (based on the machine-learning algorithm) appears as a blue band in the graph. Placing the cursor anywhere on the horizontal axis displays the current value. To set the time interval (last 24 hours, last week, last 21 days), select the duration from the pull-down list on the right.
    Note: The machine-learning algorithm uses 21 days of data to monitor performance. A graph does not appear for less than 21 days of data.
  • To create an alert for this VM based on either behavioral anomalies or status thresholds, click the Set Alerts link above the graph.

The following table describes the available metrics.

Table 1. Metrics Tab Fields
Metric Description
CPU usage Displays the percentage of CPU capacity that the VM is currently using (0–100%).
CPU ready time Displays the current, high, and low percentage of CPU ready time (0–100%).
Memory usage Displays the percentage of memory capacity that the VM is currently using (0–100%).
I/O Bandwidth Displays separate graphs for total, write (only), and read (only) I/O bandwidth used per second (Mbps or KBps) for physical disk requests by the VM.
I/O Latency Displays separate graphs for total, write, and read average I/O latency (in milliseconds) for physical disk requests by the VM.
IOPS Displays separate graphs for total, write, and read I/O operations per second (IOPS) for the VM.
Usage Displays separate graphs for current, snapshot, and shared storage usage (in GiBs) by the VM.
Working set size Displays separate graphs for total, write, and read storage usage (in GiBs) for the VM working set size.
Network packets dropped Displays separate graphs for the number of transmitted and received packets dropped.
Network bytes Displays separate graphs for the amount of transmitted and received bytes (in GiBs).
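The baseline band described above comes from Nutanix's behavioral machine-learning model, which is not documented here. As a rough, purely illustrative approximation, such a band can be thought of as a mean plus or minus a few standard deviations over the metric history, with the same 21-day minimum noted above. The helper below is an assumption for illustration, not the actual algorithm.

```python
from statistics import mean, stdev

MIN_DAYS = 21  # per the note above: no graph with fewer than 21 days of data

def baseline_band(daily_values, k=2.0):
    """Rough stand-in for the behavioral baseline band.

    Returns (low, high) or None when there is not enough history. The real
    anomaly detection is Nutanix's ML model; this only illustrates the idea
    of a band using mean ± k standard deviations.
    """
    if len(daily_values) < MIN_DAYS:
        return None  # not enough data to draw a baseline
    mu, sigma = mean(daily_values), stdev(daily_values)
    return (max(0.0, mu - k * sigma), mu + k * sigma)

print(baseline_band([50.0] * 10))        # → None (fewer than 21 days)
print(baseline_band([50.0, 52.0] * 11))  # 22 days: a band around ~51%
```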

Recovery Points Tab

The Recovery Points tab allows you to view the captured snapshots, restore applications from snapshots, and delete the snapshots for an application.

Note:

The Recovery Points tab applies only to single VM blueprints running with Nutanix as the provider.

To create snapshots of the single-VM or multi-VM applications that are running on Nutanix platform, use the snapshot action on the Manage tab of the application.

Table 1. Recovery Points Tab Fields
Fields Description
Name Displays the name of the snapshots.
Creation Time Displays the date and time of the snapshot creation.
Location Displays the location where the snapshot was taken.
Expiration Time Displays the expiration time of the snapshot.
Recovery Point Type Displays whether the snapshot type is application-consistent or crash-consistent.

Snapshots Tab

The Snapshot tab allows you to view the captured snapshots, restore applications from snapshots, and delete the snapshots for an application. Use this tab to create snapshots of single-VM applications that are running on VMware or Azure.

Table 1. Snapshots Tab Fields
Fields Description
ID Displays the ID of the snapshots. Snapshot IDs are unique and automatically generated when you take a snapshot.
Name Displays the name of the snapshot.
Description Displays the description of the snapshot.
Parent Displays the parent blueprint application from which the snapshot is taken.
Creation Time Displays the date and time when the snapshot is taken.

AMIs Tab

The AMIs tab allows you to view the captured snapshots, restore applications from snapshots, and delete the snapshots for an application.

Note: This tab is only applicable for single VM blueprints running with AWS accounts.
Table 1. AMI Tab Fields
Fields Description
ID Displays the ID of the snapshots. Snapshot IDs are unique and automatically generated when you take a snapshot.
Name Displays the name of the snapshot.
Description Displays the description of the snapshot.
Creation Time Displays the date and time when the snapshot is taken.

Services Tab

The Services tab lists the included services in the application as displayed in the following figure. You can select the service to view the configuration details in the service inspector panel.
Note: The Services tab applies only to multi-VM applications.
Figure. Services Tab

Accessing Web SSH Console

Perform the following procedure to run shell commands on a web SSH console for a service.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application.
    The Overview tab is displayed.
  3. Under the Services tab, click the service.
  4. Click Open Terminal .
    Figure. Web SSH Console

    The web SSH console is displayed.

Audit Tab

The Audit tab lists the actions performed on an application, as displayed in the following figure. To view the detailed course of an action, click the action.

Figure. Audit Tab

Brownfield Applications Overview

Brownfield applications are created to manage existing VMs that are currently not managed by Calm. To create a brownfield application, Calm must communicate with the VMs that are not managed by Calm. After the application is created, the application runs like any other Calm application.

Figure. Brownfield Applications Page

Brownfield Applications - Key Points

The following are the key points you must consider before you create a brownfield application.

  • You need administrator privileges to create a brownfield application.
  • For the quota utilization check and VM update configuration to work accurately, you must either select a single VM per service or the VMs that have the same configuration.

    In Calm, the update configuration is stored as a single element per service and applicable from the first VM instance. When you select multiple VMs with different configurations in a service and update the configuration, the update configuration applies to the first VM instance. The same configuration is then followed for all the remaining VM instances.

    Let’s say you selected VM1 and VM2 for the service with a RAM of 4 GB and 8 GB respectively. If you define the update configuration to increase the RAM by 1 GB and run the action, the update applies to VM1 to increase the RAM to 5 GB. The same configuration is then followed for VM2 to change the RAM from 8 GB to 5 GB causing undesirable results in both the update configuration and quota utilization checks.

  • When you add credentials for the VMs, ensure that the credentials are the same for all the VMs.
  • After a VM is created, the VM takes some time to be listed for brownfield import.
  • Brownfield applications do not support snapshot and restore.
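The RAM example above can be sketched in a few lines. The following Python snippet is illustrative only (Calm's internal representation is not exposed); it mimics how a single per-service update configuration, derived from the first VM instance, propagates to every VM in the service.

```python
# Illustration of the update-configuration pitfall described above. Calm
# stores one update configuration per service and derives the target values
# from the first VM instance; this sketch mimics that behavior.

def apply_update(vms, ram_increase_gb):
    """Increase RAM per the service-level update configuration.

    vms: list of dicts with a 'ram_gb' key, in service order.
    """
    target = vms[0]["ram_gb"] + ram_increase_gb  # computed from the first VM only
    for vm in vms:
        vm["ram_gb"] = target  # every instance gets the same target value
    return vms

service = [{"name": "VM1", "ram_gb": 4}, {"name": "VM2", "ram_gb": 8}]
apply_update(service, 1)
# VM1: 4 → 5 GB as intended, but VM2: 8 → 5 GB, an unintended downgrade.
print(service)
```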

For information on how to create a brownfield application, see Creating Brownfield Application.

Creating Brownfield Application

Brownfield applications are created to manage existing VMs that are currently not managed by Calm. Perform the following procedure to create a brownfield application.

About this task

Video: Creating Brownfield Application

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the Brownfield Applications tab.
    The Brownfield Application page is displayed.
  3. Click + Create Brownfield Application .
    The Brownfield Import window is displayed.
  4. Enter the blueprint application name in the Name field.
  5. Optionally, enter a description about the application in the Description field.
  6. Select a project from the Project list.
  7. Click Proceed .
    The brownfield application editor page is displayed.
  8. To add a service, click + next to the service.
  9. Enter the service name in the Service Name field.
  10. Select one of the following types of deployment.
    • Select Greenfield if all the existing VMs are managed by Calm.
    • Select Brownfield if the existing VMs are currently not managed by Calm.
  11. If you have selected Brownfield , then do the following.
    1. Select the VMs from the Select Machines list.
      Note: For the quota utilization check and VM update configuration to work accurately, ensure that you either select a single VM or the VMs that have the same configuration.
    2. Configure the connection. For more information, see Configuring Check Log-In.
    3. Add credentials. For more information, see Adding Credentials.
  12. If you have selected Greenfield , then configure the VM, package, and service. To configure VM, package, and service refer to Configure Multi-VM, Package, and Service.
  13. Click Save .
    The brownfield application is created and listed under the Brownfield Application list.

What to do next

Launch the brownfield application from the Applications tab. For more information, see Launching Brownfield Applications.

Launching Brownfield Applications

You must launch the configured brownfield applications to be managed by Calm.

About this task

Video: Launching Brownfield Applications

Before you begin

Ensure that you have created the brownfield applications. For more information, see Creating Brownfield Application.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the Brownfield Applications tab.
    The Brownfield Application page is displayed.
  3. Click the brownfield application that you want to launch.
    The blueprint details page is displayed.
  4. Click Launch .
    The brownfield application page is displayed and the application is listed under the Applications tab.

Advanced Calm Application Actions

Installing NGT Apps

Nutanix Guest Tools (NGT) is a software bundle that you can install in a guest virtual machine (Microsoft Windows or Linux) to enable the advanced functionality provided by Nutanix. For more information about NGT, see the Prism Central Guide . Perform the following procedure to install NGT services on your VM. NGT services are only applicable for AHV clusters.

About this task

Note: The Nutanix Guest Agent service is now upgraded to Python 3.6. For successful installation of NGT on Windows VMs, apply the Update for Universal C Runtime in Windows (Microsoft KB 2999226) to upgrade your Windows VMs to Python 3.6.

Before you begin

  • Ensure that NGT requirements and limitations are met. For more information, see the Prism Central Guide .
  • Ensure that you have configured the cluster virtual IP address.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application that you want to install NGT on.
    The Overview tab is displayed.
  3. Under the Manage tab, click the Install NGT Apps play button.
    Figure. Install NGT

    The Install NGT Apps screen appears.
  4. To restore desired files from the VM, click the Enable Self Service Restore (SSR) check box. This step is optional.
    The self-service restore (SSR) allows virtual machine administrators to do a self-service recovery from the Nutanix data protection snapshots with minimal administrator intervention. For more information, see the Prism Central Guide.
    The Self-Service Restore feature is enabled for the VM.
  5. To enable VSS, click the Enable Volume Snapshot Service (VSS) check box. This step is optional.
    Volume Shadow Copy Service (VSS; also known as Shadow Copy or Volume Snapshot Service) creates an application-consistent snapshot for a VM and is limited to consistency groups consisting of a single VM. Enabling VSS allows you to take application-consistent snapshots.
  6. Do one of the following:
    1. To restart the VM after NGT installation, click Restart as soon as the install is completed .
    2. To skip the restart of the VM after NGT installation, click Skip restart .
  7. Click Enter Credentials and do the following.
    1. In the User name field, enter the user name.
    2. In the Password field, enter the password.
  8. Do one of the following.
    • To install NGT, click Done .
    • To mount the NGT on the VM and install it later, click Skip and Mount .
    • If NGT is already mounted on a VM, to unmount the NGT from the VM, click Unmount .
    • To cancel NGT installation, click Cancel .

Managing NGT Apps

After you install the NGT service on a VM, you can enable or disable the VSS and SSR services by using the Manage NGT Apps play button. To know more about the VSS and SSR services, see the Nutanix Guest Tools section in the Prism Web Console Guide.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application on which you want to manage NGT services.
    The Overview tab is displayed.
  3. Under the Manage tab, click the Manage NGT Apps play button.
    The Manage NGT Apps screen appears.
  4. On the Manage NGT Apps screen, click the Enable or Disable button to enable or disable the self-service restore or volume snapshot service.
  5. Click Confirm .
    The changes are saved and you can use the NGT services based on your selection.

Uninstalling NGT Apps

If you do not want to recover application details after the host VM becomes unavailable, uninstall the NGT application. Perform the following procedure to uninstall NGT services for your application.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application from which you want to uninstall NGT.
    The Overview tab is displayed.
  3. Under the Manage tab, click the Uninstall NGT Apps play button.
    A confirmation message appears to uninstall NGT.
  4. Click Uninstall .
    NGT is uninstalled from the VM.

Snapshot and Restore

A snapshot preserves the state and data of an application virtual machine at a specific point in time. You can create a snapshot of a virtual machine at a particular point in time and restore from the snapshot to recreate the application from that time.

On a Nutanix platform, you can use the snapshot and restore feature in both single-VM and multi-VM applications. On VMware, AWS, and Azure platforms, you can use the snapshot and restore feature only in a single-VM application.

While the snapshot and restore feature is available by default for VMware, AWS, and Azure platforms, you need to add the snapshot/restore configuration to the single-VM or multi-VM blueprint on Nutanix. Adding the configuration to the blueprint generates separate profile actions for snapshot and restore. For more information, see Configuring Single-VM Blueprints with Nutanix for Snapshots and Configuring Multi-VM Blueprints on Nutanix for Snapshots.
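The platform support described above can be summarized as a small lookup table. The sketch below encodes the matrix stated in this section; the helper function itself is purely illustrative.

```python
# Snapshot/restore support per provider, as described above. The matrix
# values come from this section; the lookup helper is an illustration only.
SNAPSHOT_SUPPORT = {
    "nutanix": {"single_vm", "multi_vm"},
    "vmware":  {"single_vm"},
    "aws":     {"single_vm"},
    "azure":   {"single_vm"},
}

def supports_snapshot(provider: str, app_type: str) -> bool:
    """Return True if snapshot/restore is available for this combination."""
    return app_type in SNAPSHOT_SUPPORT.get(provider.lower(), set())

print(supports_snapshot("Nutanix", "multi_vm"))  # → True
print(supports_snapshot("VMware", "multi_vm"))   # → False
```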

Snapshot and Restore for Nutanix Platform

Snapshot and restore of an application VM that runs on a Nutanix platform involves the following configurations and actions:

  • Policy Definition for Snapshots
  • Blueprint Configuration for Snapshots
  • Application Launch with Snapshot Policy
  • Snapshot Creation
  • Snapshot Restore

Policy Definition for Snapshots

As a project admin, you define snapshot policies in a project. Snapshot policies help you define rules for taking snapshots of application VM. The policy determines the overall intent of the snapshot creation process and the duration of managing those snapshots. You can configure your snapshot policy to manage your snapshots on a local cluster, on a remote cluster, or both.

  • Local Snapshots: When you select local snapshots in the policy and use the policy for snapshots, the snapshots of the VMs reside on the same cluster as that of the VMs.
    Figure. Local Snapshots

  • Remote Snapshots: When you select remote snapshot in the policy and use the policy for snapshots, the snapshots of the VMs running on the primary cluster are stored on a remote cluster. The primary cluster and the remote cluster must be associated with the same Prism Central. When you restore the VMs, the snapshots are restored to the primary cluster to bring up the VMs.
    Figure. Remote Snapshots

    Remote snapshots are particularly useful when your Prism Central has a compute-intensive cluster managing workloads and a storage-intensive cluster managing your data, snapshots, and so on.

For more information about creating a snapshot policy, see Creating a Snapshot Policy.

Blueprint Configuration for Snapshots

You define snapshot and restore configuration for each service in a blueprint. You can configure the service to create snapshots locally or on a remote cluster. In case your multi-VM blueprint has multiple replicas of the service, you can configure the action to take snapshot only for the first replica or the entire replica set.

The snapshot/restore definition of a service generates the snapshot configuration and its corresponding restore configuration. You can use these configurations to modify your snapshot and restore setup. The snapshot/restore definition also generates application profile actions that you can use to create or restore snapshots. You can add more tasks and actions as part of your snapshot and restore to define actions you might want to take on your services. For example, shutting down the application and the VM before taking the snapshot or restarting the VM or services before a restore.

Note: The snapshot and restore configurations in a service are linked to each other and cannot be managed individually.

For more information on snapshot and restore configuration, see Blueprint Configuration for Snapshots and Restore.

Application Launch with Snapshot Policy

You associate a policy defined in a project when you launch the application. Depending on the snapshot configuration that you provide in the blueprint, you can select the policy and the cluster in which the snapshot will be stored.

If you defined remote snapshot in the blueprint, then you can view all the policies that allow you to take a remote snapshot. You can select a policy and the corresponding clusters before you launch the application.

For more information, see Launching a Blueprint.

Snapshot Creation

Like other profile actions, the profile actions for snapshot and restore appear on the Manage tab of an application. The snapshots created are listed under the Recovery Points tab of the application. When you create multiple snapshots as part of one action, they appear as a snapshot group. You can expand the group to view the snapshots, their corresponding services, and location. For more information, see Creating Snapshots on a Nutanix Platform.

Snapshot Restore

Restore follows the same configuration that the snapshot has. To restore, you specify the variables and select applicable recovery points depending on the VM. For more information, see Restoring VM Details from Snapshots on a Nutanix Platform.

Creating Snapshots on a Nutanix Platform

Perform the following procedure to create application-consistent or crash-consistent snapshots. These snapshots are used to capture and recover all of the VM-level and application-level details. Application-consistent snapshots can also capture all data stored in memory and transactions in process.

About this task

Note: Only crash-consistent snapshots are supported for multi-VM applications on a Nutanix platform.

Before you begin

Ensure that you have installed NGT Apps to take application-consistent snapshots. For more information, see Installing NGT Apps.

Procedure

  1. Click the Applications icon in the left pane.
  2. On the Applications page, click the application for which you want to create snapshots.
  3. On the Manage tab, click the snapshot action you created for the application VM.
    The Run Action: Snapshot window appears.
  4. In the Snapshot Name field, enter a name for the snapshot.
    You can use Calm macros to provide a unique name to the snapshot. For example, snapshot-@@{calm_array_index}@@-@@{calm_time}@@ .
  5. For single-VM applications, select App consistent for application-consistent snapshots or Crash consistent for crash-consistent snapshots.
    Note:
    • You can create application-consistent snapshots only after you have installed NGT Apps with the VSS service enabled. For more information about snapshots, see the Nutanix Guest Tools section in the Prism Web Console Guide.
    • For multi-VM applications, the default snapshot type is Crash consistent.
  6. Click Run.
    The saved snapshots are available under the Recovery Points tab.
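The macro-based naming in the step above can be modeled as a simple token substitution. The helper below is a hypothetical illustration of how Calm might expand `@@{calm_array_index}@@` and `@@{calm_time}@@` into a unique snapshot name; the actual expansion is performed by Calm at runtime.

```python
import re
import time

def expand_macros(template: str, macro_values: dict) -> str:
    """Replace each @@{macro}@@ token with its runtime value."""
    return re.sub(
        r"@@\{(\w+)\}@@",
        lambda m: str(macro_values[m.group(1)]),
        template,
    )

# Example values Calm would supply for the first VM of a service.
values = {"calm_array_index": 0, "calm_time": int(time.time())}
name = expand_macros("snapshot-@@{calm_array_index}@@-@@{calm_time}@@", values)
print(name)  # e.g. snapshot-0-1700000000
```

Because `calm_time` resolves to the current timestamp, each run of the action yields a distinct snapshot name.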

What to do next

You can recover the VM details for an application from the created snapshots. For more information, see Restoring VM Details from Snapshots on a Nutanix Platform.

Restoring VM Details from Snapshots on a Nutanix Platform

You can restore the VM details of an application after the host VM becomes unavailable. Perform the following procedure to restore an application from the snapshots.

Before you begin

Ensure that you have captured the snapshots for an application. For more details, see Creating Snapshots on a Nutanix Platform.
Note:
  • A restored VM or a cloned VM does not have NGT service installed even if the snapshot or the source VM has NGT service installed.
  • The restore operation for a VM fails if the snapshot is configured with a static IP address and an IP pool is not configured.
  • When you perform a restore operation with a snapshot that has a static IP address configured, the restored VM comes up with a new IP address from the IP pool specified in IPAM. To ensure that the restored VM has the same static IP address as the old VM, remove the NIC that has this static IP address configured from the old VM, and attach the configuration to the new restored VM. If there is a failure during the restore operation, perform an update operation on the VM to ensure that the VM is in a valid state.

Procedure

  1. Click the Applications icon in the left pane.
  2. On the Applications page, click the application for which you want to restore the VM details from the snapshots.
  3. On the Manage tab, click the restore action you created for the application.
    The Run Action: Restore window appears.
  4. Select a recovery point from the Select Recovery Point list.
    The Select Recovery Point list shows all the snapshots taken for the application VM.
  5. Click Run .
    The application is restored from the snapshot in a new VM and the existing VM moves to the powered-off state. If you selected the Delete older VM after restore check box while configuring the application blueprint, the existing VM is deleted after the application VM is restored.
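As a toy model (not a Calm API), the restore behavior described above — a new VM created from the recovery point, the existing VM powered off or optionally deleted — can be sketched as:

```python
def restore_from_snapshot(vms: dict, snapshot: dict, delete_older_vm: bool = False) -> dict:
    """Model of the restore flow: new VM from snapshot config, old VM powered off or deleted."""
    old = vms[snapshot["source_vm"]]
    new_vm = {
        "name": old["name"] + "-restored",
        "config": snapshot["config"],
        "power": "on",
    }
    if delete_older_vm:
        # Mirrors the "Delete older VM after restore" blueprint option.
        del vms[snapshot["source_vm"]]
    else:
        old["power"] = "off"
    vms[new_vm["name"]] = new_vm
    return new_vm

vms = {"web-1": {"name": "web-1", "config": {"vcpus": 2}, "power": "on"}}
snap = {"source_vm": "web-1", "config": {"vcpus": 2}}
restored = restore_from_snapshot(vms, snap)
print(vms["web-1"]["power"])  # off
print(restored["name"])       # web-1-restored
```

The VM and field names here are illustrative; the point is only that restore is non-destructive by default and destructive only when the delete option was configured.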

Creating Snapshots on a VMware Platform

A snapshot preserves the state and data of a virtual machine at a specific point in time. You can create a snapshot of a virtual machine at any time and revert to that snapshot to recreate the application from that time. For more information, see the VMware Documentation . Perform the following procedure to create a snapshot.

Before you begin

Ensure that VMware Tools is installed and that the VM is in the powered-on state to create quiesced snapshots.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to create snapshots.
    The Overview tab is displayed.
  3. Click Snapshot .
    Figure. Snapshot VMware

    The Snapshot screen appears.
  4. In the Snapshot Name field, enter a name for the snapshot.
  5. Optionally, in the Snapshot Description field, enter a brief description about the snapshot.
  6. Optionally, click one of the following options.
    • Snapshot VM's Memory : Use this option to capture the memory of the virtual machine and the power settings. Memory snapshots take longer to create, but allow reversion to a running virtual machine state as it was when the snapshot was created.
    • Enable Snapshot Quiesce : Use this option to pause or alter the state of running processes on the virtual machine and take consistent and usable backup. When you quiesce a virtual machine, VMware Tools quiesce the file system in the virtual machine. The quiesce operation pauses or alters the state of running processes on the virtual machine, especially processes that might modify information stored on the disk during a restore operation.
    By default, Snapshot VM's Memory is selected. If you do not select any option, a crash-consistent snapshot is created, which you can use to reboot the virtual machine. For more information, see the VMware Documentation .
  7. Click Save .
    The saved snapshots are available under Snapshots tab.

What to do next

You can recover the VM details for an application from the snapshots you created. For more information about recovering application-level information, see Restoring VM Details from Snapshots on a VMware Platform.

Restoring VM Details from Snapshots on a VMware Platform

You can restore the VM details of an application after the host VM becomes unavailable. Perform the following procedure to restore the VM details of an application from a snapshot.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to restore the VM details from the snapshot you created.
    The Overview tab is displayed.
  3. Click the Snapshots tab.
    The Snapshots tab lists all the snapshots created for the application.
  4. Click Restore next to a snapshot from which you want to restore the VM details.
    A confirmation message appears to restore the VM details.
  5. Click Confirm .
    The application is restored from the snapshot in the same VM.

Creating Snapshots on an AWS Platform

About this task

You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. For more information, see the AWS Documentation. Perform the following procedure to create a snapshot on an AWS platform.
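As a conceptual sketch of the incremental model (not the actual EBS implementation), a snapshot can be modeled as storing only the blocks whose hashes differ from the previous snapshot:

```python
import hashlib

def block_hashes(volume: bytes, block_size: int = 4) -> dict:
    """Map block index -> hash of that block's contents (tiny block size for demo)."""
    return {
        i: hashlib.sha256(volume[i:i + block_size]).hexdigest()
        for i in range(0, len(volume), block_size)
    }

def incremental_snapshot(volume: bytes, previous_hashes: dict, block_size: int = 4) -> dict:
    """Store only the blocks whose hash differs from the previous snapshot."""
    current = block_hashes(volume, block_size)
    return {
        i: volume[i:i + block_size]
        for i, h in current.items()
        if previous_hashes.get(i) != h
    }

base = b"AAAABBBBCCCC"
snap1_hashes = block_hashes(base)
changed = b"AAAAXXXXCCCC"          # only the middle block modified
delta = incremental_snapshot(changed, snap1_hashes)
print(delta)  # {4: b'XXXX'} -- only the changed block is saved
```

This is why frequent snapshots of a mostly static volume stay small: unchanged blocks are referenced, not copied.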

Before you begin

Ensure that you have an AWS account with the required privileges to create a snapshot. For more information, see the Configuring AWS User Account with Minimum Privilege and AWS Policy Privileges sections.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to create snapshots.
    The Overview tab is displayed.
  3. Click Snapshot .
    Figure. Snapshot AWS

    The Save Snapshot screen appears.
  4. In the AMI Name field, enter a name for the snapshot.
  5. Optionally, in the AMI Description field, enter a brief description about the snapshot.
  6. Optionally, select the No Reboot check box to avoid shutting down the Amazon EC2 instance before creating the image.
  7. Click Save .
    The saved snapshots are available under the AMI tab.

Restoring VM Details from Snapshots on an AWS Platform

You can restore the VM details of an application after the host VM becomes unavailable. Perform the following procedure to restore the VM details of an application from a snapshot. Ensure that you have captured snapshots for the application VM.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to restore the VM details.
    The Overview tab is displayed.
  3. Click the AMIs tab.
    The AMIs tab lists all the snapshots created for the application.
  4. Click Restore next to a snapshot from which you want to restore the VM.
    A confirmation message appears to restore the VM details.
  5. Click Confirm Restore .
    The restore action creates a new VM from the snapshot that has the same configuration as the source application with a different IP address.

Creating Snapshots on an Azure Platform

Creating a snapshot of an application virtual machine on the Azure platform creates a point-in-time copy of your operating system and data disks associated with the VM. The snapshots you create can then be used to create a new VM with the same configurations as the source application VM.

Before you begin

Ensure that you have an Azure account with the required privileges to create a snapshot. For more information, see Configuring Azure User Account with Minimum Privilege.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to create snapshots.
    The Overview tab is displayed.
  3. Click Snapshot .
    Figure. Snapshot Azure

    The Save Snapshot screen appears.
  4. In the Snapshot Name field, enter a name for the snapshot.
  5. Optionally, in the Snapshot Description field, enter a brief description about the snapshot.
  6. From the Snapshot Type list, select the storage type to store your snapshot. Your options are:
    • Standard HDD
    • Premium SSD
    • Zone-redundant
    For more information about the storage type, refer to the Azure documentation.
  7. Click Save .
    You can track the progress of the snapshot creation process on the Audit tab. The snapshots are stored on the Snapshots tab.

Restoring VM Details from Snapshots on an Azure Platform

You can restore the VM details of an application after the host VM becomes unavailable. The VM snapshot that you create on an Azure platform consists of the snapshot of operating system and data disks. When you restore the VM details, a new VM is created using the snapshots of the disks.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to restore the VM details.
    The Overview tab is displayed.
  3. Click the Snapshots tab.
    The Snapshots tab lists all the snapshots created for the applications.
  4. Click Restore next to the snapshot from which you want to restore the VM.
    The Restore VM dialog box appears.
  5. Enter the restore name for the application VM.
  6. Optionally, select the Delete Previous VM to delete the original VM from which the snapshot was created.
  7. Click Confirm Restore .
    The restore action creates a new VM from the snapshot that has the same configuration as the source application.

Deleting Snapshots

Perform the following procedure to delete the snapshots created for the VM under an application.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to delete snapshots.
    The Overview tab is displayed.
  3. Do one of the following.
    • If your application is deployed on a Nutanix cluster, click the Recovery Points tab.
    • If your application is deployed on a VMware platform, click the Snapshots tab.
    • If your application is deployed on an AWS platform, click the AMI tab.
  4. Click the Delete button next to the snapshot you want to delete.
  5. Click Confirm .
    The snapshot is deleted.

Update VM Configurations of Running Applications

The update configuration feature allows you to update the virtual machine of a running application to a higher or lower configuration. Using this feature, you can modify VM specifications such as the vCPU, memory, disks, networking, or categories (tags) of a running production application with minimal downtime.

The process to update VM configuration of a running application on Nutanix is different from other providers.

Note: Updating VM configuration of a running multi-VM application is supported only on a Nutanix platform.

Update VM Configuration of an Application on Nutanix

To update the configuration of a running single-VM or multi-VM application on Nutanix, perform the following steps:

  • Add an update configuration to the application blueprint.

    For more information, see Update Configuration for VM.

  • Run the corresponding action to update VM specifications.

    You can update VM specifications from the Manage tab of the application. While launching the update, you can define the variables and verify the updates defined for the service by comparing the original and updated values. You can also modify the values if the component is editable, and check the cost difference at the top of the page before applying the changes. For more information, see Updating the VM Configuration of an Application on Nutanix.

Update VM Configuration of an Application on Other Providers

The option to update the VM configuration of a running single-VM application on VMware, AWS, or Azure is available by default on the Overview tab of the application. The attributes that you can update depend on the provider account you selected for the application.

  • For more information about updating VM configuration on a VMware platform, see Updating the VM Configuration of an Application on a VMware Platform.
  • For more information about updating VM configuration on an AWS platform, see Updating the VM Configuration of an Application on an AWS Platform.
  • For more information about updating VM configuration on an Azure platform, see Update the VM Configuration of an Application on an Azure Platform.

Updating the VM Configuration of an Application on Nutanix

You can run the update configuration to modify the VM specifications, such as the vCPU, memory, disks, networking, or categories of a single-VM or multi-VM application.

About this task

Note:
  • If you update configuration of an application after cloning from a source application, the update fails if the source application has static IP address configured.
  • When you update configuration of an application, the CD-ROM attached to mount NGT services is removed.

Before you begin

Ensure that your blueprint developer has added the update configuration before launching the application blueprint. For more information, see Update Configuration for VM.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application that you want to edit.
  3. On the Manage tab, click the action corresponding to the update configuration.
    The Run Action window appears.
  4. Under the VM Configuration section, enter the change factor value in the Updated field for the vCPUs , Core per vCPU , and Memory (GiB) attributes.
    The ability to edit a VM configuration attribute and the maximum or minimum value to which the attribute can be updated depend on the application blueprint configuration. You can update only those VM configuration attributes that your blueprint developer has enabled for editing. The Updated field does not allow you to enter a value that is beyond the minimum or maximum value configured for the attribute.
  5. Under the Disks section, edit the following.
    • Enter the value in the Updated field to increase the size of the existing disk.
      Note: You cannot decrease the size of an existing disk.

      You can click the delete icon to remove the existing disk.

    • Enter the value in the Updated field to increase or decrease the size of any new disk. The updated value must be within the maximum or minimum value your blueprint developer has configured in the application blueprint.

      You can click the delete icon to remove any new disk if your blueprint developer has enabled it in the application blueprint.

  6. Under the Categories section, delete any existing categories from the application if your blueprint developer has enabled it in the application blueprint configuration.
  7. Under the Network Adapters section, delete any existing NICs from the application if your blueprint developer has enabled it in the application blueprint configuration.
  8. To launch the update configuration, click Run .
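The constraints described in steps 4 and 5 — values clamped to blueprint-defined bounds, and no shrinking of an existing disk — can be sketched as a validation helper. The function and field names are illustrative assumptions, not a Calm API:

```python
def validate_update(attr: str, current: float, updated: float,
                    minimum: float, maximum: float,
                    is_existing_disk: bool = False) -> None:
    """Reject updates outside blueprint bounds, or that shrink an existing disk."""
    if not (minimum <= updated <= maximum):
        raise ValueError(f"{attr}: {updated} is outside [{minimum}, {maximum}]")
    if is_existing_disk and updated < current:
        raise ValueError(f"{attr}: existing disk size cannot be decreased")

# Within bounds: accepted.
validate_update("memory_gib", current=4, updated=8, minimum=2, maximum=16)

# Shrinking an existing disk: rejected, matching the note in step 5.
try:
    validate_update("disk_gib", current=100, updated=80, minimum=10,
                    maximum=500, is_existing_disk=True)
except ValueError as e:
    print(e)  # disk_gib: existing disk size cannot be decreased
```

The Updated field in the UI enforces the same bounds interactively, which is why out-of-range values simply cannot be entered.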

Updating the VM Configuration of an Application on a VMware Platform

You can run the update configuration to modify parameters, such as VM configurations, controllers, disks, and network adapters of a single-VM application running on a VMware platform.

About this task

Note:
  • If the NIC or network setting count does not match after you update an application VM and you then clone the application, the cloned application fails.
  • You cannot add and delete the application properties simultaneously.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application that you want to edit.
  3. On the Manage tab, click Update VM Configuration .
    The Update screen for the application VM is displayed.
  4. In the VM Location field, specify the location of the folder in which the VM must reside when you update. Ensure that you specify a valid folder name already created in your VMware account.
    To create a subfolder in the location you specified, select the Create a folder/directory structure here check box and specify a folder name in the Folder/Directory Name field.
    Select the Delete empty folder check box to delete the subfolder created within the specified location if the folder does not contain any VM resources. This option helps you keep a clean folder structure.
  5. Select the CPU Hot Add check box if you want to increase the vCPU count of a running VM.
    Support for CPU Hot Add depends on the Guest OS of the VM.
    Note: With CPU Hot Add , you can only increase the vCPU count. If you decrease the vCPU count or update the Cores per vCPU, the VM will require a restart.
  6. Update the vCPUs and Core per vCPU count.
  7. Select the Memory Hot Plug check box if you want to increase the memory of a running VM.
    Support for Memory Hot Plug depends on the Guest OS of the VM.
    Note: With Memory Hot Plug , you can only increase the memory. If you decrease the memory, the VM will require a restart.
  8. Update the memory in the Memory field.
  9. Under the Controllers section, you can add or update the SCSI or SATA controllers.
    Note: You cannot delete a controller if it is attached to a disk.
  10. Under the Disks section, click the + icon to add vDisks and do the following:
    1. Select the device type from the Device Type list.
      You can either select CD-ROM or DISK .
    2. Select the adapter type from the Adapter Type list.
      You can select IDE for CD-ROM or SCSI , IDE , or SATA for DISK.
    3. Enter the size of the disk in GiB.
    4. In the Location field, select the disk location.
    5. If you want to add a controller to the vDisk, select the type of controller in the Controller list to attach to the disk.
      Note: You can add either SCSI or SATA controllers. The available options depend on the adapter type.
    6. In the Disk mode list, select the type of the disk mode. Your options are:
      • Dependent : Dependent disk mode is the default disk mode for the vDisk.
      • Independent - Persistent : Disks in persistent mode behave like conventional disks on your physical computer. All data written to a disk in persistent mode are written permanently to the disk.
      • Independent - Nonpersistent : Changes to disks in nonpersistent mode are discarded when you shut down or reset the virtual machine. With nonpersistent mode, you can restart the virtual machine with a virtual disk in the same state every time. Changes to the disk are written to and read from a redo log file that is deleted when you shut down or reset.
    Note:
    • You can also edit the disk size and disk mode. However, decreasing the disk size of a saved configuration is not allowed.
    • You can delete a saved disk. However, you cannot add and delete the disks simultaneously.
  11. Under the Network Adapter section, click the + icon to add a NIC and configure the Adapter Type and Networks fields.
    Note: You can only update the Networks field of an existing NIC.
  12. Under the Tags section, select tags from the Category: Tag pairs field.
    You can assign tags to your VMs so you can view the objects associated with your VMs in your VMware account. For example, you can create a tag for a specific environment and assign the tag to multiple VMs. You can then view all the VMs that are associated with the tag.
  13. Click Update to run the update configuration.
    The application with updated VM configuration is saved on the Application tab.
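The hot-add rules in steps 5 through 8 — only increases apply without a restart, and only when the corresponding hot-add option is enabled — can be sketched as a small decision helper. This is an illustrative simplification, not VMware's actual logic (for example, it omits the cores-per-vCPU change, which also requires a restart):

```python
def requires_restart(old_vcpus: int, new_vcpus: int,
                     old_mem_gib: int, new_mem_gib: int,
                     cpu_hot_add: bool = False,
                     memory_hot_plug: bool = False) -> bool:
    """A restart is needed unless every change is an increase covered by hot add/plug."""
    if new_vcpus < old_vcpus or new_mem_gib < old_mem_gib:
        return True                       # decreases always require a restart
    if new_vcpus > old_vcpus and not cpu_hot_add:
        return True                       # vCPU increase without CPU Hot Add
    if new_mem_gib > old_mem_gib and not memory_hot_plug:
        return True                       # memory increase without Memory Hot Plug
    return False

print(requires_restart(2, 4, 8, 8, cpu_hot_add=True))   # False: hot-added vCPUs
print(requires_restart(4, 2, 8, 8, cpu_hot_add=True))   # True: vCPU decrease
print(requires_restart(2, 2, 8, 16))                    # True: no Memory Hot Plug
```

Whether hot add actually succeeds also depends on the guest OS, as the procedure notes.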

Updating the VM Configuration of an Application on an AWS Platform

You can run the update configuration to modify parameters, such as instance type, IAM role, security groups, tags, and storage of a single-VM application running on an AWS platform.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application that you want to edit.
  3. On the Manage tab, click Update VM Configuration .
    The Update screen for the application VM is displayed.
  4. Under the VM Configuration section, update the instance type from the Instance Type list.
    The Region , Availability Zone , Machine Image , Key Pairs , and VPC fields are automatically selected. You cannot update these fields.
  5. To update the IAM role, select the role from the IAM Role list.
    An IAM role is an AWS Identity and Access Management entity with permissions to make AWS service requests.
  6. To enable the security group rule, select the Include Classic Security Group check box.
  7. From the Security Groups list, select security groups.
  8. To add tags to the application, add the key and value pair in the Key and Value fields respectively.
  9. To update the storage of the application, do the following under the Storage section:
    1. For the existing storage, update the size in GiB in the Size (GiB) field and the volume type from the Volume Type list for the root storage.
    2. Click the + icon to add a storage and specify the device, size, and volume type.
  10. Click Update to run the update configuration.
    The application with updated VM configuration is saved on the Application tab.

Update the VM Configuration of an Application on an Azure Platform

You can run the update configuration to modify parameters, such as VM configurations, controllers, disks, or network adapters of a single-VM application running on an Azure platform.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application that you want to edit.
  3. On the Manage tab, click Update VM Configuration .
    The Update screen for the application VM is displayed.
  4. Under the VM Configuration section, update the hardware profile from the Hardware Profile list.
    The number of data disks and NICs depends upon the selected hardware profile. For information about the sizes of Windows and Linux VMs, see Windows and Linux Documentation.
    The Instance Name , Resource Group , Location , and Availability Option fields are automatically selected. You cannot update these fields.
  5. Under the Storage Profile section, do the following:
    1. Select the Storage Type and Disk Caching Type and specify the Size and Disk LUN for the existing data disk.
    2. Click the + icon to add a data disk. Select the Storage Type and Disk Caching Type and specify the Size and Disk LUN for the new data disk.
  6. Under the Network Profile section, click the + icon to add NICs as per your requirement and do the following for each NIC:
    1. Select a security group from the Security Group list.
    2. Select a virtual network from the Virtual Network list.
    3. Under Public IP Config , enter a name, and select an allocation method.
    4. Under Private IP Config , select an allocation method.
      If you selected Static as the allocation method, then enter the private IP address in the IP Address field.
    You can also update the Security Group , Subnet , and public or private IP config Allocation Method of the existing NIC.
  7. To add tags to the application, add the key and value pair in the Key and Value fields respectively.
  8. Click Update to run the update configuration.
    The application with updated VM configuration is saved on the Application tab.

Updating Actions and Credentials of an Application

You can add or update the credential, custom actions, post delete tasks, or package uninstall tasks from the Overview tab of a single-VM application.

About this task

Note:
  • For this release, support for credential or action update is not available for applications running on Xi cloud.
  • Dynamic variables are runtime editable by default, but you cannot mark a variable as runtime editable if you add the variable while updating an application.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the single-VM application for which you want to update the credential or actions.
  3. From the Update list, select Update Actions and Credentials .
    The Update screen is displayed.
  4. In the Credentials and Connection area, click Edit .
    The Credentials and Connection page is displayed.
  5. To add a credential, click Add Credential and do the following.
    1. In the Add Credential window, enter a name for the credential in the Credential Name field.
    2. Enter the user name in the Username field.
    3. Select the secret type from the Secret Type list.
      You can select either Password or SSH Private Key.
    4. Do one of the following.
      • If you have selected password, enter the password in the Password field.
      • If you have selected SSH Private Key, enter or upload the SSH private key in the SSH Private Key field.
      Optionally, if the private key is password protected, click +Add Passphrase to provide the password.
  6. To delete an existing credential, click Delete against the credential.
    Note: You can also update the user name or password of an existing credential. However, if you have logged on as an operator, you can only update the password.
  7. Under Connection , to update the credential and check the logon status after creating the application, select the credential from the Credentials list.
    You can update the credential to check the logon status only if you have enabled the Check log-in upon create field while configuring the blueprint.
  8. Optionally, to add a post delete task for the application, in the Post Delete area, click Edit . For more information, see Adding a Pre-create or Post-delete Task.
  9. Optionally, to create a task to uninstall a package, click Edit next to the Package area and do the following.
    1. Click + Task .
    2. Enter the task name in the Task Name field.
    3. Select the task type from the Type list.
      The available options are:
      • Execute : To create the Execute task type, see Creating an Execute Task.
      • Set Variable : To create the Set Variable task type, see Creating a Set Variable Task.
      • HTTP Task : To create the HTTP Task type, see Creating an HTTP Task.
      • Delay : To create the Delay task type, see Creating a Delay Task.
    4. To add variables to the post delete task, click the Package Uninstall Variables tab.
    5. In the Variables pane, click the + icon to add variable types.
    6. In the Name field, enter a name for the variable.
    7. From the Data Types list, select one of the base type variables or import a custom library variable type.
    8. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    9. If you have imported a custom variable type, all the variable parameters are automatically filled.
    10. Select the Secret check box if you want to hide the value of the variable.
    11. To save the package uninstall task, click Done .
    12. To establish a connection between tasks, click Add Connector and use the arrow to create connection between tasks.
    13. To delete a task, click the Delete button next to the task.
      You can delete a task only while adding a new task. If you are updating the existing task, you cannot delete the task.
  10. Optionally, to add another action to the application, click + Add Action next to the Actions area and do the following.
    1. Click + Add Task .
      The task inspector panel is displayed.
    2. In the task inspector panel, click the Task button.
    3. Enter the task name in the Task Name field.
    4. Select the type of tasks from the Type list.
      The available options are:
      • Execute : Use this task type to run eScripts on the VM. To create the Execute task type, see Creating an Execute Task.
      • Set Variable : Use this task to change variables in a blueprint. To create the Set Variable task type, see Creating a Set Variable Task.
      • HTTP Task : Use this task type to query REST calls from a URL. An HTTP task supports GET, PUT, POST, and DELETE methods. To create the HTTP Task type, see Creating an HTTP Task.
      • Delay : Use this task type to set a time interval between two tasks or actions. To create the Delay task type, see Creating a Delay Task.
      The task is created.
    5. To add another task, click Add Task in the task editor area.
    6. To establish a connection between tasks, click Add Connector and use the arrow to create connection between tasks.
    7. To delete a task, click the Delete button next to the task.
    8. To add variables to the task, click the Variables tab.
    9. In the Variables pane, click the + icon to add variable types in your blueprint.
    10. In the Name field, enter a name for the variable.
    11. From the Data Types list, select one of the base type variables or import a custom library variable type.
    12. If you have selected a base type variable, configure all the variable parameters. For more information about configuring variable parameters, see Creating Variable Types.
    13. If you have imported a custom variable type, all the variable parameters are automatically filled.
    14. Select the Secret check box if you want to hide the value of the variable.
    15. To save the task, click Done .
  11. To save the updated credentials and tasks for the application, click Update .

Creating an Image on a Nutanix Platform

An image is a template for creating a new instance or VM. Calm allows you to create images from an existing single-VM or multi-VM application running on a Nutanix platform. Perform the following procedure to create an image from an existing application.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application from which you want to create an image.
  3. To create an image on a single-VM application, click Create Image on the Applications page.
    Figure. Create Image - Single VM Application

  4. To create an image on a multi-VM application, select the service on the Services tab, and then click Create Image in the Inspector Panel.
    Figure. Create Image - Multi-VM Application

  5. Select the check box next to the disk from which you want to create an image.
    If the application has multiple disk images available, you can also select multiple disks.
  6. Under the Image Details section, type a name and a description for the new image in the Name and Description fields respectively.
    If you have selected multiple disk images, repeat the steps for all the Image Details sections.
  7. Click Save .
    The new image is created and available in the Image list under the VM Configuration section. You can use the image while creating a single-VM or multi-VM application.

Cloning an Application

Perform the following procedure to clone an application. The cloned application has the same VM configuration as the source application from which it is cloned.

About this task

Note: You can clone an application if you are using Nutanix, VMware, or AWS as your provider.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application that you want to clone.
    The Overview tab is displayed.
  3. Click Clone .
    The Clone screen appears.
  4. In the Cloned Application Name field, enter a name for the cloned application.
  5. In the Description field, enter a brief description about the cloned application.
  6. Click Save .
    After the application is cloned, you can view the link to the cloned application in the audit log of the source application.
    Note: In a Nutanix cluster, a restored VM or a cloned VM has NGT service installed if the snapshot or the source VM has NGT service installed.

What to do next

You can click the link of the cloned application to view the Overview tab of the cloned application. To view the source application, click the Clone From field on the Overview tab.

Deleting an Application

You can delete the unwanted applications from the Applications tab.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Select the check box next to the application that you want to delete.
    The Action list is displayed at the top of the Application page.
  3. Select Delete from the Action list.
    The Delete Application window is displayed.
  4. Click Confirm .
    The application is deleted from the Applications tab.

Executing User Level Actions

You can define and create custom or user-level actions while configuring a blueprint. Perform the following procedure to run the user-level actions.

Before you begin

Ensure that you have created a custom action while configuring the blueprint.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to run a user-level action.
    The Overview tab is displayed.
  3. Under the Manage tab, click the custom action that you created.
    Figure. User Level Action

    The custom action starts running for the application.

Executing System Level Actions

System-level actions are pre-defined actions that you can run on an application. Perform the following procedure to execute the system-level actions.

Procedure

  1. Click the Applications icon in the left pane.
    The Applications page is displayed.
  2. Click the application for which you want to execute a system-level action.
    The Overview tab is displayed.
  3. Under the Manage tab, click one of the following types of actions.
    Figure. System Level Action

    • Create : Creates an application but cannot be performed once the blueprint is created.
    • Start : Starts an application.
    • Restart : Restarts an application.
    • Stop : Stops an application.
    • Delete : Deletes an application including the underlying VMs on the provider side.
    • Soft Delete : Deletes the application from the Calm environment but does not delete the VMs on the provider side.
    • Install NGT Apps : Installs NGT services for your application. To install NGT, see Installing NGT Apps.
    • Manage NGT Apps : Manages NGT services for your application. You can enable or disable app-consistent and crash-consistent snapshots. For more information, see Managing NGT Apps.
    • Uninstall NGT Apps : Uninstalls NGT services from the VM. For more information, see Uninstalling NGT Apps.

Policies in Calm

Scheduler Overview

Scheduler allows you to schedule application actions and runbook executions. You can schedule recurring jobs and one-time jobs for critical operations throughout the application life cycle.

You can schedule any user-defined application actions, snapshot create or restore actions (AHV only), or any pre-defined system actions such as Start, Stop, Restart, Delete, and Soft Delete. For example, you can schedule a Stop action and a Start action on a single-VM Calm application to run at a particular date and time.

Scheduler supports two types of entities.

  • Application action. You can use the scheduler to schedule application actions, such as Start and Stop, to run at a particular date and time. You can also schedule any custom-defined actions and snapshot create and restore actions for AHV.
  • Runbook execution. You can schedule runbooks to run on a particular date and time.

Scheduler jobs have role-based ownership. Another user can modify a job that you created only if that user has access to the entity and Allow Collaboration is enabled in the associated project. For example, if you create a scheduler job for an application action as a developer, a consumer that has access to the same application can modify the job. If Allow Collaboration is disabled in the project, only the creator of the scheduler job can modify it. For information on the roles required to schedule application actions and runbook executions, see Role-Based Access Control in Calm.

Creating a Scheduler Job

Create a scheduler job to perform an application action or runbook execution.

Before you begin

Ensure that you have enabled the policy engine on the Settings page. For details about enabling the policy engine, see Enabling Policy Engine.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Scheduler tab, click the +Create Job button to create a job.
    The Create Job page appears.
    Figure. Create Scheduler Job

  3. In the Job Name field, type a name for the job.
  4. Enter a description for the job. This step is optional.
  5. From the Select Action list, select an entity type that you want the scheduler job to run on. Your options are:
    • Select Application Action to schedule application actions such as start and stop, schedule any user-defined actions, or snapshot create and restore action for AHV.
    • Select Execute Runbook to schedule the execution of a runbook.
  6. From the Project list, select the project associated with your application or runbook.
  7. Click Action Details .
  8. If you have selected Application Action as the action type, then do the following:
    Figure. Application Action

    1. From the Select Application list, select the application for which you want to schedule an action.
      The Select Application list displays only those applications that are associated with the project you selected on the Job Details tab.
    2. From the Select Application Action list, select an application-level action or a user-defined action.
    3. If you select any user-defined action, then review or edit the variables for the selected action.
  9. If you have selected Execute Runbook as the action type, then do the following:
    Figure. Execute Runbook

    1. From the Runbook list, select the runbook for which you want to schedule an action.
      The Runbook list displays only those runbooks that are associated with the project you selected on the Job Details tab.
    2. From the Default Endpoint list, select a default endpoint for the runbook execution.
    3. Verify the variables and endpoint data (such as base URLs, IP addresses, and VMs) defined for the runbook.
  10. Click Set Schedule .
  11. Under Schedule Type , select Recurring Job to run the job at regular intervals or One-Time Job to run the job only once.
  12. If you have selected Recurring Job , then do the following:
    Figure. Recurring Job

    1. Under Starts , define the start date and time in the Start on and Start at fields if you want to start the job at a particular date and time. This step is optional.
    2. Under Ends , select Never to run the job indefinitely or On to define the ending date and time for the job.
    3. In the Select Timezone field, select a location to define the time zone for the schedule.
    4. Under the How often does the job occur section, select an option in the Every field to define the frequency of the job schedule.
      You can also enter a cron expression to define the frequency of the job schedule.
      Depending on the option you select, you have to define specific criteria for the job frequency. For example, when you select Year , you also have to define the months, days, day of the week, hours, and minutes.
  13. If you have selected One-Time Job , then do the following:
    Figure. One-Time Job

    1. In the Executes on field, specify the execution date.
    2. In the Executes at field, specify the execution time.
    3. In the Select Timezone field, select a location to define the time zone.
  14. Click Save .
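The recurring-job frequency in step 12 can also be given as a cron expression. The following is a minimal illustration of the standard five-field cron syntax; exactly which fields and extensions Calm accepts should be verified in the product itself:

```text
# minute  hour  day-of-month  month  day-of-week
  30      2     *             *      1          # 02:30 every Monday
  0       */6   *             *      *          # every 6 hours, on the hour
  0       0     1             *      *          # midnight on the first of each month
```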

Viewing and Updating Scheduler Jobs

You can view or update a scheduler job on the Scheduler tab of the Policies page.

About this task

Scheduler jobs have role-based ownership. You can update a job that another user created only if you have access to the entity and collaboration is allowed in the associated project.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Scheduler tab, click the job that you want to view or update.
    Figure. Scheduler Job

  3. On the job details page, do the following:
    • On the Job Info tab, view the action type, runbook or application name, the next scheduled date, execution time or recurrence, and the state of the job. A job can show one of the following states.
      • Active: When the job is created or updated without any errors and the job has not completed the last execution and has not crossed the last execution date.
      • Inactive: When the job is created or updated with errors.
      • Expired: When the last execution of the job is complete or the job has already crossed the last execution date.
      Figure. Job Info

    • On the Action Details tab, view the following action details:
      • For an application action job, view the associated application and application action.
      • For a runbook job, view the associated runbook, default endpoint, variable details, and endpoint data.
    • On the Execution History tab, view the scheduled time, execution status, and execution time. A job can have one of the following execution statuses.
      • Executed: When the job is already executed as scheduled.
      • Running: When the job is running as scheduled.
      • Success: When a job run is completed successfully.
      • Aborted: When you manually cancel a running or scheduled job.
      • Failed: When the job failed to execute because of some errors.

      You can also click View Logs for any executed job to go to the Audit tab and view the logs.

      Figure. Execution History

  4. To edit the job, click Update and edit the details of the job. For more information about the fields, see Creating a Scheduler Job.

Deleting a Scheduler Job

You can delete a scheduler job on the Scheduler tab of the Policies page.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Scheduler tab, click the job that you want to delete.
    Figure. Scheduler Job

  3. From the Action list, click Delete .
  4. In the Confirm Delete window, click Delete .

Approval Policy Overview

Caution: This feature is currently in technical preview. Do not use any technical preview features in a production environment.

An approval policy adds a level of governance to determine which application deployment requests or actions require approvals before they are initiated. You can use approval policies to manage your infrastructure resources, their associated costs, and compliance more effectively.

For example, consider a marketplace item that consumes a significant part of your available resources. You can use an approval policy to enable your IT administrator to review all deployment requests for that marketplace item and ensure that all requests are justified.

You can also use approval policies to enable a project administrator to review all the changes that are done as part of orchestration to a critical application instance.

Approval Policy Creation and Management

  • The Approvals feature is disabled by default. You must enable the feature from the Settings page before creating your approval policies. See Enabling Approvals.
  • You must enable the policy engine to create and manage approval policies.
  • Each approval policy is a defined set of conditions that you configure for specific entities in Calm. See Conditions in Approval Policies.
  • A policy can have multiple conditions. An approval request is generated when an event meets all the conditions defined in the policy.
  • As a Prism Central Admin or Project Admin, you can create approval policies for runbook execution, application launch, and application day-2 operations (system-defined or user-defined actions).
  • A policy can have more than one set of approvers, and the approvals are done sequentially during the policy enforcement. For example, if the policy has Set 1 and Set 2 approvers, then during the policy enforcement, Set 1 approvers have to approve the request before Set 2.
  • The approver must be an existing user of Prism Central.
  • You can enable a policy to enforce the policy on an event that matches the entity, action, and conditions of the policy or disable the policy to skip policy enforcement.
  • You can enable or disable approvals from the Settings page to enforce all enabled approval policies in Calm or disable all approval policy enforcement.
  • When the policy engine VM does not respond, the runbook executions, application launches, or application day-2 operations that match the approval policy conditions fail to process completely. To process those events, you must disable policy enforcement. See Disabling Policy Enforcement.
  • You can clone an existing policy and edit its information to quickly create a new policy.
  • You can delete an approval policy.
  • The approval feature is currently not supported for VMware update config, Azure update config, or AWS update config.

Approval Process

  • An approver receives an email notification when an approval action is required for a request.
    • For email notifications, the SMTP server must be configured in Prism Central. To know how to configure the SMTP server, see the Prism Central Guide .
    • Notifications are sent based on the value of the E-mail field in your Active Directory. To receive approval notifications, ensure that the value is specified in the E-mail field of the Active Directory.
      Figure. Active Directory Configuration

  • As an approver, you can view a list of all pending approval policies and can approve or reject the request with a reason. When you approve a request, the event moves to the next task. When you reject a request, the requester is notified about the rejection of the request.
  • As an approver, you cannot make any updates to the original approval request.
  • If an Active Directory group is added as an approver, any user from the group can approve the request to move the event to the next task.
  • A Prism Central admin cannot override and approve pending requests. All pending requests have to be approved by the approvers assigned in the enforced policy.
  • The Audit tab of an application displays the confirmation of the enforced policy. The Policy Execute - Approval task is added on the Audit tab, and the task remains in the POLICY_EXEC status until the request is approved or rejected.
    Figure. Policy Execute - Approval

Note: Limitation: Restarting the Epsilon or Policy-Epsilon container or a Calm upgrade deletes the workflows and affects the entities that are in the approval pending status.

Approvals Page

  • The Policy Configurations tab provides the option to create an approval policy and lists all the approval policies you created as an admin for management.
  • The Approval Requests tab displays all requests that you need to approve on the Pending on me tab and all requests generated on the All requests tab.
  • The My Requests tab displays all the requests that you created. The Pending tab displays all pending requests, and the Reviewed tab displays all requests that the approvers reviewed. When you click a request on the Pending tab, you can view the approvers who are required to approve your request. If you are an admin, you can also view the details of the enforced policy.

Creating an Approval Policy

As a Prism Central Admin or Project Admin, you can create approval policies for runbook execution, application launch, and application day-2 operations (system-defined or user-defined actions).

About this task

Each approval policy is a defined set of conditions that you apply to specific entities in Calm. An approval request is generated when an associated event meets all the conditions defined in the policy.

Before you begin

Ensure that you have enabled the policy engine on the Settings page. For details about enabling the policy engine, see Enabling Policy Engine.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Approvals tab, click the + Create Approval Policy button to create a policy.
  3. On the Basic Information tab, provide the basic information such as the name, project, and the entity with its associated action. To do that:
    Figure. Basic Information

    1. In the Name field, provide a name for the approval policy.
    2. In the Description field, provide a description for the policy. This step is optional.
    3. From the Select the project this policy is applicable to list, select a project with which you want to associate the approval policy.
    4. From the Entity Type list, select the entity to which you want to apply the approval policy.
      You can select Runbook or Application .
    5. From the Action list, select the action during which the approval policy must be enforced.
      The options in the Action list appear based on the entity you selected.
    6. Click Next .
  4. On the Set Conditions tab, specify the attribute, its associated operator, and the value for the approval policy. To do that:
    Figure. Policy Condition

    1. From the Attribute list, search for and select the attribute for the policy enforcement.
      The options in the Attribute list change based on the entity and the associated action you selected on the Basic Information tab.
      To search for a provider-specific attribute, type the provider name in the Attribute field.
    2. From the Operator list, select the operator for the policy attribute.
      The options in the Operator list change based on the attribute you selected as the condition.
    3. In the Value field, specify the value for the attribute-operator condition.
      With some attribute-operator combinations, an information icon appears that displays the supported units or values for the Value field. You can use the information to specify the appropriate values in the field.
      For system actions, you must specify the name in the action_<system action> format. For example, for the Restart action, you must specify the value as action_restart . For the list of supported system action names for approval policies, see Conditions in Approval Policies.
      For Azure locations, you must specify the Azure location name instead of the Azure location displayName. For example, instead of using Central US (the Azure location displayName) in the Value field, use centralus (the Azure location name).
    4. Click Done next to the condition name.
    5. To add another condition to the policy, click + Add Condition and then specify the attribute, operator, and value.
      You can also click the Copy icon next to the condition name of an existing condition to quickly create a new condition and edit its details.
      You can add multiple conditions in the policy. An approval request is generated when an event meets all the conditions defined in a policy. To view the list of conditions, see Conditions in Approval Policies.
  5. Click Next .
  6. On the Select Approvers tab, specify the set name, approver rule, and approvers for the policy. To do that:
    Figure. Select Approvers

    1. In the Set Name field, specify the name of the policy set.
    2. Under approval rule, select one of the following:
      • Any one can approve : Select this option if you want any approvers selected for the set to approve the action.
      • All need to approve : Select this option if you want all approvers in the set to approve the action.
    3. From the Approvers list, select the approvers you want to include in the set.
      You can select and add multiple approvers in a set.
    4. Click Done next to the set name.
    5. To create another set, click + Add Approver Set and specify the set name, approver rule, and approvers.
      You can also click the Copy icon next to the set name of an existing set to quickly create a new set and edit its details.
      You can add multiple sets of approvers in the policy. The approver sets are applied sequentially during the policy enforcement. For example, if you have configured Set 1 and Set 2 approvers in your policy, then during policy enforcement, Set 1 approvers have to approve the request before Set 2.
  7. Click Save .
  8. In the Policy Saved confirmation window, click the Yes, enable this policy button to enable the policy.
    You can click the No, keep it disabled button if you want to create the policy in the disabled state.
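The Azure location rule in step 4 (use the location name, not the displayName) can usually be applied mechanically: for most public regions, the name is the displayName lowercased with the spaces removed. The following is a hypothetical helper sketching that rule; the function name is illustrative, and exceptions should be verified against Azure's own region list:

```python
def azure_display_to_name(display_name: str) -> str:
    """Convert an Azure location displayName (for example 'Central US')
    to the internal location name expected in the policy's Value field.
    Assumption: the lowercase-and-remove-spaces rule holds for the
    region in use; confirm against Azure's region list for exceptions."""
    return display_name.replace(" ", "").lower()


print(azure_display_to_name("Central US"))  # centralus
print(azure_display_to_name("East US 2"))   # eastus2
```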

Conditions in Approval Policies

You can configure approval policies for specific events with different sets of conditions. For example, to configure an approval policy for a marketplace item, you can use the following values:

  • Entity Type : Application
  • Action : Launch
  • Attribute : Blueprint Name
  • Operator : Contains
  • Value : <Name of the Marketplace Item for which you want to create an approval policy>

The following table lists the different conditions that you can define for different events in approval policies. To search for a provider-specific attribute, type the provider name in the Attribute field.

Table 1. Conditions in Approval Policies
Entity Type and Action Provider Attribute Operator

Entity Type: Runbook

Action: Execute

All Runbook Name Equals, Contains, Like
Task Name Equals, Contains, Like
Endpoint Name Equals, Contains, Like

Entity Type: Application

Action: Launch

All Substrate Type Equals, Contains, Like
Blueprint Name Equals, Contains, Like
Application Name Equals, Contains, Like
Application Profile Name Equals, Contains, Like
Estimated Application Profile Cost Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
Account Name Equals, Contains, Like
VM Name Equals, Contains, Like
Service Name Equals, Contains, Like
App Replicas Count Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
OS Type Equals, Contains, Like
Azure Specific Attributes Azure Tag Equals, Contains, Like
Azure Location Equals, Contains, Like
Azure Instance Name Equals, Contains, Like
Azure Resource Group Equals, Contains, Like
Azure Availability Zone Equals, Contains, Like
Azure Availability Set Equals, Contains, Like
Azure Hardware Profile Equals, Contains, Like
Azure Data Disk Name Equals, Contains, Like
Azure Data Disk Type Equals, Contains, Like
Azure Data Disk Size Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
Azure Network Profile Subnet Equals, Contains, Like
Azure Network Profile NIC Name Equals, Contains, Like
Azure Network Profile Virtual Network Equals, Contains, Like
Azure Network Profile Network Security Group Equals, Contains, Like
VMware Specific Attributes VMware Instance Name Equals, Contains, Like
VMware Datastore Cluster Equals, Contains, Like
VMware Datastore Equals, Contains, Like
VMware Cluster Equals, Contains, Like
VMware Host Equals, Contains, Like
VMware Sockets Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
VMware Cores Per Socket Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
VMware Memory Equals, Contains, Like
VMware Adapter Type Equals, Contains, Like
VMware Network Equals, Contains, Like
VMware Disk Type Equals, Contains, Like
VMware Tag Equals, Contains, Like
VMware Disk Size Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
VMware Template Name Equals, Contains, Like
AHV Specific Attributes AHV vCPU Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV Cores Per vCPU Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV Memory Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV Category Equals, Contains, Like
AHV VPC Name Equals, Contains, Like
AHV vLAN Name Equals, Contains, Like
AHV Disk Type Equals, Contains, Like
AHV Disk Image Name Equals, Contains, Like
AHV Disk Size Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV Boot Configuration Type Equals, Contains, Like
AWS Specific Attributes AWS Instance Type Equals, Contains, Like
AWS Region Equals, Contains, Like
AWS Tag Equals, Contains, Like
AWS Root Volume Type Equals, Contains, Like
AWS Data Volume Type Equals, Contains, Like
AWS Root Disk Size Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AWS Data Disk Size Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AWS IAM Role Equals, Contains, Like
AWS VPC ID Equals, Contains, Like
AWS Security Group ID Equals, Contains, Like
AWS Subnet ID Equals, Contains, Like
AWS Machine Image ID Equals, Contains, Like
GCP Specific Attributes GCP Instance Name Equals, Contains, Like
GCP Machine Type Equals, Contains, Like
GCP Zone Equals, Contains, Like
GCP Boot Disk Storage Type Equals, Contains, Like
GCP Boot Disk Source Image Equals, Contains, Like
GCP Labels Equals, Contains, Like

Entity Type: Application

Action: Day 2 Operation

All Application Name Equals, Contains, Like
Application Profile Cost Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
App Replicas Count Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
Action Name Equals, Contains, Like
AHV Specific Attributes (for Update Config Only) AHV vCPU Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV Cores Per vCPU Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV Memory Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV Category Equals, Contains, Like
AHV vLAN Name Equals, Contains, Like
AHV VPC Name Equals, Contains, Like
AHV Device Type Equals, Contains, Like
AHV Disk Size Equals, Less than, Greater than, Greater than or Equals, Less than or Equals
AHV (for Snapshots) AHV Snapshot Location Equals, Contains, Like
AHV Snapshot Replica Equals, Contains, Like
AHV Snapshot Name Equals, Contains, Like

Day 2 operations are a combination of multiple actions. Ensure that you use the supported attributes for different day 2 operations to enforce the policy appropriately. For example, when you configure a policy with a scale-in or scale-out task, the supported attributes can be App Replicas Count and Application Profile Cost.

The following table lists the day 2 operations with their supported attributes.

Table 2. List of Supported Attributes for Day 2 Operations
Day 2 Operation Supported Attributes
AHV Update Config Estimated Application Profile Cost, AHV vCPU, AHV Cores Per vCPU, AHV Memory, AHV Category, AHV VPC Name, AHV vLAN Name, AHV Disk Size, and AHV Device Type
Scale-in or Scale-out task App Replicas Count and Application Profile Cost
AHV Snapshot Config AHV Snapshot Name, AHV Snapshot Replica, and AHV Snapshot Location
Supported Attributes for All Day 2 Operations Application Name and Action Name

For system actions, you must specify the name in the action_<system action> format. The following table lists the system action names supported for approval policies.

Table 3. Supported System Action Names
System Action Names
Start action_start
Restart action_restart
Stop action_stop
Delete action_delete
Soft Delete action_soft_delete
Snapshot Create action_snapshot_create
Restore action_restore
Update action_update
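The action_<system action> format in the table above is mechanical: lowercase the display name, replace spaces with underscores, and add the action_ prefix. The following is a small illustrative helper (the function name is hypothetical; the resulting values match Table 3):

```python
def to_system_action_value(display_name: str) -> str:
    """Build the Value-field string for a system action in an approval
    policy condition, following the action_<system action> format."""
    return "action_" + display_name.strip().lower().replace(" ", "_")


print(to_system_action_value("Soft Delete"))      # action_soft_delete
print(to_system_action_value("Snapshot Create"))  # action_snapshot_create
```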

Cloning an Approval Policy

To quickly create a new policy, you can clone an existing policy and edit its basic information, conditions, and approvers.

About this task

You cannot clone an approval policy that is in the Draft state.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Approvals tab, click the vertical ellipsis next to the policy that you want to clone and then click Clone .
    Figure. Clone Approval Policy

  3. In the Clone Approval Policy window, provide a name for your new policy in the Approval policy name field, and then click the Clone button.
  4. On the Basic Information , Set Conditions , and Select Approvers tab, edit the fields that you need to change in your new policy. For information about the fields, see Creating an Approval Policy.
  5. Click Save to save the approval policy.
    You can also clone a policy from the policy details page.

Enabling or Disabling an Approval Policy

You can enable a policy to enforce the policy on an event that matches the entity, action, and conditions of the policy or disable the policy to skip policy enforcement.

Procedure

  1. Click the Policies icon in the left pane.
  2. To enable or disable a policy, do one of the following on the Approvals tab:
    Figure. Enable Approval Policy

    • To enable a disabled policy, click the vertical ellipsis next to the policy that you want to enable and then click Enable .
    • To disable an enabled policy, click the vertical ellipsis next to the policy that you want to disable and then click Disable .
    You can also enable or disable a policy from the policy details page.

Deleting an Approval Policy

As a Prism Central Administrator or Project Administrator, you can delete an approval policy if the policy is no longer required for the event.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Approvals tab, click the vertical ellipsis next to the policy that you want to delete and then click Delete .
    Figure. Delete Approval Policy

  3. In the Confirm Delete window, click the Delete button.

Viewing Approval Policy Details

After you have created a policy, you can view the details of the policy on the policy details page.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Approvals tab, click the policy that you want to view.
    Figure. Policy Details

    The policy details page has the following tabs:
    • Basic Information : Use this tab to view the associated project, event, and the details related to the policy creation and updates. You can also use the Enable Policy or Disable Policy option on this tab to enable or disable the policy.
    • Conditions : Use this tab to view all the conditions that are associated with the policy.
    • Approvers : Use this tab to view the approver sets that are associated with the policy.
    • Execution History : Use this tab to view the execution history of policies.
  3. To clone the policy, click the Clone Policy button. See Cloning an Approval Policy.
  4. To edit the policy, click the Edit button and update the required fields on the Basic Information , Set Conditions , and Select Approvers tab. For information about the fields, see Creating an Approval Policy.
  5. To delete the policy, click the Delete Policy button.

Approving or Rejecting an Approval Request

As an approver, you can view a list of all pending approval requests on the Approval Requests tab and can either approve or reject each request with a reason.

About this task

When you approve a request, the event moves to the next task. When you reject a request, the requester is notified about the rejection of the request. If you are the requester, you can view your pending requests and the status of your reviewed request on the My Requests tab.

Procedure

  1. Click the Policies icon in the left pane.
  2. On the Approvals tab, click the Approval Requests tab in the left pane.
  3. On the Pending on me tab, do the following:
    1. Click the request that is pending for approval.
    2. View the Basic Information and Condition Details of the applicable policy.
      Figure. Pending Request for Approval

      If you are an admin, you can click Go to Application to go to the Overview tab of the application and view the details. You can also click the View Policy tab to view the enforced policy.
    3. In the Add Comment field, provide a reason for the approval or rejection of the request. This step is optional.
    4. To approve the request, click the Approve button.
    5. To reject the request, click the Reject button.
    6. Click Yes to confirm approval or rejection.
    You can also view the details of the request, such as the requester, date of initiation, and conditions, on the Pending on me tab, and approve or reject the request from there.

Library in Calm

Library Overview

Library allows you to save user-defined tasks (scripts) and variables that you can use persistently for other application blueprints. You do not have to define the same tasks and variables for each blueprint.

You can also share tasks and variables listed in the Library across different projects, and customize an existing task or variable.

The Library tab lists all the published user-defined tasks and the created variable types to be used across multiple blueprints.

Figure. Library

Note:
  • To list a task in the Library, you must publish the task by using the Publish to Library option under the service package while configuring your blueprints.
  • To view the list of variables, you must create and save the variables in the Library. For more information, see Variable Types Overview.

Variable Types Overview

You create custom variable types for added flexibility and utility. Beyond just string and integer data types, you can create more data types such as Date/Time, list, and multi-line string. You can define list values as a static list of values or can attach a script (eScript or HTTP task) to retrieve the values dynamically at runtime.

While creating a custom variable type, you associate a project with the variable type. You can also share the variable type with multiple other projects by using the Share option on the same page.

Creating Variable Types

Create variable types so that you can use the variables during blueprint creation. You can also share the created variable types across multiple projects.

Procedure

  1. Click the Library icon in the left pane.
  2. Click the Variable Types tab.
    Figure. Variable Type

  3. Click Create Variable Type if you are creating the first variable or + Add Variable Types if you are adding a new variable to the list of variables.
    The Create Variable Type window appears.
  4. From the Projects list, select the project and click Done .
    Note: You associate a project when you create a custom variable type. You can also share the variable type with other projects using the Share option. Members of the same project can use the variable types while creating a blueprint.
  5. In the Name field, type a name for the variable type.
    Figure. Variable Type

  6. In the Description field, type a brief description about the variable type.
  7. From the Data Type list, select the base type for the variables.
    The base type defines the type of variable you use while configuring a blueprint. You can select one of the following data types.
    • String
    • Integer
    • Multi-line string
    • Date
    • Time
    • Date Time
  8. Select one of the following input types.
    • Use Simple to add a default value.
    • Use Predefined to assign static values.
    • Use eScript to attach a script that runs to retrieve values dynamically at runtime. The script can return a single value or multiple values depending on the selected base data type.
    • Use HTTP to retrieve values dynamically from the defined HTTP endpoint. The result is processed and assigned to the variable based on the selected base data type.
  9. If you have selected Simple , then type the value for the variable in the Value field.
  10. If you have selected Predefined , then type the value for the variable in the Option field. To add multiple values for the variable, do the following.
    1. Click + Add Option .
    2. In the Option field, type the value.
    3. To make a value the default, select the Default radio button next to it.
  11. If you have selected eScript , then add the eScript in the field.
    You can click Publish to publish the script to the library.
    Note:
    • You cannot add macros to eScripts.
    • If you have selected the Multiple Input (Array) checkbox with the input type as eScript, then ensure that the script returns a comma-separated list of values. For example, CentOS, Ubuntu, Windows.
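As a hedged illustration of the note above, the following sketch shows an eScript-style snippet (eScript is Calm's restricted Python dialect) that returns multiple values for a Multiple Input (Array) variable by printing a comma-separated list. The variable name `os_options` is an example, not a Calm requirement:

```python
# Hypothetical eScript-style sketch for a Multiple Input (Array) variable.
# The script is expected to emit a comma-separated list of values.
os_options = ["CentOS", "Ubuntu", "Windows"]

# The printed comma-separated string becomes the variable's set of values.
print(",".join(os_options))
```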
  12. If you have selected HTTP , then configure the following fields.
    1. In the Request URL field, specify the URL of the server that you want to run the methods on.
    2. In the Request Method list, select a request method. The available options are GET, PUT, POST, and DELETE.
    3. In the Request Body field, enter or upload the request.
    4. In the Content Type list, select a type of the output format. The available options are XML , JSON, and HTML.
    5. In the Connection Timeout (sec) field, type the timeout interval in seconds.
    6. Select the authentication type.
      • If you select Basic , then specify the User name and Password .
      • If you select Basic (With Credentials) , then you can set the credentials in the blueprint after copying the task.
      By default, Authentication is set to None . This step is optional.
    7. To verify the URL of the HTTP endpoint with a TLS certificate, select the Verify TLS Certificate check box. This step is optional.
    8. To use a proxy server that you configured in Prism Central, select the Use PC Proxy configuration check box. This step is optional.
    9. In the Retry Count field, type the number of attempts the system must perform to create a task after each failure. By default, the retry count is one, which indicates that the task creation procedure stops after the first attempt.
    10. In the Retry Interval field, type the time interval in seconds for each retry if the task fails. By default, the Retry Interval value is set to one second.
    11. Under Headers , enter the HTTP header key and value in the Key and Value fields.
    12. To publish the HTTP header key and value pair as secret, select the Secrets check box.
    13. Under Expected Response Options , type the Response Code for the Response Status you select. You can select Success or Failure as the response status for the task.
  13. To check the Regex, do the following.
    1. Select the Validate with Regular Expression check box.
    2. Click Test Regex .
    3. Provide the value for the Regex in the Value field.
      Note: You can enter Regex values in PCRE format. For more details, see http://pcre.org/.
    4. To test the expression, click Test Regex .
  14. Click Save .
    The Variable is saved to the Library.
  15. To share the variable type with other projects, do the following.
    1. Click Share .
      The Share Variable Type screen appears.
    2. From the Select projects to share with list, add the projects with which you want to share the saved variable.
    3. Click Done .

What to do next

Use this variable type while defining variables in a blueprint. For more details, see Calm Blueprints Overview.

Task Library Overview

You can create tasks while configuring a blueprint and publish these tasks to the library. Calm allows you to import these published tasks while configuring other blueprints across multiple projects.


Adding Task to a Project

Add a task to a project so that you can use the task while configuring blueprints for the selected project.

Procedure

  1. Click the Library icon in the left pane.
    The Tasks tab displays the list of all published tasks.
  2. Select the task that you want to assign to a project.
    The task inspector panel appears.
  3. Select a project from the Project Shared With list.
  4. Click Save .
    The task is added to the project.

Deleting a Task from the Task Library

Delete unwanted tasks from the Library. The deleted tasks can no longer be used in any project while configuring a blueprint.


Procedure

  1. Click the Library icon in the left pane.
    The Tasks tab displays the list of all published tasks.
  2. Select the task that you want to delete.
    The task inspector panel appears.
  3. Click Delete .
  4. In the confirmation window, click Delete .
    The task is deleted from the Library.

Runbooks in Calm

Runbooks Overview

A runbook is a framework to automate routine tasks and procedures that span multiple applications, without the involvement of a blueprint or an application.

A runbook is a collection of tasks that you can define to run sequentially at different endpoints. For more information about endpoints, see Endpoints Overview.

Figure. Runbooks

You can define the following types of tasks in a runbook.

Table 1. Tasks in a Runbook
Task Description
Execute To run Shell, PowerShell, and eScript (custom python) scripts.
Set Variable To run a script and create variables.
Delay To set a delay interval between two tasks or actions.
HTTP To perform REST calls to an HTTP endpoint.
While Loop To iterate over multiple tasks until the defined condition is met.
Decision To define different flows or paths based on the exit condition.
VM Power On To power on the VMs that are present in the VM endpoint type.
VM Power Off To power off the VMs present in the VM endpoint type.
VM Restart To restart the VMs present in the VM endpoint type.

For more information about creating a runbook, see Creating a Runbook.

Runbook Sharing across Projects

To share an active runbook across different projects, you can submit the runbook to be published as a Marketplace item. When the runbook is available at the marketplace, members from different projects to which the runbook is assigned can view and execute it.

When you submit a runbook for publishing, your administrator approves and publishes the runbook at the Marketplace. While publishing, your administrator selects the projects that can view and execute the runbook. You can publish runbooks with or without endpoints and with or without secret values (credential passwords or keys and secret variables). For more information, see Submitting a Runbook for Publishing.

You can select endpoints with virtual machines as the target type to execute power operation tasks such as power off, power on, or restart. Executing these tasks on virtual machines is particularly helpful when you need to run a set of scripts on multiple VMs and then restart them, for example, when you want to upgrade software on your VMs. For more information about creating an endpoint, see Creating an Endpoint.

You cannot modify the runbook after it is published. You can either execute the runbook or clone the runbook within your project from the marketplace.

Creating a Runbook

A runbook is a collection of tasks that you can define to run sequentially at different endpoints.

Procedure

  1. Click the Runbooks icon in the left pane.
  2. Click Create Runbook .
  3. Configure the following on the Create Runbook page.
    Figure. Create Runbook

    • In the Name field, type a name for the runbook.
    • In the Description field, type a brief description about the runbook.
    • From the Project list, select a project to which you want to add the runbook. If you do not select any project, the default project is selected.
    • From the Default Endpoint list, select a default endpoint. This step is optional.

      Calm uses the default endpoint only when you do not configure any endpoint at the task level.

  4. Click Proceed .
  5. On the Editor tab, click +Add Task and do the following.
    Figure. Runbook Editor

    1. In the Task Name field, type a name for the task.
    2. From the Type list, select the task type.
      • Select the Execute task to run Shell, PowerShell, and eScript (custom python) scripts or Set Variable task to run a script and create variables. For more information on how to configure the Execute or Set Variable task type, see Creating a Runbook with an Execute or Set Variable Task.
      • Select the Delay task to set a delay interval between two tasks or actions. For more information on how to configure the Delay task type, see Creating a Runbook with a Delay Task.
      • Select the HTTP task to perform REST calls to an HTTP endpoint. For more information on how to configure an HTTP task type, see Creating a Runbook with an HTTP Task.
      • Select the While Loop task to iterate over multiple tasks until the defined condition is met. For more information on how to configure the While Loop task type, see Creating a Runbook with a While Loop Task.
      • Select the Decision task to define different flows or paths based on the exit conditions.

        The task is further subdivided into True and False conditions. Repeat the steps to add tasks and configure the task type for each path.

      • Select the VM Power Off , VM Power On , or VM Restart task to power off, power on, or restart the VMs that are present in the VM endpoint type. You must select the target VM endpoint for these task types.
  6. To add a credential, do the following on the Configuration tab.
    1. Click Add/Edit Credentials .
    2. Click + Add Credential .
    3. In the Name field, type a name for the credential.
    4. Under the Type section, select the type of credential that you want to add.
      • Static : Credentials store keys and passwords in the credential objects that are contained in the blueprints.
      • Dynamic : Credentials fetch keys and passwords from an external credential store that you integrate with Calm as the credential provider.
    5. In the Username field, type the user name.
      For dynamic credentials, specify the @@{username}@@ that you defined while configuring the credential provider.
      Note: A dynamic credential provider definition requires username and secret. The secret variable is defined by default when you configure your credential provider. However, you must configure a runbook in the dynamic credential provider definition for the username variable before you use the variable in different entities.
    6. Select either Password or SSH Private Key as the secret type.
    7. Do one of the following to configure the secret type.
      • If you have selected Static as the credential type and Password as the secret type, then type the password in the Password field.
      • If you have selected Static as the credential type and SSH Private Key as the secret type, then enter or upload the key in the SSH Private Key field.
      • If you have selected Dynamic as the credential type and Password or SSH Private Key as the secret type, then select a credential provider in the Provider field. After you select the provider, verify or edit the attributes defined for the credential provider.
      If the private key is password protected, click +Add Passphrase to provide the passphrase. For dynamic credentials, you must configure a runbook in the dynamic credential provider definition for the passphrase variable and then use the @@{passphrase}@@ variable.
      The type of SSH key supported is RSA. For information on how to generate a private key, see Generating SSH Key on a Linux VM or Generating SSH Key on a Windows VM.
    8. Click Done .
    Note: The credential that you add on the Configuration tab overrides the credential you added in the endpoint.
  7. To add a variable, do the following on the Configuration tab. This step is optional.
    1. Click Add/Edit Variable .
    2. If you want to use an existing variable, select the variable that you want to use for the runbook.
    3. If you want to create a new variable, click Add Variable . For more information on how to create variables, see Creating Variable Types.
  8. To add a default endpoint, select the endpoint from the Default Endpoint list. This step is optional.
    Note: The endpoint you select on the Configuration tab supersedes the endpoint you add on the Editor tab.
  9. To add endpoint information, select the endpoint from the Endpoint list and enter a description for the endpoint in the Description field. This step is optional.
  10. Click Save .

What to do next

  • You can execute the runbook. For more information, see Executing a Runbook.
  • You can submit the runbook for approval and publishing. For more information, see Submitting a Runbook for Publishing.

Creating a Runbook with an Execute or Set Variable Task

Create a runbook with the Execute task to run Shell, PowerShell, and eScript (custom python) scripts. Create a runbook with the Set Variable task to run a script and create variables.

Before you begin

Ensure that you have created a runbook and have added a task for the runbook. For more information, see Creating a Runbook.

Procedure

  1. On the runbook Editor tab, click +Add Task .
  2. In the Task Name field, type a name for the task.
  3. From the Type list, select the Execute or Set Variable task.
  4. In the Script Type list, select Shell , Powershell , or eScript .
    For Shell, PowerShell, and eScript scripts, you can access the available list of macros by using @@{ .
  5. From the Endpoint list, select an endpoint on which you want to run the task. This step is optional.
    You can also select Add New Endpoint from the list and create an endpoint. For more information about creating an endpoint, see Creating an Endpoint.
    If you do not select an endpoint, then Calm uses the default endpoint that you defined while creating the runbook.
  6. For the EScript task, select the tunnel that you can use to get access to the endpoint within the VPC in the Select tunnel to connect with list. This step is optional.
    The Select tunnel to connect with list shows only those tunnels that are allowed in the project you selected for the endpoint.
  7. For the Shell or PowerShell script type, select an existing credential from the Credential list or add a new credential to override the credential for the task. This step is optional. To add a new credential, do the following:
    1. In the Credential list, select Add New Credential .
    2. In the Name field, type a name for the credential.
    3. Under the Type section, select the type of credential that you want to add.
      • Static : Credentials store keys and passwords in the credential objects that are contained in the blueprints.
      • Dynamic : Credentials fetch keys and passwords from an external credential store that you integrate with Calm as the credential provider.
    4. In the Username field, type the user name.
      For dynamic credentials, specify the @@{username}@@ that you defined while configuring the credential provider.
      Note: A dynamic credential provider definition requires username and secret. The secret variable is defined by default when you configure your credential provider. However, you must configure a runbook in the dynamic credential provider definition for the username variable before you use the variable in different entities.
    5. Select either Password or SSH Private Key as the secret type.
    6. Do one of the following to configure the secret type.
      • If you have selected Static as the credential type and Password as the secret type, then enter the password in the Password field.
      • If you have selected Static as the credential type and SSH Private Key as the secret type, then enter or upload the key in the SSH Private Key field.
      • If you have selected Dynamic as the credential type and Password or SSH Private Key as the secret type, then select a credential provider in the Provider field. After you select the provider, verify or edit the attributes defined for the credential provider.
      If the private key is password protected, click +Add Passphrase to provide the passphrase. For dynamic credentials, you must configure a runbook in the dynamic credential provider definition for the passphrase variable and then use the @@{passphrase}@@ variable.
      The type of SSH key supported is RSA. For information on how to generate a private key, see Generating SSH Key on a Linux VM or Generating SSH Key on a Windows VM.
    7. Click Done to add the credential.
  8. In the Script panel, enter or upload the script that you want to run.
  9. To test the script in the Calm playground, click Test script and do the following.
    You can use the Calm playground to run the script, review the output, and make any required changes.
    1. On the Authorization tab, specify the IP Address and Port .
    2. In the Select tunnel to connect with list, select the tunnel that you can use to get access to the VM within the VPC. This step is optional.
      The Select tunnel to connect with list shows only those tunnels that are allowed in the project you selected for the endpoint.
    3. Specify the Credential for the test machine.
      You can also specify the Username and Password for the test machine instead of the credential.
    4. Click Login and Test .
    5. On the Test Script tab, view or edit your script in the Script field.
    6. For macros in your script, provide the values in the macro inspector panel.
    7. Click Assign and Test .
      The Output field displays the test result.
    8. To go back to the Runbook Editor page, click Done .
  10. To publish this task to the task library, click Publish to Library and then click Publish .
  11. Click Save to save the runbook.

Creating a Runbook with a Delay Task

Create a runbook with the Delay task to set a delay interval between two tasks or actions.

Before you begin

Ensure that you have created a runbook and have added a task for the runbook. For more information, see Creating a Runbook.

Procedure

  1. On the runbook Editor tab, click +Add Task .
  2. In the Task Name field, type a name for the task.
  3. From the Type list, select the Delay task.
  4. In the Sleep Interval field, type the sleep time interval in seconds for the task.
  5. Click Save to save the runbook.

Creating a Runbook with an HTTP Task

Create a runbook with the HTTP task to perform REST calls to an HTTP endpoint.

Before you begin

Ensure that you have created a runbook and have added a task for the runbook. For more information, see Creating a Runbook.

Procedure

  1. On the runbook Editor tab, click +Add Task .
  2. In the Task Name field, type a name for the task.
  3. From the Type list, select the HTTP task.
  4. From the Endpoint list, select the endpoint where you want to execute the HTTP task. This step is optional.
  5. In the Request Method list, select one of the following request methods.
    • Select GET to retrieve data from a specified resource.
    • Select POST to send data to a server to create a resource, and enter or upload the POST request in the Request Body field.
    • Select DELETE to send data to a server to delete a resource, and enter or upload the DELETE request in the Request Body field.
    • Select PUT to send data to a server to update a resource, and enter or upload the PUT request in the Request Body field.
  6. In the Relative URL field, enter the URL of the server on which you want to run the methods.
  7. In the Content Type list, select the type of the output format.
    The available options are HTML , JSON , and XML .
  8. In the Headers section, type the HTTP header key and value in the Key and Value fields respectively.
  9. If you want to publish the HTTP header key and value pair as secrets, select the Secret check box.
  10. In the Expected Response Options area, do the following configurations.
    1. Select Success or Failure as the Response Status , and type the Response Code for the status.
      Note: If the response code is not defined, then by default all the 2xx response codes are marked as success, and any other response codes are marked as failure.
    2. Under Set Variables from response , type the variables for the specified response path. For example, the JSON path format is $.x.y and the XML format is //x/y . For more information about JSON path syntax, see http://jsonpath.com.
      Note: To retrieve the output format in HTML format, add a * in the syntax.
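The mapping from a JSON path such as $.x.y to a value in the response body can be sketched as follows. This is a minimal illustration of simple dotted paths only, not Calm's implementation; the `jsonpath_get` helper and the sample response are hypothetical:

```python
import json

# Example response body (an assumption for illustration).
response_body = json.loads('{"x": {"y": "ready", "z": 3}}')

def jsonpath_get(doc, path):
    # Supports only simple dotted paths of the form $.a.b.c;
    # full JSONPath (filters, wildcards) is not handled here.
    node = doc
    for key in path.lstrip("$.").split("."):
        node = node[key]
    return node

# $.x.y selects the "y" field nested inside "x".
print(jsonpath_get(response_body, "$.x.y"))
```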
  11. If you want to test the script in the Calm playground, click Test Request .
    The test result appears in the Output field.
  12. Click Save to save the runbook.

Creating a Runbook with a While Loop Task

Create a runbook with the While Loop task to iterate over multiple tasks until the defined condition is met.

Before you begin

Ensure that you have created a runbook and have added a task for the runbook. For more information, see Creating a Runbook.

Procedure

  1. On the runbook Editor tab, click +Add Task .
  2. In the Task Name field, type a name for the task.
  3. From the Type list, select the While Loop task.
  4. In the Iterations field, type the number of times you want to iterate the task until it meets the exit condition. The default value is 1.
  5. From the Exit Condition list, select a condition after which the task iteration must stop. The available options are as follows.
      • Select Success if you want to stop the task iteration after the status of the task is success.
      • Select Failure if you want to stop the task iteration after the status of the task is failure.
      • Select Don't care if you want to continue the task iteration irrespective of the status of the task.
  6. Click Save to save the runbook.
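The iteration and exit-condition behavior described above can be sketched as follows. This is an assumed model for illustration, not Calm internals; the `run_while_loop` function and its names are hypothetical:

```python
# Sketch of the assumed While Loop semantics: run the task up to
# `iterations` times and stop early when its status matches the
# exit condition ("Success" or "Failure"); "Don't care" always
# runs all iterations.
def run_while_loop(task, iterations=1, exit_condition="Success"):
    for attempt in range(1, iterations + 1):
        status = task(attempt)       # each run reports "Success" or "Failure"
        if exit_condition == "Don't care":
            continue                 # status is ignored; keep iterating
        if status == exit_condition:
            return attempt           # exit condition met; stop iterating
    return iterations

# Example: a task that succeeds on its third run stops the loop there.
flaky = lambda attempt: "Success" if attempt == 3 else "Failure"
print(run_while_loop(flaky, iterations=5, exit_condition="Success"))
```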

Submitting a Runbook for Publishing

Submit a runbook for publishing so that your admin can approve and publish it at the marketplace. Members from the associated projects can view and execute the runbooks that are published at the marketplace.


Procedure

  1. Click the Runbooks icon in the left pane.
  2. Do one of the following:
    • To publish an existing runbook, click to open a runbook from the list.
    • To publish a new runbook, click Create Runbook and create a new runbook. For information on how to create a runbook, see Creating a Runbook.
  3. On the Runbook Editor page, click the Publish button.
    The Publish Runbook dialog box appears.
    Figure. Publish Runbook

  4. To publish a new runbook, do the following.
    1. Select New Marketplace Runbook .
    2. Provide a name for the runbook to be used at the marketplace.
    3. Enable the Publish with secrets toggle button to encrypt secret values such as the credential passwords, keys, and secret variables.
      By default, the secret values from the runbook are not preserved when you publish it. Enable this option if you do not want the secret values to be filled in manually or patched from the environment when you execute the runbook.
    4. Enable the Publish with endpoints toggle button to preserve the endpoint values.
      By default, the endpoints from the runbook are not preserved when you publish it. Enable this option if you do not want to fill in the endpoint values when you execute the runbook.
    5. In the Initial Version field, type a new version number.
    6. Provide a description for the runbook. This step is optional.
  5. To publish a newer version of an already published runbook, do the following:
    1. Select New version of an existing Marketplace Runbook .
    2. From the Marketplace Item list, select the existing runbook.
    3. Enable the Publish with secrets and Publish with endpoints toggle buttons.
    4. Specify the new version number for the existing runbook.
    5. Provide a description for the runbook. This step is optional.
  6. Click Submit for Approval .
    Calm submits the runbook to the Marketplace Manager for your admin to approve and publish the runbook to the Marketplace.

What to do next

If you are the admin, you can approve and publish the runbook from the Marketplace Manager. To know more about approving and publishing the runbook, see Approving and Publishing a Blueprint or Runbook. If you are a project admin or a developer, you can request your admin to approve and publish the runbook at the Marketplace.

Executing a Runbook

You can execute a runbook to run the tasks sequentially on an endpoint.


Procedure

  1. Click the Runbooks icon in the left pane.
  2. Select the runbook that you want to execute.
    Figure. Execute Runbook

  3. From the Action list, select Execute .
    The Execute Runbook page appears.
  4. To change the default endpoint for the execution, select an endpoint from the Endpoints list. This step is optional.
  5. To update the added variable to the Runbook, click the respective variable field and edit the variable. This step is optional.
    Note: You can update the variable only if you marked it as runtime editable while adding it to the runbook.
  6. Click Execute .
    The runbook execution starts and you are directed to the Runbook execution page.

What to do next

You can view all the executions in the Execution History tab.

Deleting a Runbook

Perform the following procedure to delete a runbook.

About this task

You must have the role of an administrator or a developer to delete a runbook.

Procedure

  1. Click the Runbooks icon in the left pane.
  2. Select the runbook that you want to delete.
  3. From the Action list, select Delete .
  4. In the Confirm Delete window, click Delete .

Endpoints in Calm

Endpoints Overview

Endpoints are the target resources where the tasks defined in a runbook or blueprint are run.

Endpoints are collections of IP addresses or VMs. A collection of VMs can be a static selection or a dynamic one with filter rules applied.

Figure. Endpoints

You have the following types of endpoints.

  • A Windows machine
  • A Linux machine
  • An HTTP service endpoint

To know how to create an endpoint, see Creating an Endpoint.

Endpoints with Virtual Machines

For the Windows or Linux endpoint type, you can select virtual machines as the target type. Selecting VMs as the target type is useful when you run a set of scripts on multiple VMs and then restart them. For example, you can select VMs as the target type to upgrade software on your VMs.

After you select VMs as the target type, you must select the provider account to list all the associated VMs. You can filter the list of VMs. You can either select the VMs manually or enable the option to automatically select the filtered VMs for your endpoint.

Creating an Endpoint

Create an endpoint to run the tasks that you define in a runbook or blueprint.

About this task

You must have the role of an administrator or a developer to create an endpoint.

Procedure

  1. Click the Endpoints icon in the left pane.
  2. Click Create Endpoint .
  3. In the Create Endpoint window, type a name and description for the endpoint.
  4. From the Project list, select the project to which you want to assign the endpoint.
  5. Select the type of the endpoint. You can select Windows , Linux , or HTTP as the endpoint type.
  6. If you have selected HTTP as the endpoint type, do the following.
    Figure. HTTP Endpoint Type

    1. In the Select tunnel to connect with list, select the tunnel that you can use to get access to the endpoint within the VPC. This step is optional.
      The Select tunnel to connect with list shows only those VPC tunnels that are allowed in the project that you selected for the endpoint.
    2. In the Base URL field, enter the base URL of the HTTP endpoint. A base URL is the consistent part of the endpoint URL.
    3. To verify the URL of the HTTP endpoint with a TLS certificate, click the Verify TLS Certificate check box. Use this option to securely access the endpoint. This step is optional.
    4. To use a proxy server that you configured in Prism Central, select the Use PC Proxy configuration check box.
      Note: Ensure that Prism Central has the appropriate HTTP proxy configuration.
    5. In the Retry Count field, type the number of attempts the system must make to create a task if it fails.
      The default value is 1, which means that the task creation process stops after the first attempt.
    6. In the Retry Interval field, type the time interval in seconds for each retry if the task fails. The default value is 10 seconds.
    7. In the Connection Timeout field, type the time interval in seconds after which the connection attempt to the endpoint stops. The default value is 120 seconds.
    8. To add an authentication method to connect to an HTTP endpoint, click Authentication and select Basic from the Type field. Type a username and password to authenticate the endpoint. This step is optional.
      By default, the authentication type is set to None .
  7. If you have selected Windows or Linux as the endpoint type, then select a target type. The target type can be IP Addresses or VMs .
    Figure. Endpoint Target Type

  8. If you have selected IP Addresses as the target type, then do the following:
    1. In the Select tunnel to connect with list, select the tunnel to use to access the endpoint within the VPC. This step is optional.
      The Select tunnel to connect with list shows only those VPC tunnels that are allowed in the project that you selected for the endpoint.
    2. In the IP Addresses field, type the IP address to access the endpoint device.
  9. If you have selected VMs as the target type, do the following:
    1. In the Account list, select an account.
      The Account list displays all the provider accounts that you configured in the project. After you select the provider account, the window displays the VMs associated with the provider account.
    2. To filter VMs, use the Filter By options.
      You can use different attribute, operator, and value criteria to get accurate results.
    3. Select the VMs that you want to add to your endpoint.
      You can also use the Auto Select VMs toggle button to automatically add the filtered VMs to your endpoint.
      Note: The resolution of the VMs from the filters happens at the runbook execution.
  10. In the Connection Protocol field, select the connection protocol to access the endpoint. You can select either HTTP or HTTPS . This field appears only for the Windows endpoint type.
  11. In the Port field, type the port number to access the endpoint.
  12. Add a credential for Windows or Linux endpoint type to access the endpoint.
    1. Click Credential .
    2. Select the type of credential that you want to add under the Type section.
      • Static : Credentials store keys and passwords in the credential objects that are contained in the blueprints.
      • Dynamic : Credentials fetch keys and passwords from an external credential store that you integrate with Calm as the credential provider.
    3. In the Username field, type a username for the endpoint credential.
      For dynamic credentials, specify the @@{username}@@ variable that you defined while configuring the credential provider.
      Note: A dynamic credential provider definition requires username and secret. The secret variable is defined by default when you configure your credential provider. However, you must configure a runbook in the dynamic credential provider definition for the username variable before you use the variable in different entities.
    4. Select either Password or SSH Private Key as the secret type.
    5. Do one of the following to configure the secret type.
      • If you have selected Static as the credential type and Password as the secret type, then type the password in the Password field.
      • If you have selected Static as the credential type and SSH Private Key as the secret type, then enter or upload the key in the SSH Private Key field.
      • If you have selected Dynamic as the credential type and Password or SSH Private Key as the secret type, then select a credential provider in the Provider field. After you select the provider, verify or edit the attributes defined for the credential provider.
      If the private key is password protected, click +Add Passphrase to provide the passphrase. For dynamic credentials, you must configure a runbook in the dynamic credential provider definition for the passphrase variable and then use the @@{passphrase}@@ variable.
      The type of SSH key supported is RSA. For information on how to generate a private key, see Generating SSH Key on a Linux VM or Generating SSH Key on a Windows VM.
  13. Click Save .
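The Retry Count, Retry Interval, and Connection Timeout fields for the HTTP endpoint type can be understood through a small retry loop. The following is a hedged Python sketch of the semantics only; Calm's actual client is internal, and attempt_request is a hypothetical callable standing in for the HTTP call.

```python
# Hedged sketch of the HTTP endpoint retry semantics; not Calm's client.
# attempt_request is a hypothetical callable standing in for the HTTP call.
import time

def call_with_retries(attempt_request, retry_count=1, retry_interval=10):
    """Try the request up to retry_count times; sleep between failed tries.

    With the default retry_count of 1, the process stops after the
    first attempt, matching the documented default.
    """
    last_error = None
    for attempt in range(retry_count):
        try:
            return attempt_request()
        except Exception as err:  # a real client would catch narrower errors
            last_error = err
            if attempt < retry_count - 1:
                time.sleep(retry_interval)
    raise last_error

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("connection refused")
    return "ok"

result = call_with_retries(flaky, retry_count=3, retry_interval=0)
print(result)  # ok
```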

What to do next

You can add the endpoint to a runbook. For more details, see Creating a Runbook.

Deleting an Endpoint

Perform the following procedure to delete an endpoint.

About this task

You must have the role of an administrator or a developer to delete an endpoint.

Procedure

  1. Click the Endpoints icon in the left pane.
  2. Select the endpoint that you want to delete.
  3. From the Action list, select Delete .
  4. In the Confirm Delete window, click Delete .

Calm Backup and Restore

Calm Data Backup and Restore

You can take a backup of the Calm data to a specified location on your machine and restore the data to a new Prism Central. You back up the following data:

  • Zookeeper Data
    • Calm instance data
  • Elastic Search Data
    • Task Run Logs
    • App Icons
    • Marketplace branding Logos
  • IDF PC Tables
    • project
    • Entity Capabilities
  • IDF Calm Tables
    • "management_server_account"
    • "marketplace_item"
    • "nucalm_action"
    • "nucalm_action_run"
    • "nucalm_app_beam_status"
    • "nucalm_app_blueprint"
    • "nucalm_app_failover_status"
    • "nucalm_application"
    • "nucalm_application_cfg"
    • "nucalm_app_protection_status"
    • "nucalm_budget"
    • "nucalm_consumption"
    • "nucalm_cost"
    • "nucalm_credential"
    • "nucalm_deployment"
    • "nucalm_deployment_cfg"
    • "nucalm_deployment_element"
    • "nucalm_environment"
    • "nucalm_library_task"
    • "nucalm_library_variable"
    • "nucalm_lifecycle"
    • "nucalm_loadbalancer"
    • "nucalm_loadbalancer_cfg"
    • "nucalm_package"
    • "nucalm_package_cfg"
    • "nucalm_package_element"
    • "nucalm_platform_instance_element"
    • "nucalm_policy_rule"
    • "nucalm_price_item"
    • "nucalm_price_item_status"
    • "nucalm_published_service"
    • "nucalm_published_service_cfg"
    • "nucalm_recovery_plan_job_sync_status"
    • "nucalm_runbook"
    • "nucalm_run_log"
    • "nucalm_secret"
    • "nucalm_service"
    • "nucalm_service_cfg"
    • "nucalm_service_element"
    • "nucalm_service_upgrade_history"
    • "nucalm_service_version"
    • "nucalm_substrate"
    • "nucalm_substrate_cfg"
    • "nucalm_substrate_element"
    • "nucalm_sync_status"
    • "nucalm_task"
    • "nucalm_user_file"
    • "nucalm_variable"
    • "nucalm_worker_state"

Backing up and Restoring Calm Data

You can take a backup of the entire Calm data to a specified location on your machine and restore the data to a new Prism Central.

About this task

Note: Before you restore your Calm data, ensure that:
  • You do not have any running applications or blueprints on the Prism Central on which you restore the Calm data. Any applications or blueprints available on your Prism Central might not work properly after you restore the data.
  • The Prism Central on which you restore the data has the same Prism Elements as that of the backed-up Prism Central. In case of any variations, the accounts or projects associated with your local Prism Element through the Prism Central might not work properly.
  • The Calm version used for the restore must be the same as the backed-up version. For example, if you took a backup on version 3.0, you must restore using version 3.0; you cannot use version 3.1 or 3.2 to restore the Calm data.
  • You have enabled Calm on the destination Prism Central, and the Calm instance is new.

Procedure

  1. Log on to the Prism Central VM by using an SSH session and run the following command to enter the Calm container.
    docker exec -it nucalm bash
    The calmdata binary is available in the /home/calm/bin folder.
  2. In the container shell, change to the directory that contains the calmdata binary by running the following command.
    # cd /home/calm/bin
  3. Create a folder to store the backup data. The backup folder must be empty.
  4. To take a backup of the Calm data, run the following command.
     # ./calmdata backup --dump-folder <folder>
    The default folder is located in /tmp/default path. Replace folder with the new folder.
    Note: Ensure that the backup folder has only the calmdata tar file dump.
  5. For IAMv2-enabled setup, run the IAM backup script to take a backup of the ACPs.
    1. SSH to the leader node of the Prism Central.
    2. Run the following command to find the leader node in a scale-out setup:
      sudo kubectl -s 0.0.0.0:8070 -n ntnx-base get pods
      The command must return all pods without any error.
    3. Run the following commands on the leader node.
      cd ~/cluster/bin/
      vi backup_iam.sh
    4. Copy the script from the Nutanix Downloads page and paste the script in the backup_iam.sh file.
    5. Run the following script.
      sh backup_iam.sh
      The backup tar will be saved on this PC at /usr/local/nutanix/iam-backup .
  6. To get the backup dump from the container, run the following command.
    docker cp <nucalm_container_id>:<backup_tar_file_path> <PC_path_to_copy>
    For example,
    docker cp f4af4798e47d:/backup/3.5.0_backup.tar /home/nutanix/local_backup/
    The command copies the calmdata backup tar file from the nucalm container to the Prism Central file system.
  7. Use the scp command to copy the calmdata backup tar file from the Prism Central file system to the new Prism Central.
    In an IAMv2-enabled setup, use the scp command to copy the IAM backup zipped file from the Prism Central file system to the following location on the new Prism Central.
    /usr/local/nutanix/iam-backup
  8. Login to the new Prism Central.
  9. To copy the calmdata backup tar file into the nucalm container of the new Prism Central, run the following command.
    docker cp <back_up_tar_file_path> <nucalm_container_id>:<restore_path_dump_folder>
  10. To restore the backed up Calm data, do the following:
    1. From within the nucalm container session, run the following command on the new Prism Central.
      # ./calmdata restore --dump-folder <folder> 
      The default folder is located in the /tmp/default path. Replace the folder with the new folder.
    2. Run the following command irrespective of whether IAMv2 is enabled or disabled in your setup.
      docker exec -ti nucalm bash
      activate; 
      code ; 
      python scripts/update_policy_vm_host_data.pyc
    3. Log on to the Policy Engine VM and run the following commands:
      sudo systemctl stop policy-container
      sudo systemctl stop policy-epsilon-container
      sudo systemctl stop chronos-container
      docker rm -f policy
      docker rm -f policy-epsilon
      docker rm -f chronos
      sudo systemctl start policy-container
      sudo systemctl start policy-epsilon-container
      sudo systemctl start chronos-container
  11. For IAMv2-enabled setup, run the IAM restore script.
    1. SSH to the Prism Central leader node to restore.
    2. Run the following commands on the leader node.
      cd ~/cluster/bin/
      vi restore_iam_from_file.sh
    3. Copy the script from the Nutanix Downloads page and paste the script in the restore_iam_from_file.sh file.
    4. Run the following script.
      sh restore_iam_from_file.sh

Flag Options for Backup

Use the following flag options for your Calm data backup:

Table 1. Flag Options
Options Description
dump-folder The folder where you want to place the backup data. The default folder is located at /tmp/default .
Note: Create this folder before taking the backup. When you restore, the restore binary must be present at this location.

Example:

# ./calmdata backup --dump-folder="/tmp/new"
max-threads The maximum number of threads to use to take the backup. The default value is 5.

Example:

# ./calmdata backup --max-threads=5
fetch-limit The maximum number of entries to fetch in batches of 100 per call. The default and the maximum value is 100. Decreasing the value of fetch-limit increases the time taken to back up Calm data.

Example:

# ./calmdata backup --fetch-limit=100
idf-timeout The timeout for IDF (database). Increase the value of IDF timeout if you encounter backup failure due to timeout. The default value is 60.

Example:

# ./calmdata backup --idf-timeout=120
backup-deleted-entities The flag to include deleted entities in the backup. The backup does not include deleted entities when the value is False. The default value is True.

Example:

# ./calmdata backup --backup-deleted-entities=false
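Taken together, the flags combine on one command line. The helper below is a hypothetical illustration (it is not part of calmdata) that assembles an invocation from the defaults in the table.

```python
# Hypothetical helper, not part of calmdata: assembles a backup invocation
# from the flags and defaults listed in the table above.
def build_backup_command(dump_folder="/tmp/default", max_threads=5,
                         fetch_limit=100, idf_timeout=60,
                         backup_deleted_entities=True):
    return ("./calmdata backup"
            " --dump-folder={}".format(dump_folder) +
            " --max-threads={}".format(max_threads) +
            " --fetch-limit={}".format(fetch_limit) +
            " --idf-timeout={}".format(idf_timeout) +
            " --backup-deleted-entities={}".format(
                str(backup_deleted_entities).lower()))

# Raise the IDF timeout, as suggested when backups fail due to timeouts.
cmd = build_backup_command(dump_folder="/tmp/new", idf_timeout=120)
print(cmd)
```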

Backing Up and Restoring Policy Engine Database

When you enable the policy engine for your Calm instance, Calm creates and deploys a new VM for the policy engine in your Prism Central network. After the policy engine VM deployment, you can create a backup of your policy engine database at any time. You can use the backup to restore the policy engine to its earlier state on your existing policy engine VM or on a new policy engine VM.

About this task

You must run the backup and restore commands from your Prism Central instance.

Procedure

  1. To create a backup of your policy engine, run the following command:
    ssh nutanix@<policy_vm_ip> /home/nutanix/scripts/backup.sh
    Where <policy_vm_ip> is the IP address of the policy engine VM.
    This command creates a local backup on the policy engine VM at /home/nutanix/data/backups/ .
    To restore the policy engine to a new policy engine VM, copy the backup to Prism Central using the scp command and then to the new policy engine VM.
    Note: When you run the command to create the backup, the policy engine remains unavailable until the backup is created.
  2. To view all the backups that are available on your policy engine VM, use the following command:
    ssh nutanix@<policy_vm_ip> /home/nutanix/scripts/restore.sh --list
  3. To restore policy engine to its earlier state, run the following command:
    ssh nutanix@<policy_vm_ip> /home/nutanix/scripts/restore.sh -f=<backup_name>
    Where <policy_vm_ip> is the IP address of the policy engine VM and <backup_name> is the local backup file available on the policy engine VM.
    Note: When you run the command to restore your policy engine on the existing policy engine VM, the policy engine remains unavailable until it is restored.

Calm Scripts

Sample Scripts for Installing and Uninstalling Services

The Calm task library public repository contains scripts for installing and uninstalling different services. To access the repository, click here.

Sample Scripts to Configure Non-Managed AHV Network

The following sections provide the sample scripts of Cloud-init and SysPrep to configure the static IP address range for non-managed AHV network.

Cloud-init Script for Linux

Note: You can assign a static IP to a non-managed network only when the disk image contains a network card set for the static IP. You cannot assign the static IP if the NIC is configured for DHCP in the disk image.
#cloud-config
cloud_config_modules: 
  - resolv_conf
  - runcmd
write_files:
  - path: /etc/sysconfig/network-scripts/ifcfg-eth0
    content: |
      IPADDR=10.136.103.226
      NETMASK=255.255.255.0
      GATEWAY=10.136.103.1
      BOOTPROTO=none
      ONBOOT=yes
      DEVICE=eth0

runcmd:
  - [ifdown, eth0]
  - [ifup, eth0]
  
manage_resolv_conf: true
resolv_conf:
  nameservers: ['8.8.4.4', '8.8.8.8']
  searchdomains:
    - foo.example.com
    - bar.example.com
  domain: example.com
  options:
    rotate: true
    timeout: 1
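The write_files entry above drops a fixed ifcfg-eth0 file onto the guest. When templating the script per VM, the file content can be generated from the static IP parameters; the helper below is a hypothetical illustration and is not part of cloud-init.

```python
# Hypothetical helper, not part of cloud-init: renders the ifcfg-eth0
# content that the write_files entry above places on the guest.
def render_ifcfg(ipaddr, netmask, gateway, device="eth0"):
    lines = [
        "IPADDR={}".format(ipaddr),
        "NETMASK={}".format(netmask),
        "GATEWAY={}".format(gateway),
        "BOOTPROTO=none",  # static addressing, no DHCP
        "ONBOOT=yes",
        "DEVICE={}".format(device),
    ]
    return "\n".join(lines) + "\n"

cfg = render_ifcfg("10.136.103.226", "255.255.255.0", "10.136.103.1")
print(cfg)
```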

SysPrep Script for Windows

<?xml version="1.0" encoding="UTF-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
   <settings pass="specialize">
      <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
         <ComputerName>Windows2016</ComputerName>
         <RegisteredOrganization>Nutanix</RegisteredOrganization>
         <RegisteredOwner>Acropolis</RegisteredOwner>
         <TimeZone>UTC</TimeZone>
      </component>
      <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
         <Identification>
            <Credentials>
               <Domain>contoso.com</Domain>
               <Password>secret</Password>
               <Username>Administrator</Username>
            </Credentials>
            <JoinDomain>contoso.com</JoinDomain>
            <UnsecureJoin>false</UnsecureJoin>
         </Identification>
      </component>
      <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-TCPIP" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
         <Interfaces>
            <Interface wcm:action="add">
               <Identifier>Ethernet</Identifier>
               <Ipv4Settings>
                  <DhcpEnabled>false</DhcpEnabled>
                  <RouterDiscoveryEnabled>true</RouterDiscoveryEnabled>
                  <Metric>30</Metric>
               </Ipv4Settings>
               <UnicastIpAddresses>
                  <IpAddress wcm:action="add" wcm:keyValue="1">10.0.0.2/24</IpAddress>
               </UnicastIpAddresses>
               <Routes>
                  <Route wcm:action="add">
                     <Identifier>10</Identifier>
                     <Metric>20</Metric>
                     <NextHopAddress>10.0.0.1</NextHopAddress>
                     <Prefix>0.0.0.0/0</Prefix>
                  </Route>
               </Routes>
            </Interface>
         </Interfaces>
      </component>
      <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-DNS-Client" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
         <UseDomainNameDevolution>true</UseDomainNameDevolution>
         <DNSDomain>contoso.com</DNSDomain>
         <Interfaces>
            <Interface wcm:action="add">
               <Identifier>Ethernet</Identifier>
               <DNSDomain>contoso.com</DNSDomain>
               <DNSServerSearchOrder>
                  <IpAddress wcm:action="add" wcm:keyValue="1">10.0.0.254</IpAddress>
               </DNSServerSearchOrder>
               <EnableAdapterDomainNameRegistration>true</EnableAdapterDomainNameRegistration>
               <DisableDynamicUpdate>true</DisableDynamicUpdate>
            </Interface>
         </Interfaces>
      </component>
      <component xmlns="" name="Microsoft-Windows-TerminalServices-LocalSessionManager" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="amd64">
         <fDenyTSConnections>false</fDenyTSConnections>
      </component>
      <component xmlns="" name="Microsoft-Windows-TerminalServices-RDP-WinStationExtensions" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="amd64">
         <UserAuthentication>0</UserAuthentication>
      </component>
      <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Networking-MPSSVC-Svc" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
         <FirewallGroups>
            <FirewallGroup wcm:action="add" wcm:keyValue="RemoteDesktop">
               <Active>true</Active>
               <Profile>all</Profile>
               <Group>@FirewallAPI.dll,-28752</Group>
            </FirewallGroup>
         </FirewallGroups>
      </component>
   </settings>
   <settings pass="oobeSystem">
      <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
         <UserAccounts>
            <AdministratorPassword>
               <Value>secret</Value>
               <PlainText>true</PlainText>
            </AdministratorPassword>
         </UserAccounts>
         <AutoLogon>
            <Password>
               <Value>secret</Value>
               <PlainText>true</PlainText>
            </Password>
            <Enabled>true</Enabled>
            <Username>Administrator</Username>
         </AutoLogon>
         <FirstLogonCommands>
            <SynchronousCommand wcm:action="add">
               <CommandLine>cmd.exe /c netsh firewall add portopening TCP 5985 "Port 5985"</CommandLine>
               <Description>Win RM port open</Description>
               <Order>1</Order>
               <RequiresUserInput>true</RequiresUserInput>
            </SynchronousCommand>
         </FirstLogonCommands>
         <OOBE>
            <HideEULAPage>true</HideEULAPage>
            <SkipMachineOOBE>true</SkipMachineOOBE>
         </OOBE>
      </component>
      <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
         <InputLocale>en-US</InputLocale>
         <SystemLocale>en-US</SystemLocale>
         <UILanguageFallback>en-us</UILanguageFallback>
         <UILanguage>en-US</UILanguage>
         <UserLocale>en-US</UserLocale>
      </component>
   </settings>
</unattend>

Supported eScript Modules and Functions

Calm supports the following eScript modules.

Table 1. Supported eScript Modules
Module Module supported as
datetime _datetime
re re
difflib difflib
base64 base64
pprint pprint
pformat pformat
simplejson json
ujson ujson
yaml yaml
uuid uuid
requests requests
boto3 boto3
azure azure
kubernetes kubernetes

The following example displays the usage of the boto3 module.

import boto3
ec2 = boto3.client('ec2', aws_access_key_id='{}', aws_secret_access_key='{}', region_name='us-east-1')
print ec2.describe_regions()

The following example displays the usage of the Azure module.

# subscription_id macro contains your Azure Subscription ID
# client_id macro contains your Client ID
# tenant macro contains your Tenant ID
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
credentials = ServicePrincipalCredentials(
    client_id=@@{client_id}@@,
    secret='secret',
    tenant=@@{tenant}@@
)
client = ResourceManagementClient(credentials, @@{subscription_id}@@)
for item in client.resource_groups.list():
    print(item)

The following example displays the usage of the Kubernetes module.

from kubernetes import client as k8client
aToken="eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJl
cm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWN
jb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNl
YWNjb3VudC9zZWNyZXQubmFtZSI6InNhcmF0aC10b2tlbi1ubWo1cSIsImt1YmVybm
V0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzYXJhdG
giLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQ
iOiIzODM1Zjk5MC0zZGJhLTExZWEtODgwNy01MDZiOGQzYjFhYjIiLCJzdWIiOiJzeXN0ZW06c2Vyd
mljZWFjY291bnQ6ZGVmYXVsdDpzYXJhdGgifQ.dLJCdlOGktRsXfxDItBdbYxDYJtnFS9pptQaKr
JS1QfWAiT93l_iPExZZ_7JGQ2t7glpe-DNEwfjKiqUkDKmuHZSxN9fV6PHjTc8CGOn1q4LV7
tFFkh4HNi-JjhLPkRRQUM6_y5qQSrx9asDEGVLGsoHjuMLhELi4Ghq1EOgcRxPCTQD6lq_C203Dap
PESdqPl7JsmIVBCkFUT4A8A4sseiOqq9ogX-QKvAwoI7yq97BMJLX7q868cNBRsbFzct1tS-CEx-RCPM95
qAzdLNUOrIszVVgSd7jLxIg_tqUH_yEj4T0cePsbLhrCBPRt6bHFCyg3RkIKRoIN2YBq0wPWw"
configuration=k8client.Configuration()
configuration.host="https://10.46.4.213:6443"
configuration.verify_ssl=False
configuration.debug=True
configuration.api_key={"authorization":"Bearer "+ aToken}
k8client.Configuration.set_default(configuration)
v1=k8client.CoreV1Api()
nodes=v1.list_node(watch=False)
print nodes.items[0].metadata.name

Calm supports the following eScript functions.

urlreq

The urlreq function makes HTTP requests against REST interfaces. It is implemented using the Python requests module.

urlreq(url, verb='GET', auth=None, c=None, user=None, passwd=None, params=None,
headers=None, timeout=None, send_form_encoded_data=True, allow_redirects=True,
cookies=None, verify=True, proxies=None)

A requests.Response object is returned.

Table 2. Arguments
Arguments Description
url string, url to request
verb

string, verb is GET by default. POST, HEAD, PUT, PATCH, and DELETE are other valid entries.

auth string (optional), BASIC and DIGEST are the valid entries.

For authentication purposes, the order is as follows.

  • The username and password supplied through the user and passwd fields.
  • The credential whose name is supplied through the c field.
  • The credential attached to the task.
user string (optional), username used for authentication.
passwd string (optional), password used for authentication.
params dict (optional), if verb is GET, HEAD or DELETE, parameters are sent in the query string for the request otherwise they are sent in the body of the request.
headers dict (optional), dictionary of HTTP headers to be sent along with the request.
timeout integer (optional), you can configure requests to stop waiting for a response after a given number of seconds with the timeout parameter. The timeout affects only the connection process itself, not the downloading of the response body.
send_form_encoded_data boolean (optional), = True by default. If False, parameters dict is first dumped using simplejson.dumps() and then passed as a string.
allow_redirects

boolean (optional), = True by default. Specifies whether redirects should be allowed or not.

cookies dict (optional), cookies dict to be sent along with the request.
verify boolean (optional), = True by default. Specifies whether SSL certificates should be verified or not.
proxies dict (optional), Dictionary mapping protocol to the URL of the proxy

Rules for authentication, in order of priority:

  • If the user and passwd fields are not None , use them.
  • Otherwise, if c is not None , authenticate by using the named credential supplied.
  • Otherwise, the username and password are taken from the credential attached to the task.
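These priority rules can be sketched as a small resolution function. This is an illustrative model only; urlreq's internals are not public, and lookup_credential and task_credential below are hypothetical stand-ins.

```python
# Illustrative model of urlreq's credential-resolution order; not the
# actual implementation. lookup_credential and task_credential are
# hypothetical stand-ins.
def resolve_credentials(user=None, passwd=None, c=None,
                        lookup_credential=None, task_credential=None):
    # 1. Explicit user/passwd fields take priority.
    if user is not None and passwd is not None:
        return (user, passwd)
    # 2. Otherwise, the named credential supplied through the c field.
    if c is not None and lookup_credential is not None:
        return lookup_credential(c)
    # 3. Otherwise, fall back to the credential attached to the task.
    return task_credential

store = {"somecred": ("svc_user", "svc_pass")}
resolved = resolve_credentials(c="somecred", lookup_credential=store.get,
                               task_credential=("task_user", "task_pass"))
print(resolved)  # ('svc_user', 'svc_pass')
```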

For example

params = {'limit': 1}
headers = {'content-type': 'application/octet-stream'}
r = urlreq(url, verb="GET", auth="BASIC", c='somecred', params=params, headers=headers)
r = urlreq(url, verb="POST", auth="BASIC", user="user", passwd="pass", params=params)

exit

The exit function is an alias for sys.exit of python standard library.

exit(exitcode)

For example

exit(0)

sleep

The sleep function is an alias for time.sleep.

sleep(num_of_secs)

For example

sleep(10)

_construct_random_password

The _construct_random_password API generates a random password and returns it.

_construct_random_password(lower, upper=None, numCaps=0, numLetters=0,
numDigits=0, numPuncs=0, startwith=None, caps=None, letters=None,
digits=None, puncs=None)

Returns: String

Table 3. Arguments
Argument Description
lower integer, minimum number of characters in the password.
upper integer (optional), maximum number of characters in the password. If upper is not defined, the returned password always has exactly lower characters; otherwise, the length varies between lower and upper (both inclusive).
numCaps

integer (optional), minimum number of capital letters that the password must contain.

numLetters

integer (optional), minimum number of letters that the password must contain.

numDigits integer (optional), minimum number of digits that the password must contain.
numPuncs

integer (optional), minimum number of punctuation characters that the password must contain.

startwith

string (optional), the returned password starts with one of the characters provided in the startwith string.

caps string (optional), default = 'A-Z'. This can be overridden.
letters string (optional), default = 'a-zA-Z'. This can be overridden.
digits string (optional), default = '0-9'. This can be overridden.
puncs string (optional), default = '!@#$%^&'. This can be overridden.
Note: numCaps + numLetters + numDigits + numPuncs + 1 (if startwith is defined) must not be greater than upper.
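A rough re-implementation of the contract above in standard Python illustrates how the minimums and the lower/upper bounds interact. This sketch is not the built-in _construct_random_password and simplifies several options (for example, startwith and the character-class overrides are omitted).

```python
# Illustrative sketch of the documented contract; not the built-in
# _construct_random_password. startwith and custom character-class
# overrides are omitted for brevity.
import random
import string

def construct_random_password(lower, upper=None, num_caps=0,
                              num_digits=0, num_puncs=0, puncs="!@#$%^&"):
    upper = lower if upper is None else upper
    length = random.randint(lower, upper)
    # Satisfy each documented minimum first...
    chars = ([random.choice(string.ascii_uppercase) for _ in range(num_caps)] +
             [random.choice(string.digits) for _ in range(num_digits)] +
             [random.choice(puncs) for _ in range(num_puncs)])
    # ...then pad with letters and shuffle so the minimums are not clustered.
    chars += [random.choice(string.ascii_letters)
              for _ in range(length - len(chars))]
    random.shuffle(chars)
    return "".join(chars)

pw = construct_random_password(8, upper=12, num_caps=2, num_digits=2, num_puncs=1)
print(len(pw))
```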

_is_bad_password

The _is_bad_password function checks whether a password is weak.

_is_bad_password(password, reserved, dictionary=True, numCaps=0, numPuncs=0, \
numDigits=0, minLen=5)

For example

_is_bad_password("Abcd@123")

_randomchoose

The _randomchoose function is used to get a random character from a string.

_randomchoose(string)

For example

_randomchoose("adsadrer")

_shuffle

The _shuffle function is used to shuffle the sequence.

_shuffle(sequence)

For example

_shuffle(a)

get_sql_handle

The get_sql_handle function enables you to remotely connect to and manage SQL Servers. It is implemented by using the Python pymssql module.

get_sql_handle(server, username, password, database='', timeout=0, login_timeout=60, charset='UTF-8', as_dict=False, host='', appname=None, port='1433', conn_properties=None, autocommit=False, tds_version=None)

Returns pymssql.Connection object

Table 4. Arguments
Argument Description
server (str) database host
username (str) database user to connect as
password (str)

user’s password

database (str)

The database to initialize the connection with. By default, SQL Server selects the database that is set as the default for the specific user.

timeout (int) query timeout in seconds, default 0 (no timeout)
login_timeout (int) timeout for connection and login in seconds, default is 60 seconds
charset (str) character set with which to connect to the database

For example

username="dbuser"
  password="myP@ssworD"
  server="10.10.10.10"
  port="1433"
  
  cnxn = get_sql_handle(server, username, password, port=port, autocommit=True)
  cursor = cnxn.cursor()
  
  # List all databases
  cursor.execute("""
    SELECT Name from sys.Databases;
  """)
  for row in cursor:
    print row[0]

    cnxn.close()

To watch the video about supported eScripts, click here.

eScript Sample Script

The following is a sample eScript.

Note: Ensure that your script starts with #script .

#script

account_name = "@@{ACCOUNT_NAME}@@"
aviatrix_ip = "@@{address}@@"
new_test_password = "@@{NEW_TEST_PASSWORD}@@"
vpc_name = "Test"

api_url = 'https://{0}/v1/api'.format(aviatrix_ip)
#print api_url


def setconfig(api_url, payload):
  r = urlreq(api_url, verb='POST', auth="BASIC", user='admin', passwd='passwd', params=payload, verify=False)
  resp = json.loads(r.content)
  if resp['return']:
    return resp
  else:
    print "Post request failed", r.content
    exit(1)

print "Get the session ID for making API operations"
payload = {'action': 'login', 'username': 'admin', 'password': new_test_password}
api_url1 = api_url + "?action=login&username=admin&password=" + new_test_password
cid = setconfig(api_url=api_url1, payload=payload)
cid = cid['CID']
print cid

print "Delete the gateway"
payload = {'CID': cid,
  'action': 'delete_container',
  'account_name': account_name,
  'cloud_type': 1,
  'gw_name': vpc_name
  }
api_url1 = api_url + "?CID="+cid+"&action=delete_container&account_name="+account_name+"&cloud_type=1&gw_name="+vpc_name
print setconfig(api_url=api_url1,payload=payload)

print "Delete the aws account"

payload = {'CID': cid,
  'action': 'delete_account_profile',
  'account_name': account_name
  }
api_url1 = api_url + "?CID="+cid+"&action=delete_account_profile&account_name="+account_name
print setconfig(api_url=api_url1,payload=payload)
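The script above builds each query string by concatenating values by hand. In standard Python, urllib's urlencode produces the same string while escaping special characters in the values; a minimal sketch in Python 3 syntax (the parameter values here are illustrative, not from a real session):

```python
from urllib.parse import urlencode

# urlencode builds the query string and escapes special characters
params = {
    'CID': 'abc123',                # illustrative session ID
    'action': 'delete_container',
    'account_name': 'dev account',  # note the space, which gets escaped
    'cloud_type': 1,
    'gw_name': 'Test',
}
api_url1 = 'https://10.10.10.10/v1/api?' + urlencode(params)
print(api_url1)
```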

JWT Usage Sample Script

The following script is a JWT usage sample script.

Note: Ensure that your script starts with #script .

#script
jwt = '@@{calm_jwt}@@'

payload = {}
api_url = 'https://localhost:9440/api/nutanix/v3/apps/list'
headers = {'Content-Type': 'application/json',  'Accept':'application/json', 'Authorization': 'Bearer {}'.format(jwt)}
r = urlreq(api_url, verb='POST', params=json.dumps(payload), headers=headers, verify=False)
if r.ok:
    resp = json.loads(r.content)
    print resp
    exit(0)
else:
    print "Post request failed", r.content
    exit(1)
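When debugging a token such as calm_jwt, it can help to inspect its claims. A JWT payload is base64url-encoded JSON, so it can be decoded without the signing key. A minimal sketch in Python 3 syntax (the token is a toy built inline; this is for inspection only, never a substitute for signature verification):

```python
import base64
import json

def decode_jwt_payload(token):
    """Return the claims from a JWT's payload segment without verifying
    the signature -- for inspection/debugging only."""
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy header.payload.signature token to demonstrate
claims = {"iss": "calm", "sub": "admin"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b'=')
token = 'eyJhbGciOiJIUzI1NiJ9.' + payload.decode() + '.signature'
print(decode_jwt_payload(token)['sub'])
```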

Sample PowerShell Script

The following script is a PowerShell sample script.

Install-PackageProvider -Name NuGet -Force
Install-Module DockerMsftProvider -Force
Install-Package Docker -ProviderName DockerMsftProvider -Force

Sample Auto Logon and First Logon Scripts

Sample Auto Logon Script

The following script is a guest customization sample script for the Azure service.

<AutoLogon>
  <Password>
    <Value>@@{user.secret}@@</Value>
    <PlainText>true</PlainText>
  </Password>
  <Enabled>true</Enabled>
  <Username>@@{user.username}@@</Username>
</AutoLogon> 

Sample First Logon Script

The following script is a guest customization sample script for the Azure service.

<FirstLogonCommands>
    <SynchronousCommand>
    <CommandLine>cmd.exe /c powershell -Command get-host</CommandLine>
    <Order>1</Order>
    </SynchronousCommand>
</FirstLogonCommands>

Sample Guest Customization Scripts for VMware and GCP Services

The following script is a guest customization sample script for the VMware service.

cmd.exe /c winrm quickconfig -q
cmd.exe /c winrm set winrm/config/service/auth @{Basic="true"}
powershell -Command "enable-psremoting -Force"
powershell -Command "Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force"

The following script is a guest customization sample script for the GCP service.

#! /bin/bash
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello World</h1>
<p>This page was created from a simple startup script!</p>
</body></html>
EOF

Calm Blueprints Public Repository

The Calm blueprints public repository contains custom blueprints and custom scripts that are created and published by community members. Calm also publishes official blueprints and tasks to the GitHub public repository. You can clone the published blueprints and scripts from the repository and use them. To access the repository, click here.

Seeding Scripts to the Calm Task Library

The blueprints repository of Calm contains scripts that can be seeded into the task library and published to projects. You can use these tasks for blueprint configuration.

Procedure

  1. Clone the blueprint repository from GitHub. To access the repository, click here.
  2. Change the directory to calm-integrations/generate_task_library_items .
  3. To execute the script to seed, run the following command in bash.
    bash generate_task_library_items.sh
  4. Enter the following information.
    • Prism Central IP : Enter the Prism Central IP address to which the task library items are to be seeded.
    • Prism Central User : Enter the user name with the access to create task library scripts.
    • Prism Central Password : Enter the password of the Prism Central user.
    • Prism Central Project : Enter the Project name to which the task library items can be published.
  5. To avoid giving inputs multiple times, run the following command to export environment variables before running the script. This step is optional.
    export PC_IP=<prism central IP>    
    export PC_USER=<prism central user>
    export PC_PASSWORD=<prism central password>
    export PC_PROJECT=<prism central project>
  6. Run the following command to seed individual files into Calm.
    python generate_task_library.py --pc $PC_IP --user $PC_USER --password $PC_PASSWORD --project $PC_PROJECT --script <path of script>
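The PC_* variables exported in step 5 can serve as defaults so that the flags in step 6 become optional. A hypothetical sketch of that pattern in Python 3 (flag and variable names taken from the steps above; generate_task_library.py itself may handle them differently):

```python
import argparse
import os

# Simulate `export PC_IP=...` from step 5 (hypothetical value)
os.environ['PC_IP'] = '10.10.10.10'

# Each flag falls back to the matching exported PC_* variable when omitted
parser = argparse.ArgumentParser()
parser.add_argument('--pc', default=os.environ.get('PC_IP'))
parser.add_argument('--user', default=os.environ.get('PC_USER'))
parser.add_argument('--password', default=os.environ.get('PC_PASSWORD'))
parser.add_argument('--project', default=os.environ.get('PC_PROJECT'))

args = parser.parse_args(['--user', 'admin'])  # an explicit flag still wins
print(args.pc, args.user)
```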

Calm Licensing and Upgrades

Calm Licensing

Calm license for Prism Central enables you to manage VMs that are provisioned or managed by Calm. Nutanix provides a free trial period of 60 days to try out Calm.

The Prism web console and Nutanix Support portal provide the most current information about your licenses. For detailed information about the Calm licensing feature, refer to the Prism Central Guide .

Calm Upgrades

Upgrade Calm or Epsilon using the Life Cycle Manager (LCM) from Prism Central. Epsilon is the orchestration engine for Calm. For more information, see Life Cycle Manager.

Life Cycle Manager

The Life Cycle Manager (LCM) allows you to track and upgrade the Calm and Epsilon versions in Prism Central.
Note: LCM 2.1 and above support Calm and Epsilon upgrades.

Performing Inventory with the Life Cycle Manager

Use LCM to display the software and firmware versions of the entities in the cluster.

Procedure

  1. In Prism Central, click the gear icon to open the Settings page.
  2. Select Life Cycle Management in the sidebar.
    Figure. Life Cycle Management Click to enlarge

  3. Click Options > Perform Inventory .
    LCM shows a warning message if you have not enabled auto-update and a new version of the LCM framework is available.
    Figure. Perform Inventory Click to enlarge

  4. Click OK .
    The LCM displays all discovered entities.
  5. To view the current Calm and Epsilon versions, click View All .
    Figure. All Updates Click to enlarge

    The Epsilon and Calm entities show the current version and the date and time of the most recent update.

Upgrading Calm with the Life Cycle Manager

Use LCM to upgrade Calm and Epsilon to the latest available versions.

Before you begin

  • Configure rules in your external firewall to allow LCM updates. For more details, see the Firewall Requirements section in the Prism Web Console Guide .
  • Run a successful inventory operation before upgrading Calm or Epsilon.

Procedure

  1. In Prism Central, click the gear icon to open the Settings page.
  2. Select Life Cycle Management in the sidebar.
    Figure. Life Cycle Management Click to enlarge

  3. Click Edit and select Nutanix Calm .
    By default, Epsilon is selected.
    Note: Do not select only Epsilon to update.
  4. Click Change .
  5. Select the check box next to the version that you want to upgrade.
  6. Click Save .
  7. You can also update the services from the Options list.
    • To perform all available updates, select Update All .
    • To perform only required updates, select Update Required .
    • To perform only updates you have selected, select Update Selected .

      If you do not select any specific updates, the LCM performs all available updates.

Upgrading Calm at a Dark Site

By default, LCM automatically fetches updates from a pre-configured URL. If LCM fails to access the configured URL to fetch updates, you can configure the LCM to fetch updates locally to upgrade Calm and Epsilon.

About this task

Perform the following procedure to upgrade Calm and Epsilon at a dark site.

Note: When you upgrade Calm to the latest version as part of the Prism Central upgrade and Policy Engine is enabled, ensure that you also upgrade the Policy Engine.

Before you begin

Ensure that the LCM version is 2.3 or later.

Procedure

  1. Set up a local web server that is reachable by all your Nutanix clusters.
    For more information about setting up a local web server, click here.
    Note:
    • You must use this server to host the LCM repository.
    • From the Calm 3.0.0.2 release onwards, when you set up a Windows local web server for an LCM dark site upgrade, create a MIME type for the '.xz' extension with the type set to text/plain.
  2. From a device that has public Internet access, go to the Nutanix portal.
  3. Click Downloads > Calm .
  4. Select the required version and download the Nucalm-X.X.X.X.ZIP , Epsilon-X.X.X.X.Zip , and Policy-X.X.X.X.ZIP files.
    X.X.X.X represents the Calm, Epsilon, or Policy Engine version.
  5. Transfer the Nucalm-X.X.X.X.ZIP , Epsilon-X.X.X.X.Zip , and Policy-X.X.X.X.ZIP files to your local web server.
  6. Extract the files into the local release directory.
    You get the following files in the release directory.
    • Nucalm-X.X.X.X.Zip
      • release/builds/calm-builds/x.x.x.x/metadata.json
      • release/builds/calm-builds/x.x.x.x/metadata.sign
      • release/builds/calm-builds/x.x.x.x/nucalm.tar.xz
    • Epsilon-X.X.X.X.Zip
      • release/builds/epsilon-builds/x.x.x.x/metadata.json
      • release/builds/epsilon-builds/x.x.x.x/metadata.sign
      • release/builds/epsilon-builds/x.x.x.x/epsilon.tar.xz
    • Policy-X.X.X.X.ZIP
      • release/builds/policy_engine-builds/x.x.x.x/metadata.json
      • release/builds/policy_engine-builds/x.x.x.x/metadata.sign
      • release/builds/policy_engine-builds/x.x.x.x/policy-engine.tar.gz
  7. From a device that has public Internet access, go to the Nutanix portal.
  8. Select Downloads > Calm .
  9. Download nutanix_compatibility.tgz and nutanix_compatibility.tgz.sign tar files.
  10. Transfer the compatibility tar files to your local web server and replace the files in the /release directory.
  11. Log on to Prism Central.
  12. On the LCM page, click Settings .
  13. In the Source field, select Local web server .
  14. In the URL field, enter the path to the directory where you extracted the files on your local server. Use the format http://webserver_IP_address/release .
  15. Click Save .
  16. In the LCM sidebar, select Inventory and click Perform Inventory .
  17. Update the LCM framework before trying to update any other component.

    The LCM sidebar now shows the LCM framework with the updated version.

Calm VM Upgrades

Refer to this section to upgrade Calm to the latest available version after you deploy the Calm VM.

Following are the available methods:
  • Prism Central (PC) / Calm VM upgrade - Upgrading the Prism Central (Calm VM) also upgrades the Calm version.
  • Life Cycle Manager (LCM) upgrade - You can upgrade to newer versions of Calm without performing a VM upgrade. Upgrades to most minor releases and a few major releases are done using this method.

Upgrading Calm and Epsilon from Calm VM 3.3.1 to 3.5.0

Use the following procedure to upgrade Calm and Epsilon from Calm VM 3.3.1 to 3.5.0.

Procedure

  1. Perform a Prism Central upgrade to version 2022.1.0.2.
  2. SSH to Calm VM.
  3. Run the following command to stop Calm and Epsilon containers.
    genesis stop nucalm epsilon
  4. Navigate to the Calm directory ( /usr/local/nutanix/nucalm/ ) and remove the tar file.
  5. Use wget to download nucalm.tar.xz from the Downloads location.
  6. Navigate to the Epsilon directory ( /usr/local/nutanix/epsilon/ ) and remove the tar file.
  7. Use wget to download epsilon.tar.xz from the Downloads location.
  8. Run the following command to start Calm and Epsilon services and wait for the services to get healthy.
    cluster start
  9. Verify that the Calm and Epsilon containers are healthy.

Upgrading Calm VM with Prism Central

About this task

To upgrade Calm VM using the PC method, do the following:

Procedure

  1. Log in to the Calm VM GUI using the IP address.
  2. Click Prism Central Settings > Upgrade Prism Central .
    Figure. Prism Central Settings Click to enlarge

    Check if the compatible PC version is available. If not, go to the Name Servers page and enter the global DNS server as the Name server.

    Figure. Upgrade Prism Central Click to enlarge
  3. In the Upgrade Prism Central page, click Download against the compatible version.

    A confirmation window appears.

    Click Yes to start the download process. After the download completes, you can view the Upgrade list.

    Note: If the PC version you want to upgrade does not appear in the list, typically because you do not have Internet access (such as at a dark site), you can click the upload the Prism Central binary link to upload an image from your local media.
  4. In the Upgrade list, click Upgrade Now .
    A confirmation window appears.
  5. Click Continue to start the upgrade process.

    During the upgrade process, the Calm VM restarts.

    Also, you can log in to the Calm VM GUI to view the upgraded version. In the top-left corner, click User Menu > About Nutanix .

Upgrading Calm VM with Life Cycle Manager

You can upgrade to newer versions of Calm without performing a VM upgrade. Upgrades to most minor releases and a few major releases are done using the LCM method.

Before you begin

LCM inventory and Calm/Epsilon upgrade operations fail when you use the default LCM URL. As a workaround, replace the LCM repository URL with http://download.nutanix.com/lcm/saas under LCM > Settings .

About this task

To upgrade Calm VM using the LCM method, do the following:

Procedure

  1. Click Administration > LCM to open the LCM page.
  2. Click Perform Inventory .

    A confirmation window appears.

    Figure. LCM - Perform Inventory Click to enlarge
  3. Click Proceed .
    The Perform Inventory process can take several minutes depending on your cluster size. Once completed, you can view the available updates in the Software page.
  4. Select the check box next to the Calm VM version that you want to upgrade. Then, click Update .

    Note that the Epsilon check box also gets selected. Epsilon is the orchestration engine used by Calm.

    Figure. LCM - Upgrading Calm VM Click to enlarge

    A confirmation window appears.

    Note: Once the update process begins, it cannot be stopped or paused.
  5. Click Apply Updates to complete.
    If you do not have internet access, use the dark-site method to upgrade Calm. For more information, see Upgrading Calm VM with Life Cycle Manager at a Dark Site.

Upgrading Calm VM with Life Cycle Manager at a Dark Site

By default, Life Cycle Manager (LCM) automatically fetches updates from a pre-configured URL. If LCM fails to access the configured URL to fetch updates, you can configure the LCM to fetch updates locally to upgrade Calm and Epsilon. Perform the following procedure to upgrade Calm and Epsilon at a dark site.

About this task

Note:
  • Use the following procedure only for Calm VM deployments. Do not use this procedure to perform LCM upgrades in the Nutanix Prism Central VMs that are attached to Prism Elements.
  • If you have both Nutanix infrastructure (Nutanix Prism Central VMs attached to Prism Elements) and Calm VM, ensure that you set up and manage two different LCM URLs or web servers to perform dark site upgrades for each type of deployment. For more information on using LCM for Prism Central VMs, see the Life Cycle Manager Dark Site Guide .
  • The following procedure handles the upgrades of Calm family only and does not handle upgrades of any other modules.

Before you begin

Ensure that the LCM version is 2.3 or later.

Procedure

  1. Set up a local web server that is reachable to your Calm VMs.
    You will use this server to host the LCM repository.
    Note:
    • For more information about setting up a local web server, click here.
    • From the Calm 3.0.0.2 release, create a MIME type for the '.xz' extension with the type set to text/plain when setting up a Windows local web server for an LCM dark site upgrade.
  2. Use the following steps to set up your LCM repository:
    1. From a device that has public Internet access, go to the Nutanix portal and select Downloads > Calm .
    2. Next to the Calm on ESX LCM Bundle entry, click Download to download the latest LCM framework tar file, lcm_dark_site_bundle_version.tgz .
    3. Transfer the framework tar file to your local web server.
    4. Extract the framework tar file into the release directory.
      The following files are extracted into the release directory.
      • master_manifest.tgz
      • master_manifest.tgz.sign
      • modules
      • nutanix_compatibility.tgz
      • nutanix_compatibility.tgz.sign
      • support.csv
      • support.md
  3. From a device that has public Internet access, go to the Nutanix portal and select Downloads > Calm . Choose the required version and download the Nucalm-X.X.X.X.ZIP and Epsilon-X.X.X.X.Zip files.
    X.X.X.X represents the Calm and Epsilon versions.
  4. Transfer the Nucalm-X.X.X.X.ZIP and Epsilon-X.X.X.X.Zip files to your local web server and extract them into the local release directory.
    After you extract and save the files in the release folder, you can view the following files.
    • Nucalm-X.X.X.X.Zip
      • release/builds/calm-builds/x.x.x.x/metadata.json
      • release/builds/calm-builds/x.x.x.x/metadata.sign
      • release/builds/calm-builds/x.x.x.x/nucalm.tar.xz
    • Epsilon-X.X.X.X.Zip
      • release/builds/epsilon-builds/x.x.x.x/metadata.json
      • release/builds/epsilon-builds/x.x.x.x/metadata.sign
      • release/builds/epsilon-builds/x.x.x.x/epsilon.tar.xz
  5. From a device that has public Internet access, go to the Nutanix portal and select Downloads > Calm . Download nutanix_compatibility.tgz and nutanix_compatibility.tgz.sign tar files.
  6. Transfer the compatibility tar files to your local web server.
  7. Replace the existing compatibility files in the /release folder with the new files.
  8. Log on to Prism Central.
  9. On the LCM page, click Settings .
  10. In the Source field, select Local web server .
  11. In the URL field, enter the path to the directory where you extracted the tar file on your local server. Use the format http://webserver_IP_address/release .
  12. Click Save .
    You return to the Life Cycle Manager.
  13. Select Inventory in the LCM sidebar and click Perform Inventory .
  14. Update the LCM framework before trying to update any other component.

    The LCM sidebar now shows the LCM framework with the same version as the LCM dark site bundle you downloaded.

Additional Information

Credential Security Support Provider

The Credential Security Support Provider (CredSSP) protocol is a security support provider that you implement using the Security Support Provider Interface (SSPI). CredSSP allows an application to delegate credentials of a user from the client to the target server for remote authentication. CredSSP provides an encrypted transport layer security protocol channel. The client is authenticated over the encrypted channel by using the Simple and Protected Negotiate (SPNEGO) protocol with either Microsoft Kerberos or Microsoft NTLM.

For more information, refer to the Microsoft Documentation .

Enabling CredSSP

Perform the following procedure to enable CredSSP.

Procedure

Run the following command to enable CredSSP on the target machine.
> Enable-WSManCredSSP -Role Server -Force

Generating SSH Key on a Linux VM

Perform the following procedure to generate an SSH key pair on a Linux VM.

About this task

Note: Avoid generating the RSA key pair on your Prism Central VM or CVM.

Procedure

  1. Run the following shell command to generate an RSA key pair on your local machine.
    $ ssh-keygen -t rsa
  2. Accept the default file location as ~/.ssh/id_rsa .
    You can find your public key at ~/.ssh/id_rsa.pub and the private key at ~/.ssh/id_rsa .
    Note: Do not share your private key with anyone.

Generating SSH Key on a Windows VM

Perform the following procedure to generate an SSH key pair on Windows.

About this task

Note: Avoid generating the RSA key pair on your Prism Central VM or CVM.

Procedure

  1. Launch PuTTYgen.
  2. Move the mouse cursor in the blank area and click Generate .
  3. To convert the private key into an OpenSSH format, select Conversions > Export OpenSSH key .
    A PuTTYgen warning message appears.
  4. Click Yes to save the key without a passphrase.
  5. Navigate to a location on your local system to save the key.
  6. Type a name for the key.
  7. Click Save .
  8. Copy the public key (highlighted in the following image) into a plain text file and save the key at the same location as that of the private key.
    Figure. Public Key Click to enlarge

Migrating to Integrated Linux Based PowerShell Gateway from Karan Service

The integrated Linux-based PowerShell gateway is a built-in microservice of Calm that you can use to run Windows PowerShell scripts. You do not have to install a separate Windows VM or manually install the Karan service to run PowerShell scripts in Calm. Perform the following task to run PowerShell scripts in Calm.

About this task

Note: Calm version 2.10 or later does not support manual Karan service installation.

Before you begin

Ensure that you meet the following requirements.
  • The Calm version is 2.5.0 or later.
  • If you use Windows ISO image, enable remoting and open the 5985 and 5986 ports in the Windows disk image or in Sysprep XML.
  • Define script type as PowerShell to execute the PowerShell scripts.
  • Select the connection type as Windows(PowerShell) from the Connection Type list while configuring a VM.

Procedure

  1. To run PowerShell scripts in the default execution mode, the integrated Linux-based PowerShell gateway creates a remote PowerShell session and runs the script with NTLM authentication.
    The following example displays a sample PowerShell script that runs in the default execution mode.
    > Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
  2. If the PowerShell scripts need elevated privileges, run the following command to enable the CredSSP server role on the target machine.
    > Enable-WSManCredSSP -Role Server -Force
    The following example displays a sample PowerShell script that requires elevated privileges.
    > Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

Integrated Linux Based PowerShell Gateway Errors

You might encounter the following errors when you run the PowerShell scripts using the integrated Linux based PowerShell gateway.

Table 1. Integrated Linux Based PowerShell Gateway Errors
Error Description
Access denied

The VM and the WinRM services are started, but the specified credential is wrong. You encounter this error in the following cases.

  • When you use the local account credential of a domain member machine. You can resolve the issue by using the domain credentials.
  • When the script requires elevated privileges and the CredSSP is not enabled on the target machine. You can resolve the issue by enabling the CredSSP on the target machine. For more information on enabling CredSSP, see Credential Security Support Provider.
Connection refused

You encounter the connection refusal error in the following cases.

  • When a VM is not started but Calm tries to do a check-login or runs a PowerShell task. You can resolve the issue by adding a delay time task. For more details, see Creating a Delay Task.
  • When Calm is not able to communicate with the target machine. You can resolve the issue by allowing the connection to the port that is used to contact the target machine. Ensure that all the firewalls between Prism Central and the target machine allow connections to the port.

Localization

Nutanix localizes the user interface in simplified Chinese and Japanese. All the static screens are translated to the selected language. You can change the language settings of the cluster from English (default) to simplified Chinese or Japanese. For information on how to change the language setting, refer to the Prism Central Guide .
