Migration Guide

AOS 6.5

Product Release Date: 2022-07-25

Last updated: 2022-07-25

This Document Has Been Removed

Nutanix Move is the Nutanix-recommended tool for migrating a VM. Please see the Move documentation at the Nutanix Support portal.

vSphere Administration Guide for Acropolis

AOS 6.5

Product Release Date: 2022-07-25

Last updated: 2022-12-08

Overview

Nutanix Enterprise Cloud delivers a resilient, web-scale hyperconverged infrastructure (HCI) solution built for supporting your virtual and hybrid cloud environments. The Nutanix architecture runs a storage controller called the Nutanix Controller VM (CVM) on every Nutanix node in a cluster to form a highly distributed, shared-nothing infrastructure.

All CVMs work together to aggregate storage resources into a single global pool that guest VMs running on the Nutanix nodes can consume. The Nutanix Distributed Storage Fabric manages storage resources to preserve data and system integrity if there is node, disk, application, or hypervisor software failure in a cluster. Nutanix storage also enables data protection and High Availability that keep critical data and guest VMs protected.

This guide describes the procedures and settings required to deploy a Nutanix cluster running in the VMware vSphere environment. For more information about the VMware terms referred to in this document, see the VMware Documentation.

Hardware Configuration

See the Field Installation Guide for information about how to deploy and create a Nutanix cluster running ESXi for your hardware. After you create the Nutanix cluster by using Foundation, use this guide to perform the management tasks.

Limitations

For information about ESXi configuration limitations, see the Nutanix Configuration Maximums webpage.

Nutanix Software Configuration

The Nutanix Distributed Storage Fabric aggregates local SSD and HDD storage resources into a single global unit called a storage pool. In this storage pool, you can create several storage containers, which the system presents to the hypervisor and uses to host VMs. You can apply a different set of compression, deduplication, and replication factor policies to each storage container.

Storage Pools

A storage pool on Nutanix is a group of physical disks from one or more tiers. Nutanix recommends configuring only one storage pool for each Nutanix cluster.

Replication factor
Nutanix supports a replication factor of 2 or 3. Setting the replication factor to 3 instead of 2 adds an extra data protection layer at the cost of more storage space for the copy. For use cases where applications provide their own data protection or high availability, you can set a replication factor of 1 on a storage container.
Containers
The Nutanix storage fabric presents usable storage to the vSphere environment as an NFS datastore. The replication factor of a storage container determines its usable capacity and fault tolerance: replication factor 2 tolerates one component failure, and replication factor 3 tolerates two component failures. When you create a Nutanix cluster, three storage containers are created by default. Nutanix recommends that you do not delete these storage containers. You can rename the storage container named default - xxx and use it as the main storage container for hosting VM data.
Note: The available capacity and the vSphere maximum of 2,048 VMs limits the number of VMs a datastore can host.
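The note above combines two ceilings: the datastore's usable capacity and the vSphere per-datastore maximum of 2,048 VMs. A minimal sketch of that calculation, using hypothetical figures for container capacity and average VM footprint:

```shell
# Hypothetical figures: a 40 TiB usable container and an average VM footprint of 100 GiB.
usable_gib=$((40 * 1024))   # container usable capacity in GiB
avg_vm_gib=100              # average space consumed per VM in GiB
vsphere_max=2048            # vSphere limit on VMs per datastore

# The datastore can host whichever limit is reached first.
capacity_vms=$((usable_gib / avg_vm_gib))
if [ "$capacity_vms" -lt "$vsphere_max" ]; then
  max_vms=$capacity_vms
else
  max_vms=$vsphere_max
fi
echo "$max_vms"   # capacity is the binding limit here: 409 VMs
```

With these figures, capacity is the binding limit; a larger container or smaller VMs would eventually run into the 2,048-VM ceiling instead.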

Capacity Optimization

  • Nutanix recommends enabling inline compression unless otherwise advised.
  • Nutanix recommends disabling deduplication for all workloads except VDI.

    For mixed-workload Nutanix clusters, create a separate storage container for VDI workloads and enable deduplication on that storage container.

Nutanix CVM Settings

CPU
Keep the default settings as configured by Foundation during the hardware configuration.

Change the CPU settings only if Nutanix Support recommends it.

Memory
Most workloads use less than 32 GB of RAM per CVM. However, for mission-critical workloads with large working sets, Nutanix recommends more than 32 GB of CVM RAM.
Tip: You can increase CVM RAM up to 64 GB using the Prism one-click memory upgrade procedure. For more information, see Increasing the Controller VM Memory Size in the Prism Web Console Guide.
Networking
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1500 bytes for all the network interfaces by default. The standard 1500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
Caution: Do not use jumbo frames for the Nutanix CVM.
Caution: Do not change the vSwitchNutanix or the internal vmk (VMkernel) interface.

Nutanix Cluster Settings

Nutanix recommends that you do the following.

  • Map a Nutanix cluster to only one vCenter Server.

    Due to the way the Nutanix architecture distributes data, there is limited support for mapping a Nutanix cluster to multiple vCenter Servers. Some Nutanix products (Move, Era, Calm, Files, Prism Central) and features (such as the disaster recovery solution) are unstable when a Nutanix cluster maps to multiple vCenter Servers.

  • Configure a Nutanix cluster with replication factor 2 or replication factor 3.
    Tip: Nutanix recommends using replication factor 3 for clusters with more than 16 nodes. Replication factor 3 requires at least five nodes so that the data remains online even if two nodes fail concurrently.
  • Use the advertised capacity feature to ensure that the resiliency capacity is equivalent to one node of usable storage for replication factor 2 or two nodes for replication factor 3.

    The advertised capacity of a storage container must equal the total usable cluster space minus the capacity of either one or two nodes. For example, in a 4-node cluster with 20 TB usable space per node with replication factor 2, the advertised capacity of the storage container must be 60 TB. That spares 20 TB capacity to sustain and rebuild one node for self-healing. Similarly, in a 5-node cluster with 20 TB usable space per node with replication factor 3, advertised capacity of the storage container must be 60 TB. That spares 40 TB capacity to sustain and rebuild two nodes for self-healing.

  • Use the default storage container and mount it on all the ESXi hosts in the Nutanix cluster.

    You can also create a single storage container. If you create multiple storage containers, ensure that all of them follow the advertised capacity recommendation.

  • Configure the vSphere cluster according to settings listed in vSphere Cluster Settings Checklist.
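The advertised-capacity recommendation above reduces to a simple formula: total usable space minus the capacity of one node (replication factor 2) or two nodes (replication factor 3). A small sketch reproducing the figures from the examples:

```shell
# Advertised capacity = (nodes - spare_nodes) * usable_per_node,
# where spare_nodes is 1 for replication factor 2 and 2 for replication factor 3.
advertised_tb() {
  nodes=$1; tb_per_node=$2; rf=$3
  spare=$((rf - 1))
  echo $(( (nodes - spare) * tb_per_node ))
}

advertised_tb 4 20 2   # 4-node cluster, 20 TB/node, RF2 -> 60 TB
advertised_tb 5 20 3   # 5-node cluster, 20 TB/node, RF3 -> 60 TB
```

Both example clusters advertise 60 TB, sparing one node (20 TB) in the RF2 case and two nodes (40 TB) in the RF3 case for self-healing.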

Software Acceptance Level

Foundation sets the software acceptance level of an ESXi image to CommunitySupported by default. If you need a higher acceptance level, run the following command to raise it to PartnerSupported, the maximum supported level.

root@esxi# esxcli software acceptance set --level=PartnerSupported

Scratch Partition Settings

ESXi uses the scratch partition (/scratch) to dump the logs when it encounters a purple screen of death (PSOD) or a kernel dump. The Foundation install automatically creates this partition on the SATA DOM or M.2 device with the ESXi installation. Moving the scratch partition to any location other than the SATA DOM or M.2 device can cause issues with LCM, 1-click hypervisor updates, and the general stability of the Nutanix node.

vSphere Networking

vSphere on the Nutanix platform enables you to dynamically configure, balance, or share logical networking components across various traffic types. To ensure availability, scalability, performance, management, and security of your infrastructure, configure virtual networking when designing a network solution for Nutanix clusters.

You can configure networks according to your requirements. For detailed information about vSphere virtual networking and different networking strategies, refer to the Nutanix vSphere Storage Solution Document and the VMware Documentation. This chapter describes the configuration elements required to run VMware vSphere on the Nutanix Enterprise infrastructure.

Virtual Networking Configuration Options

vSphere on Nutanix supports the following types of virtual switches.

vSphere Standard Switch (vSwitch)
vSphere Standard Switch (vSS) with Nutanix vSwitch is the default configuration for Nutanix deployments and suits most use cases. A vSwitch detects which VMs are connected to each virtual port and uses that information to forward traffic to the correct VMs. You can connect a vSwitch to physical switches by using physical Ethernet adapters (also referred to as uplink adapters) to join virtual networks with physical networks. This type of connection is similar to connecting physical switches together to create a larger network.
Tip: A vSwitch works like a physical Ethernet switch.
vSphere Distributed Switch (vDS)

Nutanix recommends vSphere Distributed Switch (vDS) coupled with network I/O control (NIOC version 2) and load-based teaming. This combination provides simplicity, ensures traffic prioritization if there is contention, and reduces operational management overhead. A vDS acts as a single virtual switch across all associated hosts on a datacenter. It enables VMs to maintain consistent network configuration as they migrate across multiple hosts. For more information about vDS, see NSX-T Support on Nutanix Platform.

Nutanix recommends setting all vNICs as active on the port group and dvPortGroup unless otherwise specified. The following table lists the naming convention, port groups, and the corresponding VLAN Nutanix uses for various traffic types.

Table 1. Port Groups and Corresponding VLANs

Port group        VLAN  Description
MGMT_10           10    VMkernel interface for host management traffic
VMOT_20           20    VMkernel interface for vMotion traffic
FT_30             30    Fault tolerance traffic
VM_40             40    VM traffic
VM_50             50    VM traffic
NTNX_10           10    Nutanix CVM to CVM cluster communication traffic (public interface)
Svm-iscsi-pg      N/A   Nutanix CVM to internal host traffic
VMK-svm-iscsi-pg  N/A   VMkernel port for CVM to hypervisor communication (internal)

All Nutanix configurations use an internal-only vSwitch for the NFS communication between the ESXi host and the Nutanix CVM. This vSwitch remains unmodified regardless of the virtual networking configuration for ESXi management, VM traffic, vMotion, and so on.

Caution: Do not modify the internal-only vSwitch (vSwitch-Nutanix). vSwitch-Nutanix facilitates communication between the CVM and the internal hypervisor.

VMware NSX Support

Running VMware NSX on Nutanix infrastructure ensures that VMs always have access to fast local storage and compute, with consistent network addressing and security, without the burden of physical infrastructure constraints. The supported scenario connects the Nutanix CVM to a traditional VLAN network, with guest VMs inside NSX virtual networks. For more information, see the Nutanix vSphere Storage Solution Document.

NSX-T Support on Nutanix Platform

The Nutanix platform relies on communication with vCenter to work with networks backed by vSphere Standard Switch (vSS) or vSphere Distributed Switch (vDS). With the introduction of the NSX-T management plane, which manages networks independently of the compute manager (vCenter), network configuration information is available only through the NSX-T Manager. Nutanix infrastructure workflows (AOS upgrades, LCM upgrades, and so on) are therefore modified to collect the network configuration information from the NSX-T Manager.

Figure. Nutanix and the NSX-T Workflow Overview

The Nutanix platform supports the following in the NSX-T configuration.

  • ESXi hypervisor only.
  • vSS and vDS virtual switch configurations.
  • Nutanix CVM connection to VLAN backed NSX-T segments only.
  • The NSX-T Manager credentials registration using the CLI.

The Nutanix platform does not support the following in the NSX-T configuration.

  • Network segmentation with N-VDS.
  • Nutanix CVM connection to overlay NSX-T segments.
  • Link Aggregation/LACP for the uplinks backing the NVDS host switch connecting Nutanix CVMs.
  • The NSX-T Manager credentials registration through Prism.

NSX-T Segments

Nutanix supports NSX-T logical segments co-existing on Nutanix clusters running the ESXi hypervisor. All infrastructure workflows, including Foundation, 1-click upgrades, and AOS upgrades, are validated to work in NSX-T configurations where the CVM is backed by an NSX-T VLAN logical segment.

NSX-T has the following types of segments.

VLAN backed
VLAN backed segments operate similarly to the standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.
Overlay backed
Overlay backed segments use the Geneve overlay to create a logical L2 network over an L3 network. Encapsulation occurs at the transport layer (the NVDS module on the host).

Multicast Filtering

Enabling multicast snooping on a vDS with a Nutanix CVM attached affects the ability of CVM to discover and add new nodes in the Nutanix cluster (the cluster expand option in Prism and the Nutanix CLI).

Creating Segment for NVDS

This procedure provides details about creating a segment for nVDS.

About this task

To create a segment and verify that the NSX-T network supports the CVM port group, perform the following steps.

Procedure

  1. Log on to the vCenter Server and go to the NSX-T Manager.
  2. Click Networking , and go to Connectivity > Segments in the left pane.
  3. Click ADD SEGMENT under the SEGMENTS tab on the right pane and specify the following information.
    Figure. Create New Segment

    1. Segment Name : Enter a name for the segment.
    2. Transport Zone : Select the VLAN-based transport zone.
      This transport zone is associated with the NSX switch when you configure it.
    3. VLAN : Enter 0 for the native VLAN.
  4. Click Save to create a segment for NVDS.
  5. Click Yes when the system prompts to continue with configuring the segment.
    The newly created segment appears below the prompt.
    Figure. New Segment Created

Creating NVDS Switch on the Host by Using NSX-T Manager

This procedure provides instructions to create an NVDS switch on the ESXi host. The management and CVM external interfaces of the host are migrated to the NVDS switch.

About this task

To create an NVDS switch and configure the NSX-T Manager, do the following.

Procedure

  1. Log on to NSX-T Manager.
  2. Click System , and go to Configuration > Fabric > Nodes in the left pane.
    Figure. Add New Node

  3. Click ADD HOST NODE under the HOST TRANSPORT NODES tab in the right pane.
    1. Specify the following information in the Host Details dialog box.
      Figure. Add Host Details

        1. Name : Enter an identifiable ESXi host name.
        2. Host IP : Enter the IP address of the ESXi host.
        3. Username : Enter the username used to log on to the ESXi host.
        4. Password : Enter the password used to log on to the ESXi host.
        5. Click Next to move to the NSX configuration.
    2. Specify the following information in the Configure NSX dialog box.
      Figure. Configure NSX

        1. Mode : Select the Standard option.

          Nutanix recommends the Standard mode only.

        2. Name : Displays the default name of the virtual switch that appears on the host. You can edit the default name and provide an identifiable name as per your configuration requirements.
        3. Transport Zone : Select the transport zone that you selected in Creating Segment for NVDS.

          These segments operate similarly to the standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.

        4. Uplink Profile : Select an uplink profile for the new nVDS switch.

          This selected uplink profile represents the NICs connected to the host. For more information about uplink profiles, see the VMware Documentation .

        5. LLDP Profile : Select the LLDP profile for the new nVDS switch.

          For more information about LLDP profiles, see the VMware Documentation .

        6. Teaming Policy Uplink Mapping : Map the uplinks with the physical NICs of the ESXi host.
          Note: To verify the active physical NICs on the host, select ESXi host > Configure > Networking > Physical Adapters .

          Click the Edit icon and enter the name of the active physical NIC in the ESXi host selected for migration to the NVDS.

          Note: Always migrate one physical NIC at a time to avoid connectivity failure with the ESXi host.
        7. PNIC only Migration : Turn the switch to Yes if there are no VMkernel adapters (vmks) associated with the physical NIC selected for migration from the vSS switch to the NVDS switch.
        8. Network Mapping for Install : Click Add Mapping to migrate the VMkernel adapters (vmks) to the NVDS switch.
        9. Network Mapping for Uninstall : Use this mapping to revert the migration of the VMkernel adapters (vmks).
  4. Click Finish to add the ESXi host to the NVDS switch.
    The newly created nVDS switch appears on the ESXi host.
    Figure. NVDS Switch Created

Registering NSX-T Manager with Nutanix

After migrating the external interface of the host and the CVM to the NVDS switch, it is mandatory to inform Genesis about the registration of the cluster with the NSX-T Manager. This registration helps Genesis communicate with the NSX-T Manager and avoid failures during LCM, 1-click, and AOS upgrades.

About this task

This procedure demonstrates an AOS upgrade error encountered when the NSX-T Manager is not registered with Nutanix, and shows how to register the NSX-T Manager with Nutanix to resolve the issue.

To register the NSX-T Manager with Nutanix, do the following.

Procedure

  1. Log on to the Prism Element web console.
  2. Select VM > Settings > Upgrade Software > Upgrade > Pre-upgrade to upgrade AOS on the host.
    Figure. Upgrade AOS

  3. The upgrade process throws an error if the NSX-T Manager is not registered with Nutanix.
    Figure. AOS Upgrade Error for Unregistered NSX-T

    The AOS upgrade process determines whether NSX-T networks back the CVM and its VLAN, and then attempts to get the VLAN information of those networks. To get the VLAN information for the CVM, the NSX-T Manager information must be configured in the Nutanix cluster.

  4. To fix this upgrade issue, log on to a CVM in the cluster using SSH.
  5. Access the cluster directory.
    nutanix@cvm$ cd ~/cluster/bin
  6. Verify if the NSX-T Manager was registered with the CVM earlier.
    nutanix@cvm:~/cluster/bin$ ./nsx_t_manager -l

    If the NSX-T Manager was not registered earlier, you get the following message.

    No NSX-T manager configured in the cluster
  7. Register the NSX-T Manager with the CVM if it was not registered earlier. Specify the credentials of the NSX-T Manager to the CVM.
    nutanix@cvm:~/cluster/bin$ ./nsx_t_manager -a
    IP address: 10.10.10.10
    Username: admin
    Password: 
    /usr/local/nutanix/cluster/lib/py/requests-2.12.0-py2.7.egg/requests/packages/urllib3/connectionpool.py:843:
     InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. 
    See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    Successfully persisted NSX-T manager information
  8. Verify the registration of NSX-T Manager with the CVM.
    nutanix@cvm:~/cluster/bin$ ./nsx_t_manager -l

    If there are no errors, the system displays a similar output.

    IP address: 10.10.10.10
    Username: admin
  9. In the Prism Element web console, click Pre-upgrade to continue the AOS upgrade procedure.

    The AOS upgrade is completed successfully.

Networking Components

IP Addresses

All CVMs and ESXi hosts have two network interfaces.
Note: An empty interface eth2 is created on CVM during deployment by Foundation. The eth2 interface is used for backplane when backplane traffic isolation (Network Segmentation) is enabled in the cluster. For more information about backplane interface and traffic segmentation, see Security Guide.
Interface       IP address     vSwitch
ESXi host vmk0  User-defined   vSwitch0
CVM eth0        User-defined   vSwitch0
ESXi host vmk1  192.168.5.1    vSwitchNutanix
CVM eth1        192.168.5.2    vSwitchNutanix
CVM eth1:1      192.168.5.254  vSwitchNutanix
CVM eth2        User-defined   vSwitch0
Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet 192.168.5.0/24.
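The subnet restriction in the note can be checked mechanically: with the internal network fixed at 192.168.5.0/24, any candidate address whose first three octets are 192.168.5 overlaps it. A small helper sketch (hypothetical, not a Nutanix tool) that performs this check:

```shell
# Return 0 (true) if the given IPv4 address falls inside 192.168.5.0/24,
# the internal subnet reserved for vSwitchNutanix.
in_internal_subnet() {
  case "$1" in
    192.168.5.*) return 0 ;;
    *)           return 1 ;;
  esac
}

if in_internal_subnet "10.1.1.20"; then
  echo "10.1.1.20 overlaps the internal subnet"
else
  echo "10.1.1.20 is safe for vSwitch0"
fi
```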

vSwitches

A Nutanix node is configured with the following two vSwitches.

  • vSwitchNutanix

    Local communications between the CVM and the ESXi host use vSwitchNutanix. vSwitchNutanix has no uplinks.

    Caution: To manage network traffic between VMs with greater control, create more port groups on vSwitch0. Do not modify vSwitchNutanix.
    Figure. vSwitchNutanix Configuration

  • vSwitch0

    All other external communications, such as CVM to a different host (in case of HA redirection), use vSwitch0, which has uplinks to the physical network interfaces. Since network segmentation is disabled by default, the backplane traffic also uses vSwitch0.

    vSwitch0 has the following two networks.

    • Management Network

      HA, vMotion, and vCenter communications use the Management Network.

    • VM Network

      All VMs use the VM Network.

    Caution:
    • The Nutanix CVM uses the standard Ethernet maximum transmission unit (MTU) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
    • You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of ESXi hosts and guest VMs if the applications on your guest VMs require them. If you choose to use jumbo frames on hypervisor hosts, ensure that you enable them end-to-end in the desired network, and consider both the physical and virtual network infrastructure impacted by the change.
    Figure. vSwitch0 Configuration

Configuring Host Networking (Management Network)

After you create the Nutanix cluster by using Foundation, configure networking for your ESXi hosts.

About this task

Figure. Configure Management Network

Procedure

  1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
  2. Press the down arrow key until Configure Management Network highlights and then press Enter .
  3. Select Network Adapters and then press Enter .
  4. Ensure that the connected network adapters are selected.
    If they are not selected, press the Space key to select them, and then press Enter to return to the previous screen.
    Figure. Network Adapters
  5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter . In the dialog box, provide the VLAN ID and press Enter .
    Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
  6. Select IP Configuration and press Enter .
    Figure. Configure Management Network
  7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting.
  8. Provide values for the following: IP Address , Subnet Mask , and Default Gateway fields based on your environment and then press Enter .
  9. Select DNS Configuration and press Enter .
  10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting.
  11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter .
  12. Press Esc and then Y to apply all changes and restart the management network.
  13. Select Test Management Network and press Enter .
  14. Press Enter to start the network ping test.
  15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter .

    Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured.

    Figure. Test Management Network

    Press Enter to close the test window.

  16. Press Esc to log off.

Changing a Host IP Address

About this task

To change a host IP address, perform the following steps. Perform the following steps once for each hypervisor host in the Nutanix cluster. Complete the entire procedure on a host before proceeding to the next host.
Caution: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping IP addresses between two hosts, temporarily change one host IP address to an interim unused IP address. Changing this IP address avoids having two hosts with identical IP addresses on the cluster. Then complete the address change or swap on each host using the following steps.
Note: All CVMs and hypervisor hosts must be on the same subnet. The hypervisor can be multihomed provided that one interface is on the same subnet as the CVM.

Procedure

  1. Configure networking on the Nutanix node. For more information, see Configuring Host Networking (Management Network).
  2. Update the host IP addresses in vCenter. For more information, see Reconnecting a Host to vCenter.
  3. Log on to every CVM in the Nutanix cluster and restart Genesis service.
    nutanix@cvm$ genesis restart

    If the restart is successful, output similar to the following is displayed.

    Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
    Genesis started on pids [30378, 30379, 30380, 30381, 30403]

Reconnecting a Host to vCenter

About this task

If you modify the IP address of a host, you must reconnect the host to vCenter. To reconnect the host, perform the following procedure.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the host with the changed IP address and select Disconnect .
  3. Right-click the host again and select Remove from Inventory .
  4. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Enter the IP address or fully qualified domain name (FQDN) of the host that you want to reconnect in the IP address or FQDN field under New hosts .
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
    You can see the host with the updated IP address in the left pane of vCenter.

Selecting a Management Interface

Nutanix tracks the management IP address for each host and uses that IP address to open an SSH session into the host to perform management activities. If the selected vmk interface is not accessible through SSH from the CVMs, activities that require interaction with the hypervisor fail.

If multiple vmk interfaces are present on a host, Nutanix uses the following rules to select a management interface.

  1. Assigns weight to each vmk interface.
    • If the vmk interface is configured for management traffic under the ESXi network settings, the weight assigned is 4. Otherwise, the weight assigned is 0.
    • If the IP address of the vmk interface belongs to the same IP subnet as the CVM eth0 interface, 2 is added to its weight.
    • If the IP address of the vmk interface belongs to the same IP subnet as the CVM eth2 interface, 1 is added to its weight.
  2. The vmk interface that has the highest weight is selected as the management interface.

Example of Selection of Management Network

Consider an ESXi host with the following configuration.

  • vmk0 IP address and mask: 2.3.62.204, 255.255.255.0
  • vmk1 IP address and mask: 192.168.5.1, 255.255.255.0
  • vmk2 IP address and mask: 2.3.63.24, 255.255.255.0

Consider a CVM with the following configuration.

  • eth0 inet address and mask: 2.3.63.31, 255.255.255.0
  • eth2 inet address and mask: 2.3.62.12, 255.255.255.0

According to the rules, the following weights are assigned to the vmk interfaces.

  • vmk0 = 4 + 0 + 1 = 5
  • vmk1 = 0 + 0 + 0 = 0
  • vmk2 = 0 + 2 + 0 = 2

Since vmk0 has the highest weight, the vmk0 interface is used as the management interface for the ESXi host.
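The selection rules can be expressed as a small scoring function; this sketch (illustrative only, not a Nutanix tool) reproduces the weights from the example:

```shell
# Weight a vmk interface per the rules above:
#   +4 if tagged for management traffic,
#   +2 if in the same subnet as CVM eth0,
#   +1 if in the same subnet as CVM eth2.
# Each argument is 1 (condition holds) or 0 (it does not).
vmk_weight() {
  mgmt=$1; same_as_eth0=$2; same_as_eth2=$3
  echo $(( mgmt * 4 + same_as_eth0 * 2 + same_as_eth2 * 1 ))
}

vmk_weight 1 0 1   # vmk0: management tag, shares eth2's subnet -> 5
vmk_weight 0 0 0   # vmk1: internal interface, no matches       -> 0
vmk_weight 0 1 0   # vmk2: shares eth0's subnet                 -> 2
```

vmk0 scores 5, the highest of the three, so it is chosen as the management interface, matching the example.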

To verify that the vmk0 interface is selected as the management interface, use the following command.

root@esx# esxcli network ip interface tag get -i vmk0

You see the following output.

Tags: Management, VMotion

For the other two interfaces, no tags are displayed.

If you want any other interface to act as the management IP address, enable management traffic on that interface by following the procedure described in Selecting a New Management Interface.

Selecting a New Management Interface

You can select the vmk interface to use as the management interface on an ESXi host by using the following method.

Procedure

  1. Log on to vCenter with the web client.
  2. Do the following on the ESXi host.
    1. Go to Configure > Networking > VMkernel adapters .
    2. Select the interface on which you want to enable the management traffic.
    3. Click Edit settings of the port group to which the vmk belongs.
    4. Select the Management check box under the Enabled services option to enable management traffic on the vmk interface.
  3. Alternatively, open an SSH session to the ESXi host and enable the management traffic on the vmk interface.
    root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management

    Replace vmkN with the vmk interface where you want to enable the management traffic.

Updating Network Settings

After you configure networking of your vSphere deployments on Nutanix Enterprise Cloud, you may want to update the network settings.

  • To know about the best practice of ESXi network teaming policy, see Network Teaming Policy.

  • To migrate an ESXi host networking from a vSphere Standard Switch (vSwitch) to a vSphere Distributed Switch (vDS) with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

  • To migrate an ESXi host networking from a vSphere standard switch (vSwitch) to a vSphere Distributed Switch (vDS) without LACP, see Migrating to a New Distributed Switch without LACP/LAG.


Network Teaming Policy

On an ESXi host, a NIC teaming policy allows you to bundle two or more physical NICs into a single logical link that provides network bandwidth aggregation and link redundancy to a vSwitch. The NIC teaming policies in the ESXi networking configuration for a vSwitch consist of the following.

  • Route based on originating virtual port.
  • Route based on IP hash.
  • Route based on source MAC hash.
  • Explicit failover order.

In addition to the teaming policies mentioned earlier, vDS provides one more policy: Route based on physical NIC load.

When Foundation or Phoenix imaging is performed on a Nutanix cluster, the following two standard virtual switches are created on ESXi hosts:

  • vSwitch0
  • vSwitchNutanix

On vSwitch0, the Nutanix best practice guide (see Nutanix vSphere Networking Solution Document) provides the following recommendations for NIC teaming:

  • vSwitch. Route based on originating virtual port
  • vDS. Route based on physical NIC load

On vSwitchNutanix, there are no uplinks to the virtual switch, so there is no NIC teaming configuration required.
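As a quick way to review the teaming policy currently applied to vSwitch0, you can query the failover policy from the ESXi shell. This is a read-only check; vSwitch0 is the default switch name created by Foundation, so adjust it if your environment differs.

```shell
# Show the current failover/teaming policy for vSwitch0, including
# the load-balancing algorithm and the active/standby uplink order.
root@esx# esxcli network vswitch standard policy failover get -v vSwitch0
```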

Migrate from a Standard Switch to a Distributed Switch

This topic provides detailed information about how to migrate from a vSphere Standard Switch (vSS) to a vSphere Distributed Switch (vDS).

The following are the two types of virtual switches (vSwitch) in vSphere.

  • vSphere standard switch (vSwitch) (see vSphere Standard Switch (vSwitch) in vSphere Networking).
  • vSphere Distributed Switch (vDS) (see vSphere Distributed Switch (vDS) in vSphere Networking).
Tip: For more information about vSwitches and the associated network concepts, see the VMware Documentation .

For migrating from a vSS to a vDS with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

For migrating from a vSS to a vDS without LACP/LAG configuration, see Migrating to a New Distributed Switch without LACP/LAG.

Standard Switch Configuration

The standard switch configuration consists of the following.

vSwitchNutanix
vSwitchNutanix handles internal communication between the CVM and the ESXi host. There are no uplink adapters associated with this vSwitch. Do not modify the settings of this virtual switch or its port groups; the only members of its port group must be the CVM and its host. Modifying this virtual switch configuration can disrupt communication between the host and the CVM.
vSwitch0
vSwitch0 consists of the vmk (VMkernel) management interface, vMotion interface, and VM port groups. This virtual switch connects to uplink network adapters that are plugged into a physical switch.
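You can inspect both standard switches and their port groups from the ESXi shell. On a Foundation-imaged host, the output shows vSwitch0 (with uplinks) and vSwitchNutanix (with no uplinks).

```shell
# List the standard vSwitches on the host, their uplinks,
# and the port groups attached to each switch.
root@esx# esxcli network vswitch standard list
```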

Planning the Migration

It is important to plan and understand the migration process. An incorrect configuration can disrupt communication, which can require downtime to resolve.

Consider the following points before you plan the migration.

  • Read the Nutanix Best Practice Guide for VMware vSphere Networking.

  • Understand the various teaming and load-balancing algorithms on vSphere.

    For more information, see the VMware Documentation .

  • Confirm communication on the network through all the connected uplinks.
  • Confirm that you can access the host through IPMI in case network connectivity issues occur during migration.

    Access the host to troubleshoot the network issue or move the management network back to the standard switch depending on the issue.

  • Confirm that the hypervisor external management IP address and the CVM IP address are in the same public subnet for the data path redundancy functionality to work.
  • When performing migration to the distributed switch, migrate one host at a time and verify that networking is working as desired.
  • Do not migrate the port groups and vmk (VMkernel) interfaces that are on vSwitchNutanix to the distributed switch (dvSwitch).

Unassigning Physical Uplink of the Host for Distributed Switch

All the physical adapters of the host connect to vSwitch0. A live distributed switch must have a physical uplink connected to it to work. Therefore, unassign a physical adapter from the host so that you can assign it to the new distributed switch.

About this task

To unassign the physical uplink of the host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Networking > Virtual Switches .
  4. Click the MANAGE PHYSICAL ADAPTERS tab and, under Assigned adapters, select the active adapters that you want to unassign from the host.
    Figure. Managing Physical Adapters

  5. Click X on the top.
    The selected adapter is unassigned from the list of physical adapters of the host.
    Tip: Ping the host to confirm that you can still communicate through the remaining active physical adapter. If you lose network connectivity to the ESXi host during this test, review your network configuration.
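If you prefer the ESXi shell, the same unassignment can be sketched as follows. The adapter name vmnic1 is a placeholder; substitute the adapter you want to move to the distributed switch.

```shell
# Remove the uplink vmnic1 from vSwitch0 so that the new distributed
# switch can claim it. Replace vmnic1 with your adapter name.
root@esx# esxcli network vswitch standard uplink remove -u vmnic1 -v vSwitch0

# Confirm which uplinks remain attached to vSwitch0.
root@esx# esxcli network vswitch standard list -v vSwitch0
```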

Migrating to a New Distributed Switch without LACP/LAG

Migrating to a new distributed switch without LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Configuring Port Group Policies

Creating a Distributed Switch

Connect to vCenter and create a distributed switch.

About this task

To create a distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Distributed Switch Creation

  3. Right-click the host, select Distributed Switch > New Distributed Switch , and specify the following information in the New Distributed Switch dialog box.
    1. Name and Location : Enter a name for the distributed switch.
    2. Select Version : Select a distributed switch version that is compatible with all your hosts in that datacenter.
    3. Configure Settings : Select the number of uplinks you want to connect to the distributed switch.
      Select Create a default port group checkbox to create a port group. To configure a port group later, see Creating Port Groups on the Distributed Switch.
    4. Ready to complete : Review the configuration and click Finish .
    A new distributed switch is created with the default uplink port group. The uplink port group is the port group to which the uplinks connect. This uplink is different from the vmk (VMkernel) or the VM port groups.
    Figure. New Distributed Switch Created in the Host

Creating Port Groups on the Distributed Switch

Create one or more vmk (VMkernel) port groups and VM port groups depending on the vSphere features you plan to use and the physical network layout. The best practice is to have the vmk Management interface, vmk vMotion interface, and vmk iSCSI interface on separate port groups.

About this task

To create port groups on the distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Creating Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > New Distributed Port Group , and follow the wizard to create the remaining distributed port group (vMotion interface and VM port groups).
    Because you are migrating from the standard switch to the distributed switch, you need the following port groups.
    • VMkernel Management interface . Use this port group to connect to the host for all management operations.
    • VMNetwork . Use this port group to connect the new VMs.
    • vMotion . This is an internal interface; the host uses this port group for vMotion traffic during failover.
    Note: Nutanix recommends using static port binding instead of ephemeral port binding when you create a port group.
    Figure. Distributed Port Groups Created

    Note: The port group for vmk management interface is created during the distributed switch creation. See Creating a Distributed Switch for more information.

Configuring Port Group Policies

To configure port groups, you must configure VLANs, Teaming and failover, and other distributed port group policies at the port group layer or at the distributed switch layer. Refer to the following topics to configure the port group policies.

  1. Configuring Policies on the Port Group Layer
  2. Configuring Policies on the Distributed Switch Layer
  3. Adding ESXi Host to the Distributed Switch

Configuring Policies on the Port Group Layer

Ensure that the distributed switch port groups have VLANs tagged if the physical adapters of the host have a VLAN tagged to them. Update the port group policies, VLANs, and teaming algorithms to match the physical network switch configuration. Configure the load balancing policy according to the network configuration requirements of the physical switch.

About this task

To configure the port group policies, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Configure Port Group Policies on the Distributed Switch

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Edit Settings , and follow the wizard to configure the VLAN, Teaming and failover, and other options.
    Note: For more information about configuring port group policies, see the VMware Documentation .
  4. Click OK to complete the configuration.
  5. Repeat steps 2–4 to configure the other port groups.
Configuring Policies on the Distributed Switch Layer

You can configure the same policy for all the port groups simultaneously.

About this task

To configure the same policy for all the port groups, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Manage Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Manage Distributed Port Groups , and specify the following information in Manage Distributed Port Group dialog box.
    1. In the Select port group policies tab, select the port group policies that you want to configure and click Next .
      Note: For more information about configuring port group policies, see the VMware Documentation .
    2. In the Select port groups tab, select the distributed port groups on which you want to configure the policy and click Next .
    3. In the Teaming and failover tab, configure the Load balancing policy, Active uplinks , and click Next .
    4. In the Ready to complete window, review the configuration and click Finish .
Adding ESXi Host to the Distributed Switch

Migrate the management interface and CVM of the host to the distributed switch.

About this task

To migrate the Management interface and CVM of the ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Figure. Select Physical Adapter for Uplinking

          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
          Figure. Select a Port Group

        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .
  4. Go to the Hosts and Clusters view in the vCenter web client and Hosts > Configure to review the network configuration for the host.
    Note: Run a ping test to confirm that the networking on the host works as expected.
  5. Follow the steps 2–4 to add the remaining hosts to the distributed switch and migrate the adapters.
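After migrating a host, you can also review the distributed switch configuration from the host side. The commands below are read-only checks run over SSH on the migrated ESXi host.

```shell
# Show the distributed switches this host participates in,
# including the claimed uplinks and attached clients.
root@esx# esxcli network vswitch dvs vmware list

# Verify that the vmk interfaces landed on the expected port groups.
root@esx# esxcli network ip interface list
```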

Migrating to a New Distributed Switch with LACP/LAG

Migrating to a new distributed switch with LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Creating Link Aggregation Group on Distributed Switch
  4. Creating Port Groups to use the LAG
  5. Adding ESXi Host to the Distributed Switch

Creating Link Aggregation Group on Distributed Switch

Using a Link Aggregation Group (LAG) on a distributed switch, you can connect the ESXi host to physical switches by using dynamic link aggregation. You can create multiple LAGs on a distributed switch to aggregate the bandwidth of physical NICs on ESXi hosts that are connected to LACP port channels.

About this task

To create a LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host, select Distributed Switch > Configure > LACP .
    Figure. Create LAG on Distributed Switch

  4. Click New and enter the following details in the New Link Aggregation Group dialog box.
    1. Name : Enter a name for the LAG.
    2. Number of Ports : Enter the number of ports.
      The number of ports must match the physical ports per host in the LACP LAG. For example, if the Number of Ports is two, you can attach two physical ports per ESXi host to the LAG.
    3. Mode : Specify the state of the physical switch.
      Based on the configuration requirements, you can set the mode to Active or Passive .
    4. Load balancing mode : Specify the load balancing mode for the physical switch.
      For more information about the various load balancing options, see the VMware Documentation .
    5. VLAN trunk range : Specify the VLANs if you have VLANs configured in your environment.
  5. Click OK .
    LAG is created on the distributed switch.

Creating Port Groups to Use LAG

To use the LAG as the uplink, edit the settings of the port groups created on the distributed switch.

About this task

To edit the settings on the port group to use LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host and select the management port group, then click Edit Settings .
  4. Go to the Teaming and failover tab in the Edit Settings dialog box and specify the following information.
    Figure. Configure the Management Port Group

    1. Load Balancing : Select Route based on IP hash .
    2. Active uplinks : Move the LAG from the Unused uplinks section to the Active uplinks section.
    3. Unused uplinks : Select the physical uplinks ( Uplink 1 and Uplink 2 ) and move them to the Unused uplinks section.
  5. Repeat steps 2–4 to configure the other port groups.

Adding ESXi Host to the Distributed Switch

Add the ESXi host to the distributed switch and migrate the network from the standard switch to the distributed switch. Migrate the management interface and CVM of the ESXi host to the distributed switch.

About this task

To migrate the Management interface and CVM of ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the LAG Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
      Select the VMkernel adapter that is associated with vSwitch0 as your management VMkernel adapter. Migrate this adapter to the corresponding port group on the distributed switch.
      Note: Do not migrate the VMkernel adapter associated with vSwitchNutanix.
      Note: If there are any VLANs associated with the port group on the standard switch, ensure that the corresponding distributed port group also has the correct VLAN. Verify the physical network configuration to ensure it is configured as required.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .

vCenter Configuration

VMware vCenter enables the centralized management of multiple ESXi hosts. You can either create a vCenter Server or use an existing vCenter Server. To create a vCenter Server, refer to the VMware Documentation .

This section assumes that you already have a vCenter Server and therefore describes the operations you can perform on an existing vCenter Server. To deploy vSphere clusters running Nutanix Enterprise Cloud, perform the following steps in the vCenter.

Tip: For a single-window management of all your ESXi nodes, you can also integrate the vCenter Server to Prism Central. For more information, see Registering a Cluster to vCenter Server

1. Create a cluster entity within the existing vCenter inventory and configure its settings according to Nutanix best practices. For more information, see Creating a Nutanix Cluster in vCenter.

2. Configure HA. For more information, see vSphere HA Settings.

3. Configure DRS. For more information, see vSphere DRS Settings.

4. Configure EVC. For more information, see vSphere EVC Settings.

5. Configure override. For more information, see VM Override Settings.

6. Add the Nutanix hosts to the new cluster. For more information, see Adding a Nutanix Node to vCenter.

Registering a Cluster to vCenter Server

To perform core VM management operations directly from Prism without switching to vCenter Server, you need to register your cluster with the vCenter Server.

Before you begin

Ensure that you have vCenter Server Extension privileges as these privileges provide permissions to perform vCenter registration for the Nutanix cluster.

About this task

Following are some of the important points about registering vCenter Server.

  • Nutanix does not store vCenter Server credentials.
  • Whenever a new node is added to the Nutanix cluster, vCenter Server registration for the new node is performed automatically.
  • Nutanix supports vCenter Enhanced Linked Mode.

    When registering a Nutanix cluster to a vCenter Enhanced Linked Mode (ELM) enabled ESXi environment, ensure that Prism is registered to the vCenter containing the vSphere Cluster and Nutanix nodes (often the local vCenter). For more information about vCenter Enhanced Linked Mode, see vCenter Enhanced Linked Mode in the vCenter Server Installation and Setup documentation.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    The vCenter Server that is managing the hosts in the cluster is auto-discovered and displayed.
  3. Click the Register link.
    The IP address is auto-populated in the Address field. The port number field is also auto-populated with 443. Do not change the port number. For the complete list of required ports, see Port Reference.
  4. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
    Figure. vCenter Registration Figure 1

  5. Click Register .
    During the registration process a certificate is generated to communicate with the vCenter Server. If the registration is successful, a relevant message is displayed in the Tasks dashboard. The Host Connection field displays Connected, which indicates that all the hosts are managed by the registered vCenter Server.
    Figure. vCenter Registration Figure 2

Unregistering a Cluster from the vCenter Server

To unregister the vCenter Server from your cluster, perform the following procedure.

About this task

  • Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter Server. After you change the IP address of the vCenter Server, register the vCenter Server with the cluster again using the new IP address.
  • The vCenter Server Registration page displays the registered vCenter Server. If for some reason the Host Connection field changes to Not Connected , it implies that the hosts are being managed by a different vCenter Server. In this case, a new vCenter entry appears with the host connection status Connected, and you need to register to this vCenter Server.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    A message that the cluster is already registered to the vCenter Server is displayed.
  3. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
  4. Click Unregister .
    If the credentials are correct, the vCenter Server is unregistered from the cluster and a relevant message is displayed in the Tasks dashboard.

Creating a Nutanix Cluster in vCenter

Before you begin

Nutanix recommends creating a storage container in the Prism Element running on the host or using the default container to mount NFS datastore on all ESXi hosts.

About this task

To enable the vCenter to discover the Nutanix clusters, perform the following steps in the vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Do one of the following.
    • If you want the Nutanix cluster to be in an existing datacenter, proceed to step 3.
    • If you want the Nutanix cluster to be in a new datacenter or if there is no datacenter, perform the following steps to create a datacenter.
      Note: Nutanix clusters must be in a datacenter.
    1. Go to the Hosts and Clusters view and right-click the IP address of the vCenter Server in the left pane.
    2. Click New Datacenter .
    3. Enter a meaningful name for the datacenter (for example, NTNX-DC ) and click OK .
  3. Right-click the datacenter node and click New Cluster .
    1. Enter a meaningful name for the cluster in the Name field (for example, NTNX-Cluster ).
    2. Turn on the vSphere DRS switch.
    3. Turn on the vSphere HA switch.
    4. Uncheck Manage all hosts in the cluster with a single image .
    Nutanix cluster ( NTNX-Cluster ) is created with the default settings for vSphere HA and vSphere DRS.

What to do next

Add all the Nutanix nodes to the Nutanix cluster inventory in vCenter. For more information, see Adding a Nutanix Node to vCenter.

Adding a Nutanix Node to vCenter

Before you begin

Configure the Nutanix cluster according to Nutanix specifications given in Creating a Nutanix Cluster in vCenter and vSphere Cluster Settings Checklist.

About this task

Note: Lockdown mode forces all operations through the vCenter Server so that vCenter managed ESXi hosts are not directly accessible. However, Nutanix does not support lockdown mode; ensure that it remains disabled on all hosts.
Tip: Refer to KB-1661 for the default credentials of all cluster components.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Enter the IP address or fully qualified domain name (FQDN) of the host you want to add in the IP address or FQDN field under New hosts .
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
  3. Select the host under the Nutanix cluster from the left pane and go to Configure > System > Security Profile .
    Ensure that Lockdown Mode is Disabled because Nutanix does not support lockdown mode.
  4. Configure DNS servers.
    1. Go to Configure > Networking > TCP/IP configuration .
    2. Click Default under TCP/IP stack and go to TCP/IP .
    3. Click the pencil icon to configure DNS servers and perform the following.
        1. Select Enter settings manually .
        2. Type the domain name in the Domain field.
        3. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK .
  5. Configure NTP servers.
    1. Go to Configure > System > Time Configuration .
    2. Click Edit .
    3. Select Use Network Time Protocol (Enable NTP client) .
    4. Type the NTP server address in the NTP Servers text box.
    5. In the NTP Service Startup Policy, select Start and stop with host from the drop-down list.
      Add multiple NTP servers if necessary.
    6. Click OK .
  6. Click Configure > Storage and ensure that NFS datastores are mounted.
    Note: Nutanix recommends creating a storage container in Prism Element running on the host.
  7. If HA is not enabled, set the CVM to start automatically when the ESXi host starts.
    Note: Automatic VM start and stop is disabled in clusters where HA is enabled.
    1. Go to Configure > Virtual Machines > VM Startup/Shutdown .
    2. Click Edit .
    3. Ensure that Automatically start and stop the virtual machines with the system is checked.
    4. If the CVM is listed in Manual Startup , click the up arrow to move the CVM into the Automatic Startup section.
    5. Click OK .
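The DNS and NTP settings above can also be sketched from the ESXi shell. The server addresses and domain below are placeholders for your environment, and the `esxcli system ntp` namespace requires ESXi 7.0 or later; on earlier releases, configure NTP through the vSphere Client as described above.

```shell
# Add DNS servers and the search domain (placeholder values).
root@esx# esxcli network ip dns server add --server=10.0.0.10
root@esx# esxcli network ip dns search add --domain=example.local

# Configure and enable the NTP client (ESXi 7.0+; placeholder server).
root@esx# esxcli system ntp set --server=ntp.example.local --enabled=true
```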

What to do next

Configure HA and DRS settings. For more information, see vSphere HA Settings and vSphere DRS Settings.

Nutanix Cluster Settings

To ensure the optimal performance of your vSphere deployment running on Nutanix cluster, configure the following settings from the vCenter.

vSphere General Settings

About this task

Configure the following general settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > General .
    1. Under General , set the Swap file location to Virtual machine directory .
      Setting the swap file location to the VM directory stores the VM swap files in the same directory as the VM.
    2. Under Default VM Compatibility , set the compatibility to Use datacenter setting and host version .
      Do not change the compatibility unless the cluster has to support previous versions of ESXi VMs.
      Figure. General Cluster Settings

vSphere HA Settings

If there is a node failure, vSphere HA (High Availability) settings ensure that there are sufficient compute resources available to restart all VMs that were running on the failed node.

About this task

Configure the following HA settings from vCenter.
Note: Nutanix recommends that you configure vSphere HA and DRS even if you do not use the features. The vSphere cluster configuration preserves the settings, so if you later decide to enable the features, the settings are in place and conform to Nutanix best practices.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere Availability .
  4. Click Edit next to the text showing vSphere HA status.
    Figure. vSphere Availability Settings: Failures and Responses

    1. Turn on the vSphere HA and Enable Host Monitoring switches.
    2. Specify the following information under the Failures and Responses tab.
        1. Host Failure Response : Select Restart VMs from the drop-down list.

          This option configures the cluster-wide response when a host fails.

        2. Response for Host Isolation : Select Power off and restart VMs from the drop-down list.
        3. Datastore with PDL : Select Disabled from the drop-down list.
        4. Datastore with APD : Select Disabled from the drop-down list.
          Note: To enable the VM component protection in vCenter, refer to the VMware Documentation.
        5. VM Monitoring : Select Disabled from the drop-down list.
    3. Specify the following information under the Admission Control tab.
      Note: If you are using replication factor 2 with cluster sizes up to 16 nodes, configure HA admission control settings to tolerate one node failure. For cluster sizes larger than 16 nodes, configure HA admission control to sustain two node failures and use replication factor 3. vSphere 6.7, and newer versions automatically calculate the percentage of resources required for admission control.
      Figure. vSphere Availability Settings: Admission Control

        1. Host failures cluster tolerates : Enter 1 or 2 based on the number of nodes in the Nutanix cluster and the replication factor.
        2. Define host failover capacity by : Select Cluster resource Percentage from the drop-down list.
        3. Performance degradation VMs tolerate : Set the percentage to 100.

          For more information about settings of percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

    4. Specify the following information under the Heartbeat Datastores tab.
      Note: vSphere HA uses datastore heartbeating to distinguish between hosts that have failed and hosts that reside on a network partition. With datastore heartbeating, vSphere HA can monitor hosts when a management network partition occurs while continuing to respond to failures.
      Figure. vSphere Availability Settings: Heartbeat Datastores

        1. Select Use datastores only from the specified list .
        2. Select the named storage container mounted as the NFS datastore (Nutanix datastore).

          If you have more than one named storage container, select all that are applicable.

        3. If the cluster has only one datastore, click the Advanced Options tab and add the option das.ignoreInsufficientHbDatastore with the value true .
    5. Click OK .

vSphere HA Admission Control Settings for Nutanix Environment

Overview

If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA admission control settings with the appropriate percentage of CPU/RAM to achieve at least N+1 availability. For cluster sizes larger than 16 nodes, you must configure HA admission control with the appropriate percentage of CPU/RAM to achieve at least N+2 availability.

N+2 Availability Configuration

The N+2 availability configuration can be achieved in the following two ways.

  • Redundancy factor 2 and N+2 vSphere HA admission control setting configured.

    Because the Nutanix distributed file system self-heals after a node failure, a second node failure can occur without data becoming unavailable, provided the Nutanix cluster has fully recovered before the subsequent failure. In this case, an N+2 vSphere HA admission control setting is required to ensure sufficient compute resources are available to restart all the VMs.

  • Redundancy factor 3 and N+2 vSphere HA admission control setting configured.
    If you want the cluster to tolerate two concurrent node failures and it has insufficient blocks to use block awareness, redundancy factor 3 is required in a cluster of five or more nodes. In either option, the Nutanix storage pool must have sufficient free capacity to restore the configured redundancy factor (2 or 3); the percentage of free space required is the same as the required HA admission control percentage setting. In this case, redundancy factor 3 must be configured at the storage container layer. An N+2 vSphere HA admission control setting is also required to ensure sufficient compute resources are available to restart all the VMs.
    Note: For redundancy factor 3, a minimum of five nodes is required, which provides the ability that two concurrent nodes can fail while ensuring data remains online. In this case, the same N+2 level of availability is required for the vSphere cluster to enable the VMs to restart following a failure.
Table 1. Minimum Reservation Percentage for vSphere HA Admission Control Setting For redundancy factor 2 deployments, the recommended minimum HA admission control setting percentage is marked with single asterisk (*) symbol in the following table. For redundancy factor 2 or redundancy factor 3 deployments configured for multiple non-concurrent node failures to be tolerated, the minimum required HA admission control setting percentage is marked with two asterisks (**) in the following table.
Nodes Availability Level
N+1 N+2 N+3 N+4
1 N/A N/A N/A N/A
2 N/A N/A N/A N/A
3 33* N/A N/A N/A
4 25* 50 75 N/A
5 20* 40** 60 80
6 18* 33** 50 66
7 15* 29** 43 56
8 13* 25** 38 50
9 11* 23** 33 46
10 10* 20** 30 40
11 9* 18** 27 36
12 8* 17** 25 34
13 8* 15** 23 30
14 7* 14** 21 28
15 7* 13** 20 26
16 6* 13** 19 25
17 6 12* 18** 24
18 6 11* 17** 22
19 5 11* 16** 22
20 5 10* 15** 20
21 5 10* 14** 20
22 4 9* 14** 18
23 4 9* 13** 18
24 4 8* 13** 16
25 4 8* 12** 16
26 4 8* 12** 16
27 4 7* 11** 14
28 4 7* 11** 14
29 3 7* 10** 14
30 3 7* 10** 14
31 3 6* 10** 12
32 3 6* 9** 12

The table also represents the percentage of the Nutanix storage pool that should remain free to ensure that the cluster can fully restore the redundancy factor after the failure of one or more nodes, or even of a block (where three or more blocks exist within a cluster).
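The reservation percentages in Table 1 follow from reserving the capacity of the number of host failures to tolerate as a fraction of total cluster capacity. The following minimal sketch (the function name is illustrative, and it assumes homogeneous nodes) shows the underlying arithmetic; note that a few published table entries round slightly differently.

```python
import math

def min_reservation_pct(nodes: int, failures_tolerated: int) -> int:
    """Minimum percentage of cluster CPU/RAM to reserve so that the
    surviving hosts can restart all VMs after the given number of
    node failures (homogeneous nodes assumed)."""
    if nodes <= failures_tolerated:
        raise ValueError("cluster cannot tolerate that many node failures")
    return math.ceil(failures_tolerated / nodes * 100)

# For example, a 4-node RF2 cluster targeting N+1 availability:
print(min_reservation_pct(4, 1))   # 25
# An 8-node cluster targeting N+2 availability:
print(min_reservation_pct(8, 2))   # 25
```

The same percentage of the storage pool should remain free so that the redundancy factor can be fully restored after a failure.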

Block Awareness

For deployments of at least three blocks, block awareness automatically ensures data availability even when an entire block of up to four nodes becomes unavailable in a cluster configured with redundancy factor 2.

If block awareness levels of availability are required, the vSphere HA admission control setting must ensure sufficient compute resources are available to restart all virtual machines. In addition, the Nutanix storage pool must have sufficient space to restore redundancy factor 2 to all data.

The vSphere HA minimum availability level must be equal to the number of nodes per block.

Note: For block awareness, each block must be populated with a uniform number of nodes. In the event of a failure, a non-uniform node count might compromise block awareness or the ability to restore the redundancy factor, or both.
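Because the availability level must equal the number of nodes per block, the HA reservation for a block-aware cluster can be derived from the block layout. A minimal illustrative sketch (the function name is an assumption, not a Nutanix tool):

```python
import math

def block_aware_reservation_pct(blocks: int, nodes_per_block: int) -> int:
    """HA admission control percentage needed to restart all VMs when an
    entire block fails. Block awareness requires at least three blocks,
    each populated with a uniform number of nodes."""
    if blocks < 3:
        raise ValueError("block awareness requires at least three blocks")
    total_nodes = blocks * nodes_per_block
    # Reserve the capacity of one whole block (N+k where k = nodes per block).
    return math.ceil(nodes_per_block / total_nodes * 100)

# Three blocks of four nodes each (12 nodes total):
print(block_aware_reservation_pct(3, 4))   # 34
```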

Rack Awareness

Rack fault tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, data is replicated to nodes that are not in the same rack. Rack failure can occur in the following situations.

  • All power supplies in a rack fail.
  • Top-of-rack (TOR) switch fails.
  • Network partition occurs: one of the racks becomes inaccessible from the other racks.

With rack fault tolerance enabled, the cluster has rack awareness and guest VMs can continue to run even during the failure of one rack (with replication factor 2) or two racks (with replication factor 3). The redundant copies of guest VM data and metadata persist on other racks when one rack fails.

Table 2. Minimum Requirements for Rack Awareness
Replication factor Minimum number of nodes Minimum number of Blocks Minimum number of racks Data resiliency
2 3 3 3 Failure of 1 node, block, or rack
3 5 5 5 Failure of 2 nodes, blocks, or racks
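The Table 2 minimums can be encoded as a simple lookup for validating a planned cluster layout. This is an illustrative sketch; the names are not part of any Nutanix tooling.

```python
# Table 2 encoded as: replication factor -> (min nodes, min blocks, min racks)
RACK_AWARENESS_MINIMUMS = {2: (3, 3, 3), 3: (5, 5, 5)}

def meets_rack_awareness(rf: int, nodes: int, blocks: int, racks: int) -> bool:
    """Check whether a cluster layout satisfies the rack awareness minimums."""
    min_nodes, min_blocks, min_racks = RACK_AWARENESS_MINIMUMS[rf]
    return nodes >= min_nodes and blocks >= min_blocks and racks >= min_racks

print(meets_rack_awareness(2, 4, 3, 3))   # True
print(meets_rack_awareness(3, 4, 5, 5))   # False (RF3 needs at least 5 nodes)
```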

vSphere DRS Settings

About this task

Configure the following DRS settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere DRS .
  4. Click Edit next to the text showing vSphere DRS status.
    Figure. vSphere DRS Settings: Automation

    1. Turn on the vSphere DRS switch.
    2. Specify the following information under the Automation tab.
        1. Automation Level : Select Fully Automated from the drop-down list.
        2. Migration Threshold : Set the bar between conservative and aggressive (value=3).

          A migration threshold of 3 provides optimal resource utilization while minimizing DRS migrations that have little benefit. Data locality is managed automatically: whenever a VM moves, new writes always place one replica on the VM's local node to maximize subsequent read performance.

          Nutanix recommends the migration threshold at 3 in a fully automated configuration.

        3. Predictive DRS : Leave the option disabled.

          The value of predictive DRS depends on whether you use other VMware products such as vRealize Operations. Unless you use vRealize Operations, Nutanix recommends disabling predictive DRS.

        4. Virtual Machine Automation : Enable VM automation.
    3. Settings under the Additional Options tab are optional.
    4. Specify the following information under the Power Management tab.
      Figure. vSphere DRS Settings: Power Management

        1. DPM : Leave the option disabled.

          Enabling DPM powers off idle nodes in the Nutanix cluster, which takes their CVMs offline and reduces available cluster resources.

    5. Click OK .
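The recommended DRS configuration above can be summarized as a checklist. The following sketch captures it as a mapping with a small validator; the keys are illustrative labels, not vSphere API property names.

```python
# Nutanix-recommended DRS settings from the procedure above.
RECOMMENDED_DRS = {
    "drs_enabled": True,
    "automation_level": "fullyAutomated",
    "migration_threshold": 3,     # midway between conservative and aggressive
    "predictive_drs": False,      # disable unless vRealize Operations is in use
    "dpm_enabled": False,         # DPM would power off Nutanix nodes
}

def drs_deviations(actual: dict) -> list:
    """Return the keys where a cluster's DRS configuration deviates from
    the recommendation (keys are illustrative, not vSphere API names)."""
    return [k for k, v in RECOMMENDED_DRS.items() if actual.get(k) != v]

print(drs_deviations({**RECOMMENDED_DRS, "dpm_enabled": True}))  # ['dpm_enabled']
```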

vSphere EVC Settings

vSphere enhanced vMotion compatibility (EVC) ensures that workloads can live migrate, using vMotion, between ESXi hosts in a Nutanix cluster that are running different CPU generations. Nutanix recommends enabling EVC because it simplifies scaling the cluster in the future with hosts that contain newer CPU models.

About this task

Enabling EVC in a brownfield scenario can be challenging. Configure the following EVC settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Shut down all the VMs on the hosts with feature sets greater than the EVC mode.
    Ensure that the Nutanix cluster contains hosts with CPUs from only one vendor, either Intel or AMD.
  4. Click Configure , and go to Configuration > VMware EVC .
  5. Click Edit next to the text showing VMware EVC.
  6. Enable EVC for the CPU vendor and feature set appropriate for the hosts in the Nutanix cluster, and click OK .
    If the Nutanix cluster contains nodes with different processor classes, enable EVC with the lower feature set as the baseline.
    Tip: To know the processor class of a node, perform the following steps.
      1. Log on to Prism Element running on the Nutanix cluster.
      2. Click Hardware from the menu and go to Diagram or Table view.
      3. Click the node and look for the Block Serial field in Host Details .
    Figure. VMware EVC

  7. Start the VMs in the Nutanix cluster to apply the EVC.
    If you try to enable EVC on a Nutanix cluster with mismatched host feature sets (mixed processor clusters), the lowest common feature set (lowest processor class) is selected. Therefore, if VMs are already running on the new host and you want to enable EVC on the host, you must first shut down the VMs and then enable EVC.
    Note: Do not shut down more than one CVM at the same time.

VM Override Settings

You must exclude the Nutanix CVMs from vSphere availability and resource scheduling by configuring the following VM override settings.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > VM Overrides .
  4. Select all the CVMs and click Next .
    If you do not have the CVMs listed, click Add to ensure that the CVMs are added to the VM Overrides dialog box.
    Figure. VM Override

  5. In the VM override section, configure override for the following parameters.
    • DRS Automation Level: Disabled
    • VM HA Restart Priority: Disabled
    • VM Monitoring: Disabled
  6. Click Finish .

Migrating a Nutanix Cluster from One vCenter Server to Another

About this task

Perform the following steps to migrate a Nutanix cluster from one vCenter Server to another vCenter Server.
Note: The following steps are to migrate a Nutanix cluster with vSphere Standard Switch (vSwitch). To migrate a Nutanix cluster with vSphere Distributed Switch (vDS), see the VMware Documentation.

Procedure

  1. Create a vSphere cluster in the vCenter Server where you want to migrate the Nutanix cluster. See Creating a Nutanix Cluster in vCenter.
  2. Configure HA, DRS, and EVC on the created vSphere cluster. See Nutanix Cluster Settings.
  3. Unregister the Nutanix cluster from the source vCenter Server. See Unregistering a Cluster from the vCenter Server.
  4. Move the nodes from the source vCenter Server to the new vCenter Server.
    See the VMware Documentation to know the process.
  5. Register the Nutanix cluster to the new vCenter Server. See Registering a Cluster to vCenter Server.

Storage I/O Control (SIOC)

SIOC controls the I/O usage of a virtual machine and gradually enforces the predefined I/O share levels. Nutanix converged storage architecture does not require SIOC. Therefore, while mounting a storage container on an ESXi host, the system disables SIOC in the statistics mode automatically.

Caution: While mounting a storage container on ESXi hosts running older versions (6.5 or below), the system enables SIOC in the statistics mode by default. Nutanix recommends disabling SIOC because an enabled SIOC can cause the following issues.
  • The storage can become unavailable because the hosts repeatedly create and delete the .lck-XXXXXXXX access files under the .iorm.sf subdirectory, located in the root directory of the storage container.
  • Site Recovery Manager (SRM) failover and failback does not run efficiently.
  • If you are using the Metro Availability disaster recovery feature, activate and restore operations do not work.
    Note: When using the Metro Availability disaster recovery feature, Nutanix recommends using an empty storage container. Disable SIOC and delete all the SIOC-related files from the storage container. For more information, see KB-3501 .
Run the NCC health check (see KB-3358 ) to verify if SIOC and SIOC in statistics mode are disabled on storage containers. If SIOC and SIOC in statistics mode are enabled on storage containers, disable them by performing the procedure described in Disabling Storage I/O Control (SIOC) on a Container.
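As a local illustration of the residue described above, the following sketch scans a datastore root for leftover SIOC lock files. It is a hypothetical helper that only inspects a path on disk; it is not the NCC check and does not talk to ESXi.

```python
import pathlib
import tempfile

def sioc_residue(datastore_root: str) -> list:
    """Return leftover SIOC lock files (.lck-*) under the .iorm.sf
    subdirectory of a datastore root, if any."""
    iorm = pathlib.Path(datastore_root) / ".iorm.sf"
    if not iorm.is_dir():
        return []
    return sorted(p.name for p in iorm.glob(".lck-*"))

# Demo against a throwaway directory standing in for a datastore root.
root = tempfile.mkdtemp()
(pathlib.Path(root) / ".iorm.sf").mkdir()
(pathlib.Path(root) / ".iorm.sf" / ".lck-0001abcd").touch()
print(sioc_residue(root))   # ['.lck-0001abcd']
```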

Disabling Storage I/O Control (SIOC) on a Container

About this task

Perform the following procedure to disable storage I/O statistics collection.

Procedure

  1. Log on to vCenter with the web client.
  2. Click the Storage view in the left pane.
  3. Right-click the storage container under the Nutanix cluster and select Configure Storage I/O Controller .
    The properties for the storage container are displayed. The Disable Storage I/O statistics collection option is unchecked, which means that SIOC is enabled by default.
    1. Select the Disable Storage I/O Control and statistics collection option to disable SIOC.
    2. Uncheck the Include I/O statistics for SDRS option.
    3. Click OK .

Node Management

This chapter describes the management tasks you can do on a Nutanix node.

Node Maintenance (ESXi)

You might need to gracefully place a node into maintenance mode or a non-operational state for reasons such as making changes to the network configuration of a node, performing manual firmware upgrades or replacements, or performing CVM maintenance or other maintenance operations.

Entering and Exiting Maintenance Mode

With a minimum AOS release of 6.1.2, 6.5.1, or 6.6, you can place only one node at a time in maintenance mode for each cluster. When a host is in maintenance mode, the CVM is placed in maintenance mode as part of the node maintenance operation and any associated RF1 VMs are powered off. The cluster marks the host as unschedulable so that no new VM instances are created on it. When a node is placed in maintenance mode from the Prism web console, an attempt is made to evacuate VMs from the host. If the evacuation attempt fails, the host remains in the "entering maintenance mode" state, where it is marked unschedulable, waiting for user remediation.

When a host is placed in the maintenance mode, the CVM is placed in the maintenance mode as part of the node maintenance operation. The non-migratable VMs (for example, pinned or RF1 VMs which have affinity towards a specific node) are powered-off while live migratable or high availability (HA) VMs are moved from the original host to other hosts in the cluster. After exiting the maintenance mode, all non-migratable guest VMs are powered on again and the live migrated VMs are automatically restored on the original host.
Note: VMs with CPU passthrough or PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live-migrated.

See Putting a Node into Maintenance Mode (vSphere) to place a node under maintenance.

You can also enter or exit a host under maintenance through the vCenter web client. See Putting the CVM and ESXi Host in Maintenance Mode Using vCenter.

Exiting a Node from Maintenance Mode

See Exiting a Node from the Maintenance Mode (vSphere) to remove a node from the maintenance mode.

Viewing a Node under Maintenance Mode

See Viewing a Node that is in Maintenance Mode to view the node under maintenance mode.

Guest VM Status when Node under Maintenance Mode

See Guest VM Status when Node is in Maintenance Mode to view the status of guest VMs when a node is undergoing maintenance operations.

Best Practices and Recommendations

Nutanix strongly recommends using the Enter Maintenance Mode option to place a node under maintenance.

Known Issues and Limitations ESXi

  • Maintenance operations initiated from the Prism web console (entering and exiting node maintenance) are currently supported on ESXi.
  • Entering or exiting a node under maintenance using the vCenter for ESXi is not equivalent to entering or exiting the node under maintenance from the Prism Element web console.
  • You cannot exit maintenance mode from the Prism Element web console if the node was placed in maintenance mode using vCenter (ESXi node). However, you can enter node maintenance through the Prism Element web console and exit node maintenance using vCenter (ESXi node).

Putting a Node into Maintenance Mode (vSphere)

Before you begin

Check the cluster status and resiliency before putting a node under maintenance. You can also verify the status of the guest VMs. See Guest VM Status when Node is in Maintenance Mode for more information.

About this task

As the node enters the maintenance mode, the following high-level tasks are performed internally.
  • The host initiates entering the maintenance mode.
  • The HA VMs are live migrated.
  • The pinned and RF1 VMs are powered-off.
  • The CVM enters the maintenance mode.
  • The CVM is shut down.
  • The host completes entering the maintenance mode.

For more information, see Guest VM Status when Node is in Maintenance Mode to view the status of the guest VMs.

Procedure

  1. Log on to the Prism Element web console.
  2. On the home page, select Hardware from the drop-down menu.
  3. Go to the Table > Host view.
  4. Select the node that you want to put under maintenance.
  5. Click the Enter Maintenance Mode option.
    Figure. Enter Maintenance Mode Option

  6. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next .
    Figure. Host Maintenance Window (vCenter Credentials)

  7. On the Host Maintenance window, select the Power off VMs that cannot migrate check box to enable the Enter Maintenance Mode button.
    Figure. Host Maintenance Window (Enter Maintenance Mode)

    Note: VMs with CPU passthrough, PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live-migrated.
  8. Click the Enter Maintenance Mode button.
    • A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates that the host is entering the maintenance mode.
    • The revolving icon disappears and the Exit Maintenance Mode option is enabled after the node completely enters the maintenance mode.
      Figure. Enter Node Maintenance (On-going)

    • You can also monitor the progress of the node maintenance operation through the newly created Host enter maintenance and Enter maintenance mode tasks which appear in the task tray.
    Note: If node maintenance fails, certain rollback operations are performed; for example, the CVM is rebooted. However, the live-migrated VMs are not restored to the original host.

What to do next

Once the maintenance activity is complete, you can perform any of the following.
  • View the nodes under maintenance, see Viewing a Node that is in Maintenance Mode.
  • View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.
  • Remove the node from maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere).

Viewing a Node that is in Maintenance Mode

About this task

Note: This procedure is the same for AHV and ESXi nodes.

Perform the following steps to view a node under maintenance.

Procedure

  1. Log on to the Prism Element web console.
  2. On the home page, select Hardware from the drop-down menu.
  3. Go to the Table > Host view.
  4. Observe the icon along with a tool tip that appears beside the node which is under maintenance. You can also view this icon in the host details view.
    Figure. Example: Node under Maintenance (Table and Host Details View) in AHV

  5. Alternatively, view the node under maintenance from the Hardware > Diagram view.
    Figure. Example: Node under Maintenance (Diagram and Host Details View) in AHV

What to do next

You can:
  • View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.
  • Remove the node from maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere).

Exiting a Node from the Maintenance Mode (vSphere)

After you perform any maintenance activity, exit the node from the maintenance mode.

About this task

As the node exits the maintenance mode, the following high-level tasks are performed internally.
  • The host is taken out of maintenance.
  • The CVM is powered on.
  • The CVM is taken out of maintenance.
After the host exits the maintenance mode, the RF1 VMs are powered on again and the live-migrated VMs move back to restore host locality.

For more information, see Guest VM Status when Node is in Maintenance Mode to view the status of the guest VMs.

Procedure

  1. On the Prism web console home page, select Hardware from the drop-down menu.
  2. Go to the Table > Host view.
  3. Select the node which you intend to remove from the maintenance mode.
  4. Click the Exit Maintenance Mode option.
    Figure. Exit Maintenance Mode Option - Table View

    Figure. Exit Maintenance Mode Option - Diagram View

  5. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next .
    Figure. Host Maintenance Window (vCenter Credentials)

  6. On the Host Maintenance window, click the Exit Maintenance Mode button.
    Figure. Host Maintenance Window (Exit Maintenance Mode)

    • A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates that the host is exiting the maintenance mode.
    • The revolving icon disappears and the Enter Maintenance Mode option is enabled after the node completely exits the maintenance mode.
    • You can also monitor the progress of the exit node maintenance operation through the newly created Host exit maintenance and Exit maintenance mode tasks which appear in the task tray.

What to do next

You can:
  • View the status of node under maintenance, see Viewing a Node that is in Maintenance Mode.
  • View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.

Guest VM Status when Node is in Maintenance Mode

The following scenarios demonstrate the behavior of three guest VM types - high availability (HA) VMs, pinned VMs, and RF1 VMs, when a node enters and exits a maintenance operation. The HA VMs are live VMs that can migrate across nodes if the host server goes down or reboots. The pinned VMs have the host affinity set to a specific node. The RF1 VMs have affinity towards a specific node or a CVM. To view the status of the guest VMs, go to VM > Table .

Note: The following scenarios are the same for AHV and ESXi nodes.

Scenario 1: Guest VMs before Node Entering Maintenance Mode

In this example, you can observe the status of the guest VMs on the node prior to the node entering the maintenance mode. All the guest VMs are powered-on and reside on the same host.

Figure. Example: Original State of VM and Hosts in AHV

Scenario 2: Guest VMs during Node Maintenance Mode

  • As the node enters the maintenance mode, the following high-level tasks are performed internally.
    1. The host initiates entering the maintenance mode.
    2. The HA VMs are live migrated.
    3. The pinned and RF1 VMs are powered off.
    4. The CVM enters the maintenance mode.
    5. The CVM is shut down.
    6. The host completes entering the maintenance mode.
Figure. Example: VM and Hosts before Entering Maintenance Mode

Scenario 3: Guest VMs after Node Exiting Maintenance Mode

  • As the node exits the maintenance mode, the following high-level tasks are performed internally.
    1. The CVM is powered on.
    2. The CVM is taken out of maintenance.
    3. The host is taken out of maintenance.
    After the host exits the maintenance mode, the RF1 VMs are powered on again and the live-migrated VMs move back to restore host locality.
Figure. Example: Original State of VM and Hosts in AHV
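The enter/exit behavior across the three scenarios can be sketched as a small state model. This is an illustrative simplification (function names and states are assumptions, not Prism or vSphere terminology): HA VMs live-migrate away, pinned and RF1 VMs power off, and on exit the VMs return to the original host.

```python
def enter_maintenance(vms: dict) -> dict:
    """vms maps VM name -> type ('ha', 'pinned', or 'rf1').
    HA VMs are live migrated; pinned and RF1 VMs are powered off."""
    return {name: ("migrated" if vm_type == "ha" else "powered_off")
            for name, vm_type in vms.items()}

def exit_maintenance(states: dict) -> dict:
    """Powered-off VMs restart, and migrated VMs move back to the
    original host to restore locality."""
    return {name: "running_on_original_host" for name in states}

states = enter_maintenance({"web": "ha", "db": "rf1", "lic": "pinned"})
print(states["db"])                       # powered_off
print(exit_maintenance(states)["web"])    # running_on_original_host
```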

Nonconfigurable ESXi Components

The Nutanix manufacturing and installation processes, done by running Foundation on the Nutanix nodes, configure the following components. Do not modify any of these components except under the direction of Nutanix Support.

Nutanix Software

Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • Local datastore name.
  • Configuration and contents of any CVM (except memory configuration to enable certain features).
Important: Note the following important considerations about CVMs.
  • Do not delete the Nutanix CVM.
  • Do not take a snapshot of the CVM for backup.
  • Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
  • Do not create additional CVM user accounts.

    Use the default accounts ( admin or nutanix ), or use sudo to elevate to the root account.

  • Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in features.

    Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and monitor CVM memory.

  • Nutanix does not support the usage of third-party storage on the host part of Nutanix clusters.

    Normal cluster operations might be affected if there are connectivity issues with the third-party storage you attach to the hosts in a Nutanix cluster.

  • Do not run any commands on a CVM that are not in the Nutanix documentation.

ESXi

Modifying any of the following ESXi settings can inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • NFS datastore settings
  • VM swapfile location
  • VM startup/shutdown order
  • CVM name
  • CVM virtual hardware configuration file (.vmx file)
  • iSCSI software adapter settings
  • Hardware settings, including passthrough HBA settings.

  • vSwitchNutanix standard virtual switch
  • vmk0 interface in Management Network port group
  • SSH
    Note: An SSH connection is necessary for various scenarios. For example, to establish connectivity with the ESXi server through a control plane that does not depend on additional management systems or processes. The SSH connection is also required to modify the networking and control paths in the case of a host failure to maintain High Availability. For example, the CVM autopathing (Ha.py) requires an SSH connection. In case a local CVM becomes unavailable, another CVM in the cluster performs the I/O operations over the 10GbE interface.
  • Open host firewall ports
  • CPU resource settings such as CPU reservation, limit, and shares of the CVM.
    Caution: Do not use the Reset System Configuration option.
  • ProductLocker symlink setting to point at the default datastore.

    Do not change the /productLocker symlink to point at a non-local datastore.

    Do not change the ProductLockerLocation advanced setting.

Putting the CVM and ESXi Host in Maintenance Mode Using vCenter

About this task

Nutanix recommends placing the CVM and ESXi host into maintenance mode while the Nutanix cluster undergoes maintenance or patch installations.
Caution: Verify the data resiliency status of your Nutanix cluster. Ensure that the replication factor (RF) supports putting the node in maintenance mode.

Procedure

  1. Log on to vCenter with the web client.
  2. If vSphere DRS is enabled on the Nutanix cluster, skip this step. If vSphere DRS is disabled, perform one of the following.
    • Manually migrate all the VMs except the CVM to another host in the Nutanix cluster.
    • Shut down VMs other than the CVM that you do not want to migrate to another host.
  3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
  4. In the Enter Maintenance Mode dialog box, check Move powered-off and suspended virtual machines to other hosts in the cluster and click OK .
    Note:

    In certain rare conditions, even when DRS is enabled, some VMs do not automatically migrate due to user-defined affinity rules or VM configuration settings. The VMs that do not migrate appear under cluster DRS > Faults when a maintenance mode task is in progress. To address the faults, either manually shut down those VMs or ensure the VMs can be migrated.

    Caution: When you put the host in maintenance mode, the maintenance mode process powers down or migrates all the VMs that are running on the host.
    The host gets ready to go into maintenance mode, which prevents VMs from running on this host. DRS automatically attempts to migrate all the VMs to another host in the Nutanix cluster.

    The host enters maintenance mode after its CVM is shut down.

Shutting Down an ESXi Node in a Nutanix Cluster

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster has only replication factor 2 (RF2), you can shut down only one node at a time per cluster. If you need to shut down more than one node in an RF2 cluster, shut down the entire cluster instead.

About this task

You can put the ESXi host into maintenance mode and shut it down either from the web client or from the command line. For more information about shutting down a node from the command line, see Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. Log on to vCenter with the web client.
  2. Put the Nutanix node in the maintenance mode. For more information, see Putting the CVM and ESXi Host in Maintenance Mode Using vCenter.
    Note: If DRS is not enabled, manually migrate or shut down all the VMs excluding the CVM. Even when DRS is enabled, some VMs might not be migrated automatically because of a VM configuration option that is not available on the target host.
  3. Right-click the host and select Shut Down .
    Wait until the vCenter displays that the host is not responding, which may take several minutes. If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line)

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the cluster is configured with replication factor 2 (RF2), you can shut down only one node at a time. If you need to shut down more than one node in an RF2 cluster, shut down the entire cluster instead.

About this task

Procedure

  1. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
  2. Log on to another CVM in the Nutanix cluster with SSH.
  3. Put the ESXi host into maintenance mode, and then shut it down.
    nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command returns no output. If it fails with a message like the following, VMs are probably still running on the host.

    CRITICAL esx-enter-maintenance-mode:42 Command vim-cmd hostsvc/maintenance_mode_enter failed with ret=-1

    Ensure that all VMs are shut down or moved to another host and try again before proceeding.

    nutanix@cvm$ ~/serviceability/bin/esx-shutdown -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Alternatively, you can put the ESXi host into maintenance mode and shut it down using the vSphere web client. For more information, see Shutting Down an ESXi Node in a Nutanix Cluster.

    If the host shuts down, a message like the following is displayed.

    INFO esx-shutdown:67 Please verify if ESX was successfully shut down using ping hypervisor_ip_addr

    hypervisor_ip_addr is the IP address of the ESXi host.

  4. Confirm that the ESXi host has shut down.
    nutanix@cvm$ ping hypervisor_ip_addr

    Replace hypervisor_ip_addr with the IP address of the ESXi host.

    If no ping packets are answered, the ESXi host has shut down.
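If you script this confirmation, polling is more reliable than a single ping. The following is a minimal sketch of a hypothetical helper, not a Nutanix-provided command:

```shell
# wait_for_host_down: hypothetical helper, not a Nutanix tool.
# Polls the given IP with ping until it stops answering, or gives up
# after max_tries attempts spaced interval seconds apart.
wait_for_host_down() {
  ip="$1"; max_tries="${2:-30}"; interval="${3:-5}"
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    if ! ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
      echo "host $ip is down"
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "host $ip still answers ping after $max_tries tries" >&2
  return 1
}
```

For example, `wait_for_host_down hypervisor_ip_addr` returns as soon as the host stops responding to ping.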

Starting an ESXi Node in a Nutanix Cluster

About this task

You can start an ESXi host either from the web client or from the command line. For more information about starting a node from the command line, see Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. If the node is off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step.
  2. Log on to vCenter (or to the node if vCenter is not running) with the web client.
  3. Right-click the ESXi host and select Exit Maintenance Mode .
  4. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  5. Log on to another CVM in the Nutanix cluster with SSH.
  6. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up .

  7. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.
  8. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM:host IP-Address Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209, 28495, 28496, 28503, 28510,
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
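When verifying this output in a script, you can flag any service whose state is not UP. The helper below is a sketch that assumes the `ServiceName  STATE  [pids]` column layout shown above:

```shell
# check_services_up: read `cluster status` output on stdin, print the name
# of any service whose state column is not UP, and exit nonzero if one is
# found. A sketch that assumes the "ServiceName  STATE  [pids]" layout.
check_services_up() {
  awk '$2 ~ /^[A-Z]+$/ && $2 != "UP" { print $1; bad = 1 }
       END { exit bad }'
}
```

`cluster status | check_services_up` prints nothing and exits 0 when every service is UP; otherwise it prints the offending service names and exits 1.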

Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line)

About this task

You can start an ESXi host either from the command line or from the web client. For more information about starting a node from the web client, see Starting an ESXi Node in a Nutanix Cluster .

Procedure

  1. Log on to a running CVM in the Nutanix cluster with SSH.
  2. Take the ESXi host out of maintenance mode, and then start the CVM.
    nutanix@cvm$ ~/serviceability/bin/esx-exit-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command produces no output. If it fails, wait 5 minutes and try again.

    nutanix@cvm$ ~/serviceability/bin/esx-start-cvm -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If the CVM starts, a message like the following is displayed.

    INFO esx-start-cvm:67 CVM started successfully. Please verify using ping cvm_ip_addr

    cvm_ip_addr is the IP address of the CVM on the ESXi host.

    After starting, the CVM restarts once. Wait three to four minutes before you ping the CVM.

    Alternatively, you can take the ESXi host out of maintenance mode and start the CVM using the web client. For more information, see Starting an ESXi Node in a Nutanix Cluster.

  3. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM:host IP-Address Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209, 28495, 28496, 28503, 28510,
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
  4. Verify the storage.
    1. Log on to the ESXi host with SSH.
    2. Rescan for datastores.
      root@esx# esxcli storage core adapter rescan --all
    3. Confirm that cluster VMFS datastores, if any, are available.
      root@esx# esxcfg-scsidevs -m | awk '{print $5}'

Restarting an ESXi Node using CLI

Before you begin

Shut down the guest VMs, including vCenter, that are running on the node, or move them to other nodes in the Nutanix cluster.

About this task

Procedure

  1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the web client.
  2. Right-click the host and select Maintenance mode > Enter Maintenance Mode .
    In the Confirm Maintenance Mode dialog box, click OK .
    The host is placed in maintenance mode, which prevents VMs from running on the host.
    Note: The host does not enter maintenance mode until the CVM is shut down.
  3. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, so that the cluster is aware that the CVM is unavailable.
  4. Right-click the node and select Power > Reboot .
    Wait until vCenter shows that the host is not responding and then is responding again, which takes several minutes.

    If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

  5. Right-click the ESXi host and select Exit Maintenance Mode .
  6. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  7. Log on to the CVM with SSH.
  8. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up .

  9. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.

Rebooting an ESXi Node in a Nutanix Cluster

About this task

The Request Reboot operation in the Prism web console gracefully restarts the selected nodes one after the other.

Perform the following procedure to restart the nodes in the cluster.

Procedure

  1. Click the gear icon in the main menu and then select Reboot in the Settings page.
  2. In the Request Reboot window, select the nodes you want to restart, and click Reboot .
    Figure. Request Reboot of ESXi Node

    A progress bar is displayed that indicates the progress of the restart of each node.

Changing an ESXi Node Name

After running a bare-metal Foundation, you can change the host (node) name from the command line or by using the vSphere web client.

To change the hostname, see VMware Documentation.

Changing an ESXi Node Password

Although it is not required for the root user to have the same password on all hosts (nodes), doing so makes cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

To change the host password, see VMware Documentation .

Changing the CVM Memory Configuration (ESXi)

About this task

You can increase the memory reserved for each CVM in your Nutanix cluster by using the 1-click CVM Memory Upgrade option available from the Prism Element web console.

Increase memory size depending on the workload type or to enable certain AOS features. For more information about CVM memory sizing recommendations and instructions about how to increase the CVM memory, see Increasing the Controller VM Memory Size in the Prism Web Console Guide .

VM Management

For the list of supported VMs, see Compatibility and Interoperability Matrix.

VM Management Using Prism Central

You can create and manage a VM on your ESXi from Prism Central. For more information, see Creating a VM through Prism Central (ESXi) and Managing a VM (ESXi).

Creating a VM through Prism Central (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

To create a VM, do the following:

Procedure

  1. Go to the List tab of the VMs dashboard (see VM Summary View in the Prism Central Guide ) and click the Create VM button.
    The Create VM wizard appears.
  2. In the Configuration step, do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Cluster : Select the target cluster from the pull-down list on which you intend to create the VM.
    4. Number of VMs : Enter the number of VMs you intend to create. The created VM names are suffixed sequentially.
    5. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    6. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    7. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
    Figure. Create VM Window (Configuration)

  3. In the Resources step, do the following.

    Disks: To attach a disk to the VM, click the Attach Disk button. The Add Disks dialog box appears. Do the following in the indicated fields:

    Figure. Add Disk Dialog Box

    1. Type : Select the type of storage device, Disk or CD-ROM , from the pull-down list.
    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from NDSF file to copy any file from the cluster that can be used as an image onto the disk.

      • [ CD-ROM only] Select Empty CD-ROM to create a blank CD-ROM device. A CD-ROM device is needed when you intend to provide a system image from CD-ROM.
      • [Disk only] Select Allocate on Storage Container to allocate space without specifying an image. Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
      • Select Clone from Image to copy an image that you have imported by using image service feature onto the disk.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE , SCSI , PCI , or SATA .
    4. Path : Enter the path to the desired system image.
    5. Clone from Image : Select the image that you have created by using the image service feature.

      This field appears only when Clone from Image is selected. It specifies the image to copy.

      Note: If the image you created does not appear in the list, see KB-4892.
    6. Storage Container : Select the storage container to use from the pull-down list.

      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.

    7. Capacity : Enter the disk size in GiB.
    8. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    9. Repeat this step to attach additional devices to the VM.
    Figure. Create VM Window (Resources)

  4. In the Resources step, do the following.

    Networks: To create a network interface for the VM, click the Attach to Subnet button. The Attach to Subnet dialog box appears.

    Do the following in the indicated fields:

    Figure. Attach to Subnet Dialog Box

    1. Subnet : Select the target virtual LAN from the pull-down list.

      The list includes all defined networks (see Configuring Network Connections in the Prism Central Guide ).

    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see vCenter Server Integration in the Prism Central Guide .

    3. Network Connection State : Select the state for the network that you want it to operate in after VM creation. The options are Connected or Disconnected .
    4. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    5. Repeat this step to create more network interfaces for the VM.
  5. In the Management step, do the following.
    1. Categories : Search for the category to be assigned to the VM. The policies associated with the category value are assigned to the VM.
    2. Guest OS : Type and select the guest operating system.

      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. For information about the list of supported operating systems, see vCenter Server Integration in the Prism Central Guide .

      Figure. Create VM Window (Management)

  6. In the Review step, when all the field entries are correct, click the Create VM button to create the VM and close the Create VM dialog box.

    The new VM appears in the VMs entity page list.

Managing a VM (ESXi)

You can manage virtual machines (VMs) in an ESXi cluster through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

After creating a VM (see Creating a VM through Prism Central (ESXi)), you can use Prism Central to update the VM configuration, delete the VM, clone the VM, launch a console window, power on (or off) the VM, enable flash mode for a VM, assign the VM to a protection policy, create VM recovery point, add the VM to a recovery plan, run a playbook, manage categories, install and manage Nutanix Guest Tools (NGT), manage the VM ownership, or configure QoS settings.

You can perform these tasks by using any of the following methods:

  • Select the target VM in the List tab of the VMs dashboard (see VM Summary View in the Prism Central Guide ) and choose the required action from the Actions menu.
  • Right-click on the target VM in the List tab of the VMs dashboard and select the required action from the drop-down list.
  • Go to the details page of a selected VM (see VM Details View in the Prism Central Guide ) and select the desired action.
Note: The available actions appear in bold; other actions are unavailable. The available actions depend on the current state of the VM and your permissions.

Procedure

  • To modify the VM configuration, select Update .

    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Make the desired changes and then click the Save button in the Review step.

    Figure. Update VM Window (Resources)

  • Disks: You can add new disks to the VM using the Attach Disk option. You can also modify the existing disk attached to the VM using the controls under the actions column. See Creating a VM through Prism Central (ESXi) before you create a new disk for a VM. You can enable or disable the flash mode settings for the VM. To enable flash mode on the VM, click the Enable Flash Mode check box. After you enable this feature on the VM, the status is updated in the VM table view.
  • Networks: You can attach new network to the VM using the Attach to Subnet option. You can also modify the existing subnet attached to the VM. See Creating a VM through Prism Central (ESXi) before you modify NIC network or create a new NIC for a VM.
  • To delete the VM, select Delete . A window prompt appears; click the OK button to delete the VM.
  • To clone the VM, select Clone .

    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned VM inherits most of the configuration (except the name) of the source VM. Enter a name for the clone and then click the Save button to create the clone. You can optionally override some of the configurations before clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority, NICs, or the guest customization.

    Note:
    • You can clone up to 250 VMs at a time.
    • You cannot override the secure boot setting while cloning a VM unless secure boot is already enabled on the source VM.
    Figure. Clone VM Window

  • To launch a console window, select Launch Console .

    This opens a Virtual Network Computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power On Actions (or Power Off Actions ) action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.

    Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser is Chrome. (Firefox typically works best.)
    Figure. Console Window (VNC)

  • To power on (or off) the VM, select Power on (or Power off ).
  • To disable (or enable) efficiency measurement for the VM, select Disable Efficiency Measurement (or Enable Efficiency Measurement ).
  • To disable (or enable) anomaly detection for the VM, select Disable Anomaly Detection (or Enable Anomaly Detection ).
  • To assign the VM to a protection policy, select Protect . This opens a page to specify the protection policy to which this VM should be assigned. To remove the VM from a protection policy, select Unprotect .
    Note: You can create a protection policy for a VM or set of VMs that belong to one or more categories by enabling Leap and configuring the Availability Zone.
  • To migrate the VM to another host, select Migrate .

    This displays the Migrate VM dialog box. Select the target host from the pull-down list (or select the System will automatically select a host option to let the system choose the host) and then click the Migrate button to start the migration.

    Figure. Migrate VM Window

    Note: Nutanix recommends live migrating VMs while they are under light load. If VMs are migrated while heavily utilized, the migration may fail because of limited bandwidth.
  • To add this VM to a recovery plan you created previously, select Add to Recovery Plan . For more information, see Adding Guest VMs Individually to a Recovery Plan in the Leap Administration Guide .
  • To create VM recovery point, select Create Recovery Point .

    This displays the Create VM Recovery Point dialog box. Enter a name for the recovery point. To create an application-consistent recovery point, select the App Consistent check box. The VM can then be restored or replicated, locally or remotely, to the state captured in the chosen recovery point.

    Figure. Create VM Recovery Point Window

  • To run a playbook you created previously, select Run Playbook . For more information, see Running a Playbook (Manual Trigger) in the Prism Central Guide .
  • To assign the VM a category value, select Manage Categories .

    This displays the Manage VM Categories page. For more information, see Assigning a Category in the Prism Central Guide .

  • To install Nutanix Guest Tools (NGT), select Install NGT . For more information, see Installing NGT on Multiple VMs in the Prism Central Guide .
  • To enable (or disable) NGT, select Manage NGT Applications . For more information, see Managing NGT Applications in the Prism Central Guide .
    The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
    Note: If you clone a VM, NGT is not enabled on the cloned VM by default. You need to enable and mount NGT again on the cloned VM. If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide .

    If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.

    ncli> ngt mount vm-id=virtual_machine_id

    For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

    ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
    Note:
    • The self-service restore feature is not enabled by default on a VM. You need to enable it manually.
    • If you have created the NGT ISO CD-ROMs on AOS 4.6 or earlier releases, the NGT functionality will not work even if you upgrade your cluster because REST APIs have been disabled. You need to unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to a later AOS version.
  • To upgrade NGT, select Upgrade NGT . For more information, see Upgrading NGT in the Prism Central Guide .
  • To establish VM host affinity, select Configure VM Host Affinity .

    A window appears with the available hosts. Select (click the icon for) one or more of the hosts and then click the Save button. This creates an affinity between the VM and the selected hosts. If possible, it is recommended that you create an affinity to multiple hosts (at least two) to protect against downtime due to a node failure. For more information about VM affinity policies, see Affinity Policies Defined in Prism Central in the Prism Central Guide .

    Figure. Set VM Host Affinity Window

  • To add a VM to the catalog, select Add to Catalog . This displays the Add VM to Catalog page. For more information, see Adding a Catalog Item in the Prism Central Guide .
  • To specify a project and user who own this VM, select Manage Ownership .
    In the Manage VM Ownership window, do the following in the indicated fields:
    1. Project : Select the target project from the pull-down list.
    2. User : Enter a user name. A list of matches appears as you enter a string; select the user name from the list when it appears.
    3. Click the Save button.
    Figure. VM Ownership Window

  • To configure quality of service (QoS) settings, select Set QoS Attributes . For more information, see Setting QoS for an Individual VM in the Prism Central Guide .

VM Management using Prism Element

You can create and manage a VM on your ESXi from Prism Element. For more information, see Creating a VM (ESXi) and Managing a VM (ESXi).

Creating a VM (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through the web console.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter Server.

About this task

When creating a VM, you can configure all of its components, such as number of vCPUs and memory, but you cannot attach a volume group to the VM.

To create a VM, do the following:

Procedure

  1. In the VM dashboard (see VM Dashboard), click the Create VM button.
    The Create VM dialog box appears.
  2. Do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Guest OS : Type and select the guest operating system.
      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. For information about the list of supported operating systems, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .
    4. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    5. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    6. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
  3. To attach a disk to the VM, click the Add New Disk button.
    The Add Disks dialog box appears.
    Figure. Add Disk Dialog Box

    Do the following in the indicated fields:
    1. Type : Select the type of storage device, DISK or CD-ROM , from the pull-down list.
      The following fields and options vary depending on whether you choose DISK or CD-ROM .
    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
      • Select Allocate on Storage Container to allocate space without specifying an image. (This option appears only when DISK is selected in the previous field.) Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE or SCSI .
    4. ADSF Path : Enter the path to the desired system image.
      This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the path name as /storage_container_name/vmdk_name.vmdk. For example, to clone an image from myvm-flat.vmdk in a storage container named crt1, enter /crt1/myvm-flat.vmdk. When you type the storage container name (/storage_container_name/), a list of the VMDK files in that storage container appears (assuming one or more VMDK files have previously been copied to that storage container).
      Note: Make sure you are copying from a flat file.
    5. Storage Container : Select the storage container to use from the pull-down list.
      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.
    6. Size : Enter the disk size in GiBs.
    7. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    8. Repeat this step to attach more devices to the VM.
  4. To create a network interface for the VM, click the Add New NIC button.
    The Create NIC dialog box appears. Do the following in the indicated fields:
    1. VLAN Name : Select the target virtual LAN from the pull-down list.
      The list includes all defined networks. For more information, see Network Configuration for VM Interfaces in the Prism Web Console Guide .
    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .

    3. Network UUID : This is a read-only field that displays the network UUID.
    4. Network Address/Prefix : This is a read-only field that displays the network IP address and prefix.
    5. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    6. Repeat this step to create more network interfaces for the VM.
  5. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog box.
    The new VM appears in the VM table view. For more information, see VM Table View in the Prism Web Console Guide .

Managing a VM (ESXi)

You can use the web console to manage virtual machines (VMs) in the ESXi clusters.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter Server.

About this task

After creating a VM, you can use the web console to manage guest tools, power operations, suspend, launch a VM console window, update the VM configuration, clone the VM, or delete the VM. To accomplish one or more of these tasks, do the following:

Note: The available options depend on the VM status, type, and permissions. Options that do not apply are grayed out.

Procedure

  1. In the VM dashboard (see VM Dashboard), click the Table view.
  2. Select the target VM in the table (top section of screen).
    The summary line (middle of screen) displays the VM name with a set of relevant action links on the right. You can also right-click on a VM to select a relevant action.

    The possible actions are Manage Guest Tools , Launch Console , Power on (or Power off actions ), Suspend (or Resume ), Clone , Update , and Delete . The following steps describe how to perform each action.

    Figure. VM Action Links

  3. To manage guest tools, click Manage Guest Tools .
    You can also enable NGT applications (self-service restore, volume snapshot service, and application-consistent snapshots) as part of managing guest tools.
    1. Select the Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
    2. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
      Ensure that the VM has at least one empty IDE CD-ROM or SATA slot to attach the ISO.

    3. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
      The self-service restore feature is enabled on the VM. The guest VM administrator can restore the desired file or files from the VM. For more information about the self-service restore feature, see Self-Service Restore in the Data Protection and Recovery with Prism Element guide.

    4. After you select the Enable Nutanix Guest Tools check box, the VSS and application-consistent snapshot feature is enabled by default.
      After this feature is enabled, the Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent takes application-consistent snapshots for all VMs that support VSS. This mechanism takes application-consistent snapshots without VM stuns (temporarily unresponsive VMs) and also enables third-party backup providers, such as Commvault and Rubrik, to take application-consistent snapshots on the Nutanix platform in a hypervisor-agnostic manner. For more information, see Conditions for Application-consistent Snapshots in the Data Protection and Recovery with Prism Element guide.

    5. To mount VMware guest tools, select the Mount VMware Guest Tools check box.
      The VMware guest tools are mounted on the VM.
      Note: You can mount both VMware guest tools and Nutanix Guest Tools at the same time on a particular VM provided the VM has sufficient empty CD-ROM slots.
    6. Click Submit .
      The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
      Note:
      • If you clone a VM, NGT is not enabled on the cloned VM by default. If the cloned VM is powered off, enable NGT from the UI and start the VM. If the cloned VM is powered on, enable NGT from the UI and restart the Nutanix Guest Agent service.
      • If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide .
      If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.
      ncli> ngt mount vm-id=virtual_machine_id

      For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

      ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
      Caution: In AOS 4.6, for powered-on Linux VMs on AHV, ensure that the NGT ISO is ejected or unmounted within the guest VM before disabling NGT by using the web console. This issue is specific to AOS 4.6 and does not occur in AOS 4.6.x or later releases.
      Note: If you created the NGT ISO CD-ROMs prior to AOS 4.6, the NGT functionality does not work even if you upgrade your cluster, because the REST APIs have been disabled. You must unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to version 4.6 or later.
  4. To launch a VM console window, click the Launch Console action link.
    This opens a virtual network computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power Off Actions action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.
    Note: A VNC client might not function properly in all browsers. Some keys are not recognized in Google Chrome. (Firefox typically works best.)
  5. To start (or shut down) the VM, click the Power on (or Power off ) action link.

    Power on begins immediately. If you shut down the VM, you are prompted to select one of the following options:

    • Power Off . Hypervisor performs a hard shut down action on the VM.
    • Reset . Hypervisor performs an ACPI reset action through the BIOS on the VM.
    • Guest Shutdown . Operating system of the VM performs a graceful shutdown.
    • Guest Reboot . Operating system of the VM performs a graceful restart.
    Note: The Guest Shutdown and Guest Reboot options are available only when VMware guest tools are installed.
  6. To pause (or resume) the VM, click the Suspend (or Resume ) action link. This option is available only when the VM is powered on.
  7. To clone the VM, click the Clone action link.

    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned VM inherits most of the configuration (except the name) of the source VM. Enter a name for the clone and then click the Save button to create the clone. You can optionally override some of the configuration before clicking the Save button; for example, the number of vCPUs, memory size, boot priority, NICs, or the guest customization.

    Note:
    • You can clone up to 250 VMs at a time.
    • In the Clone window, you cannot update the disks.
    Figure. Clone VM Window

  8. To modify the VM configuration, click the Update action link.
    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Modify the configuration as needed (see Creating a VM (ESXi)), and in addition you can enable Flash Mode for the VM.
    Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with that vDisk is not reclaimed unless you also delete the VM snapshots.
    1. Click the Enable Flash Mode check box.
      • After you enable this feature on the VM, the status is updated in the VM table view. To view the status of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table view.
      • You can disable the Flash Mode feature for individual virtual disks. To update the Flash Mode for individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash Mode check box.
      Figure. Update VM Resources - VM Flash Mode

      Figure. Update VM Resources - VM Disk Flash Mode

  9. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the VM.
    The deleted VM disappears from the list of VMs in the table. You can also delete a VM that is already powered on.
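When NGT must be remounted on several VMs (for example, after cloning), the nCLI command shown earlier can be scripted from a Controller VM. The following is a dry-run sketch: the `run` wrapper only prints each command, and the VM IDs are placeholders; drop the wrapper to actually execute the commands.

```shell
#!/bin/sh
# Dry-run sketch: print the nCLI commands that would remount NGT on a
# list of VMs. The VM IDs below are placeholders; replace them with
# real vm-ids and remove the echo wrapper to run this on a Controller VM.
run() { echo "+ $*"; }   # echo instead of executing (dry run)

VM_IDS="vm-id-placeholder-1 vm-id-placeholder-2"

for id in $VM_IDS; do
    run ncli ngt mount vm-id="$id"
done
```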

vDisk Provisioning Types in VMware with Nutanix Storage

You can specify the vDisk provisioning policy when you perform certain VM management operations like creating a VM, migrating a VM, or cloning a VM.

Traditionally, a vDisk is provisioned either with all of its space allocated up front (thick provisioning) or with space allocated on an as-needed basis (thin provisioning). Thick disks are formatted using either the lazy zeroed or the eager zeroed method.

On traditional storage systems, thick eager zeroed disks provide the best performance of the three provisioning types, thick lazy zeroed disks the second best, and thin disks the least. However, this does not apply to the modern storage architecture of Nutanix systems.

Nutanix uses a thick Virtual Machine Disk (VMDK) to reserve the storage space using the vStorage APIs for Array Integration (VAAI) reserve space API.

On a Nutanix system, there is no performance difference between thin and thick disks. This means that a thick eager zeroed virtual disk has no performance benefits over a thin virtual disk.

Whether a disk is configured as thin or thick, the resulting disk on Nutanix storage behaves the same (despite the configuration differences).

Note: A thick-disk reservation serves only to reserve disk space; there is no performance reason to provision a thick disk on a Nutanix system. Even when a thick disk is provisioned on a Nutanix container, no disk space is allocated for writing zeroes. Therefore, there is no requirement to provision thick disks.

When using the up-to-date VAAI for cloning operations, the following behavior is expected:

  • When cloning a disk of any format (thin, thick lazy zeroed, or thick eager zeroed) to the same Nutanix datastore, the resulting VM has a thin disk, regardless of the disk format explicitly chosen in the vSphere client.

    Nutanix uses a thin provisioned disk because a thin disk performs the same as a thick disk on the system, and thin provisioning prevents disk space from being wasted. In this cloning scenario, Nutanix prevents the reservation property from flowing from the source to the destination when creating a fast clone on the same datastore, which avoids wasting space on an unnecessary reservation.

  • When cloning a VM to a different datastore, the destination VM has the disk format that you specified in the vSphere client.
    Important: A thick disk is shown as thick in ESXi, while within NDFS (Nutanix Distributed File System) it is shown as a thin disk with an extra configuration field.

Nutanix recommends using thin disks over any other disk type.
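The thin-versus-thick behavior described above can be illustrated with any sparse file, since a thin disk is essentially a file whose blocks are allocated on write. This sketch uses plain Linux tools rather than an ESXi datastore:

```shell
# Create a sparse ("thin") 1 GiB file: the apparent size is the full
# provisioned size, but almost no blocks are allocated yet.
truncate -s 1G thin.img

ls -l thin.img   # apparent (provisioned) size: 1073741824 bytes
du -k thin.img   # actual blocks consumed: ~0 KiB

# Writing data allocates real blocks only for the range written.
dd if=/dev/zero of=thin.img bs=1M count=10 conv=notrunc status=none

du -k thin.img   # actual blocks consumed: ~10240 KiB
```

The same principle underlies the Nutanix recommendation: because writes allocate space on demand either way, a thick format's up-front reservation buys nothing on this storage fabric.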

VM Migration

You can migrate a VM to an ESXi host in a Nutanix cluster. Migration is typically performed in the following cases:

  • To migrate VMs from an existing storage platform to Nutanix.
  • To keep VMs running during a disruptive upgrade or other downtime of a Nutanix cluster.

In migrating VMs between Nutanix clusters running vSphere, the source host and NFS datastore are the ones presently running the VM. The target host and NFS datastore are the ones where the VM runs after migration. The target ESXi host and datastore must be part of a Nutanix cluster.

To accomplish this migration, you have to mount the NFS datastores from the target on the source. After the migration is complete, you must unmount the datastores and block access.
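The mount-and-unmount flow can be sketched with `esxcli` on the source host. This is a dry run (the `run` wrapper only prints each command), and the address, container, and datastore names are placeholders; depending on your environment, you may also need to add the source host's IP address to the target cluster's filesystem allowlist before the mount succeeds.

```shell
#!/bin/sh
# Dry-run sketch: mount the target Nutanix NFS datastore on the source
# ESXi host before migration, and unmount it afterward. All names and
# addresses below are placeholders.
run() { echo "+ $*"; }   # echo instead of executing (dry run)

TARGET_IP="192.0.2.10"         # placeholder: target cluster address
CONTAINER="/target-container"  # placeholder: Nutanix storage container
DATASTORE="target-ds"          # placeholder: datastore name on the host

# Before migration: mount the target datastore on the source host.
run esxcli storage nfs add -H "$TARGET_IP" -s "$CONTAINER" -v "$DATASTORE"

# After migration: unmount the datastore and block access again.
run esxcli storage nfs remove -v "$DATASTORE"
```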

Migrating a VM to Another Nutanix Cluster

Before you begin

Before migrating a VM to another Nutanix cluster running vSphere, verify that you have provisioned the target Nutanix environment.

About this task

The shared storage feature in vSphere allows you to move both compute and storage resources from the source legacy environment to the target Nutanix environment at the same time without disruption. This feature also removes the need to configure any file system allowlists on Nutanix.

You can use the shared storage feature through the migration wizard in the web client.

Procedure

  1. Log on to vCenter with the web client.
  2. Select the VM that you want to migrate.
  3. Right-click the VM and select Migrate .
  4. Under Select Migration Type , select Change both compute resource and storage .
  5. Select Compute Resource and then Storage and click Next .
    If necessary, change the disk format to the one that you want to use during the migration process.
  6. Select a destination network for all VM network adapters and click Next .
  7. Click Finish .
    Wait for the migration process to complete. The process performs the storage vMotion first, and then creates a temporary storage network over vmk0 for the period where the disk files are on Nutanix.

Cloning a VM

About this task

To clone a VM, you must enable the Nutanix VAAI plug-in. For the steps to enable and verify the Nutanix VAAI plug-in, see KB-1868.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the VM and select Clone .
  3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.
  4. Select the datastore that contains the source VM and click Next .
    Note: If you choose a datastore other than the one that contains the source VM, the clone operation uses the VMware implementation and not the Nutanix VAAI plug-in.
  5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
  6. Click Finish .

vStorage APIs for Array Integration

To improve the vSphere cloning process, Nutanix provides a vStorage API for array integration (VAAI) plug-in. This plug-in is installed by default during the Nutanix factory process.

Without the Nutanix VAAI plug-in, the process of creating a full clone takes a significant amount of time because all the data that comprises a VM is duplicated. This duplication also results in an increase in storage consumption.

The Nutanix VAAI plug-in efficiently makes full clones without reserving space for the clone. Read requests for blocks shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the clone was created.

vSphere ESXi Hardening Settings

Configure the following settings in /etc/ssh/sshd_config to harden an ESXi hypervisor in a Nutanix cluster.
Caution: When hardening ESXi security, some settings may impact operations of a Nutanix cluster.
HostbasedAuthentication no
PermitTunnel no
AcceptEnv
GatewayPorts no
Compression no
StrictModes yes
KerberosAuthentication no
GSSAPIAuthentication no
PermitUserEnvironment no
PermitEmptyPasswords no
PermitRootLogin no

Match Address x.x.x.11,x.x.x.12,x.x.x.13,x.x.x.14,192.168.5.0/24
PermitRootLogin yes
PasswordAuthentication yes
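Before copying a hardened sshd_config to a host, it helps to verify that every global directive is present. The following sketch stages the settings in a local file and checks only the section before the Match block (directives inside a Match block override the globals). The Match address list here is shortened to the internal CVM network; substitute your own management addresses as shown above, and restart the SSH service on the host after any change.

```shell
#!/bin/sh
# Sketch: stage the hardened settings locally, then verify that each
# global directive appears before the Match block. On the host itself
# the file is /etc/ssh/sshd_config, and the SSH service must be
# restarted after changes.
CONF="sshd_config.hardened"

cat > "$CONF" <<'EOF'
HostbasedAuthentication no
PermitTunnel no
GatewayPorts no
Compression no
StrictModes yes
KerberosAuthentication no
GSSAPIAuthentication no
PermitUserEnvironment no
PermitEmptyPasswords no
PermitRootLogin no

Match Address 192.168.5.0/24
PermitRootLogin yes
PasswordAuthentication yes
EOF

# Only lines before the first Match block count as global settings.
missing=0
while read -r directive; do
    sed '/^Match /q' "$CONF" | grep -qx "$directive" || {
        echo "MISSING: $directive"
        missing=1
    }
done <<'EOF'
HostbasedAuthentication no
PermitTunnel no
PermitEmptyPasswords no
PermitRootLogin no
EOF

[ "$missing" -eq 0 ] && echo "Hardening directives present."
```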

ESXi Host Upgrade

You can upgrade your host either automatically through Prism Element (1-click upgrade) or manually. For more information about automatic and manual upgrades, see ESXi Upgrade and ESXi Host Manual Upgrade respectively.

This section describes the Nutanix hypervisor support policy for vSphere and Hyper-V hypervisor releases. Nutanix provides hypervisor compatibility and support statements that you should review before planning an upgrade to a new release or applying a hypervisor update or patch:
  • Compatibility and Interoperability Matrix
  • Hypervisor Support Policy- See Support Policies and FAQs for the supported Acropolis hypervisors.

Also review the Nutanix Field Advisory page for critical issues that Nutanix may have uncovered with the hypervisor release being considered.

Note: You may need to log in to the Support Portal to view the links above.

The Acropolis Upgrade Guide provides steps that can be used to upgrade the hypervisor hosts. However, as noted in the documentation, the customer is responsible for reviewing the guidance from VMware or Microsoft, respectively, on other component compatibility and upgrade order (e.g. vCenter), which needs to be planned first.

ESXi Upgrade

These topics describe how to upgrade your ESXi hypervisor host through the Prism Element web console Upgrade Software feature (also known as 1-click upgrade). To install or upgrade VMware vCenter server or other third-party software, see your vendor documentation for this information.

AOS supports ESXi hypervisor upgrades that you can apply through the web console Upgrade Software feature (also known as 1-click upgrade).

You can view the available upgrade options, start an upgrade, and monitor upgrade progress through the web console. In the main menu, click the gear icon, and then select Upgrade Software in the Settings panel. You can see the current status of your software versions and start an upgrade.

VMware ESXi Hypervisor Upgrade Recommendations and Limitations

  • To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.
  • Always consult the VMware web site for any vCenter and hypervisor installation dependencies. For example, a hypervisor version might require that you upgrade vCenter first.
  • If you have not enabled DRS in your environment and want to upgrade the ESXi host, you need to upgrade the ESXi host manually. For more information about upgrading ESXi hosts manually, see ESXi Host Manual Upgrade in the vSphere Administration Guide .
  • Disable Admission Control before upgrading ESXi on AOS; if it is enabled, the upgrade process fails. Re-enable it after the upgrade for normal cluster operation.
Nutanix Support for ESXi Upgrades
Nutanix qualifies specific VMware ESXi hypervisor updates and provides a related JSON metadata upgrade file on the Nutanix Support Portal for one-click upgrade through the Prism web console Software Upgrade feature.

Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files. Obtain ESXi offline bundles (not ISOs) from the VMware web site.

Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ. For updates that are made available by VMware that do not have a Nutanix-provided JSON metadata upgrade file, obtain the offline bundle and md5sum checksum available from VMware, then use the web console Software Upgrade feature to upgrade ESXi.
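A quick local checksum comparison before uploading catches corrupted downloads. This sketch defines a small helper; the bundle name and checksum in the commented example are placeholders, and the expected value must be the MD5 published on the VMware site, not one generated from your own download.

```shell
#!/bin/sh
# Sketch: compare a downloaded ESXi offline bundle against the MD5
# checksum published by VMware before uploading it through the
# Upgrade Software dialog.
verify_bundle() {
    bundle="$1"
    expected="$2"
    actual=$(md5sum "$bundle" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "Checksum OK: $bundle"
    else
        echo "Checksum MISMATCH for $bundle: re-download the bundle" >&2
        return 1
    fi
}

# Example invocation (both arguments are placeholders):
# verify_bundle update-from-esxi6.0-6.0_update02.zip <md5-from-vmware>
```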

Mixing nodes with different processor (CPU) types in the same cluster
If you are mixing nodes with different processor (CPU) types in the same cluster, you must enable enhanced vMotion compatibility (EVC) to allow vMotion (live migration) of VMs during the hypervisor upgrade. For example, if your cluster includes a node with a Haswell CPU and other nodes with Broadwell CPUs, open vCenter, enable the VMware enhanced vMotion compatibility (EVC) setting, and specifically enable EVC for Intel hosts.
CPU Level for Enhanced vMotion Compatibility (EVC)

AOS Controller VMs and Prism Central VMs require a minimum CPU micro-architecture version of Intel Sandy Bridge. For AOS clusters with ESXi hosts, or when deploying Prism Central VMs on any ESXi cluster: if you have set the vSphere cluster enhanced vMotion compatibility (EVC) level, the minimum level must be L4 - Sandy Bridge .

vCenter Requirements and Limitations
Note: You might be unable to log in to vCenter Server because the /storage/seat partition for vCenter Server version 7.0 and later might become full due to a large number of SSH-related events. See KB 10830 at the Nutanix Support portal for symptoms and solutions to this issue.
  • If your cluster is running the ESXi hypervisor and is also managed by VMware vCenter, you must provide vCenter administrator credentials and vCenter IP address as an extra step before upgrading. Ensure that ports 80 / 443 are open between your cluster and your vCenter instance to successfully upgrade.
  • If you have just registered your cluster in vCenter, do not perform any cluster upgrades (AOS, Controller VM memory, hypervisor, and so on). Wait at least one hour before performing upgrades to allow the cluster settings to be updated. Also, do not register the cluster in vCenter and perform upgrades at the same time.
  • Cluster mapped to two vCenters: Upgrading software through the web console (1-click upgrade) does not support configurations where a cluster is mapped to two vCenters or where it includes must-type host-affinity rules for VMs.

    Ensure that enough cluster resources are available for live migration to occur and to allow hosts to enter maintenance mode.

Mixing Different Hypervisor Versions
For ESXi hosts, mixing different hypervisor versions in the same cluster is temporarily allowed for deferring a hypervisor upgrade as part of an add-node/expand cluster operation, reimaging a node as part of a break-fix procedure, planned migrations, and similar temporary operations.

Upgrading ESXi Hosts by Uploading Binary and Metadata Files

About this task

Do the following steps to download Nutanix-qualified ESXi metadata .JSON files and upgrade the ESXi hosts through Upgrade Software in the Prism Element web console. Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files.

Procedure

  1. Before performing any upgrade procedure, make sure you are running the latest version of the Nutanix Cluster Check (NCC) health checks and upgrade NCC if necessary.
  2. Run NCC as described in Run NCC Checks .
  3. Log on to the Nutanix support portal and navigate to the Hypervisors Support page from the Downloads menu, then download the Nutanix-qualified ESXi metadata .JSON files to your local machine or media.
    1. The default view is All . From the drop-down menu, select Nutanix - VMware ESXi , which shows all available JSON versions.
    2. From the release drop-down menu, select the available ESXi version. For example, 7.0.0 u2a .
    3. Click Download to download the Nutanix-qualified ESXi metadata .JSON file.
    Figure. Downloads Page for ESXi Metadata JSON
  4. Log on to the Prism Element web console for any node in the cluster.
  5. Click the gear icon in the main menu, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  6. Click the upload the Hypervisor binary link.
  7. Click Choose File for the metadata JSON (obtained from Nutanix) and binary files (obtained from VMware), respectively, browse to the file locations, select the file, and click Upload Now .
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .
  10. On the LCM page, click Inventory > Perform Inventory to enable LCM to check, update, and display the inventory information.
    For more information, see Performing Inventory With LCM in the Acropolis Upgrade Guide .

Upgrading ESXi by Uploading An Offline Bundle File and Checksum

About this task

  • Do the following steps to download a non-Nutanix-qualified (patch) ESXi upgrade offline bundle from VMware, then upgrade ESXi through Upgrade Software in the Prism Element web console.
  • Typically you perform this procedure to patch your version of ESXi and Nutanix has not yet officially qualified that new patch version. Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases.

Procedure

  1. From the VMware web site, download the offline bundle (for example, update-from-esxi6.0-6.0_update02.zip ) and copy the associated MD5 checksum. Ensure that this checksum is obtained from the VMware web site, not manually generated from the bundle by you.
  2. Save the files to your local machine or media, such as a USB drive or other portable media.
  3. Log on to the Prism Element web console for any node in the cluster.
  4. Click the gear icon in the main menu of the Prism Element web console, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  5. Click the upload the Hypervisor binary link.
  6. Click enter md5 checksum and copy the MD5 checksum into the Hypervisor MD5 Checksum field.
  7. Scroll down and click Choose File for the binary file, browse to the offline bundle file location, select the file, and click Upload Now .
    Figure. ESXi 1-Click Upgrade, Unqualified Bundle
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .

ESXi Host Manual Upgrade

If you have not enabled DRS in your environment and want to upgrade the ESXi host, you must upgrade the ESXi host manually. This topic describes all the requirements that you must meet before manually upgrading the ESXi host.

Tip: If you have enabled DRS and want to upgrade the ESXi host, use the one-click upgrade procedure from the Prism web console. For more information on the one-click upgrade procedure, see the ESXi Upgrade.

Nutanix supports the ability to patch upgrade the ESXi hosts with the versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ.

Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading ESXi does not require cluster downtime.

  • If you want to avoid cluster interruption, you must finish upgrading a host and confirm that its CVM is running before you upgrade the next host. If two hosts in a cluster are down at the same time, data becomes unavailable.
  • If you want to minimize the duration of the upgrade activities and cluster downtime is acceptable, you can stop the cluster and upgrade all hosts at the same time.
Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate the failure of a single node or drive. Nutanix clusters with a configured option of redundancy factor 3 allow the Nutanix cluster to withstand the failure of two nodes or drives in different blocks.
  • Never shut down or restart multiple Controller VMs or hosts simultaneously.
  • Always run the cluster status command to verify that all Controller VMs are up before performing a Controller VM or host shutdown or restart.

ESXi Host Upgrade Process

Perform the following process to upgrade ESXi hosts in your environment.

Prerequisites and Requirements

Note: Use the following process only if you do not have DRS enabled in your Nutanix cluster.
  • If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the cluster with the cluster stop command.
    Caution: There is downtime if you upgrade all the nodes in the Nutanix cluster at once. If you do not want downtime in your environment, you must ensure that only one CVM is shut down at a time in a redundancy factor 2 configuration.
  • If you are upgrading the nodes while keeping the cluster running, ensure that all nodes are up by logging on to a CVM and running the cluster status command. If any nodes are not running, start them before proceeding with the upgrade. Shut down all guest VMs on the node or migrate them to other nodes in the Nutanix cluster.
  • Disable email alerts in the web console under Email Alert Services or with the nCLI command.
    ncli> alerts update-alert-config enable=false
  • Run the complete NCC health check by using the health check command.
    nutanix@cvm$ ncc health_checks run_all
  • Run the cluster status command to verify that all Controller VMs are up and running, before performing a Controller VM or host shutdown or restart.
    nutanix@cvm$ cluster status
  • Place the host in the maintenance mode by using the web client.
  • Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command; this ensures that the cluster is aware that the CVM is unavailable.
  • Start the upgrade by following the vSphere Upgrade Guide or by using vCenter Update Manager (VUM).
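The "all CVMs up" prerequisite above can be checked from a script before each node is taken down. The following sketch parses a captured status listing; the line format is an assumption for illustration, since actual `cluster status` output varies by AOS version:

```shell
#!/usr/bin/env bash
# Sketch only: parse a captured CVM status listing and decide whether it is
# safe to shut down another CVM. The line format below is an assumption;
# in practice you would capture it with: status=$(cluster status)
status="CVM: 10.0.0.1 Up
CVM: 10.0.0.2 Up
CVM: 10.0.0.3 Up"

# Count all CVM lines, then the CVM lines that are not reported Up.
total=$(printf '%s\n' "$status" | grep -c 'CVM:')
not_up=$(printf '%s\n' "$status" | grep 'CVM:' | grep -cv ' Up$' || true)

if [ "$not_up" -eq 0 ]; then
  echo "All $total CVMs are up; safe to proceed with this node."
else
  echo "WARNING: $not_up CVM(s) not up; do not shut down another node."
fi
```

If any CVM is not reported up, resolve that before moving to the next host.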

Upgrading ESXi Host

  • See the VMware Documentation for information about the standard ESXi upgrade procedures. If any problem occurs with the upgrade process, an alert is raised in the Alert dashboard.

Post Upgrade

Run the complete NCC health check by using the following command.

nutanix@cvm$ ncc health_checks run_all

vSphere Cluster Settings Checklist

Review the following checklist of the settings that you have to configure to successfully deploy a vSphere virtual environment running the Nutanix Enterprise Cloud.

vSphere Availability Settings

  • Enable host monitoring.
  • Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.

    For more information about the percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

  • Set the VM Restart Priority of all CVMs to Disabled .
  • Set the Host Isolation Response of the cluster to Power Off & Restart VMs .
  • Set the VM Monitoring for all CVMs to Disabled .
  • Enable datastore heartbeats by clicking Use datastores only from the specified list and choosing the Nutanix NFS datastore.

    If the cluster has only one datastore, click Advanced Options tab and add das.ignoreInsufficientHbDatastore with Value of true .
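As a sketch of the percentage-based admission control arithmetic: reserve the capacity equivalent of the number of node failures the cluster must tolerate. The node count and tolerated-failure values below are illustrative assumptions, not figures from this guide:

```shell
#!/usr/bin/env bash
# Illustrative values (assumptions): a 4-node cluster tolerating 1 node
# failure, which is typical for a redundancy factor 2 configuration.
nodes=4
tolerated=1

# Reserve the failover equivalent of the tolerated nodes as a percentage
# of total cluster CPU and memory.
reserve=$(( tolerated * 100 / nodes ))
echo "Reserve ${reserve}% of cluster resources as failover capacity."
```

For this 4-node example the policy value works out to 25 percent; larger clusters reserve proportionally less.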

vSphere DRS Settings

  • Set the Automation Level on all CVMs to Disabled .
  • Select Automation Level to accept level 3 recommendations.
  • Leave power management disabled.

Other Cluster Settings

  • Configure advertised capacity for the Nutanix storage container (total usable capacity minus the capacity of one node for replication factor 2 or two nodes for replication factor 3).
  • Store VM swapfiles in the same directory as the VM.
  • Enable enhanced vMotion compatibility (EVC) in the cluster. For more information, see vSphere EVC Settings.
  • Configure Nutanix CVMs with the appropriate VM overrides. For more information, see VM Override Settings.
  • Check Nonconfigurable ESXi Components. Modifying the nonconfigurable components may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.
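The advertised-capacity rule above (total usable capacity minus one node for replication factor 2, or two nodes for replication factor 3) can be expressed as a small calculation. The per-node capacity and cluster size below are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Illustrative values (assumptions): 4 nodes with 20 TiB usable each.
nodes=4
node_tib=20
rf=2                        # replication factor of the storage container

# Hold back one node for RF2, two nodes for RF3.
spare_nodes=$(( rf - 1 ))
advertised=$(( (nodes - spare_nodes) * node_tib ))
echo "Advertised capacity: ${advertised} TiB"
```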
Security Guide

AOS Security 5.20

Product Release Date: 2021-05-17

Last updated: 2022-12-14

Audience & Purpose

This Security Guide is intended for security-minded people responsible for architecting, managing, and supporting infrastructures, especially those who want to address security without adding more human resources or additional processes to their datacenters.

This guide offers an overview of the security development life cycle (SecDL) and the host of security features supported by Nutanix. It also demonstrates how Nutanix complies with security regulations to streamline infrastructure security management. In addition, this guide addresses site-specific and compliance-driven technical requirements that are not enabled by default.

Note:

Hardening of the guest OS or any applications running on top of the Nutanix infrastructure is beyond the scope of this guide. We recommend that you refer to the documentation of the products that you have deployed in your Nutanix environment.

Nutanix Security Infrastructure

Nutanix takes a holistic approach to security with a secure platform, extensive automation, and a robust partner ecosystem. The Nutanix security development life cycle (SecDL) integrates security into every step of product development, rather than applying it as an afterthought. The SecDL is a foundational part of product design. The strong pervasive culture and processes built around security harden the Enterprise Cloud Platform and eliminate zero-day vulnerabilities. Efficient one-click operations and self-healing security models easily enable automation to maintain security in an always-on hyperconverged solution.

Since traditional manual configuration and checks cannot keep up with the ever-growing list of security requirements, Nutanix conforms to RHEL 7 Security Technical Implementation Guides (STIGs) that use machine-readable code to automate compliance against rigorous common standards. With Nutanix Security Configuration Management Automation (SCMA), you can quickly and continually assess and remediate your platform to ensure that it meets or exceeds all regulatory requirements.

Nutanix has standardized the security profile of the Controller VM to a security compliance baseline that meets or exceeds the standard high-governance requirements.

The most commonly used references in the United States that guide vendors in building products to a set of technical requirements are as follows.

  • The National Institute of Standards and Technology Special Publications Security and Privacy Controls for Federal Information Systems and Organizations (NIST 800.53)
  • The US Department of Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIG)

SCMA Implementation

The Nutanix platform and all products leverage the Security Configuration Management Automation (SCMA) framework to ensure that services are constantly inspected for variance to the security policy.

Nutanix has implemented security configuration management automation (SCMA) to check multiple security entities for both Nutanix storage and AHV. Nutanix automatically reports and logs inconsistencies, and then reverts them to the baseline.

With SCMA, you can schedule the STIG to run hourly, daily, weekly, or monthly. STIG has the lowest system priority within the virtual storage controller, ensuring that security checks do not interfere with platform performance.
Note: Only the SCMA schedule can be modified. The AIDE schedule is run on a fixed weekly schedule. To change the SCMA schedule for AHV or the Controller VM, see Hardening Instructions (nCLI).

RHEL 7 STIG Implementation in Nutanix Controller VM

Nutanix leverages SaltStack and SCMA to self-heal any deviation from the security baseline configuration of the operating system and hypervisor to remain in compliance. If any component is found as non-compliant, then the component is set back to the supported security settings without any intervention. To achieve this objective, Nutanix has implemented the Controller VM to support STIG compliance with the RHEL 7 STIG as published by DISA.

The STIG rules are capable of securing the boot loader, packages, file system, booting and service control, file ownership, authentication, kernel, and logging.

Example: STIG rules for Authentication

Prohibit direct root login, lock system accounts other than root , enforce several password maintenance details, cautiously configure SSH, enable screen-locking, configure user shell defaults, and display warning banners.

Security Updates

Nutanix provides continuous fixes and updates to address threats and vulnerabilities. Nutanix Security Advisories provide detailed information on the available security fixes and updates, including the vulnerability description and affected product/version.

To see the list of security advisories or search for a specific advisory, log on to the Support Portal and select Documentation , and then Security Advisories .

Nutanix Security Landscape

This topic highlights the Nutanix security landscape. The following table identifies the security features offered out of the box in the Nutanix infrastructure.

Topic                                Highlights

Authentication and Authorization
Network segmentation                 VLAN-based, data-driven segmentation
Security Policy Management           Implement security policies using Microsegmentation.
Data security and integrity
Hardening Instructions
Log monitoring and analysis
Flow Networking                      See the Flow Networking Guide.
UEFI                                 See UEFI Support for VMs in the AHV Administration Guide.
Secure Boot                          See Secure Boot Support for VMs in the AHV Administration Guide.
Windows Credential Guard support     See Windows Defender Credential Guard Support in AHV in the AHV Administration Guide.
RBAC                                 See Controlling User Access (RBAC).

Hardening Instructions (nCLI)

This chapter describes how to implement security hardening features for Nutanix AHV and Controller VM.

Hardening AHV

You can use the Nutanix Command Line Interface (nCLI) to customize the various configuration settings related to AHV, as described below.

Table 1. Configuration Settings to Harden the AHV
Description Command or Settings Output
Getting the cluster-wide configuration of the SCMA policy. Run the following command:
nutanix@cvm$ ncli cluster get-hypervisor-security-config
Enable Aide : false
Enable Core : false
Enable High Strength P... : false
Enable Banner : false
Schedule : DAILY
Enabling the Advanced Intrusion Detection Environment (AIDE) to run on a weekly basis. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-aide=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : false
Enable Banner : false
Schedule : DAILY 
Enabling the high-strength password policies (minlen=15, difok=8, maxclassrepeat=4). Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params \
enable-high-strength-password=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : false
Schedule : DAILY
Enabling the US Department of Defense (DoD) consent banner. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-banner=true
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : DAILY
Changing the default schedule of running the SCMA. The schedule can be hourly, daily, weekly, and monthly. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params schedule=hourly
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
Enabling the settings so that AHV can generate stack traces for any cluster issue. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-core=true
Note: Nutanix recommends that Core should not be set to true unless instructed by the Nutanix support team.
Enable Aide : true
Enable Core : true
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
When a high governance official needs to run the hardened configuration. The settings should be as follows:
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : false
Schedule : HOURLY
When a federal official needs to run the hardened configuration. The settings should be as follows:
Enable Aide : true
Enable Core : false
Enable High Strength P... : true
Enable Banner : true
Schedule : HOURLY
Note: A banner file can be modified to support non-DoD customer banners.
Backing up the DoD banner file. Run the following command on the AHV host:
[root@AHV-host ~]# sudo cp -a /srv/salt/security/KVM/sshd/DODbanner \
/srv/salt/security/KVM/sshd/DODbannerbak
Modifying the DoD banner file. Run the following command on the AHV host:
[root@AHV-host ~]# sudo vi /srv/salt/security/KVM/sshd/DODbanner
Note: Repeat all the above steps on every AHV in a cluster.
Setting the banner for all nodes through nCLI. Run the following command:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-banner=true

The following options are configured or customized to harden the AHV:

  • Enable AIDE : Advanced Intrusion Detection Environment (AIDE) is a Linux utility that monitors a given node. After you install the AIDE package, generate a database containing all the files selected in your configuration file by running the aide --init command as the root user. You can move the database to a secure location on read-only media or on another machine. After you create the database, run the aide --check command to check the integrity of files and directories by comparing them with the snapshot in the database. If there are unexpected changes, a report is generated for you to review. If the changes to existing or added files are valid, run the aide --update command to update the database with the new changes.
  • Enable high strength password : You can run the command as shown in the table in this section to enable high-strength password policies (minlen=15, difok=8, maxclassrepeat=4).
    Note:
    • minlen is the minimum required length for a password.
    • difok is the minimum number of characters that must be different from the old password.
    • maxclassrepeat is the number of consecutive characters of same class that you can use in a password.
  • Enable Core : A core dump consists of the recorded state of the working memory of a computer program at a specific time, generally when the program crashes or terminates abnormally. Core dumps assist in diagnosing or debugging errors in computer programs. You can enable core dumps for troubleshooting purposes.
  • Enable Banner : You can set a banner to display a specific message. For example, set a banner to display a warning message that the system is available to authorized users only.
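The high-strength password values listed above correspond to standard Linux pam_pwquality parameters. As an illustration only (this is not a file that Nutanix asks you to edit; enforcement on Nutanix systems is managed by SCMA), the same policy expressed as a /etc/security/pwquality.conf fragment would look like:

```
# Illustrative pwquality-style settings matching the policy above.
minlen = 15          # minimum password length
difok = 8            # characters that must differ from the old password
maxclassrepeat = 4   # max consecutive characters of the same class
```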

Hardening Controller VM

You can use the Nutanix Command Line Interface (nCLI) to customize the various configuration settings related to the CVM, as described below.

  • Run the following command to support cluster-wide configuration of the SCMA policy.

    nutanix@cvm$ ncli cluster get-cvm-security-config

    The current cluster configuration is displayed.

    Enable Aide : false
    Enable Core : false
    Enable High Strength P...: false
    Enable Banner : false
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the Advanced Intrusion Detection Environment (AIDE), which runs on a fixed weekly schedule.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-aide=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : false
    Enable Banner : false
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the strong password policy.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-high-strength-password=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : false
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the US Department of Defense (DoD) consent banner.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-banner=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : false
    Schedule : DAILY
  • Run the following command to enable the settings to allow only SNMP version 3.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-snmpv3-only=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : DAILY
  • Run the following command to change the default schedule of running the SCMA. The schedule can be hourly, daily, weekly, and monthly.

    nutanix@cvm$ ncli cluster edit-cvm-security-params schedule=hourly

    The following output is displayed.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : HOURLY
  • Run the following command to enable the settings so that Controller VM can generate stack traces for any cluster issue.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-core=true

    The following output is displayed.

    Enable Aide : true
    Enable Core : true
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : HOURLY
    Note: Nutanix recommends that Core should not be set to true unless instructed by the Nutanix support team.
  • When a high governance official needs to run the hardened configuration, the settings should be as follows.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : false
    Enable SNMPv3 Only : true
    Schedule : HOURLY
  • When a federal official needs to run the hardened configuration, the settings should be as follows.

    Enable Aide : true
    Enable Core : false
    Enable High Strength P... : true
    Enable Banner : true
    Enable SNMPv3 Only : true
    Schedule : HOURLY
    Note: A banner file can be modified to support non-DoD customer banners.
  • Run the following command to back up the DoD banner file.

    nutanix@cvm$ sudo cp -a /srv/salt/security/CVM/sshd/DODbanner \
    /srv/salt/security/CVM/sshd/DODbannerbak
  • Run the following command to modify the DoD banner file.

    nutanix@cvm$ sudo vi /srv/salt/security/CVM/sshd/DODbanner
    Note: Repeat all the above steps on every CVM in a cluster.
  • Run the following command to back up the DoD banner file of the PCVM.

    nutanix@pcvm$ sudo cp -a /srv/salt/security/PC/sshd/DODbanner \
    /srv/salt/security/PC/sshd/DODbannerbak
  • Run the following command to modify the DoD banner file of the PCVM.

    nutanix@pcvm$ sudo vi /srv/salt/security/PC/sshd/DODbanner
  • Run the following command to set the banner for all nodes through nCLI.

    nutanix@cvm$ ncli cluster edit-cvm-security-params enable-banner=true

TCP Wrapper Integration

The Nutanix Controller VM uses the tcp_wrappers package to allow TCP-supported daemons to control the network subnets that can access the libwrapped daemons. By default, SCMA controls the /etc/hosts.allow file in /srv/salt/security/CVM/network/hosts.allow , which contains a generic entry to allow access to NFS, secure shell, and SNMP.

sshd: ALL : ALLOW
rpcbind: ALL : ALLOW
snmpd: ALL : ALLOW
snmptrapd: ALL : ALLOW

Nutanix recommends that the above configuration be changed to include only the localhost entries and the management network subnet for the restricted operations; this applies to both production and high-governance compliance environments. Ensure that all subnets used to communicate with the CVMs are included in the /etc/hosts.allow file.
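For example, a restricted configuration might limit each daemon to localhost and the management subnet. The subnet 192.0.2.0/255.255.255.0 below is a placeholder assumption; substitute the subnets actually used to reach your CVMs:

```
# Illustrative restricted /etc/hosts.allow (placeholder subnet).
sshd: 127.0.0.1, 192.0.2.0/255.255.255.0 : ALLOW
rpcbind: 127.0.0.1, 192.0.2.0/255.255.255.0 : ALLOW
snmpd: 127.0.0.1, 192.0.2.0/255.255.255.0 : ALLOW
snmptrapd: 127.0.0.1, 192.0.2.0/255.255.255.0 : ALLOW
```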

Common Criteria

Common Criteria is an international security certification that is recognized by many countries around the world. Nutanix AOS and AHV are Common Criteria certified by default and no additional configuration is required to enable the Common Criteria mode. For more information, see the Nutanix Trust website.
Note: Nutanix uses FIPS-validated cryptography by default.

Security Management Using Prism Element (PE)

Nutanix provides several mechanisms to maintain security in a cluster using Prism Element.

Configuring Authentication

About this task

Nutanix supports user authentication. To configure authentication types and directories, and to enable client authentication, do the following:
Caution: The web console (and nCLI) does not allow the use of the insecure SSLv2 and SSLv3 ciphers. An SSL fallback situation in some browsers can deny access to the web console. To eliminate this possibility, disable (uncheck) SSLv2 and SSLv3 in any browser used for access; however, TLS must remain enabled (checked).

Procedure

  1. Click the gear icon in the main menu and then select Authentication in the Settings page.
    The Authentication Configuration window appears.
    Note: The following steps combine three distinct procedures, enabling authentication (step 2), configuring one or more directories for LDAP/S authentication (steps 3-5), and enabling client authentication (step 6). Perform the steps for the procedures you need. For example, perform step 6 only if you intend to enforce client authentication.
  2. To enable server authentication, click the Authentication Types tab and then check the box for either Local or Directory Service (or both). After selecting the authentication types, click the Save button.
    The Local setting uses the local authentication provided by Nutanix (see User Management). This method is employed when a user enters just a login name without specifying a domain (for example, user1 instead of user1@nutanix.com ). The Directory Service setting validates user@domain entries against the directory specified in the Directory List tab. Therefore, you need to configure an authentication directory if you select Directory Service in this field.
    Figure. Authentication Types Tab
    Note: The Nutanix admin user can log on to the management interfaces, including the web console, even if the Local authentication type is disabled.
  3. To add an authentication directory, click the Directory List tab and then click the New Directory option.
    A set of fields is displayed. Do the following in the indicated fields:
    1. Directory Type : Select one of the following from the pull-down list.
      • Active Directory : Active Directory (AD) is a directory service implemented by Microsoft for Windows domain networks.
        Note:
        • Users with the "User must change password at next logon" attribute enabled are not able to authenticate to the web console (or nCLI). Ensure that users with this attribute first log in to a domain workstation and change their password before accessing the web console. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open in the firewall).
        • An Active Directory user name or group name containing spaces is not supported for Prism Element authentication.
        • Active Directory domain created by using non-ASCII text may not be supported. For more information about usage of ASCII or non-ASCII text in Active Directory configuration, see the Internationalization (i18n) section.
        • Use of the "Protected Users" group is currently unsupported for Prism authentication. For more details on the "Protected Users" group, see “Guidance about how to configure protected accounts” on Microsoft documentation website.
        • The Microsoft AD is LDAP v2 and LDAP v3 compliant.
        • The Microsoft AD servers supported are Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.
      • OpenLDAP : OpenLDAP is a free, open source directory service, which uses the Lightweight Directory Access Protocol (LDAP), developed by the OpenLDAP project. Nutanix currently supports the OpenLDAP 2.4 release running on CentOS distributions only.
    2. Name : Enter a directory name.
      This is a name you choose to identify this entry; it need not be the name of an actual directory.
    3. Domain : Enter the domain name.
      Enter the domain name in DNS format, for example, nutanix.com .
    4. Directory URL : Enter the URL address to the directory.
      The URL format for an LDAP entry is ldap://host:ldap_port_num . The host value is either the IP address or fully qualified domain name. (In some environments, a simple domain name is sufficient.) The default LDAP port number is 389. Nutanix also supports LDAPS (port 636) and LDAP/S Global Catalog (ports 3268 and 3269). The following are example configurations appropriate for each port option:
      Note: LDAPS support does not require custom certificates or certificate trust import.
      • Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
        ldap://ad_server.mycompany.com:389
      • Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires all Active Directory Domain Controllers have properly installed SSL certificates.
        ldaps://ad_server.mycompany.com:636
        Note: The LDAP server SSL certificate must include a Subject Alternative Name (SAN) that matches the URL provided during the LDAPS setup.
      • Port 3268 (LDAP - GC). Use this port number when the configuration is multiple domain, single forest, and not using SSL.
      • Port 3269 (LDAPS - GC). Use this port number when the configuration is multiple domain, single forest, and using SSL.
        Note: When constructing your LDAP/S URL to use a Global Catalog server, ensure that the Domain Control IP address or name being used is a global catalog server within the domain being configured. If not, queries over 3268/3269 may fail.
        Note: When querying the global catalog, the users sAMAccountName field must be unique across the AD forest. If the sAMAccountName field is not unique across the subdomains, authentication may fail intermittently or consistently.
      Note: For the complete list of required ports, see Port Reference .
    5. (OpenLDAP only) Configure the following additional fields:
      1. User Object Class : Enter the value that uniquely identifies the object class of a user.
      2. User Search Base : Enter the base domain name in which the users are configured.
      3. Username Attribute : Enter the attribute to uniquely identify a user.
      4. Group Object Class : Enter the value that uniquely identifies the object class of a group.
      5. Group Search Base : Enter the base domain name in which the groups are configured.
      6. Group Member Attribute : Enter the attribute that identifies users in a group.
      7. Group Member Attribute Value : Enter the attribute that identifies the users provided as value for Group Member Attribute .
    6. Search Type . How to search your directory when authenticating. Choose Non Recursive if you experience slow directory logon performance. For this option, ensure that users listed in Role Mapping are listed flatly in the group (that is, not nested). Otherwise, choose the default Recursive option.
    7. Service Account Username : Enter the service account user name in the user_name@domain.com format that you want the web console to use to log in to the Active Directory.

      A service account is created to run only a particular service or application with the credentials specified for the account. According to the requirement of the service or application, the administrator can limit access to the service account.

      A service account is under the Managed Service Accounts in the Active Directory server. An application or service uses the service account to interact with the operating system. Enter your Active Directory service account credentials in this (username) and the following (password) field.

      Note: Be sure to update the service account credentials here whenever the service account password changes or when a different service account is used.
    8. Service Account Password : Enter the service account password.
    9. When all the fields are correct, click the Save button (lower right).
      This saves the configuration and redisplays the Authentication Configuration dialog box. The configured directory now appears in the Directory List tab.
    10. Repeat this step for each authentication directory you want to add.
    Note:
    • The Controller VMs need access to the Active Directory server, so open the standard Active Directory ports to each Controller VM in the cluster (and the virtual IP if one is configured).
    • No permissions are granted to the directory users by default. To grant permissions to the directory users, you must specify roles for the users in that directory (see Assigning Role Permissions).
    • Service account for both Active directory and openLDAP must have full read permission on the directory service. Additionally, for successful Prism Element authentication, the users must also have search or read privileges.
    Figure. Directory List Tab
  4. To edit a directory entry, click the Directory List tab and then click the pencil icon for that entry.
    After clicking the pencil icon, the Directory List fields reappear (see step 3). Enter the new information in the appropriate fields and then click the Save button.
  5. To delete a directory entry, click the Directory List tab and then click the X icon for that entry.
    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
  6. To enable client authentication, do the following:
    1. Click the Client tab.
    2. Select the Configure Client Chain Certificate check box.
      Client Chain Certificate is a list of certificates that includes all intermediate CA and root-CA certificates.
      Note: To authenticate on the PE with a client chain certificate, the 'Subject name' field must be present. The subject name must match the userPrincipalName (UPN) in the AD. The UPN is a user name with a domain address, for example, user1@nutanix.com .
      Figure. Client Tab (1)
    3. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note: Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
      Figure. Client Tab (2) Click to enlarge
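The PEM requirement above can be checked locally with OpenSSL before you upload; the file names here are placeholders for your own certificate files:

```shell
# Print the subject of a certificate; this succeeds only if the file is
# PEM encoded (the file name is an example).
openssl x509 -in client_chain.pem -noout -subject

# If the certificate is DER encoded instead, convert it to PEM first.
openssl x509 -inform der -in client_chain.der -out client_chain.pem
```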
    4. To enable client authentication, click Enable Client Authentication .
    5. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication to disable client authentication.
      • Click Remove to delete the current certificate. (This also disables client authentication.)
      • To enable OCSP or CRL based certificate revocation checking, see Certificate Revocation Checking.
      Figure. Authentication Window: Client Tab (3) Click to enlarge

    Client authentication allows you to securely access Prism by exchanging a digital certificate. Prism validates that the certificate is signed by your organization’s trusted signing certificate.

    Client authentication ensures that the Nutanix cluster receives a valid certificate from the user. Normally, a one-way authentication process occurs: the server provides a certificate so the user can verify the authenticity of the server (see Installing an SSL Certificate). When client authentication is enabled, this becomes a two-way authentication in which the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console, either by installing the certificate on the local machine or by providing it through a smart card reader. A valid certificate allows the user to log in from that client machine without entering a user name and password. If the user must log in from a client machine that does not have the certificate installed, authentication with a user name and password is still available.
    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
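Before enabling client authentication, you can confirm that a user certificate carries the expected subject name with OpenSSL (the file name is an example):

```shell
# Print the Subject field of the user certificate; it should contain the
# user's UPN, for example user1@nutanix.com (the file name is an example).
openssl x509 -in user1_cert.pem -noout -subject
```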
  7. To specify a service account that the web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box, and then do the following in the indicated fields:
    Figure. Common Access Card Authentication Click to enlarge
    1. Directory : Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username : Enter the user name in the user name@domain.com format that you want the web console to use to log in to the Active Directory.
    3. Service Password : Enter the password for the service user name.
    4. Click Enable CAC Authentication .
      Note: For federal customers only.
      Note: The web console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:
    • Validates that the certificate has been signed by your organization’s trusted signing certificate.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of an account within the Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Element supports both certificate authentication and basic authentication so that it can handle both Prism Element login with a certificate and REST API access with basic authentication (REST API clients cannot use CAC certificates). With this behavior, if a certificate is presented during Prism Element login, certificate authentication is used; if no certificate is presented, basic authentication is enforced.
    Note: Nutanix Prism does not support OpenLDAP as directory service for CAC.
    If you map a Prism role to a CAC user rather than to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page; the web console login page is not displayed.
    Note: If you have logged on to Prism by using CAC authentication, to successfully log out of Prism, close the browser after you click Log Out .
  8. Click the Close button to close the Authentication Configuration dialog box.

Assigning Role Permissions

About this task

When user authentication is enabled for a directory service (see Configuring Authentication), the directory users do not have any permissions by default. To grant permissions to the directory users, you must assign roles (with their associated permissions) to organizational units (OUs), groups, or individual users within a directory.

If you are using Active Directory, you must also assign roles to entities or users, especially before upgrading from a previous AOS version.

To assign roles, do the following:

Procedure

  1. In the web console, click the gear icon in the main menu and then select Role Mapping in the Settings page.
    The Role Mapping window appears.
    Figure. Role Mapping Window Click to enlarge
  2. To create a role mapping, click the New Mapping button.

    The Create Role Mapping window appears. Do the following in the indicated fields:

    1. Directory : Select the target directory from the pull-down list.

      Only directories previously defined when configuring authentication appear in this list. If the desired directory does not appear, add that directory to the directory list (see Configuring Authentication) and then return to this procedure.

    2. LDAP Type : Select the desired LDAP entity type from the pull-down list.

      The entity types are GROUP , USER , and OU .

    3. Role : Select the user role from the pull-down list.
      There are three roles from which to choose:
      • Viewer : This role allows a user to view information only. It does not provide permission to perform any administrative tasks.
      • Cluster Admin : This role allows a user to view information and perform any administrative task (but not create or modify user accounts).
      • User Admin : This role allows the user to view information, perform any administrative task, and create or modify user accounts.
    4. Values : Enter the case-sensitive entity names (in a comma separated list with no spaces) that should be assigned this role.
      The values are the actual names of the organizational units (meaning the role applies to all users in those OUs), groups (all users in those groups), or users (each named user) assigned this role. For example, entering the value admin-gp,support-gp when the LDAP type is GROUP and the role is Cluster Admin means that all users in the admin-gp and support-gp groups are assigned the cluster administrator role.
      Note:
      • Do not include a domain in the value; for example, enter just admin-gp , not admin-gp@nutanix.com . However, when users log in to the web console, they must include the domain in their user name.
      • The AD user UPN must be in the user@domain_name format.
      • When an admin defines a user role mapping using an AD with a forest setup, the mapping can match a user with the same name from any domain in the forest. To avoid this, set up the user role mapping with an AD configured for a specific domain.
    5. When all the fields are correct, click Save .
      This saves the configuration and redisplays the Role Mapping window. The new role map now appears in the list.
      Note: All users in an authorized service directory have full administrator permissions when role mapping is not defined for that directory. However, after creating a role map, any users in that directory that are not explicitly granted permissions through the role mapping are denied access (no permissions).
    6. Repeat this step for each role map you want to add.
      You can create a role map for each authorized directory. You can also create multiple maps that apply to a single directory. When there are multiple maps for a directory, the most specific rule for a user applies. For example, adding a GROUP map set to Cluster Admin and a USER map set to Viewer for select users in that group means all users in the group have administrator permission except those specified users who have viewing permission only.
    Figure. Create Role Mapping Window Click to enlarge
  3. To edit a role map entry, click the pencil icon for that entry.
    After clicking the pencil icon, the Edit Role Mapping window appears, which contains the same fields as the Create Role Mapping window (see step 2). Enter the new information in the appropriate fields and then click the Save button.
  4. To delete a role map entry, click the "X" icon for that entry.
    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
  5. Click the Close button to close the Role Mapping window.

Certificate Revocation Checking

Enabling Certificate Revocation Checking using Online Certificate Status Protocol (nCLI)

About this task

OCSP is the recommended method for checking certificate revocation in client authentication. You can enable certificate revocation checking using the OCSP method through the command line interface (nCLI).

To enable certificate revocation checking using OCSP for client authentication, do the following.

Procedure

  1. Set the OCSP responder URL.
    ncli authconfig set-certificate-revocation set-ocsp-responder=<ocsp url>
    where <ocsp url> indicates the location of the OCSP responder.
  2. Verify if OCSP checking is enabled.
    ncli authconfig get-client-authentication-config

    The expected output if certificate revocation checking is enabled successfully is as follows.

    Auth Config Status: true
    File Name: ca.cert.pem
    OCSP Responder URI: http://<ocsp-responder-url>

Enabling Certificate Revocation Checking using Certificate Revocation Lists (nCLI)

About this task

Note: OCSP is the recommended method for checking certificate revocation in client authentication.

You can use the CRL certificate revocation checking method if required, as described in this section.

To enable certificate revocation checking using CRL for client authentication, do the following.

Procedure

Specify all the CRLs that are required for certificate validation.
ncli authconfig set-certificate-revocation set-crl-uri=<uri 1>,<uri 2> set-crl-refresh-interval=<refresh interval in seconds>
  • The above command resets any previous OCSP or CRL configurations.
  • The URIs must be percent-encoded and comma separated.
  • The CRLs are updated periodically as specified by the crl-refresh-interval value. This interval is common for the entire list of CRL distribution points. The default value for this is 86400 seconds (1 day).
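As a sketch of the percent-encoding requirement above, the following uses a hypothetical CRL distribution point URI and a small `enc` helper (which shells out to Python's urllib for the encoding):

```shell
# Percent-encode a CRL distribution point URI before passing it to ncli.
# The URI below is hypothetical; enc() uses Python's urllib.parse.quote.
enc() { python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"; }

enc "http://pki.example.com/crl/issuing ca.crl"
# → http%3A%2F%2Fpki.example.com%2Fcrl%2Fissuing%20ca.crl

# The encoded values then go into the ncli command, for example:
# ncli authconfig set-certificate-revocation \
#     set-crl-uri=<encoded uri 1>,<encoded uri 2> set-crl-refresh-interval=86400
```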

Authentication Best Practices

The authentication best practices listed here provide guidance for securing the Nutanix platform with the most common authentication security measures.

Emergency Local Account Usage

Use the admin account as the local emergency account. The admin account ensures that both the Prism web console and the Controller VM remain accessible when external services such as Active Directory are unavailable.

Note: The local emergency account does not support any external access mechanisms, specifically external application authentication or external REST API authentication.

For all external authentication, configure the cluster to use an external IAM service such as Active Directory. Create service accounts on the IAM service and grant those accounts access to the cluster through the Prism web console user account management configuration.

Modifying Default Passwords

You must change the default Controller VM password for the nutanix user account, adhering to the password complexity requirements.

Procedure

  1. SSH to the Controller VM.
  2. Change the "nutanix" user account password.
    nutanix@cvm$ passwd nutanix
  3. Respond to the prompts and provide the current and new nutanix password.
    Changing password for nutanix.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
    Note:
    • Changing the user account password on one of the Controller VMs is applied to all Controller VMs in the cluster.
    • Ensure that you preserve the modified nutanix user password, because the local authentication (PAM) module requires the previous password of the nutanix user to start the password reset process.
    • For the root account, both console and direct SSH login are disabled.
    • Nutanix recommends using the admin user as the administrative emergency account.

Controlling Cluster Access

About this task

Nutanix supports the cluster lockdown feature, which enables key-based SSH access to the Controller VM and to AHV on the host (only for the nutanix and admin users).

Enabling cluster lockdown mode disables password authentication, so only the keys you have provided can be used to access cluster resources. This makes the cluster more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.

To control key-based SSH access to the cluster, do the following:
Note: Use this procedure to lock down access to the Controller VM and the hypervisor host.

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.
    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.
    Figure. Cluster Lockdown Window Click to enlarge
  2. To disable (or enable) remote login access, clear (or select) the Enable Remote Login with Password check box.
    Remote login access is enabled by default.
  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    Note: Prism supports the following key types.
    • RSA
    • ECDSA
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.
    There are no public keys available by default, but you can add any number of public keys.
  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.
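As a sketch of preparing a key pair for cluster lockdown (the file name and comment are examples; RSA works similarly with -t rsa):

```shell
# Generate an ECDSA key pair; Prism accepts RSA and ECDSA public keys.
ssh-keygen -t ecdsa -b 256 -N '' -C 'prism-lockdown' -f ./lockdown_key

# Paste the single-line contents of the public key into the Key field:
cat ./lockdown_key.pub

# After the key is added in Prism, connect with the private key, for example:
# ssh -i ./lockdown_key nutanix@<cvm-ip>
```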

Setup Admin Session Timeout

By default, users are logged out automatically after being idle for 15 minutes. You can change the session timeout for users and override the default session timeout by following the steps below.

Procedure

  1. Click the gear icon in the main menu and then select UI Settings in the Settings page.
  2. Select the session timeout for the current user from the Session Timeout For Current User drop-down list.
    Figure. Session Timeout Settings Click to enlarge

  3. Select the appropriate option from the Session Timeout Override drop-down list to override the session timeout.

Password Retry Lockout

For enhanced security, Prism Element locks the admin account for 15 minutes after a default number of unsuccessful login attempts. Once the account is locked, the following message is displayed on the logon screen.

Account locked due to too many failed attempts

You can try entering the password again after the 15-minute lockout period, or contact Nutanix Support if you have forgotten your password.

Internationalization (i18n)

The following table lists all the supported and unsupported entities in UTF-8 encoding.

Table 1. Internationalization Support
Supported Entities Unsupported Entities
Cluster name Acropolis file server
Storage Container name Share path
Storage pool Internationalized domain names
VM name E-mail IDs
Snapshot name Hostnames
Volume group name Integers
Protection domain name Password fields
Remote site name Any hardware-related names (for example, vSwitch, iSCSI initiator, VLAN name)
User management
Chart name
Caution: None of the above entities can be created on Hyper-V because of DR limitations.

Entities Support (ASCII or non-ASCII) for the Active Directory Server

  • In the New Directory Configuration, the Name field supports non-ASCII characters.
  • In the New Directory Configuration, the Domain field does not support non-ASCII characters.
  • In role mapping, the Values field supports non-ASCII characters.
  • User names and group names support non-ASCII characters.

User Management

Nutanix user accounts can be created or updated as needed using the Prism web console.

  • The web console allows you to add (see Creating a User Account), edit (see Updating a User Account), or delete (see Deleting a User Account) local user accounts at any time.
  • You can reset a local user account password using nCLI if you are locked out and cannot log in to the Prism Element or Prism Central web console (see Resetting Password (CLI)).
  • You can also configure user accounts through Active Directory and LDAP (see Configuring Authentication). Active Directory domain created by using non-ASCII text may not be supported.
Note: In addition to the Nutanix user account, there are IPMI, Controller VM, and hypervisor host users. Passwords for these accounts cannot be changed through the web console.

Creating a User Account

About this task

The admin user is created automatically when you get a Nutanix system, but you can add more users as needed. Note that you cannot delete the admin user. To create a user, do the following:
Note: You can also configure user accounts through Active Directory (AD) and LDAP (see Configuring Authentication).

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
    Figure. User Management Window Click to enlarge
  2. To add a user, click the New User button and do the following in the displayed fields:
    1. Username : Enter a user name.
    2. First Name : Enter a first name.
    3. Last Name : Enter a last name.
    4. Email : Enter a valid user email address.
      Note: AOS uses the email address for client authentication and logging when the local user performs user and cluster tasks in the web console.
    5. Password : Enter a password (maximum of 255 characters).
      A second field to verify the password is not included, so be sure to enter the password correctly in this field.
    6. Language : Select the language setting for the user.
      By default, English is selected. You can select Simplified Chinese or Japanese . Depending on the language that you select here, the cluster locale is updated for the new user. For example, if you select Simplified Chinese , the next time the new user logs on to the web console, the user interface is displayed in Simplified Chinese.
    7. Roles : Assign a role to this user.
      • Select the User Admin box to allow the user to view information, perform any administrative task, and create or modify user accounts. (Checking this box automatically selects the Cluster Admin box to indicate that this user has full permissions. However, a user administrator has full permissions regardless of whether the cluster administrator box is checked.)
      • Select the Cluster Admin box to allow the user to view information and perform any administrative task (but not create or modify user accounts).
      • Select the Backup Admin box to allow the user to perform backup-related administrative tasks. This role does not have permission to perform cluster or user tasks.

        Note: The Backup Admin user is designed for Nutanix Mine integrations as of AOS 5.19 and has minimal cluster management functionality. This role has restricted access to the Nutanix Mine cluster.
        • Health , Analysis , and Tasks features are available in read-only mode.
        • The File server and Data Protection options in the web console are not available for this user.
        • The following features are available for Backup Admin users with limited functionality.
            • Home - The user cannot register a cluster with Prism Central. The registration widget is disabled. Other read-only data is displayed and available.
            • Alerts - Alerts and events are displayed. However, the user cannot resolve or acknowledge any alert or event. The user cannot configure Alert Policy or Email configuration .
            • Hardware - The user cannot expand the cluster or remove hosts from the cluster. Read-only data is displayed and available.
            • Network - Networking data or configuration is displayed but configuration options are not available.
            • Settings - The user can only upload a new image using the Settings page.
            • VM - The user cannot configure options like Create VM and Network Configuration in the VM page. The following options are available for the user in the VM page:
              • Launch console
              • Power On
              • Power Off
      • Leaving all the boxes unchecked allows the user to view information, but it does not provide permission to perform cluster or user tasks.
    8. When all the fields are correct, click Save .
      This saves the configuration, and the web console redisplays the dialog box with the new user appearing in the list.
    Figure. Create User Window Click to enlarge

Updating a User Account

About this task

Update credentials and change the role for an existing user by using this procedure.
Note: To update your account credentials (that is, the user you are currently logged on as), see Updating My Account. Changing the password for a different user is not supported; you must log in as that user to change the password.

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
  2. Enable or disable the login access for a user by clicking the toggle text Yes (enabled) or No (disabled) in the Enabled column.
    A Yes value in the Enabled column means that the login is enabled; a No value in the Enabled column means it is disabled.
    Note: A user account is enabled (login access activated) by default.
  3. To edit the user credentials, click the pencil icon for that user and update one or more of the values in the displayed fields:
    1. Username : The username is fixed when the account is created and cannot be changed.
    2. First Name : Enter a different first name.
    3. Last Name : Enter a different last name.
    4. Email : Enter a different valid email address.
      Note: AOS uses the email address for client authentication and logging when the local user performs user and cluster tasks in the web console.
    5. Roles : Change the role assigned to this user.
      • Select the User Admin box to allow the user to view information, perform any administrative task, and create or modify user accounts. (Checking this box automatically selects the Cluster Admin box to indicate that this user has full permissions. However, a user administrator has full permissions regardless of whether the cluster administrator box is checked.)
      • Select the Cluster Admin box to allow the user to view information and perform any administrative task (but not create or modify user accounts).
      • Select the Backup Admin box to allow the user to perform backup-related administrative tasks. This role does not have permission to perform cluster or user administrative tasks.
      • Leaving all the boxes unchecked allows the user to view information, but it does not provide permission to perform cluster or user administrative tasks.
    6. Reset Password : Change the password of this user.
      Enter the new password for Password and Confirm Password fields. Click the info icon to view the password complexity requirements.
    7. When all the fields are correct, click Save .
      This saves the configuration and redisplays the dialog box with the new user appearing in the list.
    Figure. Update User Window Click to enlarge

Updating My Account

About this task

To update your account credentials (that is, credentials for the user you are currently logged in as), do the following:

Procedure

  1. To update your password, select Change Password from the user icon pull-down list in the web console.
    The Change Password dialog box appears. Do the following in the indicated fields:
    1. Current Password : Enter the current password.
    2. New Password : Enter a new password.
    3. Confirm Password : Re-enter the new password.
    4. When the fields are correct, click the Save button (lower right). This saves the new password and closes the window.
    Figure. Change Password Window Click to enlarge
    Note: You can change the password for the "admin" account only once per day. Contact Nutanix Support if you need to update the password multiple times in one day.
  2. To update other details of your account, select Update Profile from the user icon pull-down list.
    The Update Profile dialog box appears. Update (as desired) one or more of the following fields:
    1. First Name : Enter a different first name.
    2. Last Name : Enter a different last name.
    3. Email : Enter a different valid user email address.
    4. Language : Select a language for your account.
    5. API Key : Enter the key value to use a new API key.
    6. Public Key : Click the Choose File button to upload a new public key file.
    7. When all the fields are correct, click the Save button (lower right). This saves the changes and closes the window.
    Figure. Update Profile Window Click to enlarge

Deleting a User Account

About this task

To delete an existing user, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Local User Management in the Settings page.
    The User Management dialog box appears.
    Figure. User Management Window Click to enlarge
  2. Click the X icon for that user. Note that you cannot delete the admin user.
    A window prompt appears to verify the action; click the OK button. The user account is removed and the user no longer appears in the list.

Certificate Management

This chapter describes how to install and replace an SSL certificate for configuration and use on the Nutanix Controller VM.

Note: Nutanix recommends that you check for the validity of the certificate periodically, and replace the certificate if it is invalid.

Installing an SSL Certificate

About this task

Nutanix supports SSL certificate-based authentication for console access. To install a self-signed or custom SSL certificate, do the following:
Important: Ensure that SSL certificates are not password protected.
Note:
  • Nutanix recommends that customers replace the default self-signed certificate with a CA signed certificate.
  • An SSL certificate (self-signed or CA-signed) can be installed only cluster-wide from Prism. SSL certificates cannot be customized for individual Controller VMs.

Procedure

  1. Click the gear icon in the main menu and then select SSL Certificate in the Settings page.
    The SSL Certificate dialog box appears.
    Figure. SSL Certificate Window Click to enlarge
  2. To replace (or install) a certificate, click the Replace Certificate button.
  3. To create a new self-signed certificate, click the Regenerate Self Signed Certificate option and then click the Apply button.
    A dialog box appears to verify the action; click the OK button. This generates and applies a new RSA 2048-bit self-signed certificate for the Prism user interface.
    Figure. SSL Certificate Window: Regenerate Click to enlarge
  4. To apply a custom certificate that you provide, do the following:
    1. Click the Import Key and Certificate option and then click the Next button.
      Figure. SSL Certificate Window: Import Click to enlarge
    2. Do the following in the indicated fields, and then click the Import Files button.
      Note:
      • All the three imported files for the custom certificate must be PEM encoded.
      • Ensure that the private key does not have any extra data (or custom attributes) before the beginning or after the end of the private key block (the -----BEGIN ...----- and -----END ...----- markers).
      • Private Key Type : Select the appropriate type for the signed certificate from the pull-down list (RSA 4096 bit, RSA 2048 bit, EC DSA 256 bit, or EC DSA 384 bit).
      • Private Key : Click the Browse button and select the private key associated with the certificate to be imported.
      • Public Certificate : Click the Browse button and select the signed public portion of the server certificate corresponding to the private key.
      • CA Certificate/Chain : Click the Browse button and select the certificate or chain of the signing authority for the public certificate.
      Figure. SSL Certificate Window: Select Files Click to enlarge
      To meet the high security standards of NIST SP800-131a compliance and the requirements of RFC 6460 for NSA Suite B, and to supply optimal performance for encryption, the certificate import process validates that the correct signature algorithm is used for a given key/certificate pair. Refer to the following table for the proper set of key types, sizes/curves, and signature algorithms. The CA must sign all public certificates with the proper type, size/curve, and signature algorithm for the import process to validate successfully.
      Note: There is no specific requirement for the subject name of the certificates (subject alternative names (SAN) or wildcard certificates are supported in Prism).
      Table 1. Recommended Key Configurations
      Key Type Size/Curve Signature Algorithm
      RSA 4096 SHA256-with-RSAEncryption
      RSA 2048 SHA256-with-RSAEncryption
      EC DSA 256 prime256v1 ecdsa-with-sha256
      EC DSA 384 secp384r1 ecdsa-with-sha384
      EC DSA 521 secp521r1 ecdsa-with-sha512
      Note: RSA 4096 bit certificates might not work with certain AOS and Prism Central releases. Please see the release notes for your AOS and Prism Central versions. Specifying an RSA 4096 bit certificate might cause multiple cluster services to restart frequently. To work around the issue, see KB 12775.
      You can use the cat command to concatenate a list of CA certificates into a chain file.
      $ cat signer.crt inter.crt root.crt > server.cert
      Order is essential. The chain must begin with the certificate of the signer and end with the root CA certificate as the final entry.
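You can sanity-check the result before importing. The following is a sketch assuming the file names from the example above plus server.key for the private key (the modulus comparison applies to RSA keys; for EC keys, compare the public keys instead):

```shell
# Verify that the signer certificate chains through the intermediate
# to the root CA (file names follow the cat example above).
openssl verify -CAfile root.crt -untrusted inter.crt signer.crt

# Confirm the RSA private key matches the public certificate:
# the two digests must be identical.
openssl x509 -noout -modulus -in signer.crt | openssl sha256
openssl rsa  -noout -modulus -in server.key | openssl sha256
```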

Results

After generating or uploading the new certificate, the interface gateway restarts. If the certificate and credentials are valid, the interface gateway uses the new certificate immediately, which means your browser session (and all other open browser sessions) becomes invalid until you reload the page and accept the new certificate. If anything is wrong with the certificate (such as a corrupted file or wrong certificate type), the new certificate is discarded, and the system reverts to the original default certificate provided by Nutanix.
Note: The system holds only one custom SSL certificate. If a new certificate is uploaded, it replaces the existing certificate. The previous certificate is discarded.

Replacing a Certificate

Nutanix simplifies certificate replacement to support Certificate Authority (CA) based chains of trust. Nutanix recommends that you replace the default self-signed certificate with a CA-signed certificate.

Procedure

  1. Log in to Prism and click the gear icon.
  2. Click SSL Certificate .
  3. Select Replace Certificate to replace the certificate.
  4. Do one of the following.
    • Select Regenerate self signed certificate to generate a new self-signed certificate.
      Note: This option automatically generates and applies a certificate.
    • Select Import key and certificate to import the custom key and certificate. RSA 4096 bit, RSA 2048 bit, Elliptic Curve DSA 256 bit, and Elliptic Curve DSA 384 bit types of key and certificate are supported.

    The following files are required to import the key and certificate, and each must be PEM encoded.

    • The private key associated with the certificate. The following section describes generating a private key in detail.
    • The signed public portion of the server certificate corresponding to the private key.
    • The CA certificate or chain of the signing authority for the certificate.
    Note:

    You must obtain the Public Certificate and CA Certificate/Chain from the certificate authority.

    Figure. Importing Certificate Click to enlarge

    Generating an RSA 4096 and RSA 2048 private key

    Tip: You can run the OpenSSL commands for generating private key and CSR on a Linux client with OpenSSL installed.
    Note: Some OpenSSL command parameters might not be supported on older OpenSSL versions and require OpenSSL version 1.1.1 or above to work.
    • Run the following OpenSSL command to generate an RSA 4096 private key and the Certificate Signing Request (CSR).
      openssl req -out server.csr -new -newkey rsa:4096 \
              -nodes -sha256 -keyout server.key
    • Run the following OpenSSL command to generate an RSA 2048 private key and the Certificate Signing Request (CSR).
      openssl req -out server.csr -new -newkey rsa:2048 \
              -nodes -sha256 -keyout server.key

      After you run the openssl command, the system prompts you for details that are incorporated into your certificate. The mandatory fields are Country Name, State or Province Name, and Organization Name. The optional fields are Locality Name, Organizational Unit Name, Email Address, and Challenge Password.

    Nutanix recommends including a DNS name for all CVMs in the certificate using the Subject Alternative Name (SAN) extension. This avoids SSL certificate errors when you access a CVM by direct DNS instead of the shared cluster IP. This example shows how to include a DNS name while generating an RSA 4096 private key:

    openssl req -out server.csr -new -newkey rsa:4096 -sha256 -nodes \
    -addext "subjectAltName = DNS:example.com" \
    -keyout server.key

    For a 3-node cluster you can provide DNS name for all three nodes in a single command. For example:

    openssl req -out server.csr -new -newkey rsa:4096 -sha256 -nodes \
    -addext "subjectAltName = DNS:example1.com,DNS:example2.com,DNS:example3.com" \
    -keyout server.key

    If you have added a SAN ( subjectAltName ) extension to your certificate, then every time you add or remove a node from the cluster, you must include the updated list of DNS names when you generate or sign a new certificate.
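    The commands above prompt interactively for the subject fields. As a minimal sketch, the mandatory fields can also be supplied non-interactively with -subj, combined with the SAN extension (requires OpenSSL 1.1.1 or later for -addext). All host names and subject values below are placeholders.

```shell
# Non-interactive CSR generation: -subj supplies the mandatory subject fields,
# -addext adds the SAN entries (all names here are placeholders)
openssl req -out server.csr -new -newkey rsa:2048 -nodes -sha256 \
  -keyout server.key \
  -subj "/C=US/ST=California/O=Example Org" \
  -addext "subjectAltName = DNS:cvm1.example.com,DNS:cvm2.example.com,DNS:cvm3.example.com"

# Confirm the subject and SAN entries before sending the CSR to the CA
openssl req -in server.csr -noout -subject
openssl req -in server.csr -noout -text | grep "DNS:"
```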

    Generating an EC DSA 256 and EC DSA 384 private key

    • Run the following OpenSSL command to generate an EC DSA 256 private key and the Certificate Signing Request (CSR).
      openssl ecparam -out dsakey.pem -name prime256v1 -genkey
      openssl req -out dsacert.csr -new -key dsakey.pem -nodes -sha256
    • Run the following OpenSSL command to generate an EC DSA 384 private key and the Certificate Signing Request (CSR).
      openssl ecparam -out dsakey.pem -name secp384r1 -genkey
      openssl req -out dsacert.csr -new -key dsakey.pem -nodes -sha384
      
    Note: To meet the high security standards of NIST SP800-131a compliance and the requirements of RFC 6460 for NSA Suite B, and to provide optimal encryption performance, the certificate import process validates that the correct signature algorithm is used for a given key and certificate pair.
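    Because the import process validates the key/curve combination, it can help to confirm the named curve of an EC key locally before creating the CSR. A short sketch, using the same illustrative file name as the steps above:

```shell
# Generate an EC DSA 256 key and confirm the named curve before creating the CSR
openssl ecparam -out dsakey.pem -name prime256v1 -genkey

# The output should include the curve name (prime256v1)
openssl ec -in dsakey.pem -noout -text | grep "ASN1 OID"
```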
  5. If the CA chain certificate provided by the certificate authority is not in a single file, then run the following command to concatenate the list of CA certificates into a chain file.
    cat signer.crt inter.crt root.crt > server.cert
    Note: The chain must start with the certificate of the signer and end with the root CA certificate.
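    You can sanity-check a chain with openssl verify before uploading it. The following self-contained sketch builds a throwaway one-level chain so the command sequence is runnable as-is; all file and subject names are illustrative. With an intermediate CA, pass it via -untrusted (for example, openssl verify -CAfile root.crt -untrusted inter.crt signer.crt).

```shell
# Throwaway root CA (the default openssl config marks "req -x509" certs as CA:TRUE)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
  -subj "/O=Demo Root CA" -keyout root.key -out root.crt

# Server key and CSR, then sign the CSR with the root CA
openssl req -new -newkey rsa:2048 -nodes -sha256 \
  -subj "/O=Demo Server" -keyout signer.key -out signer.csr
openssl x509 -req -in signer.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -sha256 -days 1 -out signer.crt

# Concatenate signer-first, root-last, as the step above requires
cat signer.crt root.crt > server.cert

# Verify the leaf certificate against the CA before uploading
openssl verify -CAfile root.crt signer.crt
```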
  6. Browse and add the Private Key, Public Certificate, and CA Certificate/Chain.
  7. Click Import Files .

What to do next

Prism restarts and you must log in again to use the application.

Exporting an SSL Certificate for Third-party Backup Applications

Nutanix allows you to export an SSL certificate for Prism Element on a Nutanix cluster and use it with third-party backup applications.

Procedure

  1. Log on to a Controller VM in the cluster using SSH.
  2. Run the following command to obtain the virtual IP address of the cluster:
    nutanix@cvm$ ncli cluster info

    The current cluster configuration is displayed.

        Cluster Id           : 0001ab12-abcd-efgh-0123-012345678m89::123456
        Cluster Uuid         : 0001ab12-abcd-efgh-0123-012345678m89
        Cluster Name         : three
        Cluster Version      : 6.0
        Cluster Full Version : el7.3-release-fraser-6.0-a0b1c2345d6789ie123456fg789h1212i34jk5lm6
        External IP address  : 10.10.10.10
        Node Count           : 3
        Block Count          : 1
        . . . . 
    Note: The external IP address in the output is the virtual IP address of the cluster.
  3. Run the following command to enter into the Python prompt:
    nutanix@cvm$ python

    The Python prompt appears.

  4. Run the following command to import the SSL library.
    >>> import ssl
  5. From the Python console, run the following command to print the SSL certificate.
    >>> print ssl.get_server_certificate(('virtual_IP_address', 9440), ssl_version=ssl.PROTOCOL_TLSv1_2)
    Example: Refer to the following example where the virtual_IP_address value is replaced by 10.10.10.10.
    >>> print ssl.get_server_certificate(('10.10.10.10', 9440), ssl_version=ssl.PROTOCOL_TLSv1_2)
    The SSL certificate is displayed on the console.
    -----BEGIN CERTIFICATE-----
    0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
    23456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123
    456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz012345
    6789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567
    89ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
    ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789AB
    CDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCD
    EFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEF
    GHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGH
    IJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJ
    KLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKL
    MNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMN
    OPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOP
    QRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQR
    STUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRST
    UVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUV
    WXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWX
    YZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZab
    cdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcd
    efghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef
    ghij
    -----END CERTIFICATE-----

Controlling Cluster Access

About this task

Nutanix supports the Cluster lockdown feature. This feature enables key-based SSH access to the Controller VM and AHV on the Host (only for nutanix/admin users).

Enabling cluster lockdown mode ensures that password authentication is disabled and only the keys that you provide can be used to access cluster resources, thus making the cluster more secure.

You can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access.
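The key pair can be created with standard tooling on any workstation. A minimal sketch using ssh-keygen (assumed available; the file name lockdown_key is illustrative, and ECDSA is one of the key types Prism accepts):

```shell
# Create an ECDSA key pair for cluster lockdown.
# -N "" sets an empty passphrase for this sketch only;
# use a passphrase-protected key in production.
ssh-keygen -t ecdsa -b 256 -N "" -f lockdown_key

# Paste the contents of the .pub file into the Key field in Prism
cat lockdown_key.pub
```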

To control key-based SSH access to the cluster, do the following:
Note: Use this procedure to lock down SSH access to both the Controller VM and the hypervisor host.

Procedure

  1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.
    The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.
    Figure. Cluster Lockdown Window Click to enlarge
  2. To disable (or enable) remote login access, uncheck (check) the Enable Remote Login with Password box.
    Remote login access is enabled by default.
  3. To add a new public key, click the New Public Key button and then do the following in the displayed fields:
    1. Name : Enter a key name.
    2. Key : Enter (paste) the key value into the field.
    Note: Prism supports the following key types.
    • RSA
    • ECDSA
    3. Click the Save button (lower right) to save the key and return to the main Cluster Lockdown window.
    There are no public keys available by default, but you can add any number of public keys.
  4. To delete a public key, click the X on the right of that key line.
    Note: Deleting all the public keys and disabling remote login access locks down the cluster from SSH access.

Data-at-Rest Encryption

Nutanix provides an option to secure data while it is at rest using either self-encrypted drives or software-only encryption and key-based access management (cluster's native or external KMS for software-only encryption).

Encryption Methods

Nutanix provides you with the following options to secure your data.

  • Self Encrypting Drives (SED) Encryption - You can use a combination of SEDs and an external KMS to secure your data while it is at rest.
  • Software-only Encryption - Nutanix AOS uses the AES-256 encryption standard to encrypt your data. Once enabled, software-only data-at-rest encryption cannot be disabled, thus protecting against accidental data leaks due to human errors. Software-only encryption supports both Nutanix Native Key Manager (local and remote) and External KMS to secure your keys.

Note the following points regarding data-at-rest encryption.

  • Encryption is supported for AHV, ESXi, and Hyper-V.
    • For ESXi and Hyper-V, software-only encryption can be implemented at a cluster level or container level. For AHV, encryption can be implemented at the cluster level only.
  • Nutanix recommends using cluster-level encryption. With the cluster-level encryption, the administrative overhead of selecting different containers for the data storage gets eliminated.
  • Encryption cannot be disabled once it is enabled at a cluster level or container level.
  • Encryption can be implemented on an existing cluster with data that exists. If encryption is enabled on an existing cluster (AHV, ESXi, or Hyper-V), the unencrypted data is transformed into an encrypted format in a low-priority background task that is designed not to interfere with other workloads running in the cluster.
  • Data can be encrypted using either self-encrypted drives (SEDs) or software-only encryption. You can change the encryption method from SEDs to software-only. You can perform the following configurations.
    • For ESXi and Hyper-V clusters, you can switch from SEDs and External Key Management (EKM) combination to software-only encryption and EKM combination. First, you must disable the encryption in the cluster where you want to change the encryption method. Then, select the cluster and enable encryption to transform the unencrypted data into an encrypted format in the background.
    • For AHV, background encryption is supported.
  • Once the task to encrypt a cluster begins, you cannot cancel the operation. Even if you stop and restart the cluster, the system resumes the operation.
  • In the case of mixed clusters with ESXi and AHV nodes, where the AHV nodes are used for storage only, the encryption policies consider the cluster as an ESXi cluster. So, the cluster-level and container-level encryption are available.
  • You can use a combination of SED and non-SED drives in a cluster. After you encrypt a cluster using the software-only encryption, all the drives are considered as unencrypted drives. In case you switch from the SED encryption to the software-only encryption, you can add SED or non-SED drives to the cluster.
  • Data is not encrypted when it is replicated to another cluster. You must enable the encryption for each cluster. Data is encrypted as a part of the write operation and decrypted as a part of the read operation. During the replication process, the system reads, decrypts, and then sends the data over to the other cluster. You can use a third-party network solution if there is a requirement to encrypt the data during transmission to another cluster.
  • Software-only encryption does not impact the data efficiency features such as deduplication, compression, erasure coding, zero block suppression, and so on. The software encryption is the last data transformation performed. For example, during the write operation, compression is performed first, followed by encryption.

Key Management

Nutanix supports a Native Key Management Server, also called the Local Key Manager (LKM), which avoids dependency on an External Key Manager (EKM). The cluster-localized Key Management Service requires a minimum of three nodes in the cluster and is supported only for software-only encryption. Therefore, 1-node and 2-node clusters can use either the Native KMS (remote) option or an EKM.

The following types of keys are used for encryption.

  • Data Encryption Key (DEK) - A symmetric key, such as AES-256, that is used to encrypt the data.
  • Key Encryption Key (KEK) - This key is used to encrypt or decrypt the DEK.

Note the following points regarding the key management.

  • Nutanix does not support the use of the Local Key Manager with a third party External Key Manager.
  • Dual encryption (both SED and software-only encryption) requires an EKM. For more information, see Configuring Dual Encryption.
  • You can switch from an EKM to the LKM, and vice versa. For more information, see Switching between Native Key Manager and External Key Manager.
  • Rekey of keys stored in the Native KMS is supported for the Leader Keys. For more information, see Changing Key Encryption Keys (SEDs) and Changing Key Encryption Keys (Software Only).
  • You must back up the keys stored in the Native KMS. For more information, see Backing up Keys.
  • You must back up the encryption keys whenever you create a new container or remove an existing container. Nutanix Cluster Check (NCC) checks the status of the backup and sends an alert if you do not take a backup when creating or removing a container.

Data-at-Rest Encryption (SEDs)

For customers who require enhanced data security, Nutanix provides a data-at-rest security option using Self Encrypting Drives (SEDs) included in the Ultimate license.

Note: If you are running the AOS Pro License on G6 platforms and above, you can use SED encryption by installing an add-on license.

The following features are supported:

  • Data is encrypted on all drives at all times.
  • Data is inaccessible in the event of drive or node theft.
  • Data on a drive can be securely destroyed.
  • A key authorization method allows password rotation at arbitrary times.
  • Protection can be enabled or disabled at any time.
  • No performance penalty is incurred despite encrypting all data.
  • Re-key of the leader encryption key (MEK) at arbitrary times is supported.
Note: If an SED cluster is present, then when you configure data-at-rest encryption, you can choose either data-at-rest encryption using SEDs or data-at-rest encryption using AOS.
Figure. SED and AOS Options Click to enlarge

Note: This solution provides enhanced security for data on a drive, but it does not secure data in transit.

Data Encryption Model

To accomplish these goals, Nutanix implements a data security configuration that uses SEDs with keys maintained through a separate key management device. Nutanix uses open standards (TCG and KMIP protocols) and FIPS validated SED drives for interoperability and strong security.

Figure. Cluster Protection Overview Click to enlarge Graphical overview of the Nutanix data encryption methodology

This configuration involves the following workflow:

  1. The security implementation begins by installing SEDs for all data drives in a cluster.

    The drives are FIPS 140-2 validated and use FIPS 140-2 validated cryptographic modules.

    Creating a new cluster that includes SEDs only is straightforward, but an existing cluster can be converted to support data-at-rest encryption by replacing the existing drives with SEDs (after migrating all the VMs/vDisks off of the cluster while the drives are being replaced).

    Note: Contact Nutanix customer support for assistance before attempting to convert an existing cluster. A non-protected cluster can contain both SED and standard drives, but Nutanix does not support a mixed cluster when protection is enabled. All the disks in a protected cluster must be SED drives.
  2. Data on the drives is always encrypted but read or write access to that data is open. By default, the access to data on the drives is protected by the in-built manufacturer key. However, when data protection for the cluster is enabled, the Controller VM must provide the proper key to access data on a SED. The Controller VM communicates with the SEDs through a Trusted Computing Group (TCG) Security Subsystem Class (SSC) Enterprise protocol.

    A symmetric data encryption key (DEK) such as AES 256 is applied to all data being written to or read from the disk. The key is known only to the drive controller and never leaves the physical subsystem, so there is no way to access the data directly from the drive.

    Another key, known as a key encryption key (KEK), is used to encrypt/decrypt the DEK and authenticate to the drive. (Some vendors call this the authentication key or PIN.)

    Each drive has a separate KEK that is generated through the FIPS compliant random number generator present in the drive controller. The KEK is 32 bytes long to resist brute force attacks. The KEKs are sent to the key management server for secure storage and later retrieval; they are not stored locally on the node (even though they are generated locally).

    In addition to the above, the leader encryption key (MEK) is used to encrypt the KEKs.

    Each node maintains a set of certificates and keys in order to establish a secure connection with the external key management server.

  3. Keys are stored in a key management server that is outside the cluster, and the Controller VM communicates with the key management server using the Key Management Interoperability Protocol (KMIP) to upload and retrieve drive keys.

    Only one key management server device is required, but it is recommended that multiple devices are employed so the key management server is not a potential single point of failure. Configure the key manager server devices to work in clustered mode so they can be added to the cluster configuration as a single entity that is resilient to a single failure.

  4. When a node experiences a full power off and power on (and cluster protection is enabled), the controller VM retrieves the drive keys from the key management server and uses them to unlock the drives.

    If the Controller VM cannot get the correct keys from the key management server, it cannot access data on the drives.

    If a drive is re-seated, it becomes locked.

    If a drive is stolen, the data is inaccessible without the KEK (which cannot be obtained from the drive). If a node is stolen, the key management server can revoke the node certificates to ensure they cannot be used to access data on any of the drives.

Preparing for Data-at-Rest Encryption (External KMS for SEDs and Software Only)

About this task

Caution: DO NOT HOST A KEY MANAGEMENT SERVER VM ON THE ENCRYPTED CLUSTER THAT IS USING IT!

Doing so could result in complete data loss if there is a problem with the VM while it is hosted in that cluster.

If you are using an external KMS for encryption using AOS, preparation steps outside the web console are required. The information in this section is applicable if you choose to use an external KMS for configuring encryption.

You must install the license of the external key manager for all nodes in the cluster. See Compatibility and Interoperability Matrix for a complete list of the supported key management servers. For instructions on how to configure a key management server, refer to the documentation from the appropriate vendor.

The system accesses the EKM under the following conditions:

  • Starting a cluster

  • Regenerating a key (key regeneration occurs automatically every year by default)

  • Adding or removing a node (only when Self Encrypting Drives is used for encryption)

  • Switching from the Native KMS to an EKM, or from an EKM to the Native KMS

  • Starting or restarting a service (only if Software-based encryption is used)

  • Upgrading AOS (only if Software-based encryption is used)

  • Running the NCC heartbeat check to verify that the EKM is reachable

Procedure

  1. Configure a key management server.

    The key management server devices must be configured into the network so the cluster has access to those devices. For redundant protection, it is recommended that you employ at least two key management server devices, either in active-active cluster mode or stand-alone.

    Note: The key management server must support KMIP version 1.0 or later.
    • SafeNet

      Ensure that Security > High Security > Key Security > Disable Creation and Use of Global Keys is checked.

    • Vormetric

      Set the appliance to compatibility mode. Suite B mode causes the SSL handshake to fail.

  2. Generate a certificate signing request (CSR) for each node in the cluster.
    • The Common Name field of the CSR is populated automatically with unique_node_identifier.nutanix.com to identify the node associated with the certificate.
      Tip: After generating the certificate from Prism, (if required) you can update the custom common name (CN) setting by running the following command using nCLI.
      ncli data-at-rest-encryption-certificate update-csr-information domain-name=abcd.test.com

      In the above command example, replace "abcd.test.com" with the actual domain name.

    • A UID field is populated with a value of Nutanix . This can be useful when configuring a Nutanix group for access control within a key management server, since it is based on fields within the client certificates.
    Note: Some vendors, when doing client certificate authentication, expect the client username to be a field in the CSR. While the CN and UID are pre-generated, many of the user-populated fields can be used instead if desired. If a node-unique field such as CN is chosen, users must be created on a per-node basis for access control. If a cluster-unique field is chosen, customers must create a user for each cluster.
  3. Send the CSRs to a certificate authority (CA) and get them signed.
    • SafeNet

      The SafeNet KeySecure key management server includes a local CA option to generate signed certificates, or you can use other third-party vendors to create the signed certificates.

      To enable FIPS compliance, add user nutanix to the CA that signed the CSR. Under Security > High Security > FIPS Compliance click Set FIPS Compliant .

    Note: Some CAs strip the UID field when returning a signed certificate.
    To comply with FIPS, Nutanix does not support the creation of global keys.

    In the SafeNet KeySecure management console, go to Device > Key Server > Key Server > KMIP Properties > Authentication Settings .

    Then do the following:

    • Set the Username Field in Client Certificate option to UID (User ID) .
    • Set the Client Certificate Authentication option to Used for SSL session and username .

    If you do not configure these settings, the KMS creates global keys and fails to encrypt the clusters or containers using the software-only method.

  4. Upload the signed SSL certificates (one for each node) and the certificate for the CA to the cluster. These certificates are used to authenticate with the key management server.
  5. Generate keys (KEKs) for the SED drives and upload those keys to the key management server.

Configuring Data-at-Rest Encryption (SEDs)

Nutanix offers an option to use self-encrypting drives (SEDs) to store data in a cluster. When SEDs are used, there are several configuration steps that must be performed to support data-at-rest encryption in the cluster.

Before you begin

A separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key manager server devices be configured to work in clustered mode so they can be added to the cluster configuration as a single entity (see step 5) that is resilient to a single failure.

About this task

To configure cluster encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
    The Data at Rest Encryption dialog box appears. Initially, encryption is not configured, and a message to that effect appears.
    Figure. Data at Rest Encryption Screen (initial) Click to enlarge initial screen of the data-at-rest encryption window

  2. Click the Create Configuration button.
    Clicking the Continue Configuration button, the configure it link, or the Edit Config button performs the same action: displaying the Data-at-Rest Encryption configuration page.
  3. Select the Encryption Type as Drive-based Encryption . This option is displayed only when SEDs are detected.
  4. In the Certificate Signing Request Information section, do the following:
    Figure. Certificate Signing Request Section Click to enlarge section of the data-at-rest encryption window for configuring a certificate signing request

    1. Enter appropriate credentials for your organization in the Email , Organization , Organizational Unit , Country Code , City , and State fields and then click the Save CSR Info button.
      The entered information is saved and is used when creating a certificate signing request (CSR). To specify more than one Organization Unit name, enter a comma separated list.
      Note: You can update this information until an SSL certificate for a node is uploaded to the cluster, at which point the information cannot be changed (the fields become read only) without first deleting the uploaded certificates.
    2. Click the Download CSRs button, and then in the new screen click the Download CSRs for all nodes to download a file with CSRs for all the nodes or click a Download link to download a file with the CSR for that node.
      Figure. Download CSRs Screen Click to enlarge screen to download a certificate signing request

    3. Send the files with the CSRs to the desired certificate authority.
      The certificate authority creates the signed certificates and returns them to you. Store the returned SSL certificates and the CA certificate where you can retrieve them in step 6.
      • The certificates must be X.509 format. (DER, PKCS, and PFX formats are not supported.)
      • The certificate and the private key should be in separate files.
  5. In the Key Management Server section, do the following:
    Figure. Key Management Server Section Click to enlarge section of the data-at-rest encryption window for configuring a key management server

    1. Click the Add New Key Management Server button.
    2. In the Add a New Key Management Server screen, enter a name, IP address, and port number for the key management server in the appropriate fields.
      The port is where the key management server is configured to listen for the KMIP protocol. The default port number is 5696. For the complete list of required ports, see Port Reference.
      • If you have configured multiple key management servers in cluster mode, click the Add Address button to provide the addresses for each key management server device in the cluster.
      • If you have stand-alone key management servers, click the Save button. Repeat this step ( Add New Key Management Server button) for each key management server device to add.
        Note: If your key management servers are configured into a leader/follower (active/passive) relationship and the architecture is such that the follower cannot accept write requests, do not add the follower into this configuration. The system sends requests (read or write) to any configured key management server, so both read and write access is needed for key management servers added here.
        Note: To prevent potential configuration problems, always use the Add Address button for key management servers configured into cluster mode. Only a stand-alone key management server should be added as a new server.
      Figure. Add Key Management Server Screen Click to enlarge screen to provide an address for a key management server

    3. To edit any settings, click the pencil icon for that entry in the key management server list to redisplay the add page and then click the Save button after making the change. To delete an entry, click the X icon.
  6. In the Add a New Certificate Authority section, enter a name for the CA, click the Upload CA Certificate button, and select the certificate for the CA used to sign your node certificates (see step 4c). Repeat this step for all CAs that were used in the signing process.
    Figure. Certificate Authority Section Click to enlarge screen to identify and upload a certificate authority certificate

  7. Go to the Key Management Server section (see step 5) and do the following:
    1. Click the Manage Certificates button for a key management server.
    2. In the Manage Signed Certificates screen, upload the node certificates either by clicking the Upload Files button to upload all the certificates in one step or by clicking the Upload link (not shown in the figure) for each node individually.
    3. Test that the certificates are correct either by clicking the Test all nodes button to test the certificates for all nodes in one step or by clicking the Test CS (or Re-Test CS ) link for each node individually. A status of Verified indicates the test was successful for that node.
    Note: Before removing a drive or node from an SED cluster, ensure that the testing is successful and the status is Verified . Otherwise, the drive or node will be locked.
    4. Repeat this step for each key management server.
    Figure. Upload Signed Certificates Screen Click to enlarge screen to upload and test signed certificates

  8. When the configuration is complete, click the Protect button on the opening page to enable encryption protection for the cluster.
    A clear key icon appears on the page.
    Figure. Data-at-Rest Encryption Screen (unprotected) Click to enlarge

    The key turns gold when cluster encryption is enabled.
    Note: If changes are made to the configuration after protection has been enabled, such as adding a new key management server, you must rekey the disks for the modification to take full effect (see Changing Key Encryption Keys (SEDs)).
    Figure. Data-at-Rest Encryption Screen (protected) Click to enlarge

Enabling/Disabling Encryption (SEDs)

Data on a self-encrypting drive (SED) is always encrypted, but enabling or disabling data-at-rest encryption for the cluster determines whether a separate (and secured) key is required to access that data.

About this task

To enable or disable data-at-rest encryption after it has been configured for the cluster (see Configuring Data-at-Rest Encryption (SEDs)), do the following:
Note: The key management server must be accessible to disable encryption.

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, do one of the following:
    • If cluster encryption is currently enabled, click the Unprotect button to disable it.
    • If cluster encryption is currently disabled, click the Protect button to enable it.
    Enabling cluster encryption enforces the use of secured keys to access data on the SEDs in the cluster; disabling cluster encryption means that the data can be accessed without providing a key.

Changing Key Encryption Keys (SEDs)

The key encryption key (KEK) can be changed at any time. Changing the KEK can be useful as a periodic password-rotation security precaution or when a key management server or node becomes compromised. If the key management server is compromised, only the KEK needs to be changed, because the KEK is independent of the data encryption key (DEK). There is no need to re-encrypt any data; only the DEK is re-encrypted.
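Why a rekey is cheap can be sketched with standard openssl commands on any Linux host. This is a conceptual illustration only: openssl stands in for the drive controller and key manager, and every file name and passphrase below is invented for the example.

```shell
set -e
cd "$(mktemp -d)"

openssl rand -hex 32 > dek.hex                       # the drive's DEK (illustrative)
echo "old-kek-passphrase" > kek1.txt                 # current KEK (illustrative)
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek1.txt -in dek.hex -out dek.wrapped

echo "secret data" > data.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.hex -in data.txt -out data.enc
sha256sum data.enc > before.sum                      # fingerprint of the on-disk ciphertext

# Rekey: unwrap the DEK with the old KEK, rewrap it with the new KEK.
echo "new-kek-passphrase" > kek2.txt
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:kek1.txt -in dek.wrapped -out dek.tmp
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek2.txt -in dek.tmp -out dek.wrapped
rm dek.tmp

# The encrypted data itself is untouched by the rekey.
sha256sum -c before.sum
```

The final checksum verification shows that the on-disk ciphertext is byte-for-byte unchanged; only the wrapped copy of the DEK was rewritten.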

About this task

To change the KEKs for a cluster, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select Manage Keys and click the Rekey All Disks button under Hardware Encryption .
    Rekeying a cluster under heavy workloads can result in higher-than-normal I/O latency, and some data may become temporarily unavailable. To continue with the rekey operation, click Confirm Rekey .
    This step resets the KEKs for all the self-encrypting disks in the cluster.
    Note:
    • The Rekey All Disks button appears only when cluster protection is active.
    • If the cluster is already protected and a new key management server is added, you must click the Rekey All Disks button to use this new key management server for storing secrets.
    Figure. Cluster Encryption Screen Click to enlarge

Destroying Data (SEDs)

Data on a self-encrypting drive (SED) is always encrypted, and the data encryption key (DEK) used to read the encrypted data is known only to the drive controller. All data on the drive can effectively be destroyed (that is, made permanently unreadable) by having the controller change the DEK. This is known as a crypto-erase.
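The effect of a crypto-erase can be sketched with openssl on any Linux host. openssl here merely stands in for the drive controller, and the file names are illustrative:

```shell
set -e
cd "$(mktemp -d)"

openssl rand -hex 32 > dek.hex                       # the drive's current DEK (illustrative)
echo "sensitive data" > data.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.hex -in data.txt -out data.enc

# Crypto-erase: the controller discards the old DEK and generates a new one.
openssl rand -hex 32 > dek.hex

# Reads with the new DEK no longer yield the original plaintext.
out=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass file:dek.hex -in data.enc 2>/dev/null || true)
[ "$out" != "sensitive data" ]                       # the old plaintext is unrecoverable
```

Because the ciphertext on the platter is never touched, the erase is effectively instantaneous regardless of drive size.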

About this task

To crypto-erase a SED, do the following:

Procedure

  1. In the web console, go to the Hardware dashboard and select the Diagram tab.
  2. Select the target disk in the diagram (upper section of screen) and then click the Remove Disk button (at the bottom right of the following diagram).

    As part of the disk removal process, the DEK for that disk is automatically cycled on the drive controller. The previous DEK is lost and all new disk reads are indecipherable. The key encryption key (KEK) is unchanged, and the new DEK is protected using the current KEK.

    Note: When a node is removed, all SEDs in that node are crypto-erased automatically as part of the node removal process.
    Figure. Removing a Disk Click to enlarge screen shot of the diagram tab of the hardware dashboard demonstrating how to remove a disk

Data-at-Rest Encryption (Software Only)

For customers who require enhanced data security, Nutanix provides a software-only encryption option for data-at-rest security (SEDs not required) included in the Ultimate license.
Note: On G6 platforms running the AOS Pro license, you can use software encryption by installing an add-on license.
Software encryption using a local key manager (LKM) supports the following features:
  • For AHV, the data can be encrypted at the cluster level. This applies to an empty cluster or a cluster with existing data.
  • For ESXi and Hyper-V, the data can be encrypted at the cluster or container level. The cluster or container can be empty or contain existing data. Consider the following points for container-level encryption:
    • Once you enable container-level encryption, you cannot change the encryption type to cluster-level encryption later.
    • After encryption is enabled, the administrator needs to enable encryption for every new container.
  • Data is encrypted at all times.
  • Data is inaccessible in the event of drive or node theft.
  • Data on a drive can be securely destroyed.
  • The leader encryption key can be re-keyed at arbitrary times.
  • The cluster's native KMS is supported.
Note: In case of mixed hypervisors, only the following combinations are supported.
  • ESXi and AHV
  • Hyper-V and AHV
Note: This solution provides enhanced security for data on a drive, but it does not secure data in transit.

Data Encryption Model

To accomplish these goals, Nutanix implements a data security configuration that uses AOS functionality along with the cluster's native key management server or an external key management server. Nutanix uses open standards (the KMIP protocol) for interoperability and strong security.

Figure. Cluster Protection Overview Click to enlarge graphical overview of the Nutanix data encryption methodology

This configuration involves the following workflow:

  • For software encryption, data protection must be enabled for the cluster before any data is encrypted. Also, the Controller VM must provide the proper key to access the data.
  • A symmetric data encryption key (DEK), such as an AES-256 key, is applied to all data being written to or read from the disk. The key is known only to AOS, so there is no way to access the data directly from the drive.
  • If an external KMS is used:

    Each node maintains a set of certificates and keys in order to establish a secure connection with the key management server.

    Only one key management server device is required, but it is recommended that multiple devices be employed so that the key management server is not a single point of failure. Configure the key management server devices to work in clustered mode so that they can be added to the cluster configuration as a single entity that is resilient to a single failure.
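The envelope-encryption model described above can be sketched with standard openssl commands on any Linux host. This is a conceptual illustration only: openssl stands in for AOS and the key manager, and all file names and passphrases are invented for the example.

```shell
set -e
cd "$(mktemp -d)"

# 1. A random 256-bit data encryption key (DEK).
openssl rand -hex 32 > dek.hex

# 2. Data written to the disk is encrypted with the DEK (AES-256).
echo "guest VM data" > data.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.hex -in data.txt -out data.enc

# 3. The DEK itself is wrapped (encrypted) with the key encryption key (KEK),
#    which is held by the key management server and never stored with the data.
echo "kek-passphrase" > kek.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek.txt -in dek.hex -out dek.wrapped

# Reading the data back requires unwrapping the DEK with the KEK first.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:kek.txt -in dek.wrapped -out dek.out
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:dek.out -in data.enc
```

Because only the wrapped DEK and the ciphertext ever reside with the data, possession of a drive alone does not expose its contents.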

Configuring Data-at-Rest Encryption (Software Only)

Nutanix offers a software-only option to perform data-at-rest encryption in a cluster or container.

Before you begin

  • Nutanix provides the option to choose the KMS type as Native KMS (local), Native KMS (remote), or External KMS.
  • Cluster Localised Key Management Service ( Native KMS (local) ) requires a cluster of at least three nodes. One-node and two-node clusters are not supported.
  • Software encryption with the native KMS is supported for remote office/branch office (ROBO) deployments through the Native KMS (remote) KMS type.
  • For an external KMS, a separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key management server devices be configured to work in clustered mode so that they can be added to the cluster configuration as a single entity that is resilient to a single failure.
    Caution: Do not host a key management server VM on the encrypted cluster that is using it.

    Doing so could result in complete data loss if there is a problem with the VM while it is hosted in that cluster.

    Note: You must install the external key manager license for all nodes in the cluster. See the Compatibility and Interoperability Matrix for a complete list of the supported key management servers. For instructions on how to configure a key management server, refer to the documentation from the appropriate vendor.
  • This feature requires an Ultimate license, or an add-on to the Pro license (for the latest generation of products). Ensure that you have procured the add-on license key before using data-at-rest encryption with AOS; contact the Nutanix Sales team to procure the license.
  • Caution: For security, you cannot disable software-only data-at-rest encryption once it is enabled.

About this task

To configure cluster or container encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
    The Data at Rest Encryption dialog box appears. Initially, encryption is not configured, and a message to that effect appears.
    Figure. Data at Rest Encryption Screen (initial) Click to enlarge initial screen of the data-at-rest encryption window

  2. Click the Create Configuration button.
    Clicking the Continue Configuration button, the configure it link, or the Edit Config button does the same thing: it displays the Data-at-Rest Encryption configuration page.
  3. Select the Encryption Type as Encrypt the entire cluster or Encrypt storage containers . Then click Save Encryption Type .
    Caution: You can enable encryption for the entire cluster or just a container. However, if you enable encryption on a container and an encryption key issue occurs (such as the loss of an encryption key), you can encounter the following:
    • The entire cluster's data is affected, not just the encrypted container.
    • None of the user VMs in the cluster are able to access their data.
    The hardware option is displayed only when SEDs are detected. Otherwise, the software-based encryption type is used by default.
    Figure. Select encryption type Click to enlarge select KMS type

    Note: For ESXi and Hyper-V, the data can be encrypted at the cluster or container level. The cluster or container can be empty or contain existing data. Consider the following points for container-level encryption:
    • Once you enable container-level encryption, you cannot change the encryption type to cluster-level encryption later.
    • After encryption is enabled, the administrator needs to enable encryption for every new container.
    To enable encryption for every new storage container, do the following:
    1. In the web console, select Storage from the pull-down main menu (upper left of screen) and then select the Table and Storage Container tabs.
    2. To enable encryption, select the target storage container and then click the Update link.
      The Update Storage Container window appears.
    3. In the Advanced Settings area, select the Enable check box to enable encryption for the storage container you selected.
      Figure. Update storage container Click to enlarge Selecting encryption check box

    4. Click Save to complete.
  4. Select the Key Management Service.
    To keep the keys safe with the native KMS, select Native KMS (local) or Native KMS (remote) and click Save KMS type . If you select this option, skip to step 9 to complete the configuration.
    Note:
    • Cluster Localised Key Management Service ( Native KMS (local) ) requires a cluster of at least three nodes. One-node and two-node clusters are not supported.
    • For enhanced security of ROBO environments (typically, one-node or two-node clusters), select Native KMS (remote) for software-based encryption of ROBO clusters managed by Prism Central.
      Note: This option is available only if the cluster is registered to Prism Central.
    For external KMS type, select the External KMS option and click Save KMS type . Continue to step 5 for further configuration.
    Figure. Select KMS Type Click to enlarge section of the data-at-rest encryption window for selecting KMS type

    Note: You can switch between the KMS types at a later stage if the specific KMS prerequisites are met. See Switching between Native Key Manager and External Key Manager.
  5. In the Certificate Signing Request Information section, do the following:
    Figure. Certificate Signing Request Section Click to enlarge section of the data-at-rest encryption window for configuring a certificate signing request

    1. Enter appropriate credentials for your organization in the Email , Organization , Organizational Unit , Country Code , City , and State fields and then click the Save CSR Info button.
      The entered information is saved and is used when creating a certificate signing request (CSR). To specify more than one Organizational Unit name, enter a comma-separated list.
      Note: You can update this information until an SSL certificate for a node is uploaded to the cluster, at which point the information cannot be changed (the fields become read only) without first deleting the uploaded certificates.
    2. Click the Download CSRs button, and then in the new screen click the Download CSRs for all nodes to download a file with CSRs for all the nodes or click a Download link to download a file with the CSR for that node.
      Figure. Download CSRs Screen Click to enlarge screen to download a certificate signing request

    3. Send the files with the CSRs to the desired certificate authority.
      The certificate authority creates the signed certificates and returns them to you. Store the returned SSL certificates and the CA certificate where you can retrieve them in steps 7 and 8.
      • The certificates must be in X.509 format. (DER, PKCS, and PFX formats are not supported.)
      • The certificate and the private key must be in separate files.
  6. In the Key Management Server section, do the following:
    Figure. Key Management Server Section Click to enlarge section of the data-at-rest encryption window for configuring a key management server

    1. Click the Add New Key Management Server button.
    2. In the Add a New Key Management Server screen, enter a name, IP address, and port number for the key management server in the appropriate fields.
      The port is where the key management server is configured to listen for the KMIP protocol. The default port number is 5696. For the complete list of required ports, see Port Reference.
      • If you have configured multiple key management servers in cluster mode, click the Add Address button to provide the addresses for each key management server device in the cluster.
      • If you have stand-alone key management servers, click the Save button. Repeat this step ( Add New Key Management Server button) for each key management server device you want to add.
        Note: If your key management servers are configured in a leader/follower (active/passive) relationship and the architecture is such that the follower cannot accept write requests, do not add the follower to this configuration. The system sends requests (read or write) to any configured key management server, so both read and write access is needed for the key management servers added here.
        Note: To prevent potential configuration problems, always use the Add Address button for key management servers configured into cluster mode. Only a stand-alone key management server should be added as a new server.
      Figure. Add Key Management Server Screen Click to enlarge screen to provide an address for a key management server

    3. To edit any settings, click the pencil icon for that entry in the key management server list to redisplay the add page and then click the Save button after making the change. To delete an entry, click the X icon.
  7. In the Add a New Certificate Authority section, enter a name for the CA, click the Upload CA Certificate button, and select the certificate for the CA used to sign your node certificates (see step 5c). Repeat this step for all CAs that were used in the signing process.
    Figure. Certificate Authority Section Click to enlarge screen to identify and upload a certificate authority certificate

  8. Go to the Key Management Server section (see step 6) and do the following:
    1. Click the Manage Certificates button for a key management server.
    2. In the Manage Signed Certificates screen, upload the node certificates either by clicking the Upload Files button to upload all the certificates in one step or by clicking the Upload link (not shown in the figure) for each node individually.
    3. Test that the certificates are correct either by clicking the Test all nodes button to test the certificates for all nodes in one step or by clicking the Test CS (or Re-Test CS ) link for each node individually. A status of Verified indicates the test was successful for that node.
    4. Repeat this step for each key management server.
    Note: Before removing a drive or node from an SED cluster, ensure that the testing is successful and the status is Verified . Otherwise, the drive or node will be locked.
    Figure. Upload Signed Certificates Screen Click to enlarge screen to upload and test signed certificates

  9. When the configuration is complete, click the Enable Encryption button.
    The Enable Encryption window is displayed.
    Figure. Data-at-Rest Encryption Screen (unprotected) Click to enlarge

    Caution: To help ensure that your data is secure, you cannot disable software-only data-at-rest encryption once it is enabled. Nutanix recommends regularly backing up your data, encryption keys, and key management server.
  10. Enter ENCRYPT .
  11. Click the Encrypt button.
    The data-at-rest encryption is enabled. To view the status of the encrypted cluster or container, go to Data at Rest Encryption in the Settings menu.

    When you enable encryption, a low-priority background task runs to encrypt all the unencrypted data. This task is designed to take advantage of any available CPU capacity to encrypt the unencrypted data within a reasonable time. If the system is occupied with other workloads, the background task consumes less CPU capacity. Depending on the amount of data in the cluster, the background task can take 24 to 36 hours to complete.

    Note: If changes are made to the configuration after protection has been enabled, such as adding a new key management server, you must perform a rekey operation for the modification to take full effect. For an EKM, rekey to change the KEKs stored in the EKM. For an LKM, rekey to change the leader key used by the native key manager. See Changing Key Encryption Keys (Software Only) for details.
    Note: Once the task to encrypt a cluster begins, you cannot cancel the operation. Even if you stop and restart the cluster, the system resumes the operation.
    Figure. Data-at-Rest Encryption Screen (protected) Click to enlarge
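The certificate requirements from step 5 can be sanity-checked with openssl before uploading. The commands below generate a throwaway self-signed certificate purely to illustrate the checks; in practice you would run the same checks against the CA-signed node certificates, whose file names will differ.

```shell
set -e
cd "$(mktemp -d)"

# Stand-in for a CA-signed node certificate (self-signed here, for illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -keyout node1.key -out node1.crt \
  -subj "/C=US/O=Example/CN=node1" -days 1

# The certificate must parse as X.509; DER, PKCS, and PFX formats are not accepted.
openssl x509 -in node1.crt -noout -subject -issuer

# The certificate and the private key must be in separate files:
! grep -q "PRIVATE KEY" node1.crt     # no key material bundled into the certificate
grep -q "PRIVATE KEY" node1.key       # the key lives in its own file
```

If `openssl x509` reports an error such as "unable to load certificate", convert or re-request the certificate before uploading it.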

Switching between Native Key Manager and External Key Manager

After software encryption has been established, Nutanix supports switching the KMS type from the External Key Manager to the Native Key Manager, or from the Native Key Manager to an External Key Manager, without any downtime.

Note:
  • The Native KMS requires a cluster of at least three nodes.
  • For an external KMS, a separate key management server is required to store the keys outside of the cluster. Each key management server device must be configured and addressable through the network. It is recommended that multiple key management server devices be configured to work in clustered mode so that they can be added to the cluster configuration as a single entity that is resilient to a single failure.
  • It is recommended that you back up and save the encryption keys with identifiable names before and after changing the KMS type. For backing up keys, see Backing up Keys.
To change the KMS type, change the KMS selection by editing the encryption configuration. For details, see step 4 in the Configuring Data-at-Rest Encryption (Software Only) section.
Figure. Select KMS type Click to enlarge select KMS type

After you change the KMS type and save the configuration, the encryption keys are re-generated on the selected KMS storage medium and data is re-encrypted with the new keys. The old keys are destroyed.
Note: This operation completes in a few minutes, depending on the number of encrypted objects and network speed.

Changing Key Encryption Keys (Software Only)

The key encryption key (KEK) can be changed at any time. Changing the KEK can be useful as a periodic password-rotation security precaution or when a key management server or node becomes compromised. If the key management server is compromised, only the KEK needs to be changed, because the KEK is independent of the data encryption key (DEK). There is no need to re-encrypt any data; only the DEK is re-encrypted.

About this task

To change the KEKs for a cluster, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select Manage Keys and click the Rekey button under Software Encryption .
    Note: The Rekey button appears only when cluster protection is active.
    Note: If the cluster is already protected and a new key management server is added, you must click the Rekey button to use this new key management server for storing secrets.
    Figure. Cluster Encryption Screen Click to enlarge

    Note: The system automatically regenerates the leader key yearly.

Destroying Data (Software Only)

Data on the AOS cluster is always encrypted, and the data encryption key (DEK) used to read the encrypted data is known only to AOS. All data on the drive can effectively be destroyed (that is, made permanently unreadable) by deleting the container or destroying the cluster. This is known as a crypto-erase.

About this task

Note: To help ensure that your data is secure, you cannot disable software-only data-at-rest encryption once it is enabled. Nutanix recommends regularly backing up your data, encryption keys, and key management server.

To crypto-erase the container or cluster, do the following:

Procedure

  1. Delete the storage container or destroy the cluster.
    • For information on how to delete a storage container, see Modifying a Storage Container in the Prism Web Console Guide .
    • For information on how to destroy a cluster, see Destroying a Cluster in the Acropolis Advanced Administration Guide .
    Note:

    When you delete a storage container, the Curator service scans for and automatically deletes the DEKs and KEKs.

    When you destroy a cluster:

    • the Native Key Manager (local) destroys the master key shares and the encrypted DEKs/KEKs.
    • the Native Key Manager (remote) retains the root key on the PC if the cluster is still registered to a PC when it is destroyed. You must unregister a cluster from the PC and then destroy the cluster to delete the root key.
    • the External Key Manager deletes the encrypted DEKs. However, the KEKs remain on the EKM. You must use an external key manager UI to delete the KEKs.
  2. Delete the key backup files, if any.

Switching from SED-EKM to Software-LKM

This section describes the steps to switch from the SED and external KMS combination to the software-only and LKM combination.

About this task

To switch from SED-EKM to Software-LKM, do the following.

Procedure

  1. Perform the steps for the software-only encryption with External KMS. For more information, see Configuring Data-at-Rest Encryption (Software Only).
    After the background task completes, all the data gets encrypted by the software. The time taken to complete the task depends on the amount of data and foreground I/O operations in the cluster.
  2. Disable the SED encryption. Ensure that all the disks are unprotected.
    For more information, see Enabling/Disabling Encryption (SEDs).
  3. Switch the key management server from the External KMS to Local Key Manager. For more information, see Switching between Native Key Manager and External Key Manager.

Configuring Dual Encryption

About this task

Dual encryption protects the data on the clusters using both SED and software-only encryption. An external key manager is used to store the keys for dual encryption; the Native KMS is not supported.

To configure dual encryption, do the following:

Procedure

  1. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  2. In the Cluster Encryption page, select the check boxes to enable both Drive-based and Software-based encryption.
  3. Click Save Encryption Type .
    Figure. Dual Encryption Click to enlarge selecting encryption types

  4. Continue with the rest of the encryption configuration, see:
    • Configuring Data-at-Rest Encryption (Software Only)
    • Configuring Data-at-Rest Encryption (SEDs)

Backing up Keys

About this task

You can take a backup of encryption keys:

  • when you enable Software-only Encryption for the first time
  • after you regenerate the keys

Backing up encryption keys is critical in the unlikely event that keys become corrupted.

You can download the key backup file for a single cluster from Prism Element (PE) or for all clusters from Prism Central (PC). To download the key backup file for all clusters, see Taking a Consolidated Backup of Keys (Prism Central) .

To download the key backup file for a cluster, do the following:

Procedure

  1. Log on to the Prism Element web console.
  2. Click the gear icon in the main menu and then select Data at Rest Encryption in the Settings page.
  3. In the Cluster Encryption page, select Manage Keys .
  4. Enter and confirm the password.
  5. Click the Download Key Backup button.

    The backup file is saved in the default download location on your local machine.

    Note: Ensure you move the backup key file to a safe location.
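One simple way to safeguard the downloaded backup, sketched here with standard Linux tools, is to record a checksum that you can verify before ever relying on the file. The file name below is a stand-in; Prism determines the real name of the download.

```shell
set -e
cd "$(mktemp -d)"

# Stand-in for the file Prism downloads; the real name includes a timestamp.
echo "example key material" > encryption_key_backup_2022-07-25

# Record a checksum alongside the copy you move to safe storage.
sha256sum encryption_key_backup_2022-07-25 > backup.sha256

# Later, before using the backup for recovery, verify that it is intact:
sha256sum -c backup.sha256
```

Store the backup file and its checksum on media outside the cluster, for the same reason a key management server VM must not run on the cluster it protects.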

Taking a Consolidated Backup of Keys (Prism Central)

If you are using the Native KMS option with software encryption for your clusters, you can take a consolidated backup of all the keys from Prism Central.

About this task

To take a consolidated backup of keys for software encryption-enabled clusters (Native KMS-only), do the following:

Procedure

  1. Log on to the Prism Central web console.
  2. Click the hamburger icon, then select Clusters > List view.
  3. Select a cluster, go to Actions , then select Manage & Backup Keys .
  4. Download the backup keys:
    1. In Password , enter your password.
    2. In Confirm Password , reenter your password.
    3. To change the encryption key, select the Rekey Encryption Key (KEK) check box.
    4. To download the backup key, click Backup Key .
    Note: Ensure that you move the backup key file to a safe location.

Importing Keys

You can import the encryption keys from a backup. Note the specific commands in this topic if you backed up your keys to an external key manager (EKM).

About this task

Note: Nutanix recommends that you contact Nutanix Support for this operation. Extended cluster downtime might result if you perform this task incorrectly.

Procedure

  1. Log on to any Controller VM in the cluster with SSH.
  2. Retrieve the encryption keys stored on the cluster and verify that all the keys you want to retrieve are listed.
    In this example, the password is Nutanix.123 , and date is the timestamp portion of the backup file name.
    mantle_recovery_util --backup_file_path=/home/nutanix/encryption_key_backup_date \
    --password=Nutanix.123 --list_key_ids=true 
  3. Import the keys into the cluster.
    mantle_recovery_util --backup_file_path=/home/nutanix/key_backup \
    --password=Nutanix.123 --interactive_mode 
  4. If you are using an external key manager such as IBM Security Key Lifecycle Manager, Gemalto Safenet, or Vormetric Data Security Manager, use the --store_kek_remotely option to import the keys into the cluster.
    In this example, date is the timestamp portion of the backup file name.
    mantle_recovery_util --backup_file_path path/encryption_key_backup_date \
     --password key_password --store_kek_remotely

Securing Traffic Through Network Segmentation

Network segmentation enhances security, resilience, and cluster performance by isolating a subset of traffic to its own network.

You can achieve traffic isolation in one or more of the following ways:

Isolating Backplane Traffic by using VLANs (Logical Segmentation)
You can separate management traffic from storage replication (or backplane) traffic by creating a separate network segment (LAN) for storage replication. For more information about the types of traffic seen on the management plane and the backplane, see Traffic Types In a Segmented Network.

To enable the CVMs in a cluster to communicate over these separated networks, the CVMs are multihomed. Multihoming is facilitated by the addition of a virtual network interface card (vNIC) to the Controller VM and placing the new interface on the backplane network. Additionally, the hypervisor is assigned an interface on the backplane network.

The traffic associated with the CVM interfaces and host interfaces on the backplane network can be secured further by placing those interfaces on a separate VLAN.

In this type of segmentation, both network segments continue to use the same external bridge and therefore use the same set of physical uplinks. For physical separation, see Isolating the Backplane Traffic Physically on an Existing Cluster.

Isolating backplane traffic from management traffic requires minimal configuration through the Prism web console. No manual host (hypervisor) configuration steps are required.

For information about isolating backplane traffic, see Isolating the Backplane Traffic Logically on an Existing Cluster (VLAN-Based Segmentation Only).

Isolating Backplane Traffic Physically (Physical Segmentation)

You can physically isolate the backplane traffic (intra-cluster traffic) from the management traffic (Prism, SSH, SNMP) by confining it to a separate vNIC on the CVM and using a dedicated virtual network that has its own physical NICs. This type of segmentation therefore offers true physical separation of the backplane traffic from the management traffic.

You can use Prism to configure the vNIC on the CVM and configure the backplane traffic to communicate over the dedicated virtual network. However, you must first manually configure the virtual network on the hosts and associate it with the physical NICs that it requires for true traffic isolation.

For more information about physically isolating backplane traffic, see Isolating the Backplane Traffic Physically on an Existing Cluster.

Isolating service-specific traffic
You can also secure traffic associated with a service (for example, Nutanix Volumes) by confining its traffic to a separate vNIC on the CVM and using a dedicated virtual network that has its own physical NICs. This type of segmentation therefore offers true physical separation for service-specific traffic.

You can use Prism to create the vNIC on the CVM and configure the service to communicate over the dedicated virtual network. However, you must first manually configure the virtual network on the hosts and associate it with the physical NICs that it requires for true traffic isolation. You need one virtual network for each service you want to isolate. For a list of the services whose traffic you can isolate in the current release, see Cluster Services That Support Traffic Isolation.

For information about isolating service-specific traffic, see Isolating Service-Specific Traffic.

Isolating Stargate-to-Stargate traffic over RDMA
Some Nutanix platforms support remote direct memory access (RDMA) for Stargate-to-Stargate service communication. You can create a separate virtual network for RDMA-enabled network interface cards. If a node has RDMA-enabled NICs, Foundation passes the NICs through to the CVMs during imaging. The CVMs use only the first of the two RDMA-enabled NICs for Stargate-to-Stargate communications. The virtual NIC on the CVM is named rdma0. Foundation does not configure the RDMA LAN. After creating a cluster, you need to enable RDMA by creating an RDMA LAN from the Prism web console. For more information about RDMA support, see Remote Direct Memory Access in the NX Series Hardware Administration Guide .

For information about isolating backplane traffic on an RDMA cluster, see Isolating the Backplane Traffic on an Existing RDMA Cluster.

Traffic Types in a Segmented Network

The traffic entering and leaving a Nutanix cluster can be broadly classified into the following types:

Backplane traffic
Backplane traffic is intra-cluster traffic that is necessary for the cluster to function, and it comprises traffic between CVMs and traffic between CVMs and hosts for functions such as storage RF replication, host management, high availability, and so on. This traffic uses eth2 on the CVM. In AHV, VM live migration traffic is also backplane, and uses the AHV backplane interface, VLAN, and virtual switch when configured. For nodes that have RDMA-enabled NICs, the CVMs use a separate RDMA LAN for Stargate-to-Stargate communications.
Management traffic
Management traffic is administrative traffic, or traffic associated with Prism and SSH connections, remote logging, SNMP, and so on. The current implementation simplifies the definition of management traffic to be any traffic that is not on the backplane network, and therefore also includes communications between user VMs and CVMs. This traffic uses eth0 on the CVM.

Traffic on the management plane can be further isolated per service or feature. An example of this type of traffic is the traffic that the cluster receives from external iSCSI initiators (Nutanix Volumes iSCSI traffic). For a list of services supported in the current release, see Cluster Services That Support Traffic Isolation.

Segmented and Unsegmented Networks

In the default unsegmented network in a Nutanix cluster (ESXi and AHV), the Controller VM has two virtual network interfaces—eth0 and eth1.

Interface eth0 is connected to the default external virtual switch, which is in turn connected to the external network through a bond or NIC team that contains the host physical uplinks.

Interface eth1 is connected to an internal network that enables the CVM to communicate with the hypervisor.

In the unsegmented networks shown below (see the Unsegmented Network - ESXi Cluster and Unsegmented Network - AHV Cluster figures), all external CVM traffic, whether backplane or management, uses interface eth0. These interfaces are on the default VLAN on the default virtual switch.

Figure. Unsegmented Network - ESXi Cluster


Figure. Unsegmented Network - AHV Cluster

If you further isolate service-specific traffic, additional vNICs are created on the CVM. Each service requiring isolation is assigned a dedicated virtual NIC on the CVM. The NICs are named ntnx0, ntnx1, and so on. Each service-specific NIC is placed on a configurable existing or new virtual network (vSwitch or bridge) and a VLAN and IP subnet are specified.
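The ntnx-interface naming described above follows creation order, which can be sketched as follows (an illustrative Python snippet, not a Nutanix API; the service names are placeholders):

```python
# Illustrative sketch only (not a Nutanix API): service-specific CVM vNICs
# are named ntnx0, ntnx1, and so on, in the order the services are isolated.
def service_vnic_names(services):
    """Map each isolated service to its CVM vNIC name, in creation order."""
    return {svc: f"ntnx{i}" for i, svc in enumerate(services)}

# Example: isolating Volumes first and then DR (service names are placeholders)
print(service_vnic_names(["Volumes", "DR"]))  # {'Volumes': 'ntnx0', 'DR': 'ntnx1'}
```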

Network with Segmentation

In a segmented network, management traffic uses CVM interface eth0 and additional services can be isolated to different VLANs or virtual switches. In backplane segmentation, the backplane traffic uses interface eth2. The backplane network uses either the default VLAN or, optionally, a separate VLAN that you specify when segmenting the network. In ESXi, you must select a port group for the new vmkernel interface. In AHV this internal interface is created automatically in the selected virtual switch. For physical separation of the backplane network, create this new port group on a separate virtual switch in ESXi, or select the desired virtual switch in the AHV GUI.

If you want to isolate service-specific traffic such as Volumes or Disaster Recovery as well as backplane traffic, then additional vNICs are needed on the CVM, but no new vmkernel adapters or internal interfaces are required. AOS creates additional vNICs on the CVM. Each service that requires isolation is assigned a dedicated vNIC on the CVM. The NICs are named ntnx0, ntnx1, and so on. Each service-specific NIC is placed on a configurable existing or new virtual network (vSwitch or bridge) and a VLAN and IP subnet are specified.

You can choose to perform backplane segmentation alone, with no other forms of segmentation. You can also choose to use one or more types of service specific segmentation with or without backplane segmentation. In all of these cases, you can choose to segment any service to either the existing, or a new virtual switch for further physical traffic isolation. The combination selected is driven by the security and networking requirements of the deployment. In most cases, the default configuration with no segmentation of any kind is recommended due to simplicity and ease of deployment.

The following figure shows an implementation scenario where the backplane and service specific segmentation is configured with two vSwitches on ESXi hypervisors.

Figure. Backplane and Service Specific Segmentation Configured with two vSwitches on an ESXi Cluster

Here are the CVM to ESXi hypervisor connection details:

  • The eth0 vNIC on the CVM and vmk0 on the host are carrying management traffic and connected to the hypervisor through the existing PGm (portgroup) on vSwitch0.
  • The eth2 vNIC on the CVM and vmk2 on the host are carrying backplane traffic and connected to the hypervisor through a new user created PGb on the existing vSwitch.
  • The ntnx0 vNIC on the CVM is carrying iSCSI traffic and connected to the hypervisor through PGi on the vSwitch1. No new vmkernel adapter is required.
  • The ntnx1 vNIC on the CVM is carrying DR traffic and connected to the hypervisor through PGd on the vSwitch2. Here as well, there is no new vmkernel adapter required.

The following figure shows an implementation scenario where the backplane and service specific segmentation is configured with two vSwitches on AHV hypervisors.

Figure. Backplane and Service Specific Segmentation Configured with two vSwitches on an AHV Cluster

Here are the CVM to AHV hypervisor connection details:

  • The eth0 vNIC on the CVM is carrying management traffic and connected to the hypervisor through the existing vnet0.
  • Other vNICs such as eth2, ntnx0, and ntnx1 are connected to the hypervisor through the auto created interfaces on either the existing or new vSwitch.
Note: In the above figure the interface name 'br0-bp' is read as 'br0-backplane'.

The following table describes the vNIC, port group (PG), VMkernel (vmk), virtual network (vnet), and virtual switch connections between the CVM and the hypervisor in different implementation scenarios. The table captures information for both ESXi and AHV hypervisors:

Table 1. CVM vNIC connections in different implementation scenarios

Backplane Segmentation with 1 vSwitch
  • eth0 (DR, iSCSI, and Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • eth2 (Backplane traffic): on ESXi, a new vmk2 via PGb on vSwitch0 and the CVM vNIC via PGb on vSwitch0; on AHV, auto-created interfaces on bridge br0.

Backplane Segmentation with 2 vSwitches
  • eth0 (Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • eth2 (Backplane traffic): on ESXi, a new vmk2 via PGb on the new vSwitch and the CVM vNIC via PGb on the new vSwitch; on AHV, auto-created interfaces on the new virtual switch.

Service Specific Segmentation for Volumes with 1 vSwitch
  • eth0 (DR, Backplane, and Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • ntnx0 (iSCSI (Volumes) traffic): on ESXi, the CVM vNIC via PGi on vSwitch0; on AHV, an auto-created interface on the existing br0.

Service Specific Segmentation for Volumes with 2 vSwitches
  • eth0 (DR, Backplane, and Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • ntnx0 (iSCSI (Volumes) traffic): on ESXi, the CVM vNIC via PGi on the new vSwitch; on AHV, an auto-created interface on the new virtual switch.

Service Specific Segmentation for DR with 1 vSwitch
  • eth0 (iSCSI, Backplane, and Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • ntnx1 (DR traffic): on ESXi, the CVM vNIC via PGd on vSwitch0; on AHV, an auto-created interface on the existing br0.

Service Specific Segmentation for DR with 2 vSwitches
  • eth0 (iSCSI, Backplane, and Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • ntnx1 (DR traffic): on ESXi, the CVM vNIC via PGd on the new vSwitch; on AHV, an auto-created interface on the new virtual switch.

Backplane and Service Specific Segmentation with 1 vSwitch
  • eth0 (Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • eth2 (Backplane traffic): on ESXi, a new vmk2 via PGb on vSwitch0 and the CVM vNIC via PGb on vSwitch0; on AHV, auto-created interfaces on br0.
  • ntnx0 (iSCSI traffic): on ESXi, the CVM vNIC via PGi on vSwitch0; on AHV, an auto-created interface on br0.
  • ntnx1 (DR traffic): on ESXi, the CVM vNIC via PGd on vSwitch0; on AHV, an auto-created interface on br0.

Backplane and Service Specific Segmentation with 2 vSwitches
  • eth0 (Management traffic): on ESXi, vmk0 via the existing PGm on vSwitch0; on AHV, the existing vnet0.
  • eth2 (Backplane traffic): on ESXi, a new vmk2 via PGb on the new vSwitch and the CVM vNIC via PGb on the new vSwitch; on AHV, auto-created interfaces on the new virtual switch.
  • ntnx0 (iSCSI traffic): on ESXi, the CVM vNIC via PGi on vSwitch1 (no new user-defined vmkernel adapter is required); on AHV, an auto-created interface on the new virtual switch.
  • ntnx1 (DR traffic): on ESXi, the CVM vNIC via PGd on vSwitch2 (no new user-defined vmkernel adapter is required); on AHV, an auto-created interface on the new virtual switch.

Implementation Considerations

Supported Environment

Network segmentation is supported in the following environment:

  • The hypervisor must be one of the following:
    • For network segmentation by traffic type (separating backplane traffic from management traffic):
      • AHV
      • ESXi
      • Hyper-V
        Note: Only logical segmentation (or VLAN-based segmentation) is supported on Hyper-V. Physical network segmentation is not supported on Hyper-V.
    • For service-specific traffic isolation:
      • AHV
      • ESXi
  • For logical network segmentation, AOS version must be 5.5 or later. For physical segmentation and service-specific traffic isolation, the AOS version must be 5.11 or later.
  • From the 5.11.1 release, you cannot enable network segmentation on mixed-hypervisor clusters. However, if you enable network segmentation on a mixed-hypervisor cluster running a release earlier than 5.11.1 and you upgrade that cluster to 5.11.1 or later, network segmentation continues to work seamlessly.
  • RDMA requirements:
    • Network segmentation is supported with RDMA for AHV and ESXi hypervisors only.
    • For more information about RDMA, see Remote Direct Memory Access in the NX Series Hardware Administration Guide.

Prerequisites

For Nutanix Volumes

Stargate does not monitor the health of a segmented network. If physical network segmentation is configured, network failures or connectivity issues are not tolerated. To overcome this issue, configure redundancy in the network. That is, use two or more uplinks in a fault tolerant configuration, connected to two separate physical switches.

For Disaster Recovery

  • Ensure that the VLAN and subnet that you plan to use for the network segment are routable.
  • Make sure that you have a pool of IP addresses to specify when configuring segmentation. For each cluster, you need n+1 IP addresses, where n is the number of nodes in the cluster. The additional IP address is for the virtual IP address requirement.
  • Enable network segmentation for disaster recovery at both sites (local and remote) before configuring remote sites at those sites.
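The n + 1 sizing rule above can be expressed as a quick check. The helpers below are hypothetical planning aids, not part of any Nutanix tool:

```python
# Hypothetical planning helpers, not part of any Nutanix tool. They encode
# the n + 1 rule stated above: one IP address per node plus one for the
# virtual IP address requirement.
def dr_ip_pool_size(num_nodes):
    """Minimum number of IP addresses needed for DR segmentation."""
    return num_nodes + 1

def dr_pool_is_sufficient(pool, num_nodes):
    """True if the given pool of addresses covers the n + 1 requirement."""
    return len(set(pool)) >= dr_ip_pool_size(num_nodes)

print(dr_ip_pool_size(4))  # a 4-node cluster needs 5 addresses
```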

Limitations

For Nutanix Volumes

  • If network segmentation is enabled for Volumes, volume group attachments are not recovered during VM recovery.
  • Nutanix service VMs such as Files and Buckets continue to communicate with the CVM eth0 interface when using Volumes for iSCSI traffic. Other external clients use the new service-specific CVM interface.

For Disaster Recovery

The system does not support configuring Leap DR together with DR service-specific traffic isolation.

Cluster Services That Support Traffic Isolation

In this release, you can isolate traffic associated with the following services to its own virtual network:

  • Management (The default network that cannot be moved from CVM eth0)

  • Backplane

  • RDMA

  • Service Specific Disaster Recovery

  • Service Specific Volumes

Configurations in Which Network Segmentation Is Not Supported

Network segmentation is not supported in the following configurations:

  • Clusters on which the CVMs have a manually created eth2 interface.
  • Clusters on which the eth2 interface on one or more CVMs have been assigned an IP address manually. During an upgrade to an AOS release that supports network segmentation, an eth2 interface is created on each CVM in the cluster. Even though the cluster does not use these interfaces until you configure network segmentation, you must not manually configure these interfaces in any way.
Caution:

Nutanix has deprecated support for manual multi-homed CVM network interfaces from AOS version 5.15 and later. Such a manual configuration can lead to unexpected issues on these releases. If you have configured an eth2 interface on the CVM manually, refer to the KB-9479 and Nutanix Field Advisory #78 for details on how to remove the eth2 interface.

Configuring the Network on an AHV Host

These steps describe how to configure host networking for physical and service-specific network segmentation on an AHV host. They are prerequisites for physical and service-specific traffic isolation; perform them before you configure either type of segmentation. If you are configuring networking on an ESXi host, perform the equivalent steps by referring to the ESXi documentation. On ESXi, you create vSwitches and port groups to achieve the same results.

About this task

For information about the procedures to create, update, and delete a virtual switch in the Prism Element web console, see Configuring a Virtual Network for Guest VMs in the Prism Web Console Guide.

Note: The term unconfigured node in this procedure refers to a node that is not part of a cluster and is being prepared for cluster expansion.

To configure host networking for physical and service-specific network segmentation, do the following:

Note: If you are segmenting traffic on nodes that are already part of a cluster, perform the first step. If you are segmenting traffic on an unconfigured node that is not part of a cluster, perform the second step directly.

Procedure

  1. If you are segmenting traffic on nodes that are already part of a cluster, do the following:
    1. Update the default virtual switch vs0 to remove the uplinks that you want to add to the new virtual switch that you create in the next step.

      For information about updating the default virtual switch vs0 to remove the uplinks, see Creating or Updating a Virtual Switch in the Prism Web Console Guide.

    2. Create a virtual switch for the backplane traffic or service whose traffic you want to isolate.
      Add the uplinks to the new virtual switch.

      For information about creating a new virtual switch, see Creating or Updating a Virtual Switch in the Prism Web Console Guide.

  2. If you are segmenting traffic on an unconfigured node (new host) that is not part of a cluster, do the following:
    1. Log on to the new host and create a bridge for the backplane traffic or for the service whose traffic you want to isolate.
      root@ahv# ovs-vsctl add-br br1
    2. Log on to the CVM of the host and update the default bridge br0 so that only eth0 and eth1 remain in its uplink bond.
      nutanix@cvm$ manage_ovs --bridge_name br0 --interfaces eth0,eth1 --bond_name br0-up --bond_mode active-backup update_uplinks
    3. From the CVM, add eth2 and eth3 to the uplink bond of br1.
      nutanix@cvm$ manage_ovs --bridge_name br1 --interfaces eth2,eth3 --bond_name br1-up --bond_mode active-backup update_uplinks
      Note: If this step is not done correctly, a network loop can be created that causes a network outage. Ensure that no other uplink interfaces exist on this bridge before adding the new interfaces, and always add interfaces into a bond.

What to do next

Prism can configure a VLAN only on AHV hosts. Therefore, if the hypervisor is ESXi, in addition to configuring the VLAN on the physical switch, make sure to configure the VLAN on the port group.

If you are performing physical network segmentation, see Isolating the Backplane Traffic Physically on an Existing Cluster.

If you are performing service-specific traffic isolation, see Service-Specific Traffic Isolation.

Network Segmentation for Traffic Types (Backplane, Management, and RDMA)

You can segment the network on a Nutanix cluster in the following ways:

  • You can segment the network on an existing cluster by using the Prism web console.
  • You can segment the network when creating a cluster by using Nutanix Foundation 3.11.2 or later.

The following topics describe network segmentation procedures for existing clusters and changes during AOS upgrade and cluster expansion. For more information about segmenting the network when creating a cluster, see the Field Installation Guide.

Isolating the Backplane Traffic Logically on an Existing Cluster (VLAN-Based Segmentation Only)

You can segment the network on an existing cluster by using the Prism web console. You must configure a separate VLAN for the backplane network to achieve logical segmentation. The network segmentation process creates a separate network for backplane communications on the existing default virtual switch. The process then places the eth2 interfaces (which the process creates on the CVMs during upgrade) and the host interfaces on the newly created network. This method achieves logical segmentation of traffic over the selected VLAN. From the specified subnet, IP addresses are automatically assigned to each new interface, so you need two IP addresses per node. When you specify a VLAN ID, the newly created interfaces are placed on that VLAN.
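Because the process needs two IP addresses per node and resizing the backplane subnet later involves cluster downtime, it can help to sanity-check a candidate subnet before configuring it. A minimal sketch using only the Python standard library (the subnet and node counts below are example values, not Nutanix defaults):

```python
# A minimal planning sketch using only the Python standard library. The
# subnet, node count, and growth figure below are example values.
import ipaddress

def backplane_subnet_ok(cidr, num_nodes, future_nodes=0):
    """Check a candidate subnet can hold two IPs per node (CVM eth2 + host)."""
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # exclude network and broadcast (IPv4, < /31)
    return usable >= 2 * (num_nodes + future_nodes)

print(backplane_subnet_ok("192.168.5.0/28", 4))     # 14 usable addresses >= 8: True
print(backplane_subnet_ok("192.168.5.0/28", 4, 4))  # 14 < 16, too small: False
```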

Before you begin

If your cluster has RDMA-enabled NICs, follow the procedure in Isolating the Backplane Traffic on an Existing RDMA Cluster.

  • For ESXi clusters, you must create and manage the port groups that are used for CVM and host backplane networking. Therefore, ensure that you create port groups on the default virtual switch vs0 for the ESXi hosts and CVMs.

    Since backplane traffic segmentation is logical, it is based on the VLAN that is tagged for the port groups. Therefore, while creating the port groups ensure that you tag the new port groups created for the ESXi hosts and CVMs with the appropriate VLAN ID. Consult your networking team to acquire the necessary VLANs for use with Nutanix nodes.

  • For new backplane networks, you must specify a non-routable subnet. The interfaces on the backplane network are automatically assigned IP addresses from this subnet, so reserve the entire subnet for the backplane network segmentation.

About this task

You need separate VLANs for Management network and Backplane network. For example, configure VLAN 100 as Management network VLAN and VLAN 200 as Backplane network VLAN on the Ethernet links that connect the Nutanix nodes to the physical switch.
Note: Nutanix does not control these VLAN IDs. Consult your networking team to acquire VLANs for the Management and Backplane networks.

To segment the network on an existing cluster for a backplane LAN, do the following:

Note:

In this method, for AHV nodes, logical segmentation (VLAN-based segmentation) is done on the default bridge. The process creates the host backplane interface on the Backplane Network port group (ESXi) or as the br0-backplane interface on bridge br0 (AHV). The eth2 interface on the CVM is on the CVM Backplane Network by default.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
    The Network Configuration dialog box appears.
  2. In the Network Configuration > Controller VM Interfaces > Backplane LAN row, click Configure .
    The Create Interface dialog box appears.
  3. In the Create Interface dialog box, provide the necessary information.
    • In Subnet IP , specify a non-routable subnet.

      Ensure that the subnet has sufficient IP addresses. The segmentation process requires two IP addresses per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.

    • In Netmask , specify the netmask.
    • If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.

      Nutanix recommends that you use a VLAN. If you do not specify a VLAN ID, the default VLAN on the virtual switch is used.

  4. Click Verify and Save .
    The network segmentation process creates the backplane network if the network settings that you specified pass validation.

Isolating the Backplane Traffic on an Existing RDMA Cluster

Segment the network on an existing RDMA cluster by using the Prism web console.

About this task

The network segmentation process creates a separate network for RDMA communications on the existing default virtual switch and places the rdma0 interface (created on the CVMs during upgrade) and the host interfaces on the newly created network. From the specified subnet, IP addresses are assigned to each new interface. Two IP addresses are therefore required per node. If you specify the optional VLAN ID, the newly created interfaces are placed on the VLAN. A separate VLAN is highly recommended for the RDMA network to achieve true segmentation.

Before you begin

  • For new RDMA networks, you must specify a non-routable subnet. The interfaces on the RDMA network are automatically assigned IP addresses from this subnet, so reserve the entire subnet for the RDMA network alone.
  • If you plan to specify a VLAN for the RDMA network, make sure that the VLAN is configured on the physical switch ports to which the nodes are connected.
  • Configure the switch interface as a Trunk port.
  • Ensure that this cluster was configured to support RDMA during installation with Foundation.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
    The Network Configuration dialog box is displayed.
  2. Click the Internal Interfaces tab.
  3. Click Configure in the RDMA row.
    Ensure that you have configured the switch interface as a trunk port.
    The Create Interface dialog box is displayed.
    Figure. Create Interface Dialog Box

  4. In the Create Interface dialog box, do the following:
    1. In Subnet IP and Netmask , specify a non-routable subnet and netmask, respectively. Make sure that the subnet can accommodate cluster expansion in the future.
    2. In VLAN , specify a VLAN ID for the RDMA LAN.
      A VLAN ID is optional but highly recommended for true network segmentation and enhanced security.
    3. From the PFC list, select the priority flow control value configured on the physical switch port.
  5. Click Verify and Save .
  6. Click Close .

Isolating the Backplane Traffic Physically on an Existing Cluster

By using the Prism web console, you can configure the eth2 interface on a separate vSwitch (ESXi) or bridge (AHV). The network segmentation process creates a separate network for backplane communications on the new bridge or vSwitch and places the eth2 interfaces (that are created on the CVMs during upgrade) and the host interfaces on the newly created network. From the specified subnet, IP addresses are assigned to each new interface. Two IP addresses are therefore required per node. If you specify the optional VLAN ID, the newly created interfaces are placed on the VLAN. A separate VLAN is highly recommended for the backplane network to achieve true segmentation.

Before you begin

The physical isolation of backplane traffic is supported from the AOS 5.11.1 release.

Before you enable physical isolation of the backplane traffic, configure the network (port groups or bridges) on the hosts and associate the network with the required physical NICs.

Note: Nutanix does not support physical network segmentation on Hyper-V.

On the AHV hosts, do the following:

  1. Create a bridge for the backplane traffic.
  2. From the default bridge br0, remove the uplinks (physical NICs) that you want to add to the bridge you created for the backplane traffic.
  3. Add the uplinks to a new bond.
  4. Add the bond to the new bridge.

See Configuring the Network on an AHV Host for instructions about how to perform these tasks on a host.

On the ESXi hosts, do the following:

  1. Create a vSwitch for the backplane traffic.
  2. From vSwitch0, remove the uplinks (physical NICs) that you want to add to the vSwitch you created for the backplane traffic.
  3. On the backplane vSwitch, create one port group for the CVM and another for the host. Ensure that at least one uplink is present in the Active Adaptors list for each port group if you have overridden the failover order.

See the ESXi documentation for instructions about how to perform these tasks.

Note: Before you perform the following procedure, ensure that the uplinks you added to the vSwitch or bridge are in the UP state.

About this task

Perform the following procedure to physically segment the backplane traffic.

Procedure

  1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration in the Settings page.
  2. On the Controller VM Interfaces tab, in the Backplane LAN row, click Configure .
  3. In the Backplane LAN dialog box, do the following:
    1. In Subnet IP , specify a non-routable subnet.
      Make sure that the subnet has a sufficient number of IP addresses. Two IP addresses are required per node. Reconfiguring the backplane to increase the size of the subnet involves cluster downtime, so you might also want to make sure that the subnet can accommodate new nodes in the future.
    2. In Netmask , specify the netmask.
    3. If you want to assign the interfaces on the network to a VLAN, specify the VLAN ID in the VLAN ID field.
      A VLAN is strongly recommended. If you do not specify a VLAN ID, the default VLAN on the virtual switch is used.
    4. (AHV only) In the Host Node list, select the bridge you created for the backplane traffic.
    5. (ESXi only) In the Host Port Group list, select the port group you created for the host.
    6. (ESXi only) In the CVM Port Group list, select the port group you created for the CVM.
    Note:

    Nutanix clusters support both vSphere Standard Switches and vSphere Distributed Switches. However, you must configure only one type of virtual switch in a cluster. Configure all the backplane and management traffic in one cluster on either vSphere Standard Switches or vSphere Distributed Switches. Do not mix Standard and Distributed vSwitches on a single cluster.

  4. Click Verify and Save .
    If the network settings you specified pass validation, the backplane network is created and the CVMs perform a reboot in a rolling fashion (one at a time), after which the services use the new backplane network. The progress of this operation can be tracked on the Prism tasks page.
    Note: Segmenting backplane traffic can involve up to two rolling reboots of the CVMs. The first rolling reboot is done to move the backplane interface (eth2) of the CVM to the selected port group or bridge. This is done only for CVM(s) whose backplane interface is not already connected to the selected port group or bridge. The second rolling reboot is done to migrate the cluster services to the newly configured backplane interface.
  5. Restart the Acropolis service on all the nodes in the cluster.
    Note: Perform this step only if your AOS version is 5.17.0.x. This step is not required if your AOS version is 5.17.1 or later.
    1. Log on to any CVM in the cluster with SSH.
    2. Stop the Acropolis service.
      nutanix@cvm$ allssh genesis stop acropolis
      Note: You cannot manage your guest VMs after the Acropolis service is stopped.
    3. Verify if the Acropolis service is DOWN on all the CVMs.
      nutanix@cvm$ cluster status | grep -v UP 

      An output similar to the following is displayed:

      
      2019-09-04 14:43:18 INFO zookeeper_session.py:143 cluster is attempting to connect to Zookeeper
      2019-09-04 14:43:18 INFO cluster:2774 Executing action status on SVMs X.X.X.1, X.X.X.2, X.X.X.3
      The state of the cluster: start
      Lockdown mode: Disabled
              CVM: X.X.X.1 Up
                                 Acropolis DOWN       []
              CVM: X.X.X.2 Up, ZeusLeader
                                 Acropolis DOWN       []
              CVM: X.X.X.3 Maintenance
    4. From any CVM in the cluster, start the Acropolis service.
      nutanix@cvm$ cluster start 

Reconfiguring the Backplane Network

Backplane network reconfiguration is a CLI-driven procedure that you perform on any one of the CVMs in the cluster. The change is propagated to the remaining CVMs.

About this task

Caution: At the end of this procedure, the cluster stops and restarts, even if only the VLAN is changed, and therefore involves cluster downtime.

To reconfigure the cluster, do the following:

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Reconfigure the backplane network.
    nutanix@cvm$ backplane_ip_reconfig [--backplane_vlan=vlan-id] \
    [--backplane_subnet=subnet_ip_address --backplane_netmask=netmask]

    Replace vlan-id with the new VLAN ID, subnet_ip_address with the new subnet IP address, and netmask with the new netmask.

    For example, reconfigure the backplane network to use VLAN ID 10 and subnet 172.30.25.0 with netmask 255.255.255.0 .

    nutanix@cvm$ backplane_ip_reconfig --backplane_vlan=10 \
    --backplane_subnet=172.30.25.0 --backplane_netmask=255.255.255.0

    Output similar to the following is displayed:

    This operation will do a 'cluster stop', resulting in disruption of 
    cluster services. Do you still want to continue? (Type "yes" (without quotes) 
    to continue)
    Caution: During the reconfiguration process, you might receive an error message similar to the following.
    Failed to reach a node.
    You can safely ignore this error message; do not stop the script manually.
    Note: The backplane_ip_reconfig command is not supported on ESXi clusters with vSphere Distributed Switches. To reconfigure the backplane network on a vSphere Distributed Switch setup, disable the backplane network (see Disabling Network Segmentation on an ESXi and Hyper-V Cluster) and enable again with a different subnet or VLAN.
  3. Type yes to confirm that you want to reconfigure the backplane network.
    The reconfiguration procedure takes a few minutes and includes a cluster restart. If you type anything other than yes , network reconfiguration is aborted.
  4. After the process completes, verify that the backplane was reconfigured.
    1. Verify that the IP addresses of the eth2 interfaces on the CVM are set correctly.
      nutanix@cvm$ svmips -b
      Output similar to the following is displayed:
      172.30.25.1 172.30.25.3 172.30.25.5
    2. Verify that the IP addresses of the backplane interfaces of the hosts are set correctly.
      nutanix@cvm$ hostips -b
      Output similar to the following is displayed:
      172.30.25.2 172.30.25.4 172.30.25.6
    The svmips and hostips commands, when used with the -b option, display the IP addresses assigned to the interfaces on the backplane.
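As a quick sanity check, you can confirm that every address reported by the -b commands falls inside the subnet you configured. An illustrative sketch (the output strings are sample values from this section, not captured from a live cluster):

```python
# Illustrative verification sketch. The output strings below are sample
# values, not captured from a live cluster.
import ipaddress

def all_in_subnet(command_output, cidr):
    """True if every whitespace-separated IP falls inside the given subnet."""
    net = ipaddress.ip_network(cidr)
    return all(ipaddress.ip_address(ip) in net for ip in command_output.split())

svmips_b = "172.30.25.1 172.30.25.3 172.30.25.5"   # sample `svmips -b` output
hostips_b = "172.30.25.2 172.30.25.4 172.30.25.6"  # sample `hostips -b` output
print(all_in_subnet(svmips_b, "172.30.25.0/24"))   # True
print(all_in_subnet(hostips_b, "172.30.25.0/24"))  # True
```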

Disabling Network Segmentation on an ESXi and Hyper-V Cluster

About this task

Disabling the backplane network is a CLI-driven procedure that you perform on any one of the CVMs in the cluster. The change is propagated to the remaining CVMs.

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Disable the backplane network.
    • Run the following command to disable network segmentation on an ESXi or Hyper-V cluster:
      nutanix@cvm$ network_segmentation --backplane_network --disable

      Output similar to the following appears:

      Operation type : Disable
      Network type : kBackplane
      Params : {}
      Please enter [Y/y] to confirm or any other key to cancel the operation

      Type Y/y to confirm that you want to reconfigure the backplane network.

      If you type Y/y, network segmentation is disabled and the cluster restarts in a rolling manner, one CVM at a time. If you type anything other than Y/y, network segmentation is not disabled.

  3. Verify that network segmentation was successfully disabled. You can verify this in one of two ways:
    • Verify that the backplane is disabled.
      nutanix@cvm$ network_segment_status

      Output similar to the following is displayed:

      2017-11-23 06:18:23 INFO zookeeper_session.py:110 network_segment_status is attempting to connect to Zookeeper

      Network segmentation is disabled

    • Verify that the commands that show the backplane IP addresses of the CVMs and hosts list the management IP addresses (run the svmips and hostips commands once without the -b option and once with the -b option, and then compare the IP addresses shown in the output).
      nutanix@cvm$ svmips
      192.127.3.2 192.127.3.3 192.127.3.4
      nutanix@cvm$ svmips -b
      192.127.3.2 192.127.3.3 192.127.3.4
      nutanix@cvm$ hostips
      192.127.3.5 192.127.3.6 192.127.3.7
      nutanix@cvm$ hostips -b
      192.127.3.5 192.127.3.6 192.127.3.7

      In the example above, the outputs of the svmips and hostips commands with and without the -b option are identical, indicating that backplane network segmentation is disabled.
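The comparison described above can be sketched in Python (an illustration only, not part of AOS): identical outputs with and without the -b option mean the CVMs have no separate backplane addresses.

```python
# Hypothetical command outputs captured from a CVM after disabling.
svmips = "192.127.3.2 192.127.3.3 192.127.3.4".split()
svmips_b = "192.127.3.2 192.127.3.3 192.127.3.4".split()

# Identical outputs mean the CVMs have no separate backplane
# addresses, so backplane segmentation is disabled.
segmentation_disabled = sorted(svmips) == sorted(svmips_b)
print(segmentation_disabled)
```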

Disabling Network Segmentation on an AHV Cluster

About this task

You can perform this procedure from any CVM in the cluster. The change propagates to the remaining CVMs.

Procedure

  1. Shut down all the guest VMs in the cluster from within the guest OS or by using the Prism Element web console.
  2. Place all nodes of the cluster into maintenance mode.
    1. Use SSH to log on to a Controller VM in the cluster.
    2. Determine the IP address of the node that you want to put into maintenance mode:
      nutanix@cvm$ acli host.list
      Note the value of Hypervisor IP for the node that you want to put into maintenance mode.
    3. Put the node into maintenance mode:
      nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-IP-address [wait="{ true | false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]
      Note: Never put the Controller VM or AHV host into maintenance mode on single-node clusters. Shut down user VMs before proceeding with disruptive changes.

      Replace hypervisor-IP-address with either the IP address or host name of the AHV host that you want to put into maintenance mode.

      The following are optional parameters for running the acli host.enter_maintenance_mode command:

      • wait
      • non_migratable_vm_action

      Do not continue if the host fails to enter maintenance mode.

    4. Verify that the host is in maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals EnteredMaintenanceMode and schedulable is False .

  3. Disable backplane network segmentation from the Prism Web Console.
    1. Log on to the Prism web console, click the gear icon in the top-right corner, and then click Network Configuration under Settings .
    2. In the Internal Interfaces tab, in the Backplane LAN row, click Disable .
      Figure. Disable Network Configuration

    3. Click Yes to disable Backplane LAN.

      This involves a rolling reboot of CVMs to migrate the cluster services back to the external interface.

  4. Log on to a CVM in the cluster with SSH and stop Acropolis cluster-wide:
    nutanix@cvm$ allssh genesis stop acropolis 
  5. Restart Acropolis cluster-wide:
    nutanix@cvm$ cluster start 
  6. Remove all nodes from maintenance mode.
    1. From any CVM in the cluster, run the following command to exit the AHV host from maintenance mode:
      nutanix@cvm$ acli host.exit_maintenance_mode host-ip

      Replace host-ip with the IP address of the host.

      This command migrates (live migration) all the VMs that were previously running on the host back to the host.

    2. Verify that the host has exited maintenance mode:
      nutanix@cvm$ acli host.get host-ip

      In the output that is displayed, ensure that node_state equals kAcropolisNormal or AcropolisNormal and schedulable is True .

  7. Power on the guest VMs from the Prism Element web console.

Service-Specific Traffic Isolation

Isolating the traffic associated with a specific service is a two-step process. The process is as follows:

  • Configure the networks and uplinks on each host manually. Prism only creates the vNIC that the service requires, and it places that vNIC on the bridge or port group that you specify. Therefore, you must manually create the bridge or port group on each host and add the required physical NICs as uplinks to that bridge or port group.
  • Configure network segmentation for the service by using Prism. Create an extra vNIC for the service, specify any additional parameters that are required (for example, IP address pools), and specify the bridge or port group that you want to dedicate to the service.

Isolating Service-Specific Traffic

Before you begin

  • Ensure that each host is configured as described in Configuring the Network on an AHV Host.
  • Review Prerequisites.

About this task

To isolate a service to a separate virtual network, do the following:

Procedure

  1. Log on to the Prism web console and click the gear icon at the top-right corner of the page.
  2. In the left pane, click Network Configuration .
  3. In the details pane, on the Internal Interfaces tab, click Create New Interface .
    The Create New Interface dialog box is displayed.
  4. On the Interface Details tab, do the following:
    1. Specify a descriptive name for the network segment.
    2. (On AHV) Optionally, in VLAN ID , specify a VLAN ID.
      Make sure that the VLAN ID is configured on the physical switch.
    3. In Bridge (on AHV) or CVM Port Group (on ESXi), select the bridge or port group that you created for the network segment.
    4. To specify an IP address pool for the network segment, click Create New IP Pool , and then, in the IP Pool dialog box, do the following:
      • In Name , specify a name for the pool.
      • In Netmask , specify the network mask for the pool.
      • Click Add an IP Range , and then specify the start and end IP addresses in the IP Range dialog box that is displayed.
      • Use Add an IP Range to add as many IP address ranges as you need.
        Note: Add at least n+1 IP addresses to an IP range, where n is the number of nodes in the cluster.
      • Click Save .
      • Use Add an IP Pool to add more IP address pools. You can use only one IP address pool at any given time.
      • Select the IP address pool that you want to use, and then click Next .
        Note: You can also use an existing unused IP address pool.
  5. On the Feature Selection tab, do the following:
    You cannot enable network segmentation for multiple services at the same time. Complete the configuration for one service before you enable network segmentation for another service.
    1. Select the service whose traffic you want to isolate.
    2. Configure the settings for the selected service.
      The settings on this page depend on the service you select. For information about service-specific settings, see Service-Specific Settings and Configurations.
    3. Click Save .
  6. In the Create Interface dialog box, click Save .
    The CVMs are rebooted multiple times, one after another. This procedure might trigger more tasks on the cluster. For example, if you configure network segmentation for disaster recovery, the firewall rules are added on the CVM to allow traffic on the specified ports through the new CVM interface and updated when a new recovery cluster is added or an existing cluster is modified.
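As a side note on the IP pool sizing rule in the procedure above (at least n+1 addresses for an n-node cluster), the following Python sketch computes the smallest contiguous range that satisfies it. The starting address is a hypothetical value chosen for illustration.

```python
import ipaddress

def minimal_pool_range(first_ip, num_nodes):
    # The rule calls for at least n+1 addresses for an n-node cluster:
    # one address per node plus one extra, per the note in the procedure.
    start = ipaddress.ip_address(first_ip)
    end = start + num_nodes  # inclusive range start..end holds n+1 addresses
    return str(start), str(end)

# A 4-node cluster needs a range of at least 5 addresses.
print(minimal_pool_range("172.30.40.10", 4))
```

The returned range is inclusive, so a 4-node cluster gets five addresses (172.30.40.10 through 172.30.40.14).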

What to do next

See Service-Specific Settings and Configurations for any additional tasks that are required after you segment the network for a service.

Modifying Network Segmentation Configured for a Service

To modify network segmentation configured for a service, you must first disable network segmentation for that service and then create the network interface again for that service with the new IP address pool and VLAN.

About this task

For example, if the interface of the service you want to modify is ntnx0, after the reconfiguration, the same interface (ntnx0) is assigned to that service if that interface is not assigned to any other service. If ntnx0 is assigned to another service, a new interface (for example ntnx1) is created and assigned to that service.

Perform the following to reconfigure network segmentation configured for a service.

Procedure

  1. Disable the network segmentation configured for a service by following the instructions in Disabling Network Segmentation Configured for a Service.
  2. Create the network again by following the instructions in Isolating Service-Specific Traffic.

Disabling Network Segmentation Configured for a Service

To disable network segmentation configured for a service, you must disable the dedicated vNIC. Disabling network segmentation frees up the vNIC’s name, which is reused in a subsequent network segmentation configuration.

About this task

At the end of this procedure, the cluster performs a rolling restart. Disabling network segmentation might also disrupt the functioning of the associated service. To restore normal operations, you might have to perform other tasks immediately after the cluster has completed the rolling restart. For information about the follow-up tasks, see Service-Specific Settings and Configurations.

To disable the network segmentation configured for a service, do the following:

Procedure

  1. Log on to the Prism web console and click the gear icon at the top-right corner of the page.
  2. In the left pane, click Network Configuration .
  3. On the Internal Interfaces tab, for the interface that you want to disable, click Disable .
    Note: The defined IP address pool is available even after disabling the network segmentation.

Deleting a vNIC Configured for a Service

If you disable network segmentation for a service, the vNIC for that service is not deleted. AOS reuses the vNIC if you enable network segmentation again. However, you can manually delete a vNIC by logging into any CVM in the cluster with SSH.

Before you begin

Ensure that the following prerequisites are met before you delete the vNIC configured for a Service:
  • Disable the network segmentation configured for a service by following the instructions in Disabling Network Segmentation Configured for a Service.
  • Observe the limitation specified in the Limitation for vNIC Hot-Unplugging topic in the AHV Admin Guide .

About this task

Perform the following to delete a vNIC.

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Delete the vNIC.
    nutanix@cvm$ network_segmentation --service_network --interface="interface-name" --delete

    Replace interface-name with the name of the interface you want to delete. For example, ntnx0.

Service-Specific Settings and Configurations

The following sections describe the settings required by the services that support network segmentation.

Nutanix Volumes

Network segmentation for Volumes also requires you to migrate iSCSI client connections to the new segmented network. If you no longer require segmentation for Volumes traffic, you must also migrate connections back to eth0 after disabling the vNIC used for Volumes traffic.

You can create two different networks for Nutanix Volumes with different IP pools, VLANs, and data services IP addresses. For example, you can create two iSCSI networks for production and non-production traffic on the same Nutanix cluster.

Follow the instructions in Isolating Service-Specific Traffic again to create the second network for Volumes after you create the first network.

Table 1. Settings to be Specified When Configuring Traffic Isolation
Parameter or Setting Description
Virtual IP (Optional) Virtual IP address for the service. If specified, the IP address must be picked from the specified IP address pool. If not specified, an IP address from the specified IP address pool is selected for you.
Client Subnet The network (in CIDR notation) that hosts the iSCSI clients. Required if the vNIC created for the service on the CVM is not on the same network as the clients.
Gateway Gateway to the subnetwork that hosts the iSCSI clients. Required if you specify the client subnet.
Migrating iSCSI Connections to the Segmented Network

After you enable network segmentation for Volumes, you must manually migrate connections from existing iSCSI clients to the newly segmented network.

Before you begin

Make sure that the task for enabling network segmentation for the service succeeds.

About this task

Note: Even though support is available to run iSCSI traffic on both the segmented and management networks at the same time, Nutanix recommends that you move the iSCSI traffic for guest VMs to the segmented network to achieve true isolation.

To migrate iSCSI connections to the segmented network, do the following:

Procedure

  1. Log out from all the clients connected to iSCSI targets that are using CVM eth0 or the Data Service IP address.
  2. Optionally, remove all the discovery records for the Data Services IP address (DSIP) on eth0.
  3. If the clients are allowlisted by their IP address, remove the client IP address that is on the management network from the allowlist, and then add the client IP address on the new network to the allowlist.
    nutanix@cvm$ acli vg.detach_external vg_name initiator_network_id=old_vm_IP
    nutanix@cvm$ acli vg.attach_external vg_name initiator_network_id=new_vm_IP
    

    Replace vg_name with the name of the volume group and old_vm_IP and new_vm_IP with the old and new client IP addresses, respectively.

  4. Discover the virtual IP address specified for Volumes.
  5. Connect to the iSCSI targets from the client.
Migrating Existing iSCSI Connections to the Management Network (Controller VM eth0)

About this task

To migrate existing iSCSI connections to eth0, do the following:

Procedure

  1. Log out from all the clients connected to iSCSI targets using the CVM vNIC dedicated to Volumes.
  2. Remove all the discovery records for the DSIP on the new interface.
  3. Discover the DSIP for eth0.
  4. Connect the clients to the iSCSI targets.
Disaster Recovery with Protection Domains

The settings for configuring network segmentation for disaster recovery apply to all Asynchronous, NearSync, and Metro Availability replication schedules. You can use disaster recovery with Asynchronous, NearSync, and Metro Availability replication only if both the primary site and the recovery site are configured with network segmentation. Before enabling or disabling network segmentation on a cluster, disable all the disaster recovery replication schedules running on that cluster.

Note: Network segmentation does not support disaster recovery with Leap.
Table 1. Settings to be Specified When Configuring Traffic Isolation
Parameter or Setting Description
Virtual IP (Optional) Virtual IP address for the service. If specified, the IP address must be picked from the specified IP address pool. If not specified, an IP address from the specified IP address pool is selected for you.
Note: Virtual IP address is different from the external IP address and the data services IP address of the cluster.
Gateway Gateway to the subnetwork.
Remote Site Configuration

After configuring network segmentation for disaster recovery, configure remote sites at both locations. You also need to reconfigure remote sites if you disable network segmentation.

For information about configuring remote sites, see Remote Site Configuration in the Data Protection and Recovery with Prism Element Guide.

Segmenting a Stretched Layer 2 Network for Disaster Recovery

A stretched Layer 2 network configuration allows the source and remote metro clusters to be in the same broadcast domain and communicate without a gateway.

About this task

You can enable network segmentation for disaster recovery on a stretched Layer 2 network that does not have a gateway. A stretched Layer 2 network is usually configured across physically remote clusters, such as a metro availability cluster deployment.

See the AOS 5.19.2 Release Notes for the minimum AOS version required to configure a stretched Layer 2 network between metro clusters.

To configure a network segment as a stretched L2 network, do the following.

Procedure

Run the following command:
network_segmentation --service_network --service_name=kDR --ip_pool="DR-ip-pool-name" --service_vlan="DR-vlan-id" --desc_name="Description" --host_physical_network='"portgroup/bridge"' --stretched_metro

Replace the following (see Isolating Service-Specific Traffic for more information):

  • DR-ip-pool-name with the name of the IP Pool created for the DR service or any existing unused IP address pool.
  • DR-vlan-id with the VLAN ID being used for the DR service.
  • Description with a suitable description of this stretched L2 network segment.
  • portgroup/bridge with the details of Bridge or CVM Port Group used for the DR service.

For more information about the network_segmentation command, see the Command Reference guide.

Network Segmentation During Cluster Expansion

When you expand a cluster on which service-specific traffic isolation is configured, ensure that the following prerequisites are met before adding new (unconfigured) nodes to the cluster:

  • Manually configure bridges for segmented networks on the hypervisor host of the new nodes. The bridges used for the segmented network must be identical to those on the other nodes. For information about the steps to perform on an unconfigured node, see Configuring the Network on a Host . The steps involve logging on to the host by using SSH and running the ovs-vsctl commands. For instructions about how to add nodes to your Nutanix cluster, see Expanding a Cluster in the Prism Web Console Guide .
  • Ensure that the network settings on the physical switch to which the new nodes connect are identical to those of the other nodes in the cluster. New nodes must be able to communicate with current nodes using the same VLAN ID for segmented networks.
  • Prepare the new nodes by imaging them with the same AOS and hypervisor versions as the other nodes of the cluster. Reimaging the nodes as part of the cluster expansion is not supported if the cluster network is segmented.
Note: For ESXi clusters with vSphere Distributed Switches (DVS):
  • Before you expand the cluster, ensure that the node you want to add is part of the same vCenter cluster, the same DVS as the other nodes in the cluster, and is not in a disconnected state.

  • Ensure that the nodes that you add have more than 20 GB of memory.

Network Segmentation–Related Changes During an AOS Upgrade

When you upgrade from an AOS version which does not support network segmentation to an AOS version that does, the eth2 interface (used to segregate backplane traffic) is automatically created on each CVM. However, the network remains unsegmented, and the cluster services on the CVM continue to use eth0 until you configure network segmentation.

The vNICs ntnx0, ntnx1, and so on, are not created during an upgrade to a release that supports service-specific traffic isolation. They are created when you configure traffic isolation for a service.

Note:

Do not delete the eth2 interface that is created on the CVMs, even if you are not using the network segmentation feature.

Firewall Requirements

Ports and Protocols describes detailed port information (like protocol, service description, source, destination, and associated service) for Nutanix products and services. It includes port and protocol information for 1-click upgrades and LCM updates.

Log management

This chapter describes how to configure the cluster-wide setting for log forwarding and how to document the log fingerprint.

Log Forwarding

The Nutanix Controller VM provides a method for log integrity by using a cluster-wide setting to forward all the logs to a central log host. Due to the appliance form factor of the Controller VM, system and audit logs do not support local log retention periods, because a significant increase in log traffic could be used to orchestrate a distributed denial-of-service (DDoS) attack.

Nutanix recommends deploying a central log host in the management enclave to adhere to any compliance or internal policy requirement for log retention. In case of any system compromise, a central log host serves as a defense mechanism to preserve log integrity.

Note: The audit daemon in the Controller VM uses the audisp plugin by default to ship all the audit logs to the rsyslog daemon (stored in /home/log/messages ). Searching for audispd in the central log host provides the entire content of the audit logs from the Controller VM. The audit daemon is configured with a rules engine that adheres to the auditing requirements of the Operating System Security Requirements Guide (OS SRG), and is embedded as part of the Controller VM STIG.

Use the nCLI to enable forwarding of system, audit, aide, and SCMA logs of all the Controller VMs in a cluster at the required log level. For more information, see Send Logs to Remote Syslog Server in the Acropolis Advanced Administration Guide .

Documenting the Log Fingerprint

For forensic analysis, non-repudiation is established by verifying the fingerprint of the public key for the log file entry.

Procedure

  1. Log on to the CVM.
  2. Run the following command to document the fingerprint for each public key assigned to an individual admin.
    nutanix@cvm$ ssh-keygen -lf /<location of>/id_rsa.pub

    The fingerprint is then compared to the SSH daemon log entries and forwarded to the central log host ( /home/log/secure in the Controller VM).

    Note: After you include the SSH public key in Prism and verify connectivity, disable password authentication for all the Controller VMs and AHV hosts. From the gear icon drop-down list in the Prism main menu, open the Cluster Lockdown configuration and clear the Enable Remote Login with password check box.
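For background, the fingerprint printed by ssh-keygen -lf is the SHA-256 digest of the raw public key blob, base64-encoded with the trailing padding stripped. The following Python sketch reproduces that calculation; the ed25519 key blob and the admin@example comment are fabricated placeholders, not a real key.

```python
import base64, hashlib, struct

def openssh_sha256_fingerprint(pubkey_line):
    # An OpenSSH public key line is "<type> <base64-blob> [comment]".
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # ssh-keygen prints the base64 digest with '=' padding stripped.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Structurally valid (but dummy) ssh-ed25519 blob: a length-prefixed
# key type string followed by a length-prefixed 32-byte key.
blob = struct.pack(">I", 11) + b"ssh-ed25519" + struct.pack(">I", 32) + bytes(32)
line = "ssh-ed25519 " + base64.b64encode(blob).decode() + " admin@example"
print(openssh_sha256_fingerprint(line))
```

The printed value has the same form as the fingerprint that ssh-keygen -lf reports and that appears in the SSH daemon log entries.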

Security Management Using Prism Central (PC)

Prism Central provides several mechanisms and features to enforce security of your multi-cluster environment.

If you enable Identity and Access Management (IAM), see Security Management Using Identity and Access Management (Prism Central).

Configuring Authentication

Caution: Prism Central does not allow the use of the (not secure) SSLv2 and SSLv3 ciphers. To eliminate the possibility of an SSL Fallback situation and denied access to Prism Central, disable (uncheck) SSLv2 and SSLv3 in any browser used for access. However, TLS must be enabled (checked).

Prism Central supports user authentication with these authentication options:

  • Active Directory authentication. Users can authenticate using their Active Directory (or OpenLDAP) credentials when Active Directory support is enabled for Prism Central.
  • Local user authentication. Users can authenticate if they have a local Prism Central account. For more information, see Managing Local User Accounts.
  • SAML authentication. Users can authenticate through a supported identity provider when SAML support is enabled for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between two parties: an identity provider (IDP) and Prism Central as the service provider.

    If you do not enable Nutanix Identity and Access Management (IAM) on Prism Central, ADFS is the only supported IDP for Single Sign-on. If you enable IAM, additional IDPs are available. For more information, see Security Management Using Identity and Access Management (Prism Central) and Updating ADFS When Using SAML Authentication.


Adding An Authentication Directory (Prism Central)

Before you begin

Caution: Prism Central does not allow the use of the (not secure) SSLv2 and SSLv3 ciphers. To eliminate the possibility of an SSL Fallback situation and denied access to Prism Central, disable (uncheck) SSLv2 and SSLv3 in any browser used for access. However, TLS must be enabled (checked).

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.

    The Authentication Configuration window appears.

    Figure. Authentication Configuration Window

  2. To add an authentication directory, click the New Directory button.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Directory Type : Select one of the following from the pull-down list.
      • Active Directory : Active Directory (AD) is a directory service implemented by Microsoft for Windows domain networks.
        Note:
        • Users with the "User must change password at next logon" attribute enabled cannot authenticate to Prism Central. Ensure that users with this attribute first log in to a domain workstation and change their password before accessing Prism Central. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open in the firewall).
        • Use of the "Protected Users" group is currently unsupported for Prism authentication. For more details on the "Protected Users" group, see “Guidance about how to configure protected accounts” on Microsoft documentation website.
        • An Active Directory user name or group name containing spaces is not supported for Prism Central authentication.
        • Microsoft AD is LDAP v2 and LDAP v3 compliant.
        • The supported Microsoft AD servers are Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.
      • OpenLDAP : OpenLDAP is a free, open source directory service, which uses the Lightweight Directory Access Protocol (LDAP), developed by the OpenLDAP project.
        Note: Prism Central uses a service account to query OpenLDAP directories for user information and does not currently support certificate-based authentication with the OpenLDAP directory.
    2. Name : Enter a directory name.

      This is a name you choose to identify this entry; it need not be the name of an actual directory.

    3. Domain : Enter the domain name.

      Enter the domain name in DNS format, for example, nutanix.com .

    4. Directory URL : Enter the URL address to the directory.

      The URL format is as follows for an LDAP entry: ldap:// host : ldap_port_num . The host value is either the IP address or fully qualified domain name. (In some environments, a simple domain name is sufficient.) The default LDAP port number is 389. Nutanix also supports LDAPS (port 636) and LDAP/S Global Catalog (ports 3268 and 3269). The following are example configurations appropriate for each port option:

      Note: LDAPS support does not require custom certificates or certificate trust import.
      • Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
        ldap://ad_server.mycompany.com:389
      • Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires all Active Directory Domain Controllers have properly installed SSL certificates.
        ldaps://ad_server.mycompany.com:636
      • Port 3268 (LDAP - GC). Use this port number when the configuration is multiple domain, single forest, and not using SSL.
      • Port 3269 (LDAPS - GC). Use this port number when the configuration is multiple domain, single forest, and using SSL.
        Note:
        • When constructing your LDAP/S URL to use a Global Catalog server, ensure that the Domain Control IP address or name being used is a global catalog server within the domain being configured. If not, queries over 3268/3269 may fail.
        • Cross-forest trust between multiple AD forests is not supported.

      For the complete list of required ports, see Port Reference.
    5. [OpenLDAP only] Configure the following additional fields:
      • User Object Class : Enter the value that uniquely identifies the object class of a user.
      • User Search Base : Enter the base domain name in which the users are configured.
      • Username Attribute : Enter the attribute to uniquely identify a user.
      • Group Object Class : Enter the value that uniquely identifies the object class of a group.
      • Group Search Base : Enter the base domain name in which the groups are configured.
      • Group Member Attribute : Enter the attribute that identifies users in a group.
      • Group Member Attribute Value : Enter the attribute that identifies the users provided as value for Group Member Attribute .
    6. Search Type . How to search your directory when authenticating. Choose Non Recursive if you experience slow directory logon performance. For this option, ensure that users listed in Role Mapping are listed flatly in the group (that is, not nested). Otherwise, choose the default Recursive option.
    7. Service Account Username : Depending on the directory type that you selected in step 2.a, enter the service account user name in the following format:
      • For Active Directory , enter the service account user name in the user_name@domain.com format.
      • For OpenLDAP , enter the service account user name in the following Distinguished Name (DN) format:

        cn=username, dc=company, dc=com

        A service account is created to run only a particular service or application with the credentials specified for the account. According to the requirements of the service or application, the administrator can limit access to the service account.

        A service account resides under Managed Service Accounts in the Active Directory or OpenLDAP server. An application or service uses the service account to interact with the directory service. Enter your Active Directory or OpenLDAP service account credentials in this (username) field and the following (password) field.

        Note: Be sure to update the service account credentials here whenever the service account password changes or when a different service account is used.
    8. Service Account Password : Enter the service account password.
    9. When all the fields are correct, click the Save button (lower right).

      This saves the configuration and redisplays the Authentication Configuration dialog box. The configured directory now appears in the Directory List tab.

    10. Repeat these steps for each authentication directory that you want to add.
    Note:
    • No permissions are granted to the directory users by default. To grant permissions to the directory users, you must specify roles for the users in that directory (see Configuring Role Mapping).
    • The service account for both Active Directory and OpenLDAP must have full read permissions on the directory service.
    Figure. Directory List Fields

  3. To edit a directory entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete a directory entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
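
The Recursive and Non Recursive search types above correspond to different LDAP group-membership queries. The sketch below is illustrative only (it is not how Prism implements the search internally, and the DNs are hypothetical); in Active Directory, nested membership can be resolved server-side with the LDAP_MATCHING_RULE_IN_CHAIN matching rule.

```python
# Illustration only: LDAP filters corresponding to the two search types.
# The group DN below is a hypothetical example.
GROUP_DN = "cn=prism-admins,ou=groups,dc=company,dc=com"

# Non Recursive: the user must be a direct member of the group.
non_recursive = f"(&(objectClass=user)(memberOf={GROUP_DN}))"

# Recursive: Active Directory resolves nested group membership server-side
# via the LDAP_MATCHING_RULE_IN_CHAIN matching rule (OID 1.2.840.113556.1.4.1941).
recursive = f"(&(objectClass=user)(memberOf:1.2.840.113556.1.4.1941:={GROUP_DN}))"

print(non_recursive)
print(recursive)
```

Non-recursive queries are faster because the server does not walk the group hierarchy, which is why the non-recursive option is suggested when directory logons are slow.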

Adding a SAML-based Identity Provider

About this task

If you do not enable Nutanix Identity and Access Management (IAM) on Prism Central, ADFS is the only supported identity provider (IDP) for Single Sign-on and only one IDP is allowed at a time. If you enable IAM, additional IDPs are available. See Security Management Using Identity and Access Management (Prism Central) and also Updating ADFS When Using SAML Authentication.

Before you begin

  • An identity provider (typically a server or other computer) is the system that provides authentication through a SAML request. There are various implementations that can provide authentication services in line with the SAML standard.
  • If you enable IAM by enabling CMSP, you can specify other tested standards-compliant IDPs in addition to ADFS. See also the Prism Central release notes topic Identity and Access Management Software Support for specific support requirements.

    Only one identity provider is allowed at a time, so if one was already configured, the + New IDP link does not appear.

  • You must configure the identity provider to return the NameID attribute in SAML response. The NameID attribute is used by Prism Central for role mapping. See Configuring Role Mapping for details.

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. To add a SAML-based identity provider, click the + New IDP link.

    A set of fields is displayed. Do the following in the indicated fields:

    1. Configuration name : Enter a name for the identity provider. This name appears on the login authentication screen.
    2. Import Metadata : Click this radio button to upload a metadata file that contains the identity provider information.

      Identity providers typically provide an XML file on their website that includes metadata about that identity provider, which you can download from that site and then upload to Prism Central. Click + Import Metadata to open a search window on your local system and then select the target XML file that you downloaded previously. Click the Save button to save the configuration.

      Figure. Identity Provider Fields (metadata configuration)

    This completes configuring an identity provider in Prism Central, but you must also configure the callback URL for Prism Central on the identity provider. To do this, click the Download Metadata link just below the Identity Providers table to download an XML file that describes Prism Central and then upload this metadata file to the identity provider.
  3. To edit an identity provider entry, click the pencil icon for that entry.

    After clicking the pencil icon, the relevant fields reappear. Enter the new information in the appropriate fields and then click the Save button.

  4. To delete an identity provider entry, click the X icon for that entry.

    After clicking the X icon, a window prompt appears to verify the delete action; click the OK button. The entry is removed from the list.
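
Before uploading an IdP metadata file, it can be useful to confirm that it contains the expected entity ID and Single Sign-on endpoint. A minimal sketch using Python's standard library (the metadata content and URLs are hypothetical examples):

```python
import xml.etree.ElementTree as ET

# Hypothetical SAML metadata, as typically downloaded from the IdP's website.
METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://idp.example.com/adfs/services/trust">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="https://idp.example.com/adfs/ls/"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
root = ET.fromstring(METADATA)
entity_id = root.get("entityID")
sso = root.find("md:IDPSSODescriptor/md:SingleSignOnService", NS)

print(entity_id)            # the IdP entity ID
print(sso.get("Location"))  # the SSO endpoint the browser is redirected to
```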

Enabling and Configuring Client Authentication

Procedure

  1. In the web console, click the gear icon in the main menu and then select Authentication in the Settings page.
  2. Click the Client tab, then do the following steps.
    1. Select the Configure Client Chain Certificate check box.

      The Client Chain Certificate is a list of certificates that includes all intermediate CA and root-CA certificates.

    2. Click the Choose File button, browse to and select a client chain certificate to upload, and then click the Open button to upload the certificate.
      Note:
      • Client and CAC authentication support only RSA 2048-bit certificates.
      • Uploaded certificate files must be PEM encoded. The web console restarts after the upload step.
    3. To enable client authentication, click Enable Client Authentication .
    4. To modify client authentication, do one of the following:
      Note: The web console restarts when you change these settings.
      • Click Enable Client Authentication to disable client authentication.
      • Click Remove to delete the current certificate. (This also disables client authentication.)
      • To enable OCSP or CRL based certificate revocation checking, see Certificate Revocation Checking.

    Client authentication allows you to access Prism securely by exchanging a digital certificate. Prism validates that the certificate is signed by your organization's trusted signing certificate.

    Client authentication ensures that the Nutanix cluster gets a valid certificate from the user. Normally, a one-way authentication process occurs where the server provides a certificate so the user can verify the authenticity of the server (see Installing an SSL Certificate). When client authentication is enabled, this becomes a two-way authentication where the server also verifies the authenticity of the user. A user must provide a valid certificate when accessing the console either by installing the certificate on the local machine or by providing it through a smart card reader.
    Note: The CA must be the same for both the client chain certificate and the certificate on the local machine or smart card.
  3. To specify a service account that the Prism Central web console can use to log in to Active Directory and authenticate Common Access Card (CAC) users, select the Configure Service Account check box, and then do the following in the indicated fields:
    1. Directory : Select the authentication directory that contains the CAC users that you want to authenticate.
      This list includes the directories that are configured on the Directory List tab.
    2. Service Username : Enter the user name in the username@domain.com format that you want the web console to use to log in to the Active Directory.
    3. Service Password : Enter the password for the service user name.
    4. Click Enable CAC Authentication .
      Note: For federal customers only.
      Note: The Prism Central console restarts after you change this setting.

    The Common Access Card (CAC) is a smart card about the size of a credit card, which some organizations use to access their systems. After you insert the CAC into the CAC reader connected to your system, the software in the reader prompts you to enter a PIN. After you enter a valid PIN, the software extracts your personal certificate that represents you and forwards the certificate to the server using the HTTP protocol.

    Nutanix Prism verifies the certificate as follows:

    • Validates that the certificate has been signed by your organization’s trusted signing certificate.
    • Extracts the Electronic Data Interchange Personal Identifier (EDIPI) from the certificate and uses the EDIPI to check the validity of the account in Active Directory. The security context from the EDIPI is used for your Prism session.
    • Prism Central supports both certificate authentication and basic authentication so that you can log in to Prism Central with a certificate while REST API clients use basic authentication (REST API clients cannot use CAC certificates). With this behavior, if a certificate is presented during Prism Central login, certificate authentication is used; if no certificate is presented, basic authentication is enforced.
    Note: Nutanix Prism does not support OpenLDAP as directory service for CAC.
    If you map a Prism Central role to a CAC user and not to an Active Directory group or organizational unit to which the user belongs, specify the EDIPI (User Principal Name, or UPN) of that user in the role mapping. A user who presents a CAC with a valid certificate is mapped to a role and taken directly to the web console home page. The web console login page is not displayed.
    Note: If you have logged on to Prism Central by using CAC authentication, to successfully log out of Prism Central, close the browser after you click Log Out .
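
As an illustration of the EDIPI extraction described above (a sketch, not the exact Prism implementation): on a CAC, the certificate subject CN commonly ends with the ten-digit EDIPI, for example LAST.FIRST.MIDDLE.1234567890. The CN value below is a hypothetical example.

```python
import re
from typing import Optional

def extract_edipi(subject_cn: str) -> Optional[str]:
    """Return the trailing 10-digit EDIPI from a CAC-style CN, if present."""
    match = re.search(r"\.(\d{10})$", subject_cn)
    return match.group(1) if match else None

# Hypothetical CAC certificate common name.
cn = "DOE.JANE.A.1234567890"
edipi = extract_edipi(cn)
print(edipi)  # "1234567890"
```

The extracted EDIPI is what the role mapping must reference when you map a Prism Central role directly to a CAC user rather than to a group or organizational unit.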

Certificate Revocation Checking

Enabling Certificate Revocation Checking using Online Certificate Status Protocol (nCLI)

About this task

OCSP is the recommended method for checking certificate revocation in client authentication. You can enable certificate revocation checking using the OCSP method through the command-line interface (nCLI).

To enable certificate revocation checking using OCSP for client authentication, do the following.

Procedure

  1. Set the OCSP responder URL.
    ncli authconfig set-certificate-revocation set-ocsp-responder=<ocsp url>

    Here, <ocsp url> indicates the location of the OCSP responder.
  2. Verify if OCSP checking is enabled.
    ncli authconfig get-client-authentication-config

    The expected output if certificate revocation checking is enabled successfully is as follows.

    Auth Config Status: true
    File Name: ca.cert.pem
    OCSP Responder URI: http://<ocsp-responder-url>
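
When scripting cluster configuration, the `Key: Value` output shown above is easy to check programmatically. A minimal sketch (the responder URL below is a hypothetical stand-in for your own):

```python
# Sample nCLI output, with a hypothetical OCSP responder URL filled in.
SAMPLE_OUTPUT = """Auth Config Status: true
File Name: ca.cert.pem
OCSP Responder URI: http://ocsp.example.com"""

def parse_ncli_output(text: str) -> dict:
    """Parse 'Key: Value' lines into a dict, splitting on the first colon only
    so that values containing colons (such as URLs) survive intact."""
    result = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            result[key.strip()] = value.strip()
    return result

config = parse_ncli_output(SAMPLE_OUTPUT)
print(config["Auth Config Status"])  # "true" when revocation checking is enabled
```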

Enabling Certificate Revocation Checking using Certificate Revocation Lists (nCLI)

About this task

Note: OCSP is the recommended method for checking certificate revocation in client authentication.

You can use the CRL certificate revocation checking method if required, as described in this section.

To enable certificate revocation checking using CRL for client authentication, do the following.

Procedure

Specify all the CRLs that are required for certificate validation.
ncli authconfig set-certificate-revocation set-crl-uri=<uri 1>,<uri 2> set-crl-refresh-interval=<refresh interval in seconds>
  • The above command resets any previous OCSP or CRL configurations.
  • The URIs must be percent-encoded and comma separated.
  • The CRLs are updated periodically as specified by the crl-refresh-interval value. This interval is common for the entire list of CRL distribution points. The default value for this is 86400 seconds (1 day).
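
Because the CRL URIs must be percent-encoded and comma separated, a small helper can build the set-crl-uri argument safely. A sketch using Python's standard library (the CRL distribution points are hypothetical):

```python
from urllib.parse import quote

# Hypothetical CRL distribution points.
crl_uris = [
    "http://crl.example.com/root ca.crl",   # note the space, which must be encoded
    "http://crl.example.com/issuing.crl",
]

# Percent-encode each URI in full (safe="" also encodes ':' and '/'),
# then comma-separate them for the set-crl-uri parameter.
encoded = ",".join(quote(uri, safe="") for uri in crl_uris)
command = (
    "ncli authconfig set-certificate-revocation "
    f"set-crl-uri={encoded} set-crl-refresh-interval=86400"
)
print(command)
```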

User Management

Managing Local User Accounts

About this task

The Prism Central admin user is created automatically, but you can add more (locally defined) users as needed. To add, update, or delete a user account, do the following:

Note:
  • To add user accounts through Active Directory, see Configuring Authentication. If you enable the Prism Self Service feature, an Active Directory is assigned as part of that process.
  • Changing the Prism Central admin user password does not impact registration (re-registering clusters is not required).

Procedure

  • Click the gear icon in the main menu and then select Local User Management in the Settings page.

    The Local User Management dialog box appears.

    Figure. User Management Window

  • To add a user account, click the New User button and do the following in the displayed fields:
    1. Username : Enter a user name.
    2. First Name : Enter a first name.
    3. Last Name : Enter a last name.
    4. Email : Enter a valid user email address.
    5. Password : Enter a password (maximum of 255 characters).
      Note: A second field to verify the password is not included, so be sure to enter the password correctly in this field.
    6. Language : Select the language setting for the user.

      English is selected by default. You have an option to select Simplified Chinese or Japanese . If you select either of these, the cluster locale is updated for the new user. For example, if you select Simplified Chinese , the user interface is displayed in Simplified Chinese when the new user logs in.

    7. Roles : Assign a role to this user.

      There are three options:

      • Checking the User Admin box allows the user to view information, perform any administrative task, and create or modify user accounts.
      • Checking the Prism Central Admin (formerly "Cluster Admin") box allows the user to view information and perform any administrative task, but it does not provide permission to manage (create or modify) other user accounts.
      • Leaving both boxes unchecked allows the user to view information, but it does not provide permission to perform any administrative tasks or manage other user accounts.
    8. When all the fields are correct, click the Save button (lower right).

      This saves the configuration and redisplays the dialog box with the new user appearing in the list.

    Figure. Create User Window

  • To modify a user account, click the pencil icon for that user and update one or more of the values as desired in the Update User window.
    Figure. Update User Window

  • To dis