This guide details how a Nutanix hyperconverged solution operates and describes the Prism Central and Prism Element interfaces.
Requirements
- A Nutanix cluster in your OVHcloud account.
- Access to the OVHcloud Control Panel.
Technical solution overview
A reminder about defining a node
A Nutanix solution consists of "nodes," which are physical computers. On each of these computers, we find:
- One system disk or two system disks in RAID with the AHV hypervisor installed.
- An SSD on which the CVM (Controller VM, a virtual machine that links the nodes together and is an essential component of the Nutanix solution) is stored. Any remaining disk space may be used for data storage.
- Other SSD or SAS disks, with a different license cost depending on the chosen storage technology.
- One or more processors.
- Memory.
- Sometimes a GPU (Graphics Processing Unit) graphics card.
Ideally, each node in a Nutanix cluster should be identical. There may be differences, especially when a GPU is present. However, nodes that contain storage must be identical.
How a Nutanix cluster works
A minimum of three nodes is required to run a Nutanix cluster. When a cluster is created, all available disks are added to what is called a "storage pool". We recommend having only one storage pool.
As a reminder, the OVHcloud Nutanix solution starts from three nodes and can go up to 18 nodes.
Data redundancy is not handled within a single node, as it is with RAID, but across the network on multiple nodes.
There are several levels of redundancy:
- RF2: Data is available on two nodes, tolerating the failure of one node or of a data disk on one of the nodes.
- RF3: Data is available on three nodes. This option requires at least five nodes; it is more resilient, as it tolerates the loss of two nodes, at the cost of less usable storage capacity (see the capacity sketch below).
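As a rough illustration of the RF2/RF3 trade-off, the hedged sketch below estimates usable capacity from raw capacity. The node and disk figures are hypothetical, and real clusters reserve additional space for CVMs, metadata and rebuild headroom, so treat this as an order-of-magnitude illustration only.

```python
# Rough usable-capacity estimate for a given replication factor.
# Hypothetical figures; real clusters reserve extra space for CVMs,
# metadata and rebuild headroom.

def usable_capacity_tib(raw_capacity_tib: float, replication_factor: int) -> float:
    """Each piece of data is written replication_factor times across nodes."""
    return raw_capacity_tib / replication_factor

raw = 3 * 4 * 1.92  # hypothetical: 3 nodes x 4 SSDs x 1.92 TiB each
print(f"RF2: ~{usable_capacity_tib(raw, 2):.1f} TiB usable out of {raw:.1f} TiB raw")
print(f"RF3: ~{usable_capacity_tib(raw, 3):.1f} TiB usable out of {raw:.1f} TiB raw")
```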
Virtualization overview
Virtualization is done through the AHV hypervisor. This hypervisor is integrated on each node and does not require an additional license.
Virtual machines run on one of the nodes and can be live-migrated from one node to another during normal operation.
If a node fails, its virtual machines restart on one of the remaining nodes.
List of Nutanix cluster connection options
- From the Prism Central web interface (an additional virtual machine that has features that Prism Element does not have and that can connect to one or more clusters).
- From the Prism Element web interface (actually one of the CVMs).
- Via SSH on the cluster (in this case, it is also one of the CVMs).
- Via SSH on one of the cluster nodes for hypervisor maintenance operations.
Through Prism Central and Prism Element, you can use the REST API to automate certain command-line tasks.
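As an illustration of the REST API, the hedged sketch below lists the virtual machines known to Prism Central through the v3 vms/list endpoint. The FQDN and credentials are placeholders for your own values; adjust the TLS verification to your certificate setup.

```python
# Minimal sketch: listing VMs through the Prism Central v3 REST API.
# PC_FQDN, USERNAME and PASSWORD are placeholders for your own values.
import requests

PC_FQDN = "your-prism-central.example.com"
USERNAME = "admin"
PASSWORD = "your-password"

# The v3 "list" endpoints take a POST with a simple pagination body.
response = requests.post(
    f"https://{PC_FQDN}:9440/api/nutanix/v3/vms/list",
    json={"kind": "vm", "length": 20, "offset": 0},
    auth=(USERNAME, PASSWORD),
    verify=True,  # point this to a CA bundle if you use a private CA
    timeout=30,
)
response.raise_for_status()

# Print each VM's name and power state from the v3 response schema.
for vm in response.json().get("entities", []):
    name = vm.get("spec", {}).get("name", "<unnamed>")
    power = vm.get("status", {}).get("resources", {}).get("power_state", "unknown")
    print(f"{name}: {power}")
```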
Instructions
Connecting to Prism Central from the Internet
We will connect via Prism Central, which is the entry point from the Internet.
Access to the cluster is via a public address such as https://FQDN:9440. This address is provided to you when you create a Nutanix cluster with OVHcloud.
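If you would like to confirm that the public endpoint answers before opening it in a browser, a minimal sketch such as the one below (using only the Python standard library, with a placeholder FQDN) checks that port 9440 accepts a TLS connection.

```python
# Minimal sketch: verify that Prism Central answers on TCP port 9440 over TLS.
# PC_FQDN is a placeholder for the address provided with your cluster.
import socket
import ssl

PC_FQDN = "your-prism-central.example.com"
PORT = 9440

context = ssl.create_default_context()
# If your cluster uses a private CA, load it with context.load_verify_locations().
with socket.create_connection((PC_FQDN, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=PC_FQDN) as tls:
        print("Reachable over TLS:", tls.version())
```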
Enter your username and password and click the arrow.
Connecting to Prism Element via Prism Central
On the Prism Central Dashboard, click the cluster name in the Cluster Quick Access frame.
You will then access your cluster’s dashboard.
- To the right are the total number of disks, the number of VMs, and the number of hosts.
- A green heart indicates that the Nutanix cluster is functioning correctly.
- At the bottom of this section, you will see the fault tolerance level (a value of 1 means the cluster is in RF2, tolerating the loss of a disk on one node or the failure of an entire node).
- A summary of the storage and available disk space is displayed on the left.
Click View Details for more information about storage.
This allows you to check the storage status by node.
Click on the Hardware menu to view the details of the storage per node, as well as the number of disks allocated per node.
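If you prefer to collect the same per-node information programmatically, a hedged sketch along these lines queries the Prism Central v3 hosts/list endpoint. The FQDN and credentials are placeholders, and the exact fields returned depend on your AOS version, so the example simply prints which resource keys are exposed for each node.

```python
# Minimal sketch: listing hosts (nodes) through the Prism Central v3 REST API,
# as a programmatic counterpart to the Hardware view.
# PC_FQDN and the credentials are placeholders for your own values.
import requests

PC_FQDN = "your-prism-central.example.com"
AUTH = ("admin", "your-password")

response = requests.post(
    f"https://{PC_FQDN}:9440/api/nutanix/v3/hosts/list",
    json={"kind": "host"},
    auth=AUTH,
    timeout=30,
)
response.raise_for_status()

# Show each node's name and the hardware-related keys exposed by the API
# (CPU, memory and disk details live under "resources").
for host in response.json().get("entities", []):
    status = host.get("status", {})
    print(status.get("name", "<unnamed>"), sorted(status.get("resources", {}).keys()))
```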
Click Diagram for a graphical summary as shown below.
Go further
For more information and tutorials, please see our other Nutanix support guides or explore the guides for other OVHcloud products and services.