Solutions where storage and compute live in the same box are destined to succeed, because they radically change datacenter design and efficiency. They scale linearly because of the way the data is stored: data is stored and accessed locally, with a copy kept on another node. The Nutanix Distributed File System at the core of the system, with a controller VM on each node managing reads and writes to local storage and replication between nodes, scales linearly. Every time you add a block, you gain compute and storage performance in a linear way, without having to rebuild your storage architecture. This is difficult to achieve in traditional environments with shared SAN storage, where each time you add more servers to the cluster to gain compute performance and memory, the SAN still performs the same way and slowly becomes a bottleneck.
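To picture the linear scaling described above, here is a minimal sketch. The per-node and SAN figures are made-up illustrative values, not Nutanix specifications: the point is only that in a scale-out design each node contributes its own local storage performance, while a shared SAN's ceiling stays fixed as you add compute.

```python
# Illustrative model of linear scale-out vs. a fixed shared SAN.
# NODE_IOPS and SAN_IOPS are assumed example values, not Nutanix specs.

NODE_IOPS = 25_000      # assumed IOPS contributed by one converged node
SAN_IOPS = 100_000      # assumed fixed ceiling of a shared SAN array

def scale_out_iops(nodes: int) -> int:
    """Converged design: every added node brings its own local storage performance."""
    return nodes * NODE_IOPS

def san_iops(nodes: int) -> int:
    """Traditional design: compute grows, but the shared SAN ceiling stays fixed."""
    return SAN_IOPS

for n in (4, 8, 16):
    print(f"{n} nodes: scale-out={scale_out_iops(n)} IOPS, SAN={san_iops(n)} IOPS")
```

With these example numbers, the converged cluster overtakes the SAN somewhere past four nodes, which is exactly the bottleneck effect described above.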
Minimal Latency – the Nutanix Distributed File System, with its hybrid flash approach and its proximity to the compute, delivers minimal latency compared with traditional SAN designs, where the data has to travel through the HBA, a switch, over the wire to the storage adapter of the SAN, etc…
Update: Nutanix has just released Prism UI – a new HTML 5-based UI for their new NOS 3.5 release, which brings the Elastic Deduplication Engine. It accelerates application performance with near-instantaneous response times and up to 10x improvement in effective capacity for hot working data. Prism also includes a programmatic REST-based API for cloud-based integration.
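A REST-based API means cluster information can be pulled with any HTTP client. The sketch below uses only the Python standard library; the host, port and endpoint path are placeholders I have assumed for illustration – check the Prism API documentation for the actual routes and authentication details.

```python
# Hedged sketch of calling a REST-based management API with the Python
# standard library. The host, port and path below are placeholder
# assumptions, not documented Prism routes.
import base64
import urllib.request

def build_request(host: str, user: str, password: str) -> urllib.request.Request:
    """Prepare an authenticated GET against a hypothetical cluster endpoint."""
    url = f"https://{host}:9440/api/cluster"   # assumed port and path
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

# Sending it (not executed here, since it needs a live cluster):
# import json
# with urllib.request.urlopen(build_request("prism.example.local", "admin", "secret")) as r:
#     cluster = json.loads(r.read())
```

The same pattern works from any orchestration or monitoring tool, which is the point of exposing a programmatic API next to the UI.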
NOS 3.5 supports not only VMware, but also KVM and Hyper-V! This is new, and at the time of this update it's only a preview.
Other enhancements in NOS 3.5:
- Extension of compression to the native Nutanix Disaster Recovery feature
- Introduction of a Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM).
Video of Prism UI released today:
See the demonstration of how Nutanix technology works and how data is read, written and protected against failure in the Nutanix cluster. The local flash with the lowest latency in each compute node is a PCIe-based flash card providing maximum IOPS with minimal latency, followed by SATA-attached SSDs and spinning SATA magnetic disks (the slowest tier). The hardware details are also covered in my post about the Nutanix Distributed File System.
This video shows an animation of how the Nutanix technology works.
The video comes from the Nutanix channel.
Nutanix uses Heat-Optimized Tiering (HOT), which automatically moves the most frequently accessed data to the highest-performing storage tier – the Fusion-io PCIe flash card, then the Intel SSD, and finally the slowest SATA tier for cold data. The thresholds are user-defined: you can decide how many hours or days pass before data becomes “cold” and is moved down to the SATA tier.
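The user-defined “cold” threshold can be pictured as a simple age check on the last access time. This is only an illustrative model of the tiering decision, not Nutanix's actual implementation, and the 24-hour threshold is an example value:

```python
# Illustrative model of heat-based tier placement; not Nutanix's actual code.
from datetime import datetime, timedelta

COLD_AFTER = timedelta(hours=24)   # example user-defined threshold

def pick_tier(last_access: datetime, now: datetime) -> str:
    """Keep recently touched data on flash; demote data untouched past the threshold."""
    if now - last_access >= COLD_AFTER:
        return "SATA"        # cold tier: spinning magnetic disks
    return "flash/SSD"       # hot tiers: PCIe flash, then SSD

now = datetime(2013, 8, 1, 12, 0)
print(pick_tier(now - timedelta(hours=2), now))   # recently accessed -> stays hot
print(pick_tier(now - timedelta(days=3), now))    # untouched for days -> demoted
```

Raising or lowering `COLD_AFTER` is the knob the paragraph above describes: how long data can sit idle before it is moved to the SATA tier.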
In addition, to optimize access speed, most of a VM's data is always local. If a VM accesses data through a controller VM other than the local one, the data is moved closer to the VM.
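This migrate-on-remote-read behavior can be sketched as a lookup table of data placements. The sketch is a deliberately simplified model, assuming a single tracked copy per extent, whereas the real file system also keeps replicas on other nodes:

```python
# Simplified model of read locality with migrate-on-remote-read.
# The real NDFS keeps replicas on multiple nodes for protection;
# this only tracks where the "primary" copy of each extent lives.
extent_location = {"vm1-extent-0": "nodeB"}  # extent currently on a remote node

def read_extent(extent: str, local_node: str) -> str:
    """Serve a read; if the extent lives elsewhere, migrate it to the reader's node."""
    if extent_location[extent] != local_node:
        extent_location[extent] = local_node   # move the data closer to the VM
    return extent_location[extent]

read_extent("vm1-extent-0", "nodeA")   # VM on nodeA touches remote data
print(extent_location["vm1-extent-0"])  # the extent is now local to nodeA
```

After the first remote read, subsequent reads of the same extent are served locally, which is why most of a VM's working set ends up on its own node.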
A single Nutanix block (2U) has four nodes. Each node (host) is a standard x64-architecture server with dual processors. Each node has local storage (Fusion-io, SSD and SATA disks) with different performance characteristics.
Nutanix supports VMware vSphere 5.1 and KVM.