
VSAN 6.0 Part 8 – Fault Domains

Posted on April 20, 2015

One of the really nice new features of VSAN 6.0 is fault domains. Previously, there was very little control over where VSAN placed virtual machine components. To protect against something like a rack failure, you may have had to use a very high NumberOfFailuresToTolerate value, resulting in multiple copies of the VM data dispersed around the cluster. With VSAN 6.0, this is no longer a concern: hosts participating in the VSAN cluster can be placed in different fault domains, which means that component placement takes place across fault domains and not just across hosts. Let’s look at this in action.
In this example, I have a 4 node cluster. I am going to create 3 fault domains. The first fault domain contains one host, the second also contains one host, and the third contains two hosts. It looks something like this:
Of course, this isn’t a very realistic setup, as you would typically have many more hosts per rack, but this is what I had at my disposal to test this feature. However, the concept remains the same. The idea now is to have VSAN deploy virtual machine components across the fault domains in such a way that a single rack failure will not make the VM inaccessible; in other words, to maintain a full copy of the virtual machine data even when a rack fails.
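The placement rule at work here can be sketched as a simple set check: no fault domain may hold components of more than one replica, so a single domain failure can destroy at most one copy of the data. The host names and rack mapping below are invented to mirror the example layout, and the check itself is my simplification, not VSAN’s actual placement engine.

```python
from itertools import combinations

# Invented hosts and rack assignment, mirroring the 4-node example above.
host_to_domain = {
    "host1": "rack1",
    "host2": "rack2",
    "host3": "rack3",
    "host4": "rack3",
}

def placement_ok(replicas, host_to_domain):
    """True if no fault domain holds components of more than one replica,
    i.e. a single-domain failure destroys at most one copy of the data."""
    domain_sets = [{host_to_domain[h] for h in replica} for replica in replicas]
    return all(a.isdisjoint(b) for a, b in combinations(domain_sets, 2))

# One copy striped across racks 1 and 2, the other inside rack 3: OK.
print(placement_ok([{"host1", "host2"}, {"host3", "host4"}], host_to_domain))  # True

# Both copies touching rack 3 would violate the rule.
print(placement_ok([{"host1", "host3"}, {"host3", "host4"}], host_to_domain))  # False
```

Note that a single replica is allowed to stripe across more than one fault domain; the rule only forbids two replicas from sharing a domain.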
The first step is to setup the fault domains. This is done in the vSphere web client under Settings > Virtual SAN > Fault Domains:
Using the green + symbol, fault domains with hosts can be created. Based on the design outlined above, I ended up with a fault domain configuration looking like this:
Now in my configuration, each host has 2 magnetic disks (HDDs), so I decided that in order to use as much of the hardware as possible, I would create a VM Storage Policy with StripeWidth (NumberOfDiskStripesPerObject) = 3 and FTT (NumberOfFailuresToTolerate) = 1. I then deployed a virtual machine with this policy and examined it afterwards. First I made sure that the VM was compliant with the policy, in other words that VSAN was able to meet the StripeWidth and FTT requirements, which it was (VM > Manage > Policies):
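The number of data components this policy produces follows from simple arithmetic; here is a quick sketch (the variable names are mine, not VSAN’s):

```python
# Policy values from the example above.
ftt = 1           # NumberOfFailuresToTolerate
stripe_width = 3  # NumberOfDiskStripesPerObject

replicas = ftt + 1                         # RAID-1 mirror copies: 2
data_components = replicas * stripe_width  # RAID-0 stripe components in total: 6

print(replicas, data_components)  # 2 6
```

On top of these data components, VSAN may or may not add witness components depending on placement, as discussed further below.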
I then checked the placement of the components using the VM > Monitor > Policies view:
As we can see, one copy of the data (RAID 0, 3-way stripe) resides on hosts 1 and 2, and the other copy of the data (RAID 0, 3-way stripe) resides on hosts 3 and 4. Both are mirrored/replicated in a RAID 1 configuration. Now, these are the questions we need to ask ourselves:
  •  If rack 1 fails (containing host 1), do I still have a full copy of the data? The answer is Yes.
  •  If rack 2 fails (containing host 2), do I still have a full copy of the data? The answer is Yes.
  •  If rack 3 fails (containing hosts 3 & 4), do I still have a full copy of the data? The answer is still Yes.
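The three questions above can be answered mechanically with a small model. The host and rack names are assumptions mirroring the example; each copy is represented simply as the set of hosts holding its stripe components.

```python
# Hypothetical model of the layout described above.
fault_domains = {
    "rack1": {"host1"},
    "rack2": {"host2"},
    "rack3": {"host3", "host4"},
}

# Each mirror copy is a RAID-0 stripe: the set of hosts holding its components.
copy_a = {"host1", "host2"}   # stripe spanning racks 1 and 2
copy_b = {"host3", "host4"}   # stripe within rack 3

def survives(failed_domain):
    """True if at least one full copy remains after the domain fails."""
    failed_hosts = fault_domains[failed_domain]
    return any(copy.isdisjoint(failed_hosts) for copy in (copy_a, copy_b))

for rack in fault_domains:
    print(rack, survives(rack))  # True for every rack
```

Failing rack 1 or rack 2 damages copy A but leaves copy B intact; failing rack 3 destroys copy B entirely but copy A survives, so the answer is Yes in all three cases.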
What about quorum if rack 3 fails? There are no witnesses present in this configuration, so how is quorum achieved? Well, this is another new enhancement in VSAN 6.0 whereby, under certain conditions, components can have votes rather than relying on witnesses. I discussed the new quorum behaviour in an earlier post.
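A simplified illustration of vote-based quorum: each component carries one or more votes, and the object stays accessible while strictly more than half the total votes remain. The vote assignment below is invented for illustration; the only property it needs is an odd total so a tie is impossible.

```python
def accessible(votes_available, votes_total):
    """An object stays accessible while strictly more than half
    of its total votes remain available."""
    return votes_available > votes_total / 2

# Invented vote assignment: six components (a1..a3 in copy A, b1..b3 in
# copy B), one carrying an extra vote so the total (7) is odd.
votes = {"a1": 1, "a2": 1, "a3": 2, "b1": 1, "b2": 1, "b3": 1}
total = sum(votes.values())  # 7

# Rack 3 fails and takes the whole second copy (b1..b3) with it.
remaining = total - sum(votes[c] for c in ("b1", "b2", "b3"))
print(accessible(remaining, total))  # True: 4 of 7 votes remain
```

With plain one-vote-per-component counting, losing three of six components would be an inaccessible 50/50 tie; the extra vote is what lets quorum survive without a dedicated witness.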
Fault domains are a nice new addition to Virtual SAN 6.0. Previously, with FTT, we stated that you needed ‘2n + 1’ hosts to tolerate ‘n’ failures. With fault domains, you now need ‘2n + 1’ fault domains to tolerate ‘n’ failures.
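The closing rule of thumb as a one-liner, for completeness:

```python
def min_fault_domains(n):
    """Minimum number of fault domains needed to tolerate n failures (2n + 1)."""
    return 2 * n + 1

print(min_fault_domains(1))  # 3: the three racks in this example
print(min_fault_domains(2))  # 5
```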