Thursday, 23 April 2015

SVM (Vserver) & LIF Concepts

Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that operate within a cluster for the purpose of serving data out to storage clients. A single cluster may host hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network Interfaces (LIFs), storage access protocols (e.g. NFS/CIFS/iSCSI/FC/FCoE), and, for NAS clients, its own namespace.
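
As a rough sketch of how this looks from the clustershell (the cluster, SVM and aggregate names used here, such as cluster1, svm1 and aggr1_node1, are hypothetical), an SVM might be created and inspected like this:

    cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_node1 -rootvolume-security-style unix
    cluster1::> vserver show -vserver svm1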

You explicitly choose and configure which storage protocols you want a given SVM to support at SVM creation time, and you can later add or remove protocols as desired. A single SVM can host any combination of the supported protocols.
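
For example, the allowed protocols can be listed and adjusted with the vserver commands below (again using the hypothetical svm1; exact options can vary by Data ONTAP release):

    cluster1::> vserver show -vserver svm1 -fields allowed-protocols
    cluster1::> vserver add-protocols -vserver svm1 -protocols iscsi
    cluster1::> vserver remove-protocols -vserver svm1 -protocols fcp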

An SVM's assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM. As we saw earlier, an aggregate is directly tied to the specific node hosting its disks, which means that an SVM runs in part on any node whose aggregates are hosting volumes for the SVM. An SVM also has a direct relationship to any nodes that are hosting its LIFs. A LIF is essentially an IP address with a number of associated characteristics, such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes, this means that an SVM runs in part on all nodes that are hosting its LIFs.
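
A data LIF with these characteristics might be created, and its failover targets inspected, along the following lines (the node, port and address values are illustrative):

    cluster1::> network interface create -vserver svm1 -lif svm1_nas_lif1 -role data -data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -address 192.168.10.50 -netmask 255.255.255.0
    cluster1::> network interface show -vserver svm1 -failover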

When an SVM is configured with multiple data LIFs, any of those LIFs can potentially be used to access volumes hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension which LIF, is a function of name resolution, the mapping of a hostname to an IP address. Under NetBIOS, a CIFS server is responsible for resolving requests for its hostname received from clients, and in doing so it can perform some load balancing by responding to different clients with different LIF addresses; however, this distribution is not sophisticated, and it requires external NetBIOS name servers in order to deal with clients that are not on the local network. NFS servers do not handle name resolution on their own.

DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same hostname. DNS is supported by both NFS and CIFS clients and works equally well with clients on local area and wide area networks. However, since DNS is an external service that resides outside of Data ONTAP, this architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for LIFs that are temporarily offline. To compensate for this condition, DNS servers can be configured to delegate the name resolution responsibility for the SVM's hostname records to the SVM itself, so that it can directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF availability and LIF utilization levels when deciding which LIF address to return in response to a DNS name resolution request.
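
Assuming the site DNS administrator has delegated a subdomain such as svm1.example.com to the SVM's data LIFs, that delegation can be tied to a LIF by assigning it a DNS zone, roughly as follows (the zone and LIF names are hypothetical):

    cluster1::> network interface modify -vserver svm1 -lif svm1_nas_lif1 -dns-zone svm1.example.com

Clients then mount against svm1.example.com, and the SVM answers the DNS queries for that zone itself, handing out the address of one of its available LIFs.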

LIFs that are mapped to physical network ports residing on the same node as a volume's containing aggregate offer the most efficient client access path to the volume's data. However, clients can also access volume data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered Data ONTAP uses the high-speed cluster network to bridge communication between the node hosting the LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given SVM on each cluster node that has an aggregate hosting volumes for that SVM. If additional resiliency is desired, you can also create NAS LIFs on nodes that are not hosting aggregates for the SVM.
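
To sketch this out, a second NAS LIF homed on the other node of a two-node cluster could be added, and the direct-versus-indirect question checked by comparing where each LIF currently lives with which aggregate (and therefore which node) holds each volume (names and addresses are again illustrative):

    cluster1::> network interface create -vserver svm1 -lif svm1_nas_lif2 -role data -data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -address 192.168.10.51 -netmask 255.255.255.0
    cluster1::> network interface show -vserver svm1 -fields home-node,curr-node,curr-port
    cluster1::> volume show -vserver svm1 -fields aggregate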

A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically fail over from one cluster node to another in the event of a component failure; any existing connections to that LIF from NFS and SMB 2.0 (and later) clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and continues servicing network requests from that new node/port. Throughout this operation the NAS LIF maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in progress, but as soon as it completes the clients resume any in-process NAS operations without any loss of data.
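
The same behaviour can also be exercised manually; for instance, a LIF can be migrated to another node/port and later sent back to its home port with commands along these lines (the destination node and port are hypothetical), keeping its IP address throughout:

    cluster1::> network interface migrate -vserver svm1 -lif svm1_nas_lif1 -destination-node cluster1-02 -destination-port e0c
    cluster1::> network interface revert -vserver svm1 -lif svm1_nas_lif1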

The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective SVM limit by multiplying the number of nodes by 125; for example, a 4-node cluster can host up to 4 x 125 = 500 SVMs. There is no limit on the number of LIFs that an SVM can host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node (so that a node can also accommodate its HA partner's LIFs in the event of a failover).

Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a single logical filesystem view. Clients can access the entire namespace by mounting a single share or export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and present a consistent view of the SVM's data to all clients rather than having to reproduce that view structure on each individual client. As an administrator maps volumes into and out of the namespace, those volumes instantly appear to, or disappear from, clients that have mounted CIFS and NFS volumes higher in the SVM's namespace. Administrators can also create NFS exports at individual junction points within the namespace and can create CIFS shares at any directory path in the namespace.
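
As a brief illustration (the volume, share and path names are made up), a volume can be junctioned into the namespace at creation time, unmounted and remounted at a junction point later, and exposed to CIFS clients with a share at any path:

    cluster1::> volume create -vserver svm1 -volume projects -aggregate aggr1_node1 -size 100g -junction-path /projects
    cluster1::> volume unmount -vserver svm1 -volume projects
    cluster1::> volume mount -vserver svm1 -volume projects -junction-path /projects
    cluster1::> vserver cifs share create -vserver svm1 -share-name projects -path /projects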

2 comments:

  1. Nice post, and congratulations. Just a question: I have read that if I have a two-node cluster and create a single SVM with 2 volumes in separate aggregates assigned to different nodes, a CPU-intensive task like dedupe runs on the node hosting the aggregate. Is that true? How can one SVM use CPUs on different nodes?
    Thanks

  2. Good article.
    Diagrams would have made it even more effective.
