Friday, 24 April 2015

Vserver configuration for NAS

         Here we will create a new SVM named svm1 on the cluster and configure it to serve out a volume over NFS and CIFS. We will configure two NAS data LIFs on the SVM, one per node in the cluster.

Step:1

Open System manager => Storage virtual machines =>  Click "create"

Provide the SVM name, data protocols, root aggregate and DNS details,


Step:2

In this step you need to specify the data interfaces (LIFs) and the CIFS server details,



Step:3

Specify the password for an SVM-specific administrator account, which can then be used to delegate admin access for just this SVM.


Step:4

The New Storage Virtual Machine Summary window opens, displaying the details of the newly created SVM.


Step:5

The SVM svm1 is now listed under cluster1 on the Storage Virtual Machines tab. The NFS and CIFS protocols are shown in the right panel, which indicates that those protocols are enabled for the selected SVM svm1.



Creating Vserver through CLI:

Create the Vserver using the "vserver create" command,
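
As a rough sketch of what that command looks like (the root volume name, aggregate, security style, and the -ns-switch parameter here are assumptions for this lab, and the required parameters vary slightly between Data ONTAP releases):

cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01 -ns-switch file -rootvolume-security-style ntfs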


We do not yet have any LIFs defined for the SVM svm1. Create the svm1_cifs_nfs_lif1 data LIF for svm1:
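
A minimal sketch of the command (the home node, port, and IP address here are assumptions, not the values from the original screenshot):

cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data -data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -address 192.168.0.131 -netmask 255.255.255.0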


Configure the DNS domain and name servers for svm1,
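
Something along these lines, assuming a lab domain and name server address:

cluster1::> vserver services dns create -vserver svm1 -domains demo.netapp.com -name-servers 192.168.0.253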


That's it :)

Thursday, 23 April 2015

SVM (Vserver) & LIF Concepts

         Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that operate within a cluster for the purpose of serving data out to storage clients. A single cluster may host hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network Interfaces (LIFs), storage access protocols (e.g. NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients its own namespace.

You explicitly choose and configure which storage protocols you want a given SVM to support at SVM creation time, and you can later add or remove protocols as desired. A single SVM can host any combination of the supported protocols.

An SVM’s assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM. As we saw earlier, an aggregate is directly tied to the specific node hosting its disks, which means that an SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also has a direct relationship to any nodes that are hosting its LIFs. LIFs are essentially an IP address with a number of associated characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes this means that an SVM runs in part on all nodes that are hosting its LIFs.

When an SVM is configured with multiple data LIFs any of those LIFs can potentially be used to access volumes hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension which LIF, is a function of name resolution, the mapping of a hostname to an IP address. CIFS Servers have responsibility under NetBIOS for resolving requests for their hostnames received from clients, and in so doing can perform some load balancing by responding to different clients with different LIF addresses, but this distribution is not sophisticated and requires external NetBIOS name servers in order to deal with clients that are not on the local network. NFS Servers do not handle name resolution on their own.

DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same hostname. DNS is supported by both NFS and CIFS clients and works equally well with clients on local area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP this architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for LIFs that are temporarily offline. To compensate for this condition DNS servers can be configured to delegate the name resolution responsibility for the SVM’s hostname records to the SVM itself so that it can directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name resolution request.
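
In clustered Data ONTAP this delegation is enabled by assigning a DNS zone to the SVM's data LIFs, roughly as follows (the SVM, LIF, and zone names are assumptions, and the external DNS server must separately be configured to delegate that zone to the LIF addresses):

cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone svm1.demo.netapp.com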

LIFS that are mapped to physical network ports that reside on the same node as a volume’s containing aggregate offer the most efficient client access path to the volume’s data. However, clients can also access volume data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered Data ONTAP uses the high speed cluster network to bridge communication between the node hosting the LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given SVM on each cluster node that has an aggregate that is hosting volumes for that SVM. If additional resiliency is desired then you can also create a NAS LIF on nodes not hosting aggregates for the SVM as well.

A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to another in the event of a component failure; any existing connections to that LIF from NFS and SMB 2.0 and later clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and continues servicing network requests from that new node/port. Throughout this operation the NAS LIF maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in progress but as soon as it completes the clients resume any in-process NAS operations without any loss of data.
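
You can inspect and control this behaviour from the clustershell; as a sketch (SVM and LIF names assumed), the first command lists each LIF's failover targets and the second sends any migrated LIFs back to their home ports once the failed component is repaired:

cluster1::> network interface show -vserver svm1 -failover
cluster1::> network interface revert -vserver svm1 -lif *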

The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster’s effective SVM limit by multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node (so that a node can also accommodate its HA partner’s LIFs in the event of a failover).

Each SVM has its own NAS namespace, a logical grouping of the SVM’s CIFS and NFS volumes into a single logical filesystem view. Clients can access the entire namespace by mounting a single share or export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and present a consistent view of the SVM’s data to all clients rather than having to reproduce that view structure on each individual client. As an Administrator maps and unmaps volumes from the namespace those volumes instantly become visible or disappear from clients that have mounted CIFS and NFS volumes higher in the SVM’s namespace. Administrators can also create NFS exports at individual junction points within the namespace and can create CIFS shares at any directory path in the namespace.
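
As a small sketch of how a namespace is assembled (the volume, junction path, and share names here are assumptions): a volume is mounted at a junction point, and a CIFS share is then created at a path within the namespace:

cluster1::> volume mount -vserver svm1 -volume vol1 -junction-path /vol1
cluster1::> vserver cifs share create -vserver svm1 -share-name vol1 -path /vol1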

Tuesday, 21 April 2015

Troubleshooting CIFS issues


• Use "sysstat –x 1" to determine how many CIFS ops/s and how much CPU is being utilized.

• Check /etc/messages for any abnormal messages, especially for oplock break timeouts.

• Use "perfstat" to gather data and analyze (note information from "ifstat","statit", "cifs stat", and "smb_hist", messages, general cifs info).

• "pktt" may be necessary to determine what is being sent/received over the network.

• "sio" should / could be used to determine how fast data can be written/read from the filer.

• Client troubleshooting may include review of event logs, ping of filer, test using a different filer or Windows server.

• If it is a network issue, check "ifstat –a", "netstat –in" for any I/O errors or collisions.

• If it is a gigabit issue check to see if the flow control is set to FULL on the filer and the switch.

• On the filer if it is one volume having an issue, do "df" to see if the volume is full.

• Do "df –i" to see if the filer is running out of inodes.

• From "statit" output, if it is one volume that is having an issue check for disk fragmentation.

• Try the "netdiag –dv" command to test filer side duplex mismatch. It is important to find out what the benchmark is and if it’s a reasonable one.

• If the problem is poor performance, try a simple file copy using Explorer and compare it with the application's performance. If they are both the same, the issue is probably not the application. Rule out client problems and make sure it is tested on multiple clients. If it is an application performance issue, get all the details about:
◦ The version of the application
◦ What specifics of the application are slow, if any
◦ How the application works
◦ Is this equally slow while using another Windows server over the network?
◦ The recipe for reproducing the problem in a NetApp lab.

• If the slowness only happens at certain times of the day, check if the times coincide with other heavy activity like SnapMirror, SnapShots, dump, etc. on the filer. If normal file reads/writes are slow:
◦ Check duplex mismatch (both client side and filer side)
◦ Check if oplocks are used (assuming they are turned off)
◦ Check if there is an Anti-Virus application running on the client. This can cause performance issues especially when copying multiple small files.
◦ Check "cifs stat" to see if the Max Multiplex value is near the cifs.max_mpx option value. Common situations where this may need to be increased are when the filer is being used by a Windows Terminal Server or any other kind of server that might have many users opening new connections to the filer. What is CIFS Max Multiplex?
◦ Check the value of OpLkBkNoBreakAck in "cifs stat". Non-zero numbers indicate oplock break timeouts, which cause performance problems. A quick first-pass triage sequence using several of the commands above is sketched below.
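
A first-pass triage from the 7-Mode console might look something like the following, using the commands named in the bullets above (output omitted):

filer> sysstat -x 1
filer> df
filer> df -i
filer> cifs stat
filer> ifstat -a
filer> netstat -in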

Sunday, 19 April 2015

Adding a node to cluster

    Manually run the cluster setup wizard to add the node to the cluster. This is exactly the same procedure you would follow to add even more nodes to the cluster, the only differences being that you would assign a different IP address and possibly a different management interface port name.

Step:1

Launch a PuTTY session and type the cluster setup command,


Step:2

It asks whether you want to create a new cluster or join an existing cluster; type "join", and it will show you the existing cluster interface configuration. If you want to use that existing configuration, type "yes".
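
For reference, the join can also be issued as a single clustershell command rather than through the wizard prompts (syntax varies by release; the cluster-interconnect IP address of an existing node shown here is an assumption):

::> cluster join -clusteripaddr 169.254.21.72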


Step:3

Here it will ask for the name of the cluster you want to add the node to; just hit enter to accept the default,


Step:4

In this step you need to configure SFO (storage failover) if you are using an HA system. As we don't have HA, we cannot configure SFO.


Step:5

In this step you need to configure the node, assigning the IP address, interface, gateway, etc.


That's it. The node has been successfully added to cluster1.
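
To confirm the result from the clustershell, the cluster show command lists every member node along with its health and eligibility (output omitted here):

cluster1::> cluster show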


Wednesday, 15 April 2015

Create a New Aggregate on Cluster Node

                 Disks are the fundamental unit of physical storage in clustered Data ONTAP and are tied to a specific cluster node by virtue of their physical connectivity (i.e. cabling) to a given controller head.
             
Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a group of disks that are all physically attached to the same node. A given disk can only be a member of a single aggregate.

By default each node has one aggregate known as the root aggregate, which is a group of the node’s local disks that host the node’s Data ONTAP operating system. A node’s root aggregate is created during Data ONTAP installation in a minimal RAID-DP configuration, meaning it is initially comprised of 3 disks (1 data, 2 parity), and is assigned the name aggr0. Aggregate names must be unique within a cluster so when the cluster setup wizard joins a node it must rename that node’s root aggregate if there is a conflict with the name of any aggregate that already exists in the cluster. If aggr0 is already in use elsewhere in the cluster then it renames the new node’s aggregate according to the convention aggr0_<nodename>_0.
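
Aggregate renames in general are a single command; a sketch with assumed names:

cluster1::> storage aggregate rename -aggregate aggr0 -newname aggr0_cluster1_02_0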

A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to use one or more specific aggregates to host the SVM’s volumes. Multiple SVMs can be assigned to use the same aggregate, which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a single SVM provides greater workload isolation.
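
The assignment itself is just an attribute of the SVM; for example (SVM and aggregate names assumed), you can restrict svm1 to provisioning its volumes on a specific list of aggregates with:

cluster1::> vserver modify -vserver svm1 -aggr-list aggr1_cluster1_01,aggr1_cluster1_02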

Create an aggregate using System Manager:
--------------------------------------------
Step:1

Go to Cluster1 => Node cluster1-01 => Storage => Click "Aggregates",

It shows the list of aggregates that node owns. In the aggregates menu click "Create".


Step:2

Now the aggregate create wizard opens; click Next,


Step:3

Here you need to specify the aggregate name and RAID type; I'm going to specify the aggregate name as "aggr_cluster1_01" and the RAID type as "RAID-DP".


Step:4

Here it shows the aggregate details and the disk selection option; click "Select disks".


Step:5

Select the disk group from the table and specify the number of disks that you want to add to the aggregate.

I initially selected only 2 disks, but RAID-DP requires a minimum of 3 disks, so here I'm selecting 8 disks,

Step:6

Review the details and click create,


Step:7

Aggregate "agg1_cluster1_01" successfully created.


Check the newly created aggregate in the aggregates menu,


Create an aggregate in the CLI:
-------------------------------

Create a new aggregate with 6 disks using the "aggr create" command,
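
A sketch of the equivalent clustershell command, with a hypothetical aggregate name and an assumed node (parameter names can differ slightly between releases):

cluster1::> storage aggregate create -aggregate aggr2_cluster1_01 -diskcount 6 -nodes cluster1-01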


Check the newly created aggregate using the "aggr show" command,
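
For example, with the same hypothetical name:

cluster1::> storage aggregate show -aggregate aggr2_cluster1_01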


That's it :)

Sunday, 12 April 2015

Cluster Setup

           A cluster is a group of physical storage controllers, or nodes, that have been joined together for the purpose of serving data to end users. The nodes in a cluster can pool their resources together and can distribute their work across the member nodes. Communication and data transfer between member nodes (such as when a client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while management and client data traffic passes over separate management and data networks configured on the member nodes.

Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both controllers in an HA pair actively host and serve data but they are also capable of taking over their partner’s responsibilities in the event of a service disruption by virtue of their redundant cable paths to each other’s disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads and to support non-disruptive migrations of volumes and client connections to other nodes in the cluster resource pool, which means that cluster expansion and technology refreshes can take place while the cluster remains fully online and serving data.

Data ONTAP 8.2 clusters that will only be serving NFS and CIFS can scale up to a maximum of 24 nodes, although the node limit may be lower depending on the model of FAS controller in use. Data ONTAP 8.2 clusters that will also host iSCSI and FC can scale up to a maximum of 8 nodes.

Cluster setup in CLI:


The cluster setup wizard gathers the data necessary to create a brand new cluster or to add a new node to a pre-existing cluster. The setup wizard is a text-driven tool that will prompt you for information such as the name of the cluster you want to create, your Data ONTAP license keys, the TCP/IP address information for the cluster and the node, and so on.

In this example we launch a PuTTY session and connect to the host unjoined1,

Step:1

Log in to node unjoined1 and enter the cluster setup command,


  As we are creating a new cluster, type "create" at the prompt,



Step:2

If a network interface is already configured you can choose that configuration; otherwise create a new configuration. It then prompts for the cluster name and any additional license keys,


Step:4

In this step we need to configure the cluster management interface ports and DNS, as follows,


Step:5

If you are using an HA system, then you can configure Storage Failover (SFO). As we are using a standalone system, we cannot configure SFO in this example,


Step:6

In the final step, we need to configure the node management interface ports,


That's it, cluster setup is completed. :)



Why Clustered Data ONTAP?

                 A helpful way to start understanding the benefits offered by clustered Data ONTAP is to consider server virtualization. Before server virtualization system administrators frequently deployed applications on dedicated servers in order to maximize application performance and to avoid the instabilities often encountered when combining multiple applications on the same operating system instance. While this design approach was effective it also had the following drawbacks:
  •  It does not scale well – adding new servers for every new application is extremely expensive.
  •  It is inefficient – most servers are significantly underutilized meaning that businesses are not extracting the full benefit of their hardware investment.
  •  It is inflexible – re-allocating standalone server resources for other purposes is time consuming, staff intensive, and highly disruptive.

Server virtualization directly addresses all three of these limitations by decoupling the application instance from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware, meaning that businesses can now consolidate their server workloads to a smaller set of more effectively utilized physical servers. In addition, the ability to transparently migrate running virtual machines across a pool of physical servers enables businesses to reduce the impact of downtime due to scheduled maintenance activities.

Cluster Benefits:

Clustered Data ONTAP brings these same benefits and many others to storage systems. As with server virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a single logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data ONTAP you can:

  •  Combine different types and models of NetApp storage controllers (known as nodes) into a shared physical storage resource pool (referred to as a cluster).
  •  Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the same storage cluster.
  •  Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data volumes, LUNs, CIFS shares, and NFS exports.
  •  Support multitenancy with delegated administration of SVMs. Tenants can be different companies, business units, or even individual application owners, each with their own distinct administrators whose admin rights are limited to just the assigned SVM.
  •  Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.
  •  Non-disruptively migrate live data volumes and client connections from one cluster node to another.
  •  Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during hardware refresh cycles.
  •  Leverage multiple nodes in the cluster to simultaneously service a given SVM’s storage workloads. This means that businesses can scale out their SVMs beyond the bounds of a single physical node in response to growing storage and performance requirements, all non-disruptively.
  •  Apply software & firmware updates and configuration changes without cluster, SVM, and volume downtime.