Wednesday, 11 May 2016

Aggregate Relocation

            Aggregate relocation operations take advantage of the HA configuration to move the ownership of storage aggregates within the HA pair. Aggregate relocation occurs automatically during manually initiated takeover to reduce downtime during planned failover events such as nondisruptive software upgrade, and can be initiated manually for load balancing, maintenance, and nondisruptive controller upgrade. Aggregate relocation cannot move ownership of the root aggregate. (Source: NetApp site)


The aggregate relocation operation can relocate the ownership of one or more SFO aggregates, provided that the destination node can support the number of volumes in those aggregates. There is only a short interruption of access to each aggregate, and ownership information is changed one aggregate at a time.
During takeover, aggregate relocation happens automatically when the takeover is initiated manually. Before the target controller is taken over, ownership of the aggregates belonging to that controller is moved one at a time to the partner controller. When giveback is initiated, ownership is automatically moved back to the original node. The -bypass-optimization parameter can be used with the storage failover takeover command to suppress aggregate relocation during the takeover, as shown in the example below.
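As a rough illustration (using the node names from the lab example later in this post, and assuming an HA pair running clustered Data ONTAP), a planned takeover that skips the relocation optimization, followed by a giveback, would look like this:

cluster1::> storage failover takeover -ofnode cluster1-01 -bypass-optimization true
cluster1::> storage failover giveback -ofnode cluster1-01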
Command Options:
Parameter - Meaning
-node nodename - Specifies the name of the node that currently owns the aggregate.
-destination nodename - Specifies the destination node to which the aggregates are to be relocated.
-aggregate-list aggregate name - Specifies the list of aggregate names to be relocated from the source node to the destination node. This parameter accepts wildcards.
-override-vetoes true|false - Specifies whether to override any veto checks during the relocation operation.
-relocate-to-higher-version true|false - Specifies whether the aggregates are to be relocated to a node that is running a higher version of Data ONTAP than the source node.
-override-destination-checks true|false - Specifies whether the aggregate relocation operation should override the checks performed on the destination node.
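
For reference, these parameters belong to the storage aggregate relocation start command; the general form (with placeholder values) is:

cluster1::> storage aggregate relocation start -node <source-node> -destination <destination-node> -aggregate-list <aggr1,aggr2,...>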
CLI:
Aggregate name - HLTHFXDB1
Node1 name - cluster1-01
Node2 name - cluster1-02
Now create an aggregate on node2, and then relocate it from node2 to node1.
Create an aggregate named HLTHFXDB1 on node cluster1-02,
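From the clustershell this can be done roughly as follows (the -diskcount value of 5 is just a placeholder; choose a disk count that fits your own disk layout):

cluster1::> storage aggregate create -aggregate HLTHFXDB1 -node cluster1-02 -diskcount 5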

Check the newly created aggregate using the aggr show command,
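For example, filtering the output on the new aggregate (it should show cluster1-02 as the owning node at this point):

cluster1::> storage aggregate show -aggregate HLTHFXDB1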

Now relocate the aggregate from node 2 to node 1,
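Using the example names above, the relocation command looks like this:

cluster1::> storage aggregate relocation start -node cluster1-02 -destination cluster1-01 -aggregate-list HLTHFXDB1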


Once the HLTHFXDB1 relocation is done, check the status using the aggr show command,
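The relocation job and the new owner can be checked with, for example:

cluster1::> storage aggregate relocation show
cluster1::> storage aggregate show -aggregate HLTHFXDB1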


That's it :)

Wednesday, 4 May 2016

LIF Migration

    LIF migration is the ability to dynamically move logical interfaces from one physical port to another in a cluster, allowing you to migrate them to higher-performing network ports or to take nodes offline for maintenance while preserving data access. SAN LIFs do not support migration during normal operation; iSCSI and Fibre Channel instead use multipathing and ALUA to protect against network path failures. LIF migration is nondisruptive for NFS and for newer SMB protocol versions.

Step 1: In System Manager for cluster1, go to Configuration -> Network and select the network interfaces.

LIF1 - svm1_cifs_nfs_lif1 (cluster1-01 node, port e0c)
LIF2 - svm1_cifs_nfs_lif2 (cluster1-02 node, port e0c)

Now we are going to migrate the LIF svm1_cifs_nfs_lif1 to cluster1-02, port e0d.

svm1 uses DNS load balancing for its NAS LIFs, so we cannot predict in advance which of those two LIFs the host is using; in the CLI we can determine which LIF is handling that traffic using the LIF statistics command,
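One way to do this (assuming ONTAP 8.3 or later, where the statistics lif show preset is available; the -vserver filter simply narrows the output to svm1's LIFs) is:

cluster1::> statistics lif show -vserver svm1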



Step 2: In the “Network” pane of System Manager, locate the LIF that we want to migrate in the interface list and note its current port assignment.



Select the Migrate option,


Select the destination node and port; here it is cluster1-02, port e0d.


Notice the Migrate Permanently check box in this window. Checking this box indicates that the LIF's home port should also be set to this new port value.


Migration is complete. Now we can check it in System Manager, on the Network Interfaces tab,


The “Current Port” value shown for the LIF in the Network Interfaces list has changed to reflect the new node and port assignment. The small red X next to the current port entry indicates that the LIF does not currently reside on its configured home port.

Now send the LIF back to its home port,




The LIF migrates back to its home port, once again without disrupting I/O.




The “Current Port” value for the LIF returns to its original value in the Network Interfaces list, and the red X disappears to indicate that the LIF is back on its home port.

LIF Migration in the CLI:

Step 1: Using the migrate command, migrate LIF1 to cluster1-02, port e0d,
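Assuming the LIF belongs to SVM svm1, as shown in step 1 of the System Manager walkthrough, the command is along these lines:

cluster1::> network interface migrate -vserver svm1 -lif svm1_cifs_nfs_lif1 -destination-node cluster1-02 -destination-port e0d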



LIF1 is now migrated to cluster1-02, port e0d,
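This can also be confirmed from the CLI; curr-node, curr-port, and is-home are standard network interface show fields:

cluster1::> network interface show -vserver svm1 -lif svm1_cifs_nfs_lif1 -fields curr-node, curr-port, is-home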


Now revert it back to its home port using the revert command,
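The revert command only needs the SVM and LIF name, because the home node and home port are already stored in the LIF's configuration:

cluster1::> network interface revert -vserver svm1 -lif svm1_cifs_nfs_lif1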



Now LIF1 is back on its home port.

That's it :)