Creating a Metalun on EMC VNX & CLARiiON

30 May

Hi,

 

Please find the document below, which gives a step-by-step procedure for creating a metaLUN on an EMC VNX or CLARiiON array.

Creating_Metalun on EMC VNX & CLARiiON

Thanks,

Mavrick..!!

Best practices for creating metaLUNs on VNX or CLARiiON arrays

30 Apr

Matching and aligning the size of a metaLUN stripe to the write sizes from a host will help with the efficiency of I/O operations. For that reason it is usually best to use powers of two when deciding how many component LUNs go into a metaLUN stripe (that is, 2, 4 or 8). The same applies to drives in the RAID group, so for RAID 5 a 4+1 configuration is ideal (that is, five drives in the RAID 5 group).

While it is always best to match the RAID configuration to the expected I/O load, in large configurations this is not always practical (for example, large VMFS volumes).  The following configurations are a good balance between performance, contention, and rebuild times:

RAID 5: 20-drive metaLUN stripes (4 * (4+1)), which is four RAID 5 groups of five drives each

RAID 1/0: 24-drive metaLUN stripes (4 * (3+3)), which is four RAID 1/0 groups of six drives each

These are examples, but it is always best to determine the application I/O load in advance and plan the LUN layout accordingly (please refer to the White Papers listed below).
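
To put some numbers on that: with the default 64 KB stripe element size, a 4+1 RAID 5 group writes a 256 KB full data stripe, so striping four such components keeps large, power-of-two host writes aligned. As a rough sketch only (the RAID group IDs, LUN numbers, capacity and SP address below are made up, and the exact bind and metalun options should be checked against the CLI reference for your FLARE revision), the 20-drive layout above could be built from the CLI along these lines:

# Bind one component LUN of the same size in each of four existing 4+1 RAID 5 groups (10 to 13)
naviseccli -h spa_address bind r5 100 -rg 10 -cap 200 -sq gb -sp a
naviseccli -h spa_address bind r5 101 -rg 11 -cap 200 -sq gb -sp a
naviseccli -h spa_address bind r5 102 -rg 12 -cap 200 -sq gb -sp a
naviseccli -h spa_address bind r5 103 -rg 13 -cap 200 -sq gb -sp a

# Stripe the four components into one metaLUN (-type S = striped, -type C = concatenated)
naviseccli -h spa_address metalun -expand -base 100 -lus 101 102 103 -type S

The 24-drive RAID 1/0 example works the same way, binding the components with bind r1_0 in four 3+3 groups.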

Here are some further best practice rules for creating metaLUNs (a short CLI sketch for checking the finished metaLUN follows the list):

Do use the correct RAID level for the pattern of I/O (for example, an application generating 8 KB random writes should ideally be using RAID 1/0 LUNs).
Do not have more than one component LUN from any particular RAID group within the same metaLUN stripe.  This would cause linked contention.
Do use drives of the same capacity and rotational speed in all the RAID groups in the metaLUN.  The RAID groups should also contain the same number of drives.
Do not include the vault drives in the metaLUN.
Do use sets of RAID groups that only contain metaLUNs that are all striped in the same way, if possible.  Having standard LUNs in the same RAID group as metaLUN components will lead to some parts of a metaLUN having uneven response times across the metaLUN.  The order in which the component LUNs are added can be changed to evenly distribute the file system load (for example, RAID Group 1,2,3, and 4; then RAID Group 4,1,2, and 3, etc.).  The dedicated RAID group sets for metaLUNs are sometimes referred to as CLARiiON HyperGroups.
Do not concatenate stripes with large numbers of drives to components with far fewer drives.  This will lead to performance varying dramatically in different parts of the same metaLUN.
Do name the component LUNs in such a way that they are easy to identify (article emc103765). Numbering the components in a logical order helps to choose the correct RAID group and default SP owner (article emc98038), although the component LUNs will be renumbered when the metaLUN is created. The metaLUN will have its own default owner, but choosing the same default owner as all of the components avoids the components being reported as trespassed in some tools.
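
As a quick check after the expansion (again only a rough sketch; the LUN number and SP address are placeholders, and the getlun switches may differ slightly between CLI revisions), the component layout and ownership can be reviewed from the CLI:

# List the metaLUNs on the array and their component LUNs
naviseccli -h spa_address metalun -list

# Confirm the current and default owner match, so the components are not reported as trespassed
naviseccli -h spa_address getlun 100 -owner -default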

Steps to Configure SAN with EMC CLARiiON Array

30 Apr

The following are the major configuration steps for the storage, servers, and switches necessary to implement a CLARiiON SAN; a rough CLI sketch of the zoning and storage provisioning steps follows the list.

 

  • Install Fibre Channel HBAs in all systems
  • Install the EMC CLARiiON LP8000 port driver (for Emulex) on all systems
  • Connect each host to both switches (Brocade/Cisco/McData)
  • Connect SP1-A and SP2-A to the first switch
  • Connect SP1-B and SP2-B to the second switch
  • Note: For HA you can also cross the SP connections, connecting SPA1 and SPB1 to the first switch and SPB2 and SPA2 to the second switch.
  • Install the operating system on the Windows/Solaris/Linux/VMware hosts
  • Connect all hosts to the Ethernet LAN
  • Install the EMC CLARiiON Agent Configurator/Navisphere Agent on all hosts
  • Install the EMC CLARiiON ATF software on all hosts if you are not using EMC PowerPath failover software; otherwise install a supported version of EMC PowerPath on all hosts
  • Install Navisphere Manager on one of the NT hosts
  • Configure storage groups using Navisphere Manager
  • Assign storage groups to hosts as dedicated storage/cluster/shared storage
  • Install the cluster software on the hosts
  • Test the cluster for node failover
  • Create RAID groups with the protection the application requires (RAID 5, RAID 1/0, etc.)
  • Bind LUNs according to the application's device layout requirements
  • Add the LUNs to a storage group
  • Zone the SP ports and host HBAs on both switches
  • Register the hosts on the CLARiiON using Navisphere Manager
  • Add all hosts to the storage group
  • Scan for the new devices on each host
  • Label and format the devices on the host
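
The zoning, RAID group, LUN and storage group steps above translate into something like the following. This is only a rough sketch: the alias names, WWPNs, host name, RAID group, LUN and storage group are all made up, and the exact Brocade and Navisphere CLI syntax should be checked against the releases you are running.

# --- Brocade switch: single-initiator zone from one host HBA to one SP port ---
alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:cc"
alicreate "cx_spa_port0", "50:06:01:60:aa:bb:cc:dd"
zonecreate "z_host1_spa0", "host1_hba0; cx_spa_port0"
# Use cfgcreate instead of cfgadd if the zoning configuration does not exist yet
cfgadd "prod_cfg", "z_host1_spa0"
cfgsave
cfgenable "prod_cfg"

# --- CLARiiON: create a RAID group, bind a LUN and present it to the host ---
naviseccli -h spa_address createrg 10 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8
naviseccli -h spa_address bind r5 20 -rg 10 -cap 100 -sq gb
naviseccli -h spa_address storagegroup -create -gname host1_sg
naviseccli -h spa_address storagegroup -addhlu -gname host1_sg -hlu 0 -alu 20
# The host must already be registered with the array before it can be connected
naviseccli -h spa_address storagegroup -connecthost -host host1 -gname host1_sg

Once the host is registered (via the Navisphere/Unisphere agent or manually) and connected to the storage group, a rescan on the host should show the new device.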

 

Source :  http://www.emcstorageinfo.com/2009/04/steps-to-configure-san-with-emc.html

EMC Clariion & VNX SPCollect

11 Apr

EMC Clariion & VNX SPCollect:

 

The following is the procedure for gathering SPCollects on CLARiiON CX, CX3, and CX4 arrays.

If you are running release 13 or above, you can perform SPCollects from the Navisphere Manager GUI.

Using Navisphere Manager, perform the following steps to collect the SPCollects and transfer them to your local drive.

  1. Log in to Navisphere Manager
  2. Identify the serial number of the array you want to perform the SPCollects on
  3. Expand (+) SP A
  4. Right-click on it and, from the menu, select SPCollects
  5. Now go to SP B in the same array
  6. Right-click on it and, from the menu, select SPCollects
  7. Wait for 5 to 10 minutes, depending on how big and how busy your array is
  8. Now right-click on SP A and, from the menu, select File Manager
  9. From the window, select the zip file SerialNumberOfClariion_spa_Date_Time_*.zip
  10. From the window, hit the Transfer button to transfer the file to your local computer
  11. Follow a similar process (steps 8, 9 and 10) for SP B from the File Manager
  12. The SP B file name will be SerialNumberOfClariion_spb_Date_Time_*.zip

For customers that do not have SPCollects in the menu (running a release below 13), there is a manual way to perform SPCollects using navicli from your management console or an attached host system.

To gather SPCollects from SP A, run the following commands

navicli -h xxx.xxx.xxx.xxx spcollect -messner

Wait for 5 to 7 minutes.

navicli -h xxx.xxx.xxx.xxx managefiles -list

The name of the SPCollects file will be SerialNumberOfClariion_spa_Date_Time_*.zip

navicli -h xxx.xxx.xxx.xxx managefiles -retrieve

where xxx.xxx.xxx.xxx is the IP Address of SP A

For SP B, the process is similar to the above; the name of the file you will be looking for is SerialNumberOfClariion_spb_Date_Time_*.zip,

where xxx.xxx.xxx.xxx is the IP address of SP B.
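
If your Navisphere CLI revision supports them, the -path and -file options make the retrieve step non-interactive (run with no options, managefiles -retrieve normally prompts you to pick a file from a numbered list). A rough sketch, with the local folder and file name as placeholders:

# Pull a specific SPCollect zip straight to a local folder; repeat against the SP B address for the _spb_ file
navicli -h xxx.xxx.xxx.xxx managefiles -retrieve -path c:\spcollects -file <spa_zip_name_from_the_list>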

The SPCollects are very important for troubleshooting the disk array; they give the support engineer all the vital data about the storage array and its environment.

The following data is collected by the SPCollects from both SPs:

  • Ktdump log files
  • iSCSI data
  • FBI data (used to troubleshoot back-end issues)
  • Array data (SP log files, migration info, FLARE code, sniffer, memory, host-side data, FLARE debug data, metaLUN data, PROM data, drive metadata, etc.)
  • PSM data
  • RTP data (mirrors, snaps, clones, SAN Copy, etc.)
  • Event data (Windows security, application, and system event files)
  • LCC data
  • Nav data (Navisphere-related data)

Source:

http://storagenerve.com/2009/05/05/clariion-spcollects-for-cx-cx3-cx4/

http://www.nasti.be/site1/index.php/component/content/article/115-emc/165-how-to-run-the-spcollect-utility-and-retrieve-spcollect-files-to-collect-information

VNX_ARCHITECTURE

18 Feb

Hi,

Please find the link below for the VNX architecture document. It explains the various parts and devices in the storage system.

VNX Architectural Overview

Thanks,

Mavrick.


EMC_VNX_SERIES_UNIFIED_STORAGE_SYSTEMS_MODELS

18 Feb

 

Hi Folks,

The link below opens a PDF file that explains the different models available in the VNX series.

VNX_SERIES_UNIFIED_STORAGE_SYSTEMS_MODELS

Thanks,

Mavrick

CLARiiON&VNX_Best Practices_Guides

15 Feb

Hi Friends,

Please find below the links to the EMC CLARiiON and VNX best practices guides.

clariion-best-practices-performance-availability-wp

VNX_Unifed-Storage-Best-Practices

Thanks,

Mavrick.

Vault Drives on the New EMC VNX Arrays

15 Feb

Hi Friends,

There are a few changes to the vault drives on the VNX storage arrays. I got this information from a blog post, which is linked below.

Vault Drives on the New EMC VNX Arrays

Thanks,

Mavrick.

Powerpath_5.5 Installation on Windows 2008 R2 Server

15 Feb

Hi Friends,

Click on the following link for the step-by-step procedure to install PowerPath 5.5 SP1 on a Windows 2008 R2 machine.

Powerpath_Installation_Windows

Thanks,

Mavrick

Unisphere_Host_Agent_Installation.docx

4 Feb

The following document shows the detailed steps to install the Unisphere Host Agent on a Windows machine.

Unisphere_Host_Agent_Installation

Thanks,

Mavrick..!!