Hi,
Please find the document below, which gives a step-by-step procedure for creating a metaLUN on EMC VNX and CLARiiON arrays.
Creating_Metalun on EMC VNX & CLARiiON
Thanks,
Mavrick..!!
Matching and aligning the size of a metaLUN stripe to the write sizes from a host will help with the efficiency of I/O operations. For that reason it is usually best to use powers of two when deciding how many component LUNs go into a metaLUN stripe (that is, 2, 4, or 8). The same applies to the drives in the RAID group, so for RAID 5 a 4+1 configuration is ideal (that is, five drives in the RAID 5 group).
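As a rough worked example, assuming the CLARiiON default stripe element size of 64 KB (your array's element size may differ, so check it first):

64 KB element size x 4 data drives per RAID 5 (4+1) group = 256 KB full stripe per component LUN
256 KB per component x 4 component LUNs = 1 MB full metaLUN stripe

Host writes issued in aligned multiples of that 1 MB stripe will then tend to land as full-stripe writes, avoiding the RAID 5 read-modify-write penalty.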
While it is always best to match the RAID configuration to the expected I/O load, in large configurations this is not always practical (for example, large VMFS volumes). The following configurations are a good balance between performance, contention, and rebuild times:
RAID 5: 20-drive metaLUN stripes (4 * (4+1)), which is four RAID 5 groups of five drives each
RAID 1/0: 24-drive metaLUN stripes (4 * (3+3)), which is four RAID 1/0 groups of six drives each
These are examples, but it is always best to determine the application I/O load in advance and plan the LUN layout accordingly (please refer to the White Papers listed below).
Here are some further best practice rules for creating metaLUNs:
Do use the correct RAID level for the pattern of I/O (for example, an application generating 8 KB random writes should ideally use RAID 1/0 LUNs).
Do not have more than one component LUN from any particular RAID group within the same metaLUN stripe. This would cause linked contention.
Do use drives of the same capacity and rotational speed in all the RAID groups in the metaLUN. The RAID groups should also contain the same number of drives.
Do not include the vault drives in the metaLUN.
Do use sets of RAID groups that only contain metaLUNs that are all striped in the same way, if possible. Having standard LUNs in the same RAID group as metaLUN components will lead to some parts of a metaLUN having uneven response times across the metaLUN. The order in which the component LUNs are added can be changed to evenly distribute the file system load (for example, RAID Group 1,2,3, and 4; then RAID Group 4,1,2, and 3, etc.). The dedicated RAID group sets for metaLUNs are sometimes referred to as CLARiiON HyperGroups.
Do not concatenate stripes with large numbers of drives to components with far fewer drives. This will lead to performance varying dramatically in different parts of the same metaLUN.
Do name the component LUNs in such a way that they are easy to identify (article emc103765). Numbering the components in a logical order helps to choose the correct RAID group and default SP owner (article emc98038), although the component LUNs will be renumbered when the metaLUN is created. The metaLUN will have its own default owner, but choosing the same default owner as all the components avoids the components being reported as trespassed in some tools.
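As a CLI sketch of the above, the metalun command can stripe a set of component LUNs into a metaLUN. The IP address, LUN numbers, and name here are hypothetical, and the exact options vary by FLARE release, so verify them against your naviseccli documentation:

naviseccli -h 10.0.0.1 metalun -expand -base 10 -lus 11 12 13 -type S -name Meta_FS01
naviseccli -h 10.0.0.1 metalun -list

Here LUN 10 is the base component and LUNs 11, 12, and 13 come from three other RAID groups of the same geometry, giving a four-component stripe in line with the power-of-two guideline above.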
The following link covers the major configuration steps for the storage, server, and switches necessary for implementing the CLARiiON.
Source : http://www.emcstorageinfo.com/2009/04/steps-to-configure-san-with-emc.html
EMC Clariion & VNX SPCollect:
The following is the procedure for gathering SPCollects on CLARiiON CX, CX3, and CX4 arrays.
If you are running FLARE release 13 or above, you can perform SPCollects from the Navisphere Manager GUI.
Using Navisphere, perform the following steps to collect the SPCollects and transfer them to your local drive.
For customers who do not have SPCollects in the menu (releases below 13), there is a manual way to perform SPCollects using navicli from your management console or an attached host system.
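If you are not sure which release the array is running, the agent information normally includes the revision (here, as in the commands below, xxx.xxx.xxx.xxx stands for the SP's IP address):

navicli -h xxx.xxx.xxx.xxx getagent

Look for the Revision field in the output.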
To gather SPCollects from SP A, run the following commands:
navicli -h xxx.xxx.xxx.xxx spcollect -messner
Wait 5 to 7 minutes for the collection to complete, then list the files on the SP:
navicli -h xxx.xxx.xxx.xxx managefiles -list
The name of the SPCollect file will be SerialNumberOfClariion_spa_Date_Time_*.zip
Retrieve the file to your local drive:
navicli -h xxx.xxx.xxx.xxx managefiles -retrieve
where xxx.xxx.xxx.xxx is the IP address of SP A.
For SP B, follow the same process; the file you will be looking for is SerialNumberOfClariion_spb_Date_Time_*.zip,
where xxx.xxx.xxx.xxx is the IP address of SP B.
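Putting the steps together, the whole collection can be scripted against both SPs from a management host. This is a minimal sketch, assuming hypothetical SP addresses 10.0.0.1 (SP A) and 10.0.0.2 (SP B):

for SP in 10.0.0.1 10.0.0.2; do
    navicli -h $SP spcollect -messner        # start a fresh collection on this SP
done
sleep 420                                    # allow 5 to 7 minutes for the SPs to build the zip files
for SP in 10.0.0.1 10.0.0.2; do
    navicli -h $SP managefiles -list         # note the SerialNumber_sp?_Date_Time_*.zip name
    navicli -h $SP managefiles -retrieve     # prompts for the index of the file to transfer
done

Newer navicli releases also accept -file and -path arguments to managefiles -retrieve for a non-interactive transfer, but verify those options against your navicli version.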
The SPCollect information is very important for troubleshooting the disk array and gives the support engineer all the vital data about the storage array and its environment.
The following data is collected by the SPCollects from both SPs:
Ktdump log files
iSCSI data
FBI data (used to troubleshoot back-end issues)
Array data (SP log files, migration info, FLARE code, sniffer, memory, host-side data, FLARE debug data, metaLUN data, PROM data, drive metadata, etc.)
PSM data
RTP data (mirrors, snaps, clones, SAN Copy, etc.)
Event data (Windows security, application, and system event files)
LCC data
Nav data (Navisphere-related data)
Source:
http://storagenerve.com/2009/05/05/clariion-spcollects-for-cx-cx3-cx4/
Hi,
Please find the link for VNX_ARCHITECHTURE below. This document explains the various parts and devices in the storage system.
Thanks,
Mavrick.
Hi Folks,
The link below opens a PDF file that explains the different models available in the VNX series.
VNX_SERIES_UNIFIED_STORAGE_SYSTEMS_MODELS
Thanks,
Mavrick
Hi Friends,
Please find the links for the EMC CLARiiON and VNX best practices guides.
clariion-best-practices-performance-availability-wp
VNX_Unifed-Storage-Best-Practices
Thanks,
Mavrick.
Hi Friends,
There are a few changes to the vault drives on the VNX storage arrays. I got this information from the blog post linked below.
Vault Drives on the New EMC VNX Arrays
Thanks,
Mavrick.
Hi Friends,
Click the following link for a step-by-step procedure to install PowerPath 5.5 SP1 on a Windows Server 2008 R2 machine.
Powerpath_Installation_Windows
Thanks,
Mavrick
The following document shows the detailed steps to install the Unisphere Host Agent on a Windows machine.
Unisphere_Host_Agent_Installation
Thanks,
Mavrick..!!