In Part 1 of this series we briefly touched on a few key selling points that sealed the win for the 3Par over the VNX 5000 series.
So, once we decided which vendor we were going with, we had to make many decisions on the actual guts of the system. As I mentioned, we knew from the very beginning that we wanted the latest and greatest 3Par model, the P10000 series. The P10000 is offered as both the V400 and the V800. The V400 has a maximum capacity of 800TB; the V800, 1.6PB. That was an easy one: no way were we hitting even 800TB.

When it comes to configuring a storage device most people get caught up in the space requirements, but in my opinion that is the easy part. I like to focus on the amount of horsepower (I/O) that is needed. The I/O requirements will tell you, at minimum, how many disks are required; from there you can work on the space requirements. Since my client's environment was 100% virtualized (servers & desktops) it was pretty easy to determine the workload. As we stated in Part 1, a virtualized environment presents a very unpredictable workload. For instance, a single VMware datastore can hold several VMs that each present a different type of workload – an Exchange server (HEAVY), an FTP server (LIGHT), an ERP system (VERY HEAVY), and a DHCP server (LIGHT). Throw a virtual desktop environment into the mix and you can really bring a SAN to its knees if everything wasn't taken into account.

To get a rough picture of what we were dealing with we used VMware's Capacity Planner software. This, along with several other tools and three years of working knowledge with the client, gave us a great idea of the I/O profile we were facing. Knowing that 3Pars were specifically designed for this type of unpredictable, heavy workload, I knew the V400 with quad controllers would easily handle the 150-VM server environment. That's not even factoring 3Par's "Mesh-Active" design into the mix to further justify the above. Many other SANs provide what I call "phantom active-active" controller designs.
In other words, a competitor's system may claim an "active-active" architecture, but each volume is only active on a single controller at a time (how cheap). 3Par's "Mesh-Active" design, on the other hand, allows each volume to be active on every controller in the system (way to use those quad controllers :0). The result is a much more scalable, load-balanced system. On the backend, a high-speed, passive backplane joins all the controllers together to form a cache-coherent, active-active cluster.

For the client's secondary/DR site down south, with only a dozen VMs, we went with the V400 with dual controllers. I know many people must be thinking that these systems (with costs in the 7 figures) were overkill for the environment, but anyone who works in a financial firm that performs active daily trading knows that every millisecond counts.

After we had our workload and space requirements figured out, we spoke with a few HP Engineers to validate the design and confirm the specifications. The client's onsite Engineer had such a bad experience with an incorrectly configured EMC CLARiiON that he wanted to throw SSD drives into the design to be safe. Even though our workload didn't demand them, you can never go wrong with adding SSDs. **TEASER PIC TIME** 😀 More to come below….
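To show what I mean by sizing from I/O first, here is a back-of-the-envelope version of the math. The numbers below are purely hypothetical for illustration (not the client's actual workload), and the helper function is my own sketch, not anything from 3Par or VMware:

```python
import math

# Hypothetical numbers for illustration only -- not the client's actual workload.
def disks_required(total_iops, read_pct, raid_write_penalty, iops_per_disk):
    """Back-end IOPS = reads + (writes * RAID write penalty), divided by per-disk IOPS."""
    reads = total_iops * read_pct
    writes = total_iops * (1 - read_pct)
    backend_iops = reads + writes * raid_write_penalty
    return math.ceil(backend_iops / iops_per_disk)

# e.g. 12,000 host IOPS, 70% read, RAID 5 (write penalty 4), 15K disks (~180 IOPS each)
print(disks_required(12_000, 0.70, 4, 180))  # → 127 disks
```

Once you know the spindle count the workload demands, you can pick drive sizes to satisfy the capacity requirement, which is why I call space the easy part.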
Before we placed the order we had to decide on which add-on software packages we would need. We skipped over the usual Exchange & SQL recovery software that comes with all the SAN vendors and went with the following pieces.
–THIN PROVISIONING – 3Par's claim to fame is being one of the early pioneers of thin provisioning technology. 3Par aimed to take away the hesitation most administrators have about placing high-workload servers on a thinly provisioned volume, and optimized the array from the ground up by building thin provisioning right into the ASIC. For example, a 3Par processes (or moves) pages in 16K chunks, so if the ASIC sees a 16K block of zeros coming into the system, it is smart enough to de-allocate it on the backend without using any CPU cycles. The difference this makes is incredibly noticeable.
–RECOVERY MANAGER FOR VSPHERE/VIRTUAL COPY – This software allows us to take consistent, online virtual machine snapshots. The ability to recover VMDKs on a granular basis is a huge value-add for us. It also lets us take advantage of VASA, which gives us complete integration of our storage environment right in the vSphere client. Being able to view storage information, alarms, and events right from the vCenter console will make management that much easier for us.
–REMOTE COPY – Since we have two active production sites that also act as DR for each other, the Remote Copy software was a must. The concept is very simple: we can replicate our volumes to another 3Par on either a synchronous or an asynchronous basis.
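The sync-versus-async trade-off boils down to when the host gets its write acknowledgment. This toy model (my own illustration, not 3Par's implementation) captures the difference: synchronous mode waits for the remote write to land, asynchronous mode acknowledges immediately and ships the data later.

```python
from collections import deque

class RemoteCopy:
    """Toy model: sync waits for the remote write; async acks now and ships later."""
    def __init__(self, mode):
        self.mode = mode       # "sync" or "async"
        self.remote = []       # volume data landed on the partner array
        self.pending = deque() # async writes not yet shipped

    def write(self, block):
        if self.mode == "sync":
            self.remote.append(block)   # host ack only after the remote write lands
        else:
            self.pending.append(block)  # host ack right away; replicate next cycle
        return "ack"

    def drain(self):
        """Ship queued async writes to the remote array."""
        while self.pending:
            self.remote.append(self.pending.popleft())

sync, asyn = RemoteCopy("sync"), RemoteCopy("async")
sync.write("A"); asyn.write("A")
print(len(sync.remote), len(asyn.remote))  # → 1 0 (async data is still in flight)
```

Sync gives you a zero-data-loss copy at the cost of adding the inter-site round trip to every write, which is why distance between sites matters so much.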
–SYSTEM REPORTER – 3Par's SR software gives us the visibility required to completely monitor our system. I want to be able to track my thin-provisioning usage and see past and future trends in my storage performance. SR will also let us generate reports on all aspects of the system. I do have to admit that I hope SR matures into a much friendlier GUI; EMC is definitely above the pack when it comes to attractive management consoles. Hopefully HP makes SR much more friendly to the GUI-centric Windows administrator.
–ADAPTIVE OPTIMIZATION – AO gives the array full control over placing heavily used blocks of data on faster disks and rarely accessed blocks on high-density, slower SATA disks. Since we incorporated SSDs into the design, the software will place the most frequently accessed blocks of data on these very fast drives.
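The core idea behind tiering like AO is simple enough to sketch: rank regions of data by access frequency and put the hottest ones on the fastest tier. The function and workload names below are hypothetical illustrations of the concept, not how AO is actually implemented:

```python
from collections import Counter

def retier(access_counts, ssd_slots):
    """Place the most frequently accessed regions on SSD; the rest fall to slower SATA."""
    hottest = [region for region, _ in Counter(access_counts).most_common(ssd_slots)]
    return {region: ("SSD" if region in hottest else "SATA")
            for region in access_counts}

# Hypothetical access counts gathered over a sampling period
io_trace = {"db_index": 9000, "exchange_log": 4000, "vm_swap": 50, "archive": 3}
print(retier(io_trace, ssd_slots=2))
# → db_index and exchange_log land on SSD; vm_swap and archive on SATA
```

The real feature works on sub-volume regions and moves data in the background, so a volume can straddle tiers without the administrator doing anything.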
So after all of these details were ironed out, the final orders for both 3Pars were placed. HP estimated that they would be delivered within 30-60 days. I don't recall exactly how long it took to receive them, but it was much quicker than what we were told. HP then scheduled a few people to start the assembly process of the array. About a week ago we went out to the co-location facility in New Jersey to break ground and get started.

We ended up going with an APC 48U cabinet rather than the default one that comes with the 3Par. This meant the array did not come together in one piece from the factory, so we had to put everything together piece by piece. First, we assembled all of the shelves and rails into the cabinets, which seemed to be the easiest part. Next, we had to install the HP PDUs. The placement decision for the PDUs turned out to be a pain, but as you will see in the pictures it ended up turning out well. Make sure you have your rack elevation diagram drawn out accurately to make this easier. The side-mounted PDUs turned out to be pretty clean, and I was satisfied with the way the cable management came out.

After the PDUs were sorted out, the assembly process went pretty quickly. The backplane where the controllers are housed was racked first, and then came the disk shelves. Dual-homed fiber was run from the disk shelves in a meshed manner to the four controllers. Once assembly was complete, the communication channel from the SAN to 3Par's management NOC was set up to go out through the Internet via HTTPS. Management IPs were then configured, and the scripts to perform the initial configuration and hardware testing were kicked off. All hardware and wiring were confirmed good to go, and we ended the day with a successful install!
In Part 3 of this series we will go over the process of configuring the 3Par and setting up all the software. We will also go through the connection process and wiring to the blade chassis. Virtual Connect firmware 3.70 will allow us to implement HP's new FLAT SAN topology, where we completely eliminate the middle fabric (SAN switch) layer. That means we will be plugging the 3Par directly into the back of the blade chassis. Enjoy the pictures and stay tuned! -Justin Vashisht (3cVguy)