Diagnosing some common 3PAR Performance Issues

The quick reference guide below will help diagnose and rule out some common issues experienced on 3PAR arrays. I want to thank the 3PAR guys in New York for putting some of these into perspective.

CLI COMMAND – “STATVLUN” measures the round-trip time of an IO as seen by the system. Running
STATVLUN in the following order should lead to the resolution of a few common performance issues.

“statvlun –ni” – This form of STATVLUN will show the round-trip time of IO on each path
for exported Virtual Volumes, which is helpful for determining whether there is a multipathing issue.

The “-ni” flag will filter out any inactive VLUNs.
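To illustrate what a multipathing issue looks like in that per-path output, here is a minimal sketch. The path names and IOPS figures are hypothetical numbers of the kind you would read off statvlun by hand; the helper function is mine, not a 3PAR tool:

```python
# Hedged sketch: spotting a multipathing imbalance from per-path IOPS.
# The dicts below are hypothetical values transcribed from statvlun output;
# statvlun itself does not emit Python data.

def path_imbalance(path_iops, threshold=0.25):
    """Return True if any path deviates from the mean IOPS by more than threshold."""
    mean = sum(path_iops.values()) / len(path_iops)
    return any(abs(iops - mean) / mean > threshold for iops in path_iops.values())

# Healthy multipathing: IO spread evenly across four paths.
balanced = {"0:1:1": 510, "0:1:2": 495, "1:1:1": 505, "1:1:2": 490}
# One path carrying almost all the IO suggests a multipathing problem on the host.
skewed = {"0:1:1": 1900, "0:1:2": 40, "1:1:1": 35, "1:1:2": 25}

print(path_imbalance(balanced))  # → False
print(path_imbalance(skewed))    # → True
```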

“statvlun –vvsum –ni –rw” – This will show you the round-trip time of the IO to each
volume, with the paths condensed for consolidated reporting. This is great for seeing an overall picture of what is going on.

The “-rw” flag will break the IO down into Reads & Writes.

“statvlun –hostsum –ni –rw” – The output of this command will show you the round-trip time of the IO rolled up to the host level.

CLI COMMAND – “STATVV” will display the system’s internal response to the IO. Comparing STATVLUN to
STATVV will give you a really good idea of whether the array itself is the source of a performance issue. You should
use the “-ni” flag here as well.
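The arithmetic behind that comparison is simple enough to sketch: whatever part of the round trip the array did not spend internally was spent on the fabric or the host. The millisecond figures below are hypothetical values read manually from each command's output:

```python
# Hedged sketch: comparing the host-observed round-trip time (statvlun) with
# the array's internal service time (statvv) for the same volume.

def latency_outside_array(statvlun_ms, statvv_ms):
    """Portion of the round trip spent outside the array (fabric/host side)."""
    return statvlun_ms - statvv_ms

# A 12 ms round trip of which the array only took 2 ms internally:
# most of the delay is in the fabric or on the host, not the 3PAR.
print(latency_outside_array(12.0, 2.0))  # → 10.0
```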

CLI COMMAND – “STATPD” displays the performance of the physical spindles. The “-rw” flag for read/write specific output can be helpful with this.

CLI COMMAND – “STATCMP” will display the cache statistics. What you want to look for is even IO
flow through the nodes and the number of “Delayed Acknowledgements” from cache.
Please refer to the CLI Reference Guide for more information on these commands; the “-h” flag on each command will also give you an excellent summary of its options.
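Both of those checks reduce to quick arithmetic on figures read off the statcmp output. A minimal sketch, with hypothetical node names and counters (the helper and thresholds are mine, not 3PAR values):

```python
# Hedged sketch: two quick checks on hypothetical figures read from statcmp.

def delayed_ack_pct(delayed_acks, total_writes):
    """Delayed acknowledgements as a percentage of write IOs."""
    return 100.0 * delayed_acks / total_writes

# Delayed acknowledgements mean cache could not absorb writes immediately;
# a sustained non-trivial percentage points at write-cache pressure.
print(round(delayed_ack_pct(150, 10_000), 1))  # → 1.5

# IO flow should be roughly even across the nodes.
node_iops = {"node0": 8200, "node1": 8100}
spread = max(node_iops.values()) - min(node_iops.values())
print(spread < 0.2 * max(node_iops.values()))  # → True (flow is even)
```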

HBA Queue Throttling Rules

An 8Gb port on the 3PAR array has a queue depth of 3,268. Some simple best practices should
help address queue-depth throttling if this looks to be an issue:

For higher block/IO sizes (“throughput-intensive workloads”), configure the host HBA port with a
higher queue depth. (A range of 64–128 will help improve the situation.)

For high IO rates with a lower block/IO size (“IO-intensive workloads”), a lower queue depth should be
acceptable. In these cases the defaults usually work just fine.
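The reason raising the HBA queue depth can backfire is that every host sharing an array port contributes to that port's 3,268-slot queue. A minimal sketch of the worst-case check, with hypothetical host and LUN counts:

```python
# Hedged sketch: sanity-checking aggregate host queue depth against the
# 8Gb 3PAR port queue depth of 3,268 quoted above. The host count, LUN
# count, and per-LUN HBA queue depth are hypothetical example values.

PORT_QUEUE_DEPTH = 3268  # 8Gb port on the 3PAR array

def port_oversubscribed(hosts, luns_per_host, hba_queue_depth):
    """True if worst-case outstanding IOs could exceed the port's queue."""
    return hosts * luns_per_host * hba_queue_depth > PORT_QUEUE_DEPTH

# 16 hosts x 4 LUNs x queue depth 32 = 2,048 outstanding IOs: fits.
print(port_oversubscribed(16, 4, 32))  # → False
# Raising the HBA queue depth to 64 doubles that to 4,096: the port
# queue can overflow, which is when throttling kicks in.
print(port_oversubscribed(16, 4, 64))  # → True
```

This is why the text recommends the higher 64–128 range only for throughput-heavy workloads, and leaving the defaults alone for IO-intensive ones.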

-Justin Vashisht (3cVguy)
