Storage Blog

Key Performance Indicators

We are continuing our series on performance troubleshooting tips and tricks this week. Today we are looking at queue length: how it ties into other performance indicators and how it affects the overall health of your array.

Performance Troubleshooting at Various Levels
Multiple performance indicators, such as SP utilization, cache utilization, and queue length, tend to come together and can be looked at on different levels. There are primarily two levels at which you can examine these performance indicators – the LUN level and the array level.
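To make the two levels concrete, here is a minimal sketch (with made-up field names, not any real array API) of rolling per-LUN counters up into a single array-level view:

```python
# Hypothetical per-LUN stats; the field names are illustrative, not a vendor API.
from dataclasses import dataclass

@dataclass
class LunStats:
    name: str
    queue_length: float  # average outstanding I/Os on this LUN
    iops: float

def array_level_view(luns: list[LunStats]) -> dict:
    """Aggregate LUN-level indicators into one array-level summary."""
    return {
        "total_iops": sum(l.iops for l in luns),
        "avg_queue_length": sum(l.queue_length for l in luns) / len(luns),
        "busiest_lun": max(luns, key=lambda l: l.queue_length).name,
    }

luns = [LunStats("LUN_0", 4.0, 1200.0), LunStats("LUN_1", 18.0, 300.0)]
print(array_level_view(luns))
```

A healthy-looking array-level average can hide a single hot LUN, which is why it pays to check both levels.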



Continue Reading →

Cache Utilization

Let's say you look at your storage system and SP utilization is not an issue, but you're still seeing lackluster performance overall. The next thing you can look at is the cache utilization on the SP. The SP, or storage processor, has a layer of cache – essentially its volatile memory, where data is held before it is written to disk.

Now, let's say you have a storage environment with very high IOPS requirements, and someone made the decision to run this environment on SATA drives. Let's face it – you won't have the IOPS capability that environment requires. So what happens? Your storage processor identifies the hot blocks on these LUNs and keeps all of that data in cache. As transactions and I/O come in, everything is written to the cache. The disks are simply not fast enough for the storage processor to destage this data in a timely fashion and make room for new data in the cache.
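As a back-of-the-envelope illustration of that mismatch, here is a small sketch showing how dirty pages climb when writes arrive faster than slow drives can destage them. The MB/s and cache figures are invented for illustration, not measurements:

```python
incoming_mb_s = 400   # host writes landing in SP cache (hypothetical rate)
destage_mb_s = 250    # what the SATA back end can absorb (hypothetical rate)
cache_mb = 8000       # usable write cache (hypothetical size)

dirty_mb, seconds = 0.0, 0
while dirty_mb < cache_mb:
    dirty_mb += incoming_mb_s - destage_mb_s  # net cache growth per second
    seconds += 1

print(f"Write cache is 100% dirty after ~{seconds} s; "
      "until it drains, the SP cannot accept new writes.")
```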



Continue Reading →

Performance Capacity Issues

If you are having performance problems within your storage environment, one of the first things to do is check the overall health of your array. How do you do that? Look at the performance data. Here are some of the things you can look at in that data to assess the overall health of your array:
        •  SP Utilization: Make sure each storage processor's utilization is not over 50%. Why 50%? Storage processors are meant to be redundant, in that if one SP were to fail, the other will take over. You want to ensure that the surviving SP has enough capacity, or CPU cycles, to take on the load from the other SP in the event of a failure.
        •  Cache Utilization: Look at the dirty pages (cache) on the SPs and see if the number is too high. In a storage array, especially on VNXs, you can run into a situation known as forced flushing: your SP cache is at 100%, and your system can no longer accept any more I/Os until it dumps some of the existing cache to disk. A small health-check sketch covering both indicators follows this list.
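Here is that sketch – a minimal health check applying both rules of thumb above. The function signature and the 95% dirty-page threshold are illustrative, not a real array API:

```python
def array_health(sp_a_util: float, sp_b_util: float, dirty_pages_pct: float) -> list[str]:
    """Flag the two conditions described above: an SP with no failover
    headroom, and a write cache approaching forced flushing."""
    warnings = []
    for name, util in (("SP A", sp_a_util), ("SP B", sp_b_util)):
        if util > 50:
            warnings.append(f"{name} at {util}%: no headroom to absorb a peer failure")
    if dirty_pages_pct >= 95:  # illustrative threshold near the forced-flush point
        warnings.append(f"dirty pages at {dirty_pages_pct}%: forced flushing is imminent")
    return warnings

print(array_health(sp_a_util=62, sp_b_util=41, dirty_pages_pct=97))
```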



Continue Reading →

Uncovering Underlying Issues of Performance

If you are seeing decreased performance within your storage infrastructure, there are a few things you can look at on your storage system, either to rule out the array itself as the source of the issue or to find its underlying cause. Today, we will look at one of the first things to examine when determining the reason behind poor performance - SP utilization.

What is an SP?

The SP is the storage processor on the array. It is the device that each client host is connected to, via iSCSI or Fibre Channel, and it brokers the requests going to the storage subsystems. When determining whether SP utilization is the underlying cause of performance issues, you should check whether the utilization on the storage processor is too high. Going over 50% SP utilization may be an indication of a performance issue.
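The arithmetic behind that 50% guideline is simple enough to sketch: after a failover, the surviving SP carries both loads. A rough illustration, assuming (simplistically) that the failed SP's load shifts over one-to-one:

```python
def survivor_load(sp_a_util: float, sp_b_util: float) -> float:
    """Approximate utilization of the surviving SP after its peer fails,
    assuming the failed SP's load shifts over one-to-one."""
    return sp_a_util + sp_b_util

print(survivor_load(45, 40))   # 85 – tight, but the survivor can carry it
print(survivor_load(60, 55))   # 115 – more load than one SP can serve
```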



Continue Reading →

Storage Industry Challenges

Posted on December 15, 2014 in Knowledge and Learning

We asked a storage engineer what challenges he saw in the storage industry. Take a look at his response below:

Q: What challenges do you see in the storage industry?

A: One challenge that I see right now is a constant, I would say, battle between the system engineers and the developers at any particular company. In the end, they tend to work together pretty well if the environment allows it. I have been in environments where the DBAs, the System Admins, the Storage Admins, and the Linux Admins all work together in pretty good harmony. However, what I am seeing is that, in the application environment, the engineers who are writing the code and managing these applications don't tend to take into account what the true responsibilities of the System Engineer are.



Continue Reading →

System Upgrade Series - Building Out vs Starting From Scratch

Starting a storage array from scratch can definitely be tricky. However, if you already have a pre-existing environment that you have been growing and you are simply moving into a new system, then you will have some basic data there that you can count on in order to make an educated decision on how to go about your new deployment.

If, however, you are starting completely from scratch – with a brand new idea – building all your servers and storage at once… it's going to be quite a ride!



Continue Reading →

System Upgrade Series - Planning Future Purchases

When considering a purchase, sometimes people forget to ask the obvious – am I buying the right storage array for my infrastructure? For example, if you have a heavily equipped Fibre Channel network, should your array be equipped with Fibre Channel I/O modules? Or do you have an Ethernet iSCSI network, and do I have the right modules for that? There can be a big cost difference between a Fibre Channel I/O module and storage array and their iSCSI counterparts.

If you have the wrong modules on your system, you may have to call your vendor out to reinitialize the entire storage array. If your storage array is imaged with the Fibre Channel I/O modules installed, you cannot simply remove the Fibre Channel modules and install the iSCSI modules. You will have to re-image the entire SP, which can take up to 24 hours.
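The fabric-to-module match is easy to check before you buy. A minimal sketch – the fabric and module labels below are made up for illustration, not vendor part names:

```python
# Map each network fabric to the I/O module type it requires (illustrative labels).
REQUIRED_MODULE = {
    "fibre_channel": "fc_io_module",
    "iscsi_ethernet": "iscsi_io_module",
}

def purchase_matches_network(network_fabric: str, planned_module: str) -> bool:
    """True when the array you are about to buy fits the network you already own."""
    return REQUIRED_MODULE.get(network_fabric) == planned_module

# Buying FC modules for an iSCSI shop means a vendor visit and a ~24-hour re-image.
print(purchase_matches_network("iscsi_ethernet", "fc_io_module"))  # False
```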

Getting What You Really Need



Continue Reading →

When is the Right Time to Upgrade

Today, we are beginning a series about System Upgrades. We will revisit this topic from time to time, but keep an eye out this week for more posts about upgrading your system! For today, we are going to talk about knowing the right time to upgrade your system.

Knowing the Right Time to Upgrade

You can make several cases for upgrading your storage array. One case, for example, is the SP utilization of your storage array. If both SPs are pushing utilization of 50% or more, then there really isn't much you can do or add to that system to squeeze out any more I/Os. From a CPU perspective, even if you were to add more drives to the system, the system is pretty much at its max.
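That test can be written down directly. A minimal sketch using the 50% threshold from this post:

```python
def cpu_bound(sp_a_util: float, sp_b_util: float, threshold: float = 50.0) -> bool:
    """True when both SPs are saturated enough that adding drives will not
    buy more I/Os – the CPUs, not the spindles, are the limit."""
    return sp_a_util >= threshold and sp_b_util >= threshold

print(cpu_bound(55, 58))  # True: consider a new array rather than new drives
print(cpu_bound(30, 35))  # False: headroom remains on this system
```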

Can You Forecast CPU Utilization?

Normally, when you're architecting an environment, your vendors such as Microsoft, Oracle, and VMware will tell you the basic IOPS requirements for the environment that you are about to deploy. However, what they don't tell you is how many CPU cycles this is going to use up in your storage array, and you can't really forecast CPU utilization.



Continue Reading →

Data center migrations must ensure that sufficient uptime, security, and accessibility are maintained for all enterprise applications. When services shift from one data center to another, the interdependencies of applications need to be considered and included in your migration plan. One common approach to managing this is to create application groups that share common resource dependencies. Services such as Active Directory and DNS commonly remain in both the originating and destination data centers until the migration is complete.
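One way to build those groups is to merge applications that are transitively connected through a shared resource. A minimal sketch – the application and resource names are hypothetical, and always-on shared services like Active Directory and DNS are deliberately left out of the dependency sets, since they live in both data centers during the move:

```python
# Hypothetical applications mapped to the resources they depend on.
deps = {
    "payroll":   {"sql01"},
    "reporting": {"sql01"},
    "wiki":      {"nfs01"},
}

# Merge applications transitively connected through a common resource.
groups: list[tuple[set[str], set[str]]] = []  # (apps, resources) pairs
for app, resources in deps.items():
    merged_apps, merged_res = {app}, set(resources)
    kept = []
    for g_apps, g_res in groups:
        if g_res & merged_res:  # shared dependency -> same migration group
            merged_apps |= g_apps
            merged_res |= g_res
        else:
            kept.append((g_apps, g_res))
    groups = kept + [(merged_apps, merged_res)]

for apps, res in groups:
    print(sorted(apps), "migrate together; shared resources:", sorted(res))
```

Here "payroll" and "reporting" land in one group because they share a database, while "wiki" can move on its own schedule.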

Lift and Shift Hardware Migration

This method is conceptually easy. Prior to moving, complete backups are taken. Then the hardware is loaded onto a moving truck and installed at the new data center. This strategy can be risky, as hardware (and the backups themselves) can be damaged or lost in transit. In such a physical move, distance is also a factor in how long the transition takes.



Continue Reading →

How Secure is the Cloud?

Posted on December 3, 2014 in Knowledge and Learning

In just a few short years, the cloud has come along and has essentially revolutionized the way that personal users and businesses are using their computers. Instead of storing files and even software locally on a hard drive, it is all stored on a remote server that is always connected to the Internet. This means that enterprise deployments for businesses can occur in an "on demand" capacity, allowing businesses to only pay for the software they're going to use. This also means that all files and folders can be accessed from any computer with an Internet connection, allowing people to be just as productive while on the go as they are at home.

Considerations

As more and more people grow to depend on the cloud on a daily basis, "trust" becomes an important issue that needs to be addressed. If you're storing all of the files that your business depends on daily or all of your personal documents on a cloud-based server, the question of "just how secure is the cloud?" becomes of paramount importance. 



Continue Reading →