When should you replace data center hardware? The question comes up often, but the answer isn't obvious. Replace a server too soon and your costs go up. Wait too long and you waste money powering inefficient equipment, or worse, risk a higher failure rate. Meanwhile, manufacturers churn out new products and upgrades every 18-24 months, and you're left bewildered by the many options.
Many companies wait three to five years to refresh their hardware. Maybe your hands are tied by budgetary constraints and unpredictable IT demands, and as a result, the refresh cycle is stretched out beyond a reasonable date.
Given this tricky situation, when is it best to replace hardware? We started this blog post with that question in mind and sought to pinpoint when the total cost (CapEx and OpEx) of deploying a new server becomes cheaper than maintaining a legacy one. Unique constraints will yield different results, but in the example outlined below, we found that three years is the magic number. How do you find the optimal refresh time for your own infrastructure? Read our case study to see how this can be done.
Unlike the many online calculators that tell us when we can retire from the workplace, no easy algorithm exists for when we should replace our aging servers. We started the analysis with our Total Cost of Ownership (TCO) model. In this scenario, all the servers are storage servers, and the defining cost of a storage server is the hard drives inside. To keep the model simple, we made the following assumptions:
From here, we looked at the ratio of CapEx to OpEx for a data center comprising 100PB of storage servers with 4TB hard drives, broken down by type of expense. We also took into account that data center CapEx would be amortized over 18 years, the standard length of time for a facility of this size. This data is the baseline for our analysis and outlines the assumptions in our TCO model:
Given our TCO model's assumptions, continuously supporting 100PB of storage capacity using 4TB hard drives costs about $14M per year for the first three years. Expenses then drop to about $5.5M per year as most of the major CapEx is amortized by that point. By the end of year six, the total expenses are approximately $57.7M. (Only the first seven years are shown in the chart, as CapEx and OpEx remain constant for the remaining 11 years of the data center CapEx amortization.)
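The six-year arithmetic above can be sketched directly. The per-year figures below are the approximate values quoted in the text, not the spreadsheet's exact numbers, so the sum lands near (not exactly on) the ~$57.7M total:

```python
from itertools import accumulate

# Approximate baseline cost stream (in $M): ~$14M/year while the major
# CapEx is being paid down, then ~$5.5M/year afterward.
yearly_cost = [14.0, 14.0, 14.0, 5.5, 5.5, 5.5]

cumulative = list(accumulate(yearly_cost))
total_6yr = cumulative[-1]  # 58.5, close to the ~$57.7M the full model yields
```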
Prices, however, don't remain static, and hard drive densities change. Suppose, for example, one year after the data center begins operating, the cost of a single 6TB HDD drops to $300 — the price we used for the 4TB drives in the model to calculate the costs above. How would upgrading to the higher density drives affect TCO? It turns out that replacing the entire storage capacity of the data center with 6TB drives would result in total CapEx and OpEx costs of about $65.2M by the end of year six, as shown below:
As you can see, replacing the storage in your servers after just one year means you pay more over time, which is neither an ideal scenario nor a good replacement strategy. As drive densities increase, the cost per terabyte of storage falls, and CapEx falls with it.
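The drive-count arithmetic behind these scenarios is straightforward to sketch. This simplification considers only the drives themselves, ignoring redundancy overhead, servers, racks, and the other costs the full TCO model captures:

```python
import math

def drive_capex(capacity_pb, drive_tb, price_per_drive):
    """Number of drives needed for a given raw capacity, and their cost.
    Drives only: servers, racks, and redundancy overhead are ignored."""
    drives = math.ceil(capacity_pb * 1000 / drive_tb)
    return drives, drives * price_per_drive

# 100PB on 4TB drives vs. 6TB drives, both at the model's $300/drive:
drive_capex(100, 4, 300)  # (25000, 7500000)
drive_capex(100, 6, 300)  # (16667, 5000100)
```

Denser drives cut the drive count (and the server count with it), which is where the OpEx savings in the later scenarios come from.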
We decided to take this idea a little further. What if you upgrade those 4TB drives to 8TB drives after two years of operation? At the same price per drive, the total cost for six years of operation is about $60.5M, with the same cost breakdowns as before:
These costs are better than the 6TB scenario, but the overall TCO is still more expensive than our baseline.
Finally, consider what happens if you replace all the 4TB hard drives with 12TB drives after three years of operation. Surprisingly, the total six-year cost is about $57M, essentially the same as our baseline!
How can this be? We arrive at the same total cost because the annual CapEx (purple) for the 12TB hard drives is less than two-thirds of the OpEx for the 4TB drives in our baseline. You need only one-third as many servers to accommodate the same storage capacity. In general, when the combined annual CapEx for the new storage servers and network falls below the combined annual OpEx times (1 − old_drive_density/new_drive_density), it becomes cost efficient to replace the hard drives outright.
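This rule of thumb can be written as a small check. The function is a sketch of the condition as stated above; the dollar figures in the example are illustrative, not taken from the spreadsheet:

```python
def replacement_pays_off(new_annual_capex, annual_opex, old_tb, new_tb):
    """Replacing old drives with denser ones is cost efficient when the
    annual CapEx of the new servers and network is less than the annual
    OpEx saved by shrinking the fleet by a factor of old_tb/new_tb."""
    opex_savings = annual_opex * (1 - old_tb / new_tb)
    return new_annual_capex < opex_savings

# Illustrative: at $5.5M/year OpEx, moving from 4TB to 12TB drives triples
# density, so up to two-thirds of OpEx is saved; $3M/year of new CapEx
# clears that bar (3.0 < 5.5 * 2/3 ≈ 3.67).
replacement_pays_off(3.0, 5.5, 4, 12)  # True
```

By the same test, the one-year jump to 6TB drives fails: only a third of the OpEx can be saved, which doesn't cover the fresh CapEx.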
Aging infrastructure costs more than you might think. Old machines lingering in a data center exact a hidden cost — with each new generation of hardware, servers become more powerful and energy efficient. Over time, the total cost of ownership drops through reduced energy bills, a lower risk of downtime, and improved IT performance.
Figuring out when to invest in new servers doesn't have to be a guessing game, though. In the above scenario, our analysis suggests that after three years, the hardware's residual value is nominal and it holds little resale value.
You can find out the optimal time for your own refresh cycle using our TCO model, which is freely available. The spreadsheet already includes typical cost estimates for data center equipment, labor, and the like, based on 2013 market data and provided as simplified representations. (Please check the "References" tab for more information.) Feel free to modify the spreadsheet to customize it for your needs. A blank spreadsheet can be found here.