High Level SAN Migration Strategy. Is this a good idea?

by Martin Surasky   Last Updated March 02, 2018 22:06 PM

In the company I work for, we are planning to decommission an old SAN serving our production servers, which are two nodes running two SQL Server instances with clustering (each node is the primary for one instance and the backup for the other, and vice versa). In place of that old SAN device (I believe it is a VNX 500, but the exact technical specifications don't really matter for what I'm asking here…) we will have a new one that should more or less replace the existing one, with better capabilities and more storage. In a nutshell, what we are going to accomplish is this.


As a DBA, I have participated in tons of SQL Server migrations, but in this case we are not really migrating the database servers; we are only pulling the SAN rug out from underneath and replacing it with a flashier, fancier one… We have discussed an approach with our enterprise architect, and it goes more or less like this (at a very high level):

  1. Set up file synchronization using some tool (TBD) that will work at the file-block level to synchronize the contents of the data and log files. This will be running as a service while the existing database server in production is working (using the old SAN).
  2. Have this synchronization running for a day or two and observe no errors occur.
  3. On the night when we do the cutover, we:
    • Shut down the SQL Server services (on both instances) using the MS Cluster Management Console
    • Shut down the clusters themselves using the MS Cluster Management Console
    • Present the new SAN device to the SQL Servers
    • Remove the old partitions (unassign drive letters)
    • Assign those letters to the LUNs on the new SAN
    • Bring the Cluster online
    • Start SQL Server Service
  4. Test, test, test: run consistency checks, review the SQL Server error logs, and look for other issues
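As part of the final testing step, one quick sanity check before bringing SQL Server back online is to compare checksums of the data and log files on the old and new LUNs after the final sync. This is a minimal illustrative sketch, not a product recommendation; the file-pair paths a real run would use are placeholders you would supply:

```python
import hashlib


def file_digest(path, chunk=1024 * 1024):
    """Stream a file in 1 MB chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()


def verify_pairs(pairs):
    """Given (old_path, new_path) pairs, return the pairs whose contents differ."""
    return [(old, new) for old, new in pairs
            if file_digest(old) != file_digest(new)]
```

An empty result from `verify_pairs` means every old/new file pair matched byte for byte; any pair it returns needs another sync pass before cutover.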

In other words, we are thinking about putting SQL Server to sleep for a while and making the necessary infrastructure changes, which should be transparent to SQL Server, so that when we bring the service back online things will look to SQL just the same as they were before, only this time we are reading off the new SAN.

The reason for not doing a straight backup/restore of the databases is that we are trying to minimize downtime. We believe that if we find the right tool to synchronize at the file level, we could end up just doing a final sync to catch up with the latest changes (once we shut down the SQL Server service), as opposed to major file-copy operations (our databases are rather big and the downtime must be minimal).
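The block-level synchronization idea described above can be sketched roughly as follows. This is a minimal illustration of the technique (compare fixed-size blocks and rewrite only the ones that changed), not a production tool: the chunk size is arbitrary, and a real solution would need something like VSS snapshots to get a consistent read of files SQL Server holds open.

```python
import os

CHUNK = 64 * 1024  # compare and copy in 64 KB blocks (arbitrary choice)


def block_sync(src_path, dst_path):
    """Rewrite only the blocks of dst that differ from src.

    Assumes dst already holds an earlier full copy of src (the
    "initial sync"), so a later pass only touches changed regions.
    Returns the number of bytes actually rewritten.
    """
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        offset = 0
        while True:
            s_block = src.read(CHUNK)
            if not s_block:
                break
            dst.seek(offset)
            d_block = dst.read(len(s_block))
            if s_block != d_block:
                dst.seek(offset)
                dst.write(s_block)
                copied += len(s_block)
            offset += len(s_block)
        # Trim the target in case the source file shrank
        dst.truncate(os.path.getsize(src_path))
    return copied
```

The point of the sketch is the cost model: after the initial full copy, each pass reads both files but writes only the changed blocks, which is why the final catch-up sync after stopping the SQL Server service can be short.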

I am not asking for a full review of the strategy or to discuss this at a super low level; all I'm asking the community members who would like to chime in is:

  • Have you done anything remotely similar to this? Did it work?
  • Any gotchas?
  • Anything missing here? (At a high level please; I know I'm missing TONS of stuff, but I'm talking about critical high-level steps.)
  • Do you know of any tool that may provide the functionality we are looking for in order to synchronize files this way (do an initial sync and then only sync bits/sectors/clusters inside the file that have changed)?

Thanks and sorry for the long post, but I felt that if I didn’t include at least a bit of detail the questions would be rather vague.


