Saturday, January 4, 2025

Slow backups

First, check whether the backup target disk has enough free space. You can use sys.dm_exec_session_wait_stats to find out what the backup process is waiting for. The delay can also be in reading the data from the data files, especially if the database is on a SAN. Here sys.dm_io_virtual_file_stats can give information about read times. 
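As a sketch, here is one way to pull per-file read latencies out of that DMV (the column math and ordering are my own illustration, not from the original post):

```sql
-- Per-file I/O stalls; a high avg_read_latency_ms on the data files
-- points at slow reads rather than slow writes to the backup target.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```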

When the backup is running, look at the wait_type from sys.dm_exec_requests: 

select wait_type, wait_resource, * from sys.dm_exec_requests where command like '%backup%'

In most cases a slow backup is due to a storage issue, because a backup reads from storage and writes to storage. I would check disk I/O for the drives using the DMV sys.dm_io_virtual_file_stats. You can also capture disk-related PerfMon counters, especially Avg. Disk Queue Length. Also look for the ASYNC_IO_COMPLETION wait type to see what this session is waiting for. 

Is the disk local to the machine or out on the network/internet? Either way, try copying a large file (10+ GB) from a non-backup disk to the backup disk on the SQL Server machine to get a baseline for how long the write should take; you can extrapolate from that to get a rough idea of how long the database backup should take. Add a little overhead for the backup process itself, the compression, and other things. 

Check what else is running on the instance: any maintenance plans such as index rebuilds, statistics updates, or integrity checks?  

Try doing a backup to NUL, as that will give you the time taken to read from the data files. The difference from a normal backup is the time being taken to write out the backup files – there’s your ammo. NUL is a special 'file' in the file system (same as LPT1, COM1, CON if you remember back to the DOS days). It's the nul device, the trash bin, the black hole of the file system. 

Anything written to nul is discarded. Hence you're not writing the backup file anywhere, it's much the same as backing the database up to a file, then deleting the file (except the file is never written in the first place). We use that to determine the theoretical maximum backup throughput of the database, since your backup can only be as fast as what SQL Server can read from disk during the backup. 
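The technique above can be sketched as follows; SalesDB is a placeholder database name, and I've added COPY_ONLY (my own precaution, not from the original) so the throughput test doesn't reset the differential base:

```sql
-- Discards the backup stream: measures pure read throughput from the data files.
-- COPY_ONLY keeps this test backup from disturbing the normal backup chain.
BACKUP DATABASE SalesDB
TO DISK = 'NUL'
WITH COPY_ONLY, STATS = 10;
```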

 

MAXTRANSFERSIZE specifies the unit of transfer used by SQL Server to perform backups – the largest unit of data that SQL Server can read from or write to the backup media in a single I/O operation. The default value is 1 MB (1024 KB); the possible values are multiples of 65536 bytes (64 KB), ranging from 64 KB up to 4 MB. 

Smaller transfer sizes result in more I/O operations, leading to longer backup or restore times. On the other hand, larger transfer sizes reduce the number of I/O operations and can significantly improve performance. 
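For example (SalesDB and the file path are placeholders):

```sql
-- 4 MB transfers = fewer, larger I/Os against the backup media.
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\SalesDB.bak'
WITH MAXTRANSFERSIZE = 4194304, STATS = 10;
```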

 

BUFFERCOUNT determines the number of I/O buffers used by the backup operation. SQL Server calculates a value dynamically, but it is not always optimal. Be cautious, though: very high values may lead to ‘out of memory’ errors. 

The two parameters work together: BUFFERCOUNT determines how many I/O buffers you will be working with, and MAXTRANSFERSIZE sets how large each of these buffers is. Total buffer memory = BUFFERCOUNT * MAXTRANSFERSIZE 
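Combining the two (database name and path are placeholders; the buffer-memory arithmetic follows the formula above):

```sql
-- 12 buffers x 4 MB per buffer = 48 MB of memory used for this backup.
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\SalesDB.bak'
WITH BUFFERCOUNT = 12, MAXTRANSFERSIZE = 4194304, STATS = 10;
```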

You can enable two trace flags to see additional information about your backups, including the BUFFERCOUNT calculated by SQL Server: 
3605 – sends the trace output to the error log; and 3213 – provides information about backup/restore throughput and other configuration details. 
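A minimal sketch of using the two trace flags around a throughput test (SalesDB is a placeholder; the diagnostic output lands in the SQL Server error log):

```sql
DBCC TRACEON (3605, 3213);   -- 3605: route trace output to the error log
                             -- 3213: print backup/restore internals (incl. buffer count)

BACKUP DATABASE SalesDB TO DISK = 'NUL' WITH COPY_ONLY;

DBCC TRACEOFF (3605, 3213);
```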

 

Starting with SQL Server 2019 Cumulative Update 5 (CU5), setting MAXTRANSFERSIZE larger than 64KB is no longer required to enable the optimized compression algorithm with Transparent Data Encryption (TDE) encrypted databases. If the backup command is specified WITH COMPRESSION or the backup compression default server configuration is set to 1, MAXTRANSFERSIZE will automatically be increased to 128K to enable the optimized algorithm.  

If MAXTRANSFERSIZE is specified on the backup command with a value > 64K, the provided value will be honored. In other words, SQL Server will never automatically decrease the value, it will only increase it. If you need to back up a TDE encrypted database with MAXTRANSFERSIZE = 65536, you must specify WITH NO_COMPRESSION or make sure that the backup compression default server configuration is set to 0. 
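Sketched out, with TDEDb as a placeholder for a TDE-encrypted database:

```sql
-- On SQL Server 2019 CU5+, compression silently raises MAXTRANSFERSIZE to 128 KB.
BACKUP DATABASE TDEDb
TO DISK = 'D:\Backups\TDEDb.bak'
WITH COMPRESSION;

-- To keep MAXTRANSFERSIZE at exactly 64 KB, compression must be off.
BACKUP DATABASE TDEDb
TO DISK = 'D:\Backups\TDEDb_64k.bak'
WITH NO_COMPRESSION, MAXTRANSFERSIZE = 65536;
```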
