Saturday, November 23, 2024

Transaction logs - 1

How the Transaction Log Works 

Whenever a change is made to a database object or the data it contains, not only is the data or object updated in the data file, but also details of the change are recorded as a sequence of log records in the transaction log. Each log record contains the details of a specific change that has been made to the database, allowing that change to be performed again as a part of REDO, or undone as a part of UNDO, during crash recovery.  

Other log records contain details regarding the ID of the transaction that performed the change, when that transaction started and ended, which pages were changed, the data changes that were made, and so on. 

SQL Server, like most transactional relational database systems, utilizes the Write-Ahead Logging (WAL) protocol for all data modifications that occur in all databases. 

 

The WAL protocol dictates that before a data page is changed in non-volatile storage, the information describing the change to the data page must first be written to stable storage, allowing the change to be redone or undone in the event of a failure. SQL Server implements this protocol by logging the information describing changes to database objects and data pages using log records in the transaction log. Before a transaction that changes data in the database completes, the log records describing the changes must first be hardened to disk.  
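To see this logging in action, you can peek at the log records a modification generates. The sketch below uses the undocumented fn_dblog function, so treat it as a diagnostic for test systems only; dbo.SomeTable and SomeColumn are hypothetical placeholder names:

BEGIN TRANSACTION;

UPDATE dbo.SomeTable SET SomeColumn = 'new value' WHERE id = 1;

-- Most recent log records for this database, newest first.
SELECT TOP (10)
       [Current LSN],
       Operation,          -- e.g. LOP_MODIFY_ROW for the UPDATE above
       Context,
       [Transaction ID]
FROM   fn_dblog(NULL, NULL)
ORDER  BY [Current LSN] DESC;

COMMIT TRANSACTION;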

Changes made to the data pages in the buffer cache are subsequently written out to permanent storage in the database files, either at a checkpoint or by the Lazy Writer process when the buffer pool is under memory pressure. Uncommitted changes can also be written to the data file by the CHECKPOINT or Lazy Writer processes, as long as the log records describing those changes have first been written to the transaction log file.

By always writing changes to the log file first, SQL Server has the basis for a mechanism that can guarantee that the effects of all committed transactions will ultimately be reflected in the data files, and that any data modifications on disk that originate from incomplete transactions, i.e. those for which neither a COMMIT nor a ROLLBACK have been issued, are ultimately not reflected in the data files. 

This process of reconciling the contents of the data and log files occurs during the database recovery process (sometimes called crash recovery), which is initiated automatically whenever SQL Server restarts, or as part of the RESTORE command. Any transactions that were committed, and therefore recorded in the transaction log, before the service was interrupted, or before the time to which the database is being manually restored, but are not reflected in the data files, will be "rolled forward" (redone). 

Likewise, any data changes in the database that are associated with uncommitted transactions will be "rolled back" (undone), by reading the relevant operations from the log file, and performing the reverse physical operation on the data. 

 

How SQL Server writes to the transaction log 

SQL Server writes to the transaction log sequentially, one record after another, and so a disk storing only a transaction log will rarely need to perform random seek operations. Certain operations, such as transaction log backups, Change Data Capture jobs and the replication log reader agent, will read the transaction log in a "random" fashion, but write operations are generally sequential, and sequential writes are much faster than random ones.

This is why the recommended best practice is to segregate the log file from the data files and store the former on physical disks that are configured for high-speed sequential writes. 

The fact that writes are always sequential also means that SQL Server will only ever write to one transaction log file at a time. There is therefore no advantage, in terms of log writing performance, to having multiple transaction log files. The only reason to have more than one log file is if space considerations dictate the need for multiple log files, on different disks, in order to achieve the necessary log size for a database. 

 

Understanding log truncation 

Log truncation is the mechanism through which SQL Server marks the space inside the transaction log as available for reuse by the database. The allocated space inside a transaction log file is internally divided into smaller segments known as virtual log files (VLFs), and the process of log truncation is simply the act of marking a VLF as "inactive", making the space in that VLF available for reuse. It does not, as the term "truncation" might suggest, reduce the physical size of the transaction log.
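You can see the VLFs inside the current database's log file, and which of them are active, with a query along these lines (sys.dm_db_log_info requires SQL Server 2016 SP2 or later; on older versions the undocumented DBCC LOGINFO returns similar information, with Status = 2 marking an active VLF):

SELECT file_id,
       vlf_size_mb,
       vlf_sequence_number,
       vlf_active           -- 1 = contains part of the active log, 0 = reusable
FROM   sys.dm_db_log_info(DB_ID());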

 

A VLF can only be considered inactive if it contains no part of what is termed the active log. A full discussion of transaction log architecture is out of scope but, briefly, any log record relating to an open transaction is required for possible rollback and so must be part of the active log. In addition, there are various other activities in the database, including replication, mirroring and CDC (Change Data Capture) that use the transaction log and need transaction log records to remain around until they have been processed. 

These records will also be part of the active log. As discussed previously, transaction log files are sequential files and each log record inserted into the log file is stamped with a Log Sequence Number (LSN). The log record with the lowest LSN (MinLSN) is defined as the oldest log record that may still be required for some database operation or activity, and this record marks the start of the active log. The log record with the highest LSN (i.e. the most recent record added) marks the end of the active log.

 

A log record is no longer part of the active log if each of the following three conditions is met: 

1. It relates to a transaction that is committed, and so is no longer required for rollback.

2. It is no longer required by any other database process, including a transaction log backup when using the FULL or BULK LOGGED recovery models.

3. It is older (i.e. has a lower LSN) than the MinLSN record.

Any VLF that contains no part of the active log is inactive and can be truncated, although the point at which this truncation occurs depends on the recovery model in use. In the SIMPLE recovery model, truncation can occur immediately upon CHECKPOINT; cached data pages are flushed to disk (after first writing the transaction details) and any VLFs that contain no part of the active log are truncated. 

In the FULL (or BULK LOGGED) recovery model, once a full backup of the database has been taken, the inactive portion of the log is not marked as reusable on CHECKPOINT, because it is necessary to maintain a complete LSN chain, and so allow point-in-time recovery of the database. Truncation can only occur upon a BACKUP LOG operation. In this case, once the log backup has been taken, any VLFs that are no longer necessary are marked as inactive and hence reusable.
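A simple way to see truncation in action in the FULL recovery model is to compare log space usage before and after a log backup (the database name and backup path below are placeholders; in SIMPLE recovery, issuing a CHECKPOINT instead has the same truncating effect):

-- How full is the log right now? (sys.dm_db_log_space_usage: SQL Server 2012+)
SELECT used_log_space_in_percent
FROM   sys.dm_db_log_space_usage;

-- Back up the log; VLFs containing no part of the active log become reusable.
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';

-- The percentage used should drop, while the physical file size is unchanged.
SELECT used_log_space_in_percent
FROM   sys.dm_db_log_space_usage;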

 

Sizing and growing the log 

Whenever a log file needs to grow, and additional space is allocated, that space is divided evenly into a number of VLFs that depends on the amount of space being allocated. When additional space is allocated in small blocks, for example using a default ten percent auto-growth setting, the resulting transaction log may have a large number of small VLFs. When additional space is allocated in larger sizes, for example when initially sizing the log to 16 GB in a single operation, the resulting transaction log has a small number of larger VLFs.

A very high number of small VLFs, known as log file fragmentation, can have a considerable impact on performance, especially for crash recovery, restores and backups, particularly log backups; in other words, operations that read the log file. Conversely, if the database has only a few VLFs which are large in size, this can lead to problems related to rapid log growth in cases where truncation is delayed, for some reason (see the Lack of log space reuse section).  

For example, let's assume that each VLF is 1 GB in size. If all VLFs within the log file contain some part of the active log then the log file will grow in 1 GB steps until some of the existing VLFs cease to contain any part of the active log, at which point it can be truncated by the next log backup if using FULL recovery, or the next CHECKPOINT operation in SIMPLE recovery. As such, it's important that the log is sized appropriately initially, and grown in appropriately sized steps, to minimize fragmentation but also avoid tying up large portions of the log for long periods.
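In practice this means pre-sizing the log and replacing the default percentage-based auto-growth with a fixed increment, along these lines (the database name, logical file name and sizes are illustrative only):

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log,
             SIZE = 16384MB,       -- size the log appropriately up front
             FILEGROWTH = 4096MB); -- grow in fixed, reasonably large steps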

 

The transaction log, when properly managed, works in a circular fashion, and the starting point of the transactions that must be processed as a part of crash recovery is maintained in the database boot page. However, nothing tracks the position of the last log record requiring processing during crash recovery, so the log records are stamped with a parity bit that gets flipped when the transaction log reaches the end of the file and wraps back around to the beginning of the file.  

To prevent the possibility of introducing corruption by processing random data that existed previously on disk and matches the parity bit for the existing log records, the space being allocated must be zeroed out whenever the log file grows. 
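You can observe this zero-initialization happening. As a rough diagnostic sketch (trace flags 3004 and 3605 are undocumented diagnostic flags, and the names and sizes here are placeholders), growing the log while they are enabled writes "Zeroing..." messages to the error log:

DBCC TRACEON (3004, 3605, -1);  -- report file initialization to the error log

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 20480MB);  -- force a log growth

EXEC xp_readerrorlog 0, 1, N'Zeroing';  -- look for "Zeroing ..." messages

DBCC TRACEOFF (3004, 3605, -1);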

 

 

Diagnosing a Runaway Transaction Log 

 

If you are experiencing uncontrolled growth of the transaction log, it is due either to an incredibly high rate of log activity, or to factors that are preventing space in the log file from being reused, or both. 

If the growth is due primarily to excessive log activity, you need to investigate whether there might be log activity that could be avoided, for example by adjusting how you carry out bulk data and index maintenance operations so that these operations are not fully logged (i.e. the BULK LOGGED recovery model is used for these operations). However, any bulk-logged operation will immediately prevent point-in-time recovery to any point within a log file that contains records relating to the minimally logged operations. If this is not acceptable, you must simply accept a large log as a fact, and plan its growth and management (such as the frequency of log backups) accordingly, as described in the Proper Log Management section later in this series.

If the growth is due to a lack of log space reuse, you need to find out what is preventing this reuse and take steps to correct the issue. 
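The quickest first check is to ask SQL Server what, if anything, it is waiting on before it can reuse log space:

-- Common log_reuse_wait_desc values: NOTHING, LOG_BACKUP,
-- ACTIVE_TRANSACTION, REPLICATION, DATABASE_MIRRORING.
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'MyDatabase';   -- placeholder; omit WHERE for all databases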

 

 

Excessive logging: index maintenance operations 

 

Index maintenance operations are the second most common cause of transaction log usage and growth, especially in databases using the FULL recovery model. The amount of log space required to perform index maintenance depends on the following factors:

Rebuild or reorganize – index rebuilds generally use a lot more space in the log.

Recovery model – if the risks to point-in-time recovery are understood and acceptable, then index rebuilds can be minimally logged by temporarily switching the database to run in the BULK LOGGED recovery model. Index reorganization, however, is always fully logged.

Index rebuilds 

Rebuilding an index offline, using ALTER INDEX REBUILD (or the deprecated DBCC DBREINDEX in SQL Server 2000) drops the target index and rebuilds it from scratch (online index rebuilds do not drop the existing index until the end of the rebuild operation). 
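For example, an offline rebuild looks like this (the table and index names are hypothetical):

-- Rebuild a single index; the entire rebuild runs as one transaction.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;

-- Or rebuild all indexes on the table.
ALTER INDEX ALL ON dbo.Orders REBUILD;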

 

In the FULL recovery model, an index rebuild can be a very resource-intensive operation, requiring a lot of space in the transaction log. In the SIMPLE or BULK LOGGED recovery model, rebuilding an index is a minimally logged operation, meaning that only the allocations are logged, and not the actual pages changed, therefore reducing the amount of log space required by the operation.

If you switch to the SIMPLE model to perform an index rebuild, the LSN chain will be broken. You'll only be able to recover your database to a point in time contained in the previous transaction log backup. To restart the chain, you'll need to switch back to the FULL model and immediately take a full or differential database backup.

If you switch to the BULK LOGGED model, the LSN chain is always maintained but there are still implications for your ability to perform point-in-time restores, since a log backup that contains a minimally logged operation can't be used to recover to a point in time. In other words, you won't be able to use the STOPAT option when restoring a log file that contains minimally logged operations. It is still possible to restore the entire transaction log backup to roll the database forward, and it is still possible to restore to a point in time in a subsequent log file, which doesn't contain any minimally logged operations. 

However, in the event of an application bug, or a user change that causes data to be deleted, around the same period as the minimally logged operation, it will not be possible to stop at a specific point in time in the log in which these changes are recorded, in order to recover the data that was deleted.

If the ability to perform a point-in-time recovery is paramount for a database, the BULK LOGGED recovery model should not be used for index rebuilds or any other minimally logged operation, unless it can be done at a time when there is no concurrent user activity in the database. 

If the BULK LOGGED model is used, steps should be taken to minimize the time period where point-in-time restore is unavailable, and so minimize exposure to data loss. To do this, take a log backup in FULL mode, switch to BULK LOGGED, perform the index rebuild, then switch back to FULL and take another log backup. 
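Put together, a minimal sketch of that sequence might look like the following (database, table and backup paths are placeholders):

-- 1. Close off the log chain up to this point.
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log_pre.trn';

-- 2. Switch to BULK LOGGED so the rebuild is minimally logged.
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

ALTER INDEX ALL ON dbo.Orders REBUILD;

-- 3. Switch back and back up the log immediately, restoring
--    full point-in-time recoverability.
ALTER DATABASE MyDatabase SET RECOVERY FULL;

BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log_post.trn';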

 

A final important point to note is that an ALTER INDEX REBUILD operation occurs in a single transaction. If the index is large, this could represent a long-running transaction that will prevent space reuse in the log for its duration. This means that, even if you rebuild an index in SIMPLE mode, where you might think that the log should remain small since it is auto-truncated during a checkpoint operation, the log file can expand rapidly during the operation. 
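To watch how much log space a long-running rebuild is holding, you can run something like this sketch from another session while the rebuild executes (MyDatabase is a placeholder):

-- Log bytes used and reserved by currently active transactions.
SELECT s.session_id,
       t.database_transaction_log_bytes_used,
       t.database_transaction_log_bytes_reserved
FROM   sys.dm_tran_database_transactions AS t
JOIN   sys.dm_tran_session_transactions  AS s
  ON   s.transaction_id = t.transaction_id
WHERE  t.database_id = DB_ID(N'MyDatabase');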

 

Index reorganization 

In contrast to rebuilding an index, reorganizing (defragmenting) an index, using ALTER INDEX REORGANIZE (or the deprecated DBCC INDEXDEFRAG in SQL Server 2000), is always a fully logged operation, regardless of the recovery model, and so the actual page changes are always logged. However, index reorganizations generally require less log space than an index rebuild, although this is a function of the amount of fragmentation that exists in the index; a heavily fragmented index will require more log space to reorganize than a minimally fragmented one.

Furthermore, the ALTER INDEX REORGANIZE operation is accomplished using multiple, shorter transactions. Therefore, when performed in conjunction with frequent log backups (or when working in SIMPLE mode), log space can be made available for reuse during the operation, so minimizing the size requirements for the transaction log.

For example, rebuilding a 20 GB index can require more than 20 GB of space for the rebuild operation because it occurs in a single transaction. However, reorganizing a 20 GB index requires much less log space because each page allocation change in the reorganization is a separate transaction, and so the log records can be truncated with frequent log backups, allowing the log space to be reused. 
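A sketch of the reorganization alternative, interleaved with a log backup so that log records from completed batches can be truncated (names and the backup path are again placeholders):

-- Reorganize runs as many short transactions rather than one long one.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- In FULL recovery, frequent log backups during and after the operation
-- free the log records of already-committed batches for reuse.
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';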
