Transaction log survival guide: Shrink 100GB log

Introduction

Have you gotten yourself into a situation where the transaction log has grown massively, disk space is dangerously low, and you want a quick way to shrink the log before the server stops?

There are many articles on the net on this theme, but unfortunately many of them are wrong, giving false and even dangerous advice. This guide will show you how to shrink the log properly, with two methods: a quick one that breaks the log backup chain, and one that does not break the chain but is not as quick. Remember, though, that this only buys you time to learn about the transaction log and how to set it up properly. That is your homework once the log crisis is gone!

Investigate the cause – quick!

To see the log size and the percentage of used (uncleared) log space (the active/used part of the log cannot be overwritten; when the log cannot wrap around and reuse the space marked as "free", it grows):

DBCC SQLPERF(LOGSPACE);

Normally, that percentage should be low or close to zero for bigger log files.

There is also a fancier query to get free space in transaction log files:
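One way to write such a query is with FILEPROPERTY, which reports used pages per file; this is a sketch to run in the context of the database you are investigating (file sizes are stored in 8 KB pages, so dividing by 128 gives MB):

```sql
-- Free space in each transaction log file of the current database.
SELECT name,
       size / 128.0                                             AS size_mb,
       size / 128.0
         - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS free_mb
FROM sys.database_files
WHERE type_desc = 'LOG';
```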

Run this query to see the reason why the log grows (why its space is not reused):

SELECT name, log_reuse_wait_desc FROM sys.databases;

The most common cause you will see is "LOG_BACKUP", so the next steps address that cause of log growth. If you see another reason, such as "ACTIVE_TRANSACTION", you have to deal with it before shrinking the log (e.g. commit that long-running transaction; use "DBCC OPENTRAN" to find it). "NOTHING" means the log will not grow anymore because its space can be reused.

Shrink the log – fastest method

If your log is really big, like 100 GB, or you do not have enough time or space to back up your 100 GB transaction log, you can use this very fast method. Be aware, however, that it breaks the log backup chain. That means you lose point-in-time recovery for the period between the last successful log backup and the end of the differential backup we take here; point-in-time recovery to any moment after that backup will be possible again. This method should take no more than a minute, no matter how big your log is.

First, execute sp_helpfile to find the logical name of the log file, and use it here:
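A sketch of the commands described, assuming the database is called MyDB and sp_helpfile reported the logical log file name MyDB_log (both names are placeholders, so substitute your own):

```sql
USE MyDB;
GO
-- Switch to SIMPLE recovery: this lets the log clear on checkpoint,
-- but it breaks the log backup chain.
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
GO
-- Shrink the log file to roughly 1 GB (the target size is given in MB).
DBCC SHRINKFILE (MyDB_log, 1000);
GO
```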

Done! Your log should now be shrunk to about 1 GB.
Now put your database back into the full recovery model:
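Assuming the same placeholder database name MyDB, switching back looks like this:

```sql
-- Return to the full recovery model (but see the note below:
-- it does not take real effect until a full or differential backup).
ALTER DATABASE MyDB SET RECOVERY FULL;
```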

Although sys.databases will now show that the database is in the full recovery model, it actually stays in pseudo-SIMPLE recovery until a full or differential backup is taken! We will enter the full recovery model and re-initialize the log backup chain by taking a differential backup, because it is much, much faster than a full backup:
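A sketch of that differential backup, with a placeholder database name and backup path:

```sql
-- The differential backup re-establishes the log backup chain
-- after the switch back to FULL recovery.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backup\MyDB_diff.bak'
WITH DIFFERENTIAL;
```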

Shrink the log – without breaking the log backup chain

If you do not want to lose point-in-time recovery ability (you don't want to break the log backup chain), and you have the time and space to back up your 100 GB log file, or your log is not really that big, this is the method for you!

Execute the job that takes transaction log backup and wait for it to finish OR do it yourself with this command:
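The log backup command looks roughly like this (the database name and backup path are placeholders):

```sql
BACKUP LOG MyDB
TO DISK = 'D:\Backup\MyDB_log_1.trn';
```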

That first log backup will take a VERY long time, because almost all 100 GB must be copied. Subsequent log backups will be very fast and very small, even if the log file itself is still 100 GB!

Use "sp_helpfile" to find the log name and use it in this shrink command. Do not shrink the log below its normal operational size (do not truncate it to zero!):
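A sketch of the shrink command, assuming sp_helpfile reported the placeholder logical name MyDB_log; the target size here (10000 MB, about 10 GB) stands in for the log's normal operational size:

```sql
-- Shrink the log file toward its normal operational size, not to zero.
DBCC SHRINKFILE (MyDB_log, 10000);
```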

Repeat these two commands (log backup to filename2, shrink, log backup to filename3, shrink, …) until the log is shrunk. Usually one or two cycles are enough.

Be careful to change the filename in each log backup iteration!

Phew, log is small again. What now?

After your log is back to manageable size, do this:

  • Ensure your log backup job is set and is functioning properly.
  • Optimize the initial log size, growth increment, and number of VLFs.
  • Read and learn about the transaction log.
