12 November, 2016 by Jack Vamvas
Question: During very high workload periods on a DB2 LUW database we are experiencing slower than expected archival logging. The archival logging method utilises a vendor library which performs deduplication.
We are actively working with the vendor, but I'd like to explore the DB2 LUW configuration and maximise the resources on the host. Could you supply some notes and approaches to help develop a plan?
We've used different approaches to try to measure the throughput of copying a file from the active log path to the archival log path. One of those has been to use the archive log command.
Answer: Without more detailed information or access to the throughput statistics, I'll present some information which you may find useful.
A quick note on using the db2 archive log command. Overusing the command risks consuming disk space on the drive hosting the active logs, because each execution truncates the current active log file and starts a new one on the active log path. The more times you execute the command, the more log files you'll create.
There is an assumption that issuing the command triggers an immediate archival of the log. This is not a safe assumption: the db2logmgr process manages the archival process, and it operates separately from the db2 archive log command.
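For reference, the command discussed above is issued as follows (RESDB is the database name used later in this post):

```shell
# Truncate the current active log file and make it eligible for archiving.
# db2logmgr performs the actual copy to the archive destination asynchronously.
db2 archive log for database RESDB
```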
Use db2pd to view the active log and archive status:
db2pd -db RESDB -logs -repeat 60 10
There's a fair bit of information in the output. Some information I find useful is:
- Current Log Number = the current active log file being written to
- Start LSN = shows which log file a particular transaction starts in
- Current LSN = the LSN where the next log record will be written
The -repeat option refreshes the output every 60 seconds, 10 times.
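To track just the fields above while an archive is running, the db2pd output can be filtered – a sketch, assuming the field labels appear in the db2pd -logs output as named above:

```shell
# Refresh every 60 seconds for 10 iterations, keeping only the lines of interest
db2pd -db RESDB -logs -repeat 60 10 | grep -E "Current Log Number|Start LSN|Current LSN"
```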
Deduplication algorithms can conflict with default DB2 LUW behaviour. DB2 LUW attempts to maximise throughput, which can result in a sub-standard deduplication ratio. The deduplication process may use a different number of sessions and a different degree of parallelism, with the side effect of increasing the DB2 backup time.
CPU throttling – Check whether the vendor deduplication product uses CPU throttling. CPU throttling is a common method of decreasing the impact of a client-side deduplication product on the server. Compare against server-side deduplication, which relies less on client-side resources.
Some common DB2 tactics. Before applying any of these changes, there should be some rationale and testing. Increasing the values is not guaranteed to improve performance.
a) If you're experiencing severe problems, change logarchmeth1 so that archived logs are not stored on a data deduplication device. Local disk without deduplication may be the quickest method.
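Redirecting the archives to plain local disk could look like this – the path /db2/archlogs is a placeholder, not from the original question:

```shell
# Point LOGARCHMETH1 at a local, non-deduplicated disk path (placeholder path)
db2 update db cfg for RESDB using LOGARCHMETH1 DISK:/db2/archlogs
```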
b) Increase util_heap_sz. According to the IBM documentation, the util_heap_sz "parameter indicates the maximum amount of memory that can be used simultaneously by the BACKUP, RESTORE, and LOAD (including load recovery) utilities."
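A sketch of the change – the value is illustrative, and util_heap_sz is specified in 4 KB pages, so test a size appropriate to your workload:

```shell
# Raise the utility heap; the value is in 4 KB pages (50000 pages is roughly 195 MB)
db2 update db cfg for RESDB using UTIL_HEAP_SZ 50000
```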
c) Change the LOGBUFSZ parameter value. LOGBUFSZ specifies the size of the memory buffer used to hold log records before they are written to disk.
The log records are written to disk when any of these conditions are met:
a) A transaction commits
b) The log buffer is full
c) A data page is written to disk – this requires the matching log records, up to the page's LSN, to also be written to disk
Consider setting LOGBUFSZ to at least 2 MB; for various systems I've sized it at anything up to 64 MB.
Keep an eye on the NUM_LOG_BUFFER_FULL monitor element – you want to keep this value low.
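As a sketch, LOGBUFSZ is set in 4 KB pages (512 pages = 2 MB), and NUM_LOG_BUFFER_FULL can be read via the MON_GET_TRANSACTION_LOG table function:

```shell
# 512 x 4 KB pages = 2 MB log buffer
db2 update db cfg for RESDB using LOGBUFSZ 512

# Check how often the log buffer filled before being flushed; -1 = current member
db2 "SELECT num_log_buffer_full FROM TABLE(MON_GET_TRANSACTION_LOG(-1))"
```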
Increase or decrease the LOGFILSIZ value – there are some factors to review before deciding on the size of the log files. The size of the file will affect how long it takes to physically allocate the file.
There is also a limitation on the number of logs – the sum of logprimary and logsecond cannot exceed 256. So if the log files are large, a larger active log space will be required. Read more on logprimary – Number of primary log files configuration parameter.
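A sketch of adjusting these parameters together – the values are illustrative, and LOGFILSIZ is specified in 4 KB pages, so 16384 pages gives 64 MB per log file:

```shell
# 64 MB log files, 20 primary + 40 secondary (60 files, within the 256 limit)
db2 update db cfg for RESDB using LOGFILSIZ 16384 LOGPRIMARY 20 LOGSECOND 40
```

Note that a LOGFILSIZ change only takes effect after the database is deactivated and reactivated.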