
A NetBackup for DB2 Cheat Sheet

25-Mar-09

This post is a cheat sheet for most of what I think the DB2 database administrator needs to know about Symantec NetBackup as a DB2 backup, recovery and log archival utility. My notes assume Unix, DB2 version 9 or later, and NetBackup 6.0.

I wrote this because you don’t need to know that much to get NetBackup working for DB2, and the Symantec docs are very unclear and not well written. With that being said, if you get into a jam you will need to read the Symantec docs, and probably call their support. As I discovered, not everything is documented. But NetBackup DB2 Agent version 6.0 seems to work quite well most of the time.

Documentation

Be cautious when looking up NetBackup support information on other sites; non-Symantec sources often describe outdated release levels.

What your NetBackup Administrator should tell you

You need to know:

  • NetBackup version
  • Retention Period for backups
  • Number of multiplexing channels
  • Policy names
  • NetBackup server name for each NetBackup client
  • Scheduled times for all backups
  • Execution time for backups
  • You should also receive a daily report of DB2 backup activity from NetBackup – ask the administrator to set this up for you.

Agent Installation/Checking/Troubleshooting

Is NetBackup Agent Installed on your Database Server?

The easiest way to determine this is to do a find on the library file name (see the table below) in the directory /usr/openv/netbackup/bin, or to look for the existence of the directory /usr/openv/netbackup (a quick check is sketched after the table). If both exist, the product was installed at some point in the past. This does not yet confirm that the product is at the current release or configured correctly.

The name of the library file varies by operating system [1]:

  • 32-bit Solaris (SPARC) and Linux: nbdb2.so
  • 64-bit Solaris (SPARC): nbdb2.so64
  • 32-bit AIX and HP-UX: nbdb2.sl
  • 64-bit AIX and HP-UX: nbdb2.sl64
  • 64-bit Linux, Itanium, and IBM pSeries: nbdb2.so
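
A quick way to run this check from the shell (a sketch; substitute the library name for your platform from the table above, here 64-bit Solaris):

find /usr/openv/netbackup/bin -name "nbdb2.so64"
ls -d /usr/openv/netbackup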

If the NetBackup Database Agent with the NetBackup for DB2 option is not installed, the installation process is detailed for the administrator in the NetBackup for DB2 PDF manual (beginning at page 21).

NetBackup Configuration

Configuration of both DB2 and NetBackup components will be required:

  • NetBackup Administrator must add policy
  • NetBackup configuration file bp.conf
  • NetBackup configuration file db2.conf
  • Database Configuration (UPDATE DB CFG) if using log archival
  • Backup script
  • DB2 backup and restore special syntax

NetBackup Server Set-up

Your NetBackup administrator has either set up or will need to establish a policy for your DB2 backup. “Policies define the rules that NetBackup follows when clients are backed up. A backup policy can apply to one or more clients. Every client must be in at least one backup policy.” [NetBackup Admin Vol. 1 page 81]

When you edit the db2.conf file (see below) the policy name must match the POLICY keyword value for your database, example:

DATABASE MYDB
OBJECTTYPE DATABASE
POLICY S1MYDB_FULL_STAGING
SCHEDULE Default-Application-Backup
ENDOPER

If you are adding a new DB2 server, the NetBackup administrator will have other work that needs to be done, all covered by the NetBackup for DB2 and system administrator manuals. You will have to convey to your NetBackup administrator:

  • Database name
  • Policy name
  • NetBackup Client name

Beyond that, the NetBackup administrator will need to know your backup schedule, location of backup script, and required retention period. Many other tape characteristics can be controlled beyond that; see the Admin Guide for details.

Depending on what other NetBackup server activity is present during a desired backup time, a new time may need to be negotiated in order to not conflict with other requests. You may also have to work with the administrator to increase your backup’s job priority to prevent backups from running too long.

NetBackup Configuration – bp.conf

The bp.conf file in the /usr/openv/netbackup directory is common to all NetBackup clients on all platforms.

The only keywords I found important were SERVER and CLIENT_NAME. SERVER must be configured to match the NetBackup server for this client. CLIENT_NAME should be configured to match the name of this NetBackup client (at this shop configured the same as hostname).

You might want this file to be editable by the DBA using their normal Unix account authority; if the system administrator set it up without DBA involvement, it may be owned by root and writable only by root. The only time these values should be changed by the DBA (or by the system administrator at the DBA’s behest) is during a redirected restore (see below).

Example:

=== BEGIN bp.conf
# to do a redirected restore from Production to DEV,
# flip the settings for both SERVER and CLIENT_NAME
# Production SERVER = egnbu01pp
# Dev SERVER = netbackupm
SERVER = egnbu01pp
CLIENT_READ_TIMEOUT = 9600
CLIENT_CONNECT_TIMEOUT = 7200
# Production CLIENT_NAME = p1db2dbvip
# Dev CLIENT_NAME = d1db2dbvip
CLIENT_NAME = p1db2dbvip
=== END bp.conf

Remember: if you edit this file for a redirected restore, change it back when the restore is complete, or backups will fail. The settings in this file are global, and the backup and redirected-restore configurations are mutually exclusive: while the file is set up for a redirected restore, regular backups may fail, and while it is set up for backups, a redirected restore will fail.

NetBackup Configuration – db2.conf

db2.conf is the other NetBackup configuration file, besides bp.conf, residing on the client. This configuration file controls NetBackup client behavior for the DB2 database product itself. Configuration keywords control DB2 backup, restore and log archival.

If you have only one database on your server, that database uses archival logging, and you do not intend to do a redirected restore onto this server as the target, your db2.conf file can be as simple as the following:


=== BEGIN db2.conf
DATABASE MYDB
OBJECTTYPE DATABASE
POLICY P1DB2DB_FULL_PRODUCTION
SCHEDULE Default-Application-Backup
SRCALIAS PMYDB00
ENDOPER


DATABASE MYDB
OBJECTTYPE ARCHIVE
POLICY P1DB2DB_FULL_PRODUCTION
SCHEDULE Default-Application-Backup
SRCALIAS PMYDB00
ENDOPER
=== END db2.conf

If you have more than one database set up similarly on a single database server, using the same policy, you can simply repeat the two clauses above and vary the DATABASE and SRCALIAS values, as in the example below.
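
For instance, a hypothetical second database MYDB2 with source alias PMYDB200 under the same policy would add the following stanzas:

DATABASE MYDB2
OBJECTTYPE DATABASE
POLICY P1DB2DB_FULL_PRODUCTION
SCHEDULE Default-Application-Backup
SRCALIAS PMYDB200
ENDOPER

DATABASE MYDB2
OBJECTTYPE ARCHIVE
POLICY P1DB2DB_FULL_PRODUCTION
SCHEDULE Default-Application-Backup
SRCALIAS PMYDB200
ENDOPER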

Much more detail on db2.conf is available in the NetBackup for DB2 manual from page 53 onward. Keywords and syntax are described from page 58.

In the above example the clause for OBJECTTYPE ARCHIVE is only required if the database uses log archival.

OBJECTTYPE ARCHIVE clauses can use the same policy used for OBJECTTYPE DATABASE.

Database Configuration for log archival

If you are archiving logs using NetBackup you will need to update the database configuration parameter LOGARCHMETH1 as follows:

UPDATE DB CFG FOR MYDB USING LOGARCHMETH1 VENDOR:[NetBackup library filename]

So, for example for 64-bit Solaris you would execute:

db2 update db cfg for mydb using LOGARCHMETH1 VENDOR:/usr/openv/netbackup/bin/nbdb2.so64;

This will require an instance restart and a backup.
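
A sketch of the full sequence for that example (take the offline backup right away, since switching a database from circular to archive logging leaves it in backup pending state; you may also need to force off connections before db2stop):

db2 update db cfg for mydb using LOGARCHMETH1 VENDOR:/usr/openv/netbackup/bin/nbdb2.so64
db2 terminate
db2stop
db2start
db2 BACKUP DATABASE MYDB LOAD /usr/openv/netbackup/bin/nbdb2.so64 OPEN 4 SESSIONS WITH 8 BUFFERS BUFFER 1024 WITHOUT PROMPTING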

Backup Script installation

When NetBackup runs a scheduled backup on a Unix DB2 server, it simply executes a shell script that resides in the directory /usr/local/adm/scripts. The NetBackup for DB2 manual describes different ways of generating the script. I found it easiest to copy the sample script (found in /usr/openv/netbackup/ext/db_ext/db2/scripts) to /usr/local/adm/scripts and then edit it directly (make sure your Unix account privileges permit that).

NetBackup scripts can also be run manually or from cron, but this may be in violation of your Operations standards and may cause problems or conflicts for other database backups. Always coordinate backup, restore and log archive processes with the NetBackup administrator.
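
As a sketch, a minimal online backup script (the file name, instance home /home/db2 and database name MYDB are assumptions) need be little more than a wrapper around the backup command shown in the next section, passing the DB2 exit status back to NetBackup:

=== BEGIN /usr/local/adm/scripts/mydb_online_backup.ksh (sketch)
#!/bin/ksh
# Minimal NetBackup-driven online backup of MYDB - adapt from the Symantec sample script
. /home/db2/sqllib/db2profile
db2 BACKUP DATABASE MYDB ONLINE LOAD /usr/openv/netbackup/bin/nbdb2.so64 OPEN 4 SESSIONS WITH 8 BUFFERS BUFFER 1024 INCLUDE LOGS WITHOUT PROMPTING
rc=$?
# Return the DB2 exit status so NetBackup can record success or failure
exit $rc
=== END /usr/local/adm/scripts/mydb_online_backup.ksh (sketch)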

Backup and Restore Syntax

Modifications to the DB2 backup and restore commands to support NetBackup are very simple.

Online Backup example DB2 version 9:


BACKUP DATABASE [MY_DB] ONLINE LOAD [LIBRARY] OPEN 4 SESSIONS WITH 8 BUFFERS BUFFER 1024 INCLUDE LOGS WITHOUT PROMPTING

Where LIBRARY is the appropriate OS library, for example Solaris 64-bit would be /usr/openv/netbackup/bin/nbdb2.so64 , so the actual command for my database named MYDB on that platform would be:

BACKUP DATABASE MYDB ONLINE LOAD /usr/openv/netbackup/bin/nbdb2.so64 OPEN 4 SESSIONS WITH 8 BUFFERS BUFFER 1024 INCLUDE LOGS WITHOUT PROMPTING

Offline backup can be just as easy:

BACKUP DATABASE [MY_DB] LOAD [LIBRARY] OPEN 4 SESSIONS WITH 8 BUFFERS BUFFER 1024 WITHOUT PROMPTING

Restore example DB2 version 9:

RESTORE DATABASE [MY_DB] LOAD [LIBRARY] OPEN 4 SESSIONS TAKEN AT [timestamp] BUFFER 1024 REPLACE EXISTING WITHOUT PROMPTING

Restore from offline without requiring logs:

RESTORE DATABASE [MY_DB] LOAD [LIBRARY] OPEN 4 SESSIONS TAKEN AT [TIMESTAMP] WITH 8 BUFFERS BUFFER 1024 WITHOUT ROLLING FORWARD WITHOUT PROMPTING
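
If you restore without the WITHOUT ROLLING FORWARD clause, the database is left in rollforward pending state, so a typical follow-up (a sketch) is:

ROLLFORWARD DATABASE [MY_DB] TO END OF LOGS AND STOP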

Is NetBackup Version Current?

Though the product may be installed, the version may be outdated. Make sure the client version matches the NetBackup server version.

How to check client version level:

cat /usr/openv/netbackup/bin/version
NetBackup-Solaris9 6.0MP4

Is the NetBackup Listener Active?

Is the NetBackup service listening? Check via:

netstat -a | grep bpcd

If the listener is inactive nothing will be returned.
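
If nothing is returned, it is also worth confirming that the bpcd service is registered in /etc/services (the NetBackup client install normally adds it) before involving the NetBackup administrator:

grep bpcd /etc/services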

Using NetBackup Diagnostic Logs

NetBackup client logs (that is, the logs local to the DB2 server) are located under the directory /usr/openv/netbackup/logs. Quickly display the logs and their full paths with this command:


find /usr/openv/netbackup/logs | xargs ls -l

If you are investigating a backup, restore or log archive problem, it is best to copy the NetBackup logs from the location above to a directory under DBA ownership, using cp with the -p option to preserve file timestamps. The reason is that NetBackup regularly purges its logs, so unless you make a copy you will soon lose that diagnostic information.
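
For example (the destination directory is just an illustration):

mkdir -p /db2/support/nbu_logs_$(date +%Y%m%d)
cp -pR /usr/openv/netbackup/logs/* /db2/support/nbu_logs_$(date +%Y%m%d)/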

How to Know your Backup Failed

If a DB2 backup fails, you may not get a message from NetBackup clearly indicating that. JobState will show as “Done”, but with an abnormally long execution time. A backup that ordinarily takes 45 minutes might show as requiring 2 hours to complete. Here is an example:

[Screenshot: Symantec NetBackup Backup Job Summary report]

Backup failures occurred on both March 5th and March 17th in the above report. DB2 LIST HISTORY BACKUP and the NetBackup logs on the server both clearly indicated the failure and the related diagnostic data; the NetBackup Job console, however, did not.

In conclusion, though NetBackup is supposed to recognize non-zero return codes returned by the DB2 backup command, it doesn’t seem to work well in practice. Look for abnormal execution times instead.
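
It is also worth checking from the DB2 side. For example (the SYSIBMADM.DB_HISTORY administrative view is available from DB2 9; this is a sketch, so verify the column names against your release):

db2 connect to mydb
db2 list history backup all for mydb
db2 "SELECT START_TIME, SQLCODE, SQLSTATE FROM SYSIBMADM.DB_HISTORY WHERE OPERATION = 'B' ORDER BY START_TIME DESC FETCH FIRST 10 ROWS ONLY"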

Useful NetBackup Commands

The NetBackup command-line utilities bprestore and bplist can only be used to retrieve individual logs if the USEREXIT (a deprecated syntax) method of setting up NetBackup for DB2 is used. [This isn’t clearly indicated in docs; I discovered this through a long troubleshooting session with Symantec.]

bprestore can be used to save a database image from tape to disk without actually fully restoring a database. You can retrieve image names for March 24, 2009 for example via something like:

$ /usr/openv/netbackup/bin/bplist -C p1db2dbvip -t 18 -l -R / | grep 20090324
-rw-rw---- db2 db2 26214420K Mar 24 04:33 /DB2/MYDB/node0000/20090324020029/MYDB.0.db2.node0000.30574.20090324020029.4
-rw-rw---- db2 db2 26042372K Mar 24 04:33 /DB2/MYDB/node0000/20090324020029/MYDB.0.db2.node0000.30574.20090324020029.3
-rw-rw---- db2 db2 25157636K Mar 24 04:33 /DB2/MYDB/node0000/20090324020029/MYDB.0.db2.node0000.30574.20090324020029.2
-rw-rw---- db2 db2 26824708K Mar 24 04:33 /DB2/MYDB/node0000/20090324020029/MYDB.0.db2.node0000.30574.20090324020029.1

If there are no backups of any kind on a client, bplist will give you the error:

EXIT STATUS 227: no entity was found

Consult the Symantec docs to understand how to use bprestore to restore a file to disk. Remember that if you are using the VENDOR log archival method, you cannot use bprestore to access individual logs; the logs will need to be included in the backup image if you intend to restore a database with rollforward from a bprestore retrieval.

Doing a Redirected Restore

Step 1: edit /usr/openv/netbackup/bp.conf

Page 92 of the NetBackup for DB2 PDF manual describes how you must edit the bp.conf file before you do a redirected restore.

Example of a bp.conf on the target host of a redirected restore from Production DB2 server to Development. The source is Production NetBackup client p1db2dbvip using NetBackup server egnbu01pp. The target of the redirected restore is Development NetBackup client d1db2dbvip using NetBackup server netbackupm.

You must, for the purpose of a redirected restore, edit the bp.conf file on the restore target to look as if it were configured for NetBackup as the restore source server, hence:

=== BEGIN bp.conf
# to do a redirected restore from Production to DEV,
# flip the settings for both SERVER and CLIENT_NAME
# Production SERVER = egnbu01pp
# Dev SERVER = netbackupm
SERVER = egnbu01pp
CLIENT_READ_TIMEOUT = 9600
CLIENT_CONNECT_TIMEOUT = 7200
# Production CLIENT_NAME = p1db2dbvip
# Dev CLIENT_NAME = d1db2dbvip
CLIENT_NAME = p1db2dbvip
=== END bp.conf

Remember: if you edit this file for a redirected restore, change it back when the restore is complete, or backups will fail. The settings in this file are global, and the backup and redirected-restore configurations are mutually exclusive: while the file is set up for a redirected restore, regular backups may fail, and while it is set up for backups, a redirected restore will fail.

Step 2: edit db2.conf

You must also edit db2.conf. Assuming your target server is already configured for NetBackup DB2, you need to add these lines to db2.conf on your target server. These lines represent the database that is the source of the redirected restore. Let’s assume for the sake of the example that your source database is named ORIGINDB, its NetBackup policy is named ORIGIN_FULL, the source database instance name is DB2INST, and the target database name is TARGETDB.


=== db2.conf BEGIN ADD FOR REDIRECTED RESTORE
DATABASE ORIGINDB
OBJECTTYPE DATABASE
POLICY ORIGIN_FULL
SCHEDULE Default-Application-Backup
ENDOPER
DATABASE ORIGINDB
OBJECTTYPE ARCHIVE
POLICY ORIGIN_FULL
SCHEDULE Default-Application-Backup
SRCALIAS ORIGIN00
ENDOPER
OBJECTTYPE ALTERNATE # Specifies an alternate restore
SRCINST DB2INST # Names the source instance that was backed up
SRCALIAS ORIGINDB # Names the source database that was backed up
DESTINST ORIGINDB # Names the destination instance name
DESTALIAS TARGETDB # Names the destination database alias name
ENDOPER # Ends the object identifier

=== db2.conf END ADD FOR REDIRECTED RESTORE

Redirected Restore Tasks Example

Here is an example of the steps required to complete the redirected restore. Obviously you will have to edit the values for your environment.


db2 restore db origindb load /usr/openv/netbackup/bin/nbdb2.so64 into targetdb logtarget /db2/targetdb/db2/NODE0000/SQL00001/SQLOGDIR/ redirect

Run set tablespace containers

SET TABLESPACE CONTAINERS statements are the same as those used in any redirected restore:
set tablespace containers for 0 using (path "/db2/targetdb/db2/NODE0000/SQL00001/SQLT0000.0");
set tablespace containers for 1 using (path "/db2/targetdb/db2/NODE0000/SQL00001/SQLT0001.0");
set tablespace containers for 2 using (path "/db2/targetdb/db2/NODE0000/SQL00001/SQLT0002.0");
set tablespace containers for 3 using (path "/db2/targetdb/tspace/temp00/smtemp");

… and so on for the remaining tablespaces [2].


db2 restore db origindb continue
db2 rollforward db targetdb to end of logs and stop

Footnotes

  1. See page 15 of the NetBackup for DB2 PDF.
  2. To quickly generate SET TABLESPACE CONTAINERS statements you can use this script:

    === script BEGIN
    #!/bin/ksh
    # Create the SET TABLESPACE CONTAINERS statements for a redirected restore.
    # Assumes one PATH container per tablespace (as reported on the "Name =" line).
    db2 connect to targetdb
    db2 -x "select max(tbspaceid) + 1 from syscat.tablespaces" | read container_count_plusone
    i=0
    while [ $i -lt $container_count_plusone ]; do
        db2 -x "list tablespace containers for $i" | awk '/Name/ { print $3 }' | read container_name
        echo "set tablespace containers for $i using (path \"$container_name\");"
        i=`expr $i + 1`
    done
    db2 connect reset
    === script END

A Simple DB2 Audit System

10-Mar-09

I just wrote a small Unix script-based db2audit capture system for one of my version 9 DB2 databases. Let me tell you about it.

My initial reference was the IBM Information Center documentation for the db2audit facility.

For this example only, I want to capture the following audit scopes: AUDIT, OBJMAINT, SECMAINT, SYSADMIN, VALIDATE. I leave out CHECKING and CONTEXT. I want to capture both successful and unsuccessful operations. My DDL and script reflect this. Use my DDL as an example for how you might modify the tables for CHECKING and CONTEXT, using the IBM DDL as your starting point. Both the DDL and the script will have to be modified for a different choice of scopes.

Step 1: Create the Audit Tables

This is my DDL for the five audit scopes I am monitoring (a skeleton example follows the list). The modifications I have made to the IBM DDL are:

  • Added an IN tablespace clause
  • Chose a schema name
  • Made TIMESTAMP NOT NULL, so it can be used as the primary key
  • Created a unique index and added a primary key on the TIMESTAMP column
  • Created an additional index on one of the larger tables for better performance
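
As an abbreviated skeleton only (the column list below is a placeholder and must be taken in full from the IBM-supplied DDL; the schema DBA, tablespace AUDIT_TS and index name are my assumptions), the pattern of changes looks like this:

CREATE TABLE DBA.SYSADMIN (
    TIMESTAMP   TIMESTAMP       NOT NULL,
    CATEGORY    CHAR(8),
    EVENT       VARCHAR(32),
    STATUS      INTEGER,
    USERID      VARCHAR(1024),
    AUTHID      VARCHAR(128)
) IN AUDIT_TS;

CREATE UNIQUE INDEX DBA.SYSADMIN_IX1 ON DBA.SYSADMIN (TIMESTAMP);
ALTER TABLE DBA.SYSADMIN ADD PRIMARY KEY (TIMESTAMP);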

Step 2: Activate db2audit

Run the following two commands:

db2audit configure scope AUDIT, OBJMAINT, SECMAINT, SYSADMIN, VALIDATE status BOTH

db2audit start

Step 3: Process db2audit data to audit table

I have this set to run under crontab every 12 hours.
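
An example crontab entry for the instance owner (the script path is an assumption; this schedule runs it at 06:00 and 18:00):

0 6,18 * * * /db2/support/scripts/db2audit_extract_import.ksh > /dev/null 2>&1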

===> BEGIN script db2audit_extract_import.ksh

#!/bin/ksh

# Example of db2audit extract to table script

# Run this frequently enough that the db2audit log does not overflow space available in the …/sqllib/security directory

# Change to your own output logging preference
outputLog=/db2/support/scripts/db2audit_extract_load.log
DBNAME=MYDB
# Change to your own DB2 instance home
DB2HOME=/home/db2
prfile=$DB2HOME/sqllib/db2profile

echo "db2audit_extract_load run on `date`" >> $outputLog

if [ -f $prfile ]; then
    . $prfile
else
    echo "$(date) Cannot locate $prfile." >> $outputLog
    exit 1
fi

# ==== FILE CLEANUP ====
# Even if you are not collecting some of these SCOPEs,
# db2audit will generate an empty file for them. The next time
# you try to run extract, it will fail if it finds a file even if
# you are not collecting the information. So rm -f all of them. IBM docs do not note this.

rm -f $DB2HOME/sqllib/security/checking.del  >> $outputLog  2>&1
rm -f $DB2HOME/sqllib/security/context.del  >> $outputLog  2>&1
rm -f $DB2HOME/sqllib/security/audit.del  >> $outputLog  2>&1
rm -f $DB2HOME/sqllib/security/objmaint.del  >> $outputLog  2>&1
rm -f $DB2HOME/sqllib/security/secmaint.del >> $outputLog  2>&1
rm -f $DB2HOME/sqllib/security/sysadmin.del  >> $outputLog  2>&1
rm -f $DB2HOME/sqllib/security/validate.del >> $outputLog  2>&1

# ==== DB2AUDIT DESCRIBE (DOCUMENTATION) ====
# It is a good idea to self-document the scope parameters.
db2audit describe >> $outputLog  2>&1

# ==== DB2AUDIT FLUSH ====
# Important to include these documenting ECHOs because db2audit does
# not print to sysout which operation it has performed.

echo "db2audit flush" >> $outputLog  2>&1
db2audit flush >> $outputLog  2>&1

# ==== DB2AUDIT EXTRACT ====
echo "db2audit extract delasc" >> $outputLog  2>&1
db2audit extract delasc >> $outputLog  2>&1

# ==== DB2AUDIT PRUNE ====
export YYYYMMDDHH=`date +"%Y%m%d%H"`
echo "Prune timestamp will be: $YYYYMMDDHH" >>  $outputLog  2>&1
echo "db2audit prune"  >>  $outputLog
db2audit prune date $YYYYMMDDHH  >> $outputLog  2>&1

# ==== CONNECT TO DATABASE ====
db2 connect to $DBNAME  >>  $outputLog  2>&1

# ==== IMPORT AUDIT DATA ====
# Note that this list of IMPORT steps depends on your chosen db2audit SCOPE
# Even though db2 Info Center examples tell you to LOAD,
# if you do this frequently enough I think IMPORT is better.
db2 -v import from $DB2HOME/sqllib/security/audit.del of del modified by CHARDEL0xff insert into DBA.AUDIT   >>  $outputLog  2>&1
db2 -v import from $DB2HOME/sqllib/security/objmaint.del of del modified by CHARDEL0xff insert into DBA.OBJMAINT   >>  $outputLog  2>&1
db2 -v import from $DB2HOME/sqllib/security/secmaint.del of del modified by CHARDEL0xff insert into DBA.SECMAINT  >>  $outputLog  2>&1
db2 -v import from $DB2HOME/sqllib/security/sysadmin.del of del modified by CHARDEL0xff insert into DBA.SYSADMIN  >>  $outputLog  2>&1
db2 -v import from $DB2HOME/sqllib/security/validate.del of del modified by CHARDEL0xff insert into DBA.VALIDATE  >>  $outputLog  2>&1

# From InfoCenter:
# “The audit facility will continue to write new audit records to the db2audit.log file,
# and these records will have a timestamp that is later than YYYYMMDDHH. Pruning records from the db2audit.log file
# that you have already extracted prevents you from extracting the same records a second time.
# All audit records that are written after YYYYMMDDHH will be written to the .del files the next time you extract the audit data.”

# ==== AUDIT TABLE PRUNE TIMESTAMP ====
# Prune rows that will be picked up the next time we import from next audit extract
# (thereby eliminating dupes)
db2 -vm "DELETE FROM DBA.AUDIT WHERE TIMESTAMP > TIMESTAMP('${YYYYMMDDHH}0000')"   >>  $outputLog  2>&1
db2 -vm "DELETE FROM DBA.OBJMAINT WHERE TIMESTAMP > TIMESTAMP('${YYYYMMDDHH}0000')"   >>  $outputLog  2>&1
db2 -vm "DELETE FROM DBA.SECMAINT WHERE TIMESTAMP > TIMESTAMP('${YYYYMMDDHH}0000')"  >>  $outputLog  2>&1
db2 -vm "DELETE FROM DBA.SYSADMIN WHERE TIMESTAMP > TIMESTAMP('${YYYYMMDDHH}0000')"  >>  $outputLog  2>&1
db2 -vm "DELETE FROM DBA.VALIDATE WHERE TIMESTAMP > TIMESTAMP('${YYYYMMDDHH}0000')"  >>  $outputLog  2>&1

# ==== PRUNE AUDIT DATA TO FILTER  ====
# Filter Tasks
# All of these following DELETE steps are environment-specific and NONE may apply to you.
# These are only examples of what you might consider doing.
# Here is where you delete repetitive, high volume security events that you
# understand and do not care about. Doing this improves the signal to noise ratio.

# DO NOT JUST PASTE THESE INTO YOUR SCRIPT! THEY ARE EXAMPLES ONLY:

# Filters for DBA.OBJMAINT
# We know AUTHID = MYAPPAPL and OBJSCHEMA = SESSION and OBJNAME = SCOPEACCESSFORROLE
# and OBJTYPE = TABLE is a normal operation (use of global temporary tables by the application)
# so we will delete those rows.
db2 -vm "delete from DBA.OBJMAINT where AUTHID = 'MYAPPAPL' and OBJSCHEMA = 'SESSION' and OBJNAME = 'SCOPEACCESSFORROLE' and OBJTYPE = 'TABLE'"   >>  $outputLog  2>&1

# Eliminate from DBA.SYSADMIN the DB2SET operations that occur every 10 seconds – these
# are probably caused by Veritas Cluster Services (VCS)
db2 -vm "delete from DBA.SYSADMIN where EVENT = 'DB2SET' and USERID = 'db2'"   >>  $outputLog  2>&1

# Eliminate from DBA.SYSADMIN operations associated with snapshot sys admin views being taken every 5 minutes
db2 -vm "delete from DBA.SYSADMIN where EVENT in ( 'ESTIMATE_SNAPSHOT_SIZE', 'GET_SNAPSHOT' )"   >>  $outputLog  2>&1

# Filter out expected and high volume events in the environment.

# Eliminate from DBA.VALIDATE where AUTHID = ‘DB2’ or ‘DB2CERAPHITE_PRD’
# (graph ID) or ‘MYAPPAPL’ (application ID) and STATUS = 0
db2 -vm "delete from DBA.VALIDATE where AUTHID in ( 'DB2', 'DB2CERAPHITE_PRD', 'MYAPPAPL' ) and STATUS = 0"   >>  $outputLog  2>&1

# Eliminate CHECK_GROUP_MEMBERSHIP events. LDAP DB2 in my example environment is not set up correctly for groups
# so there are many -1092 sqlcode events that still occur even with successful transactions.
db2 -vm "delete from DBA.VALIDATE where EVENT = 'CHECK_GROUP_MEMBERSHIP'"   >>  $outputLog  2>&1

# ==== CLEANUP / EXIT ====
db2 connect reset >>  $outputLog  2>&1
db2 terminate >>  $outputLog  2>&1
echo "db2audit_extract_load FINISHED at `date`" >> $outputLog
===> END script db2audit_extract_import.ksh

Step 4: Report on and maintain your data

This part is up to you. You may decide to trigger alerts based on detected security events, or the data may be fed to a dedicated security interface tool like Dragon Squire.

Remember to monitor the growth of your audit tables. It is probably a good idea to purge data based on your organization’s purge criteria. Remember to add these tables to your scheduled RUNSTATS and REORGs, where appropriate.

Here is the script that I wrote to purge my own audit tables: db2audit_tables_prune.ksh
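
As a rough sketch of such a purge (not the actual db2audit_tables_prune.ksh; it assumes a 90-day retention and the five tables above, so substitute your own criteria):

#!/bin/ksh
# Illustrative purge sketch: delete audit rows older than the retention period
db2 connect to MYDB
for table in AUDIT OBJMAINT SECMAINT SYSADMIN VALIDATE; do
    db2 -v "DELETE FROM DBA.$table WHERE TIMESTAMP < CURRENT TIMESTAMP - 90 DAYS"
done
db2 connect reset
db2 terminate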