- Environments
- visitor environment
- host environment
- visitor environment
- Hosts
- application host
- DBMS host
- LeasePak
- End of Period
- General
- OS
- shell
- command prompt
- OS Utilities
- df
- disk drive
- filesystem
- Users and Roles
- System Administrator
Common Usage
The most common usage of the df command depends on the OS:
HP-UX
Linux
Solaris
IMPORTANT NOTE
Make Sure There Is Enough Disk Space for End of Period
The administrator should carefully monitor disk space on both the application host and the DBMS host. Any filesystem showing more than 90% in the Use% column is in danger of running out of space during End of Period processing. If the usage pattern of the host includes periods of heavy use, frequent checks of available space are advised. Any filesystem showing more than 95% in Use% should be regarded as critical.
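The thresholds above can be checked automatically. The following is a minimal sketch (the function name check_df is ours, not a NetSol utility) that reads df -P style output on stdin and flags any filesystem over the warning and critical thresholds:

```shell
# Sketch: flag filesystems over the 90% warning / 95% critical
# thresholds described above. check_df is an illustrative name.
check_df() {
    awk 'NR > 1 {
        use = $5
        sub(/%/, "", use)                        # "96%" -> "96"
        if (use + 0 > 95)
            printf "CRITICAL %s%% %s\n", use, $6
        else if (use + 0 > 90)
            printf "WARNING %s%% %s\n", use, $6
    }'
}

# Typical invocation on the host:
# df -P | check_df
```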
Running df
Run df as shown above for the OS of the host. For example, on Linux:
[lpuser:~] df
All three forms of the command produce similar output, for example as follows:
[lpuser:~] df
Filesystem                       1K-blocks     Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00    8062392   657072   6989168   9% /
/dev/md0                            194366    25394    158937  14% /boot
/dev/mapper/VolGroup01-LogVol01    4128448   139468   3779268   4% /tmp
/dev/mapper/VolGroup01-LogVol00    8256952  2653856   5183668  34% /usr
/dev/mapper/VolGroup01-LogVol02    4128448  2565024   1353712  66% /var
/dev/mapper/VolGroup01-LogVol03   20642428  1245624  18348228   7% /home
/dev/mapper/VolGroup01-LogVol04   25803068  8847068  15645280  37% /opt
/dev/mapper/VolGroup01-LogVol05   77409288  4153856  69323272   6% /opt/nst
/dev/mapper/VolGroup01-LogVol06   25803068 12881880  11610468  53% /opt/oracle/oradata
/dev/mapper/VolGroup01-LogVol07   25803068   531904  23960444   3% /opt/sybase/sybdata
tmpfs                              4087568     5708   4081860   1% /dev/shm
netfs:/export/nfs/backup         371902464 48316928 304694272  14% /backup
[lpuser:~]
cleanup and cleanse_s7
Top df – disk space Clean up Queue Manager
Releasing held jobs Monthly RHR maintenance
Clean up Cores Logs needing supervision
Cleaning Up Queue Manager Temporary Files
- LLDB
- LeasePak
- LeasePak client
- End of Period
- General
- shell
- command prompt
- process
- process ID
- OS Utilities
- cron
- crontab
- shell
- disk drive
- temporary directory
- System Administrator
- Queues
- system spooler
- Queue Manager configuration
- Logical names
- logicals tables
- Logical names
- Maintaining Queue Manager
- cleanse_s7
- cleanup
Run cleanse_s7 immediately before executing End of Period. The recommended form is the first usage, cleanse_s7 -a, as this performs all of the tasks that NetSol recommends be performed on the Queue Manager in preparation for EOP.

cleanse_s7 may be run only by $NSTADMIN. It does not require any LLDB services.
Common Usage
cleanup

This is an example of running cleanup:

cleanup

cleanup removes temporary files from the Queue Manager temporary directories that belong to processes that are no longer running. It is important to remove these files: if a new process starts with the same process ID as one embedded in a leftover file, it will abort because it sees the file as already in use. This leads to multiple, random, and confounding failures in almost any LeasePak process, from logging in to the LeasePak client to running End of Period.

Typically, the administrator sets up a crontab job to execute cleanup periodically, about once every hour or two. See the example entries below.
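The stale-file test described above can be sketched as follows; this is our approximation of the idea, not cleanup's actual code. A temporary file whose name embeds a process ID is stale when no process with that PID is running:

```shell
# Sketch of the stale-file test described above (an approximation,
# not cleanup's actual implementation).
is_stale() {
    pid=$1
    # kill -0 sends no signal; it merely tests whether the PID exists.
    # (Simplification: a live process owned by another user, where
    # kill fails with EPERM, would also be reported as stale here.)
    if kill -0 "$pid" 2>/dev/null; then
        return 1    # process alive: the file is still in use
    else
        return 0    # no such process: the file is stale
    fi
}
```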
cleanse_s7

This is the most common example of using cleanse_s7:

first form

cleanse_s7 -a

where:

- -a – indicates that all functions that can be performed on the Queue Manager should be performed. It is the equivalent of the second usage below.
second form

cleanse_s7 -sptldrq (any combination of one or more of these options)

where:

- -s indicates that the stop queues function should be executed
- -p indicates that the .JOB files in the system spooler directories are to be removed, effectively clearing the queues
- -t indicates that the VXRT.LOG (the Queue Manager error log) is to be truncated
- -d indicates that certain temporary files more than 24 hours old are to be removed
- -l indicates that the logicals tables are to be removed
- -r indicates that the system spooler should be restarted
- -q indicates that the individual queues should be restarted
third form

cleanse_s7 (with no options)

which is the equivalent of cleanse_s7 -sptdrq, and performs only the tasks supported in earlier versions of the program: all of the above except -l (remove logicals tables).

To summarize the three forms for running cleanse_s7, refer to this table:
Form | Options given | Options executed |
first | -a | s,p,t,l,d,r,q |
second | -sptldrq (one or more) | s,p,t,l,d,r,q (as specified) |
third | none | s,p,t,d,r,q |
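The mapping in the table above can be illustrated with a small shell sketch (ours, not NetSol's actual option parser):

```shell
# Illustration of the table above: which option letters each invocation
# form of cleanse_s7 ends up executing. Not the real parser.
expand_opts() {
    case "${1-}" in
        -a)   echo "sptldrq" ;;   # first form: everything
        -?*)  echo "${1#-}" ;;    # second form: exactly what was given
        "")   echo "sptdrq" ;;    # third form: legacy set, no l
    esac
}
```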
Running cleanup
Here is an example of running cleanup:
cleanup

cleanup produces no output.
Running cleanse_s7

This is an example of running cleanse_s7:

cleanse_s7 -a

This command produces output similar to the screen-print below.
[nsadm77a:~] cleanse_s7 -a
2011-10-15 14:22:46 cleanse_s7: Start: -sptdlrq
RT_CONFIG=/opt/nst/qm/v65a/qm_3_17/library/Config, LNMDIR=/tmp/qm/v65a, JSPDIR=/opt/nst/qm/v65a/qm_3_17/spool
2011-10-15 14:22:46 cleanse_s7: Stopping queues ...
VX/DCL - DEC VMS DCL Emulation for Unix
Copyright (C) 1985-1995 Isleworth Ltd. All Rights Reserved
2011-10-15 14:22:52 cleanse_s7: Purging spooler .JOBS ...
2011-10-15 14:22:53 cleanse_s7: Truncating VXRT.LOG ...
2011-10-15 14:22:53 cleanse_s7: Purging JOB* PD_* PT_* SOR* *ERR and PJOB* files ...
2011-10-15 14:22:53 cleanse_s7: Logical name tables ...
2011-10-15 14:22:53 cleanse_s7: Restarting spooler ...
VX/DCL - DEC VMS DCL Emulation for Unix
Copyright (C) 1985-1995 Isleworth Ltd. All Rights Reserved
2011-10-15 14:23:00 cleanse_s7: Restarting queues ...
VX/DCL - DEC VMS DCL Emulation for Unix
Copyright (C) 1985-1995 Isleworth Ltd. All Rights Reserved
Batch queue SYS$BATCH, running
Batch queue LP$EOP1, running
Print queue SYS$BLACKHOLE, running, on LPBH:, mounted form DEFAULT
Print queue SYS$PRINT, running, on LPSP:, mounted form DEFAULT
Print queue DEV$PRINT, running, on LP5SI:, mounted form DEFAULT
Print queue DEVL$PRINT, running, on LP5SIL:, mounted form LANDSCAPE
2011-10-15 14:23:24 cleanse_s7: End
[nsadm77a]
Here are examples of crontab entries to run cleanup every two hours at 53 minutes after the hour.
# Linux RHEL5
53 */2 * * * root ". ~nsadmnna/.lpprofile; cleanup" >/dev/null 2>&1

# HP-UX, Solaris, & Linux RHEL4
53 1,3,5,7,9,11,13,15,17,19,21,23 * * * ". ~nsadmnna/.lpprofile; cleanup" \
    >/dev/null 2>&1
Releasing Held Jobs
Releasing jobs held on a batch queue
- Hosts
- application host
- LeasePak
- LeasePak client
- End of Period
- General
- job
- shell
- command prompt
- Users and Roles
- LeasePak administrative users
- NSTADMIN
- LeasePak administrative users
- Queue Manager
- Queues
- batch queue
- queue start and stop
- show queue
- Queue Statuses
- Hold
- Release jobs
There are occasions when there may be jobs on batch queues with the status of Holding (for example, if End of Period was submitted with the state of Hold). On such occasions, the operator should use the following procedure to release the held jobs so that they begin executing.
Procedure to Release Held Jobs
- Log on to the application host as $NSTADMIN.

- Enter DCL by executing:

  [nsadm77a:~] dcl

- At the $ prompt, execute:

  $show queue/all

  This will display all of the queues and any jobs on them. If the queue is already known, the operator may execute:

  $show queue/all queue-name

  where:

  * queue-name – the queue known to have a held job

  This is an example of the output of this command:

  $show queue/all
  Batch queue SYS$BATCH, running
    Jobname   Username   Entry  Status
    -------   --------   -----  ------
    TESTHOLD  jstettner  2480   Holding
  Batch queue LP$EOP1, running
  Batch queue LP$EOP2, running

- The operator makes note of the entry numbers of jobs in Holding status that are to be released.

- For each job to be released, the operator executes:

  $set entry entry-number/release

  The operator may then use the show queue command to verify that the job is executing or has executed.

- When finished, the operator exits DCL by executing:

  $logout
db_truncate_rhr
Purge RHR table monthly
- Environments
- production environment
- test environment
- visitor environment
- host environment
- setup_new_env
- LLDB
- NetSol utility script
- db_create
- db_truncate_rhr
- LeasePak tables
- RHR
- LeasePak
- End of Period
- End of Month
- End of Period
- Users & Roles
- System Administrator
- LeasePak administrative users
- NSTDBA
- LeasePak database administrator
- DBO
- Logical database owner
- NSTDBA
- password
Run db_truncate_rhr to purge the RHR table from the LLDB for a particular environment after Month End processing.

db_truncate_rhr may only be run by $NSTDBA, the LeasePak database administrator. It requires that the operator know the password for the database owner, or DBO.

db_truncate_rhr determines the appropriate LLDB to use via the environment created by setup_new_env.

The db_truncate_rhr command may only be run from production and test environments; visitor environments are not allowed to perform data-destroying tasks on the LLDBs they point to. The original, or host, environment given when the visitor environment was created must be used for data-destroying tasks such as db_truncate_rhr.
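The restriction can be pictured as a simple guard. This is purely illustrative: the function name and the environment-type strings below are our assumptions, not part of the utility:

```shell
# Illustrative guard of the kind described above: refuse to run a
# data-destroying task from a visitor environment. The function name
# and the type strings are hypothetical.
require_host_env() {
    case "$1" in
        production|test)
            return 0 ;;
        visitor)
            echo "refusing: re-run from the host environment" >&2
            return 1 ;;
        *)
            return 1 ;;
    esac
}
```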
IMPORTANT NOTE
End of Month processing must be completed satisfactorily
The Administrator must be absolutely certain that Month End has completed successfully and that the RHR records are no longer needed before running this command.
Common Usage
The most common way db_truncate_rhr is used is:

db_truncate_rhr environment-name

where:

- environment-name – the name of the environment containing the LLDB with the RHR table that needs to be purged.
- Other inputs

  - DBO's password – the operator will be prompted for the database owner's password, which is usually assigned when db_create was run to create the LLDB.
[nsdba77a:~] db_truncate_rhr prod
2011-10-15 22:42:23 db_truncate_rhr: Truncate rhr table in database lpr_prod of environment prod
2011-10-15 22:42:23 db_truncate_rhr: Running commands as DBO: lpr_prod
Database Owner 'lpr_prod' password: database owner's password
2011-10-15 22:42:31 db_truncate_rhr: End
[nsdba77a:~]
Cleanup Cores
Purge the system of these huge files
- Environments
- LeasePak
- LeasePak instance
- LeasePak driver
- lpadriver.exe
- End of Period
- lpaeopdrvr.exe
- Environment directories
- ueop
- ueop/com
- ueop
- General
- OS
- shell
- command prompt
- process
- process ID
- core files
- OS Utilities
- cron
- crontab
- find
- cron
- disk drive
- disk space
- Users and Roles
- super-user
- System Administrator
- OS login account
- $HOME directory
Core Files and Dumping Core
A core file or core dump is a file created by the OS when a program terminates under certain abnormal conditions. See elsewhere in this System Administration Guide for information about how to enable or disable the creation of core files by LeasePak. If cores are enabled in LeasePak, then every time a driver aborts with an error, LeasePak itself causes a core dump.
A core file is a snapshot of memory that was in use by a program at the moment of its termination. With this image, programmers can often determine the immediate cause of an error and sometimes even what led up to the error.
Because the LeasePak server programs, or drivers, are so complex, they are very large and require large amounts of memory for data beyond what is required for the program itself. This means that core files can be huge and can occupy a great deal of disk space.

The administrator must be constantly vigilant with regard to core files.
What core files look like and where they lurk
Cores can occur in any directory and can be caused by any program, not just by LeasePak. However, LeasePak cores are some of the largest the administrator is likely to see, so they are the main focus here. LeasePak cores most often turn up in:

- the user's home directory, most often left by interactive drivers
- the environment $ueop or $ueop/com directory, most often left by End of Period drivers
Core files are named differently depending on the OS:
- core files on HP-UX are named core
- core files on Solaris are named core
- core files on Linux are named core.PID, where PID is the process ID of the process that created the core file
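A predicate covering both naming conventions can be sketched as follows (the function name is ours, for illustration only):

```shell
# Matches both core-file naming schemes listed above:
# "core" (HP-UX, Solaris) and "core.PID" (Linux).
is_core_name() {
    case "$1" in
        core)         return 0 ;;
        core.[0-9]*)  return 0 ;;
        *)            return 1 ;;
    esac
}
```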
To manage core files, the administrator should:

- Make sure cores are enabled in LeasePak only when necessary and when constructive use of them is planned (cores are always enabled in End of Period)
- Run a job as root at least daily, removing core files older than a day or two
# . ~nsadm77a/.lpprofile
# find /home/ $TOPDIR/ -name core -mtime +2 \
    -exec ls -l {} \; -exec rm -f {} \; >> /tmp/core.log 2>&1

This scans the /home directories and the LeasePak instance directory for core files older than 2 days, prints their name, size, and timestamp, and then removes them, recording the resulting information in a log file.
On Linux, where cores are named core.PID, use:

# . ~nsadm77a/.lpprofile
# find /home/ $TOPDIR/ -name 'core.[0-9]*' -mtime +2 \
    -exec ls -l {} \; -exec rm -f {} \; > /tmp/core.log 2>&1
Log Files
Monitoring Growth
LeasePak's various log files
Not all of the log files listed in the following table have high growth rates or large sizes; some are trivial. For each log, the table includes an indicator of its potential for excessively rapid growth, excessive size, or an excessive number of individual log files.

These log files should be monitored on a regular schedule. If the Administrator wishes, a cron job or jobs can be set up to check for the problems outlined below. Until the growth characteristics of these logs are clear to the administrator, it is advised that they initially be checked manually on a monthly basis, and after any significant batch jobs.
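A cron-driven size check can be sketched as follows; the function name and the 100 MB threshold in the usage note are our illustrative choices, not NetSol recommendations:

```shell
# Sketch: list files under a directory larger than a threshold in KiB.
# The name check_log_size and the thresholds are illustrative only.
check_log_size() {
    dir=$1; kbytes=$2
    # -size +Nk selects files strictly larger than N kibibytes
    find "$dir" -type f -size +"$kbytes"k -exec ls -l {} \;
}

# Example: flag logs in $msilog larger than roughly 100 MB
# check_log_size "$msilog" 102400
```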
Potential Issues Concerning LeasePak-related Log files
These Potential Issue Codes are used in the Table of Log Files below; they may be combined to signify multiple potential problems.
CODE | DESCRIPTION |
G | excessively rapid growth |
S | excessive size |
Q | quantity of log files may become an issue |
X | Debug logs that should not normally be used |
H | logs Handled by cleanse_s7 |
++ | high potential for a problem |
+ | moderate potential for a problem |
~ | low potential for problem |
Table of log files
| SUB-SYSTEM | DIRECTORY | LOGFILE | POTEN. |
| LLDB Environments | $msilog | admin.log | ~ |
| | | dblib_lib.log | ~ |
| | | dbms.log | ~ |
| | | job.log | GS++ |
| | | leasepakd.log | GS+ |
| | | mPowerd.log | G+ |
| | | ENVNAME.log | GS++ |
| Conversions | $msilog | ENVNAME_convYYMMDDHHMM.log | SQ+ |
| Oracle 9i | BACKGROUND_DUMP_DEST/ | alert_ORADB.log | Q+ |
| | $ORACLE_HOME/network/log/ | listener.log | SQ+ |
| | USER_DUMP_DEST/ | *.trc | X++ |
| Oracle 11g | $ORACLE_BASE/diag/rdbms/ORADB/ORAINST/alert/ | log.xml | Q+ |
| | $ORACLE_BASE/diag/rdbms/ORADB/ORAINST/trace/ | alert_ORADB.log | Q+ |
| | $ORACLE_BASE/diag/tnslsnr/HOST/listener/alert/ | log.xml | Q+ |
| | $ORACLE_BASE/diag/tnslsnr/HOST/listener/trace/ | listener.log | SQ+ |
| | DIAGNOSTIC_DEST/... | *.trc | X++ |
| DAVOX | $ueop/log | davox_dl.log | ~ |
| | $udata/ | m2_BBBB_YYMMDD_HHMMSS.log | ~ |
| EOPS-II | $ueop/log | eop_run.log | G~ |
| | | pPOR_status.log | GSQ+ |
| EOPS-III | $ueop/log | eops_event.log | GS++ |
| | | pPOR_session.log | SQ+ |
| | | eops_proc.log | S+ |
| | | eops_gen.log | S+ |
| | | eops_dbg.log | X+ |
| | | eops_state.log | X+ |
| | | event_handler.log | X++ |
| Queue Manager | /tmp | VXRT.log | H~ |
| | Config:LNMDIR | job_*, PD_*, PT_* | HQ+ |
| | Config:JSPDIR | JSPLOG, QMGRLOG.*, PJOBLOG.* | Q+S++ |
| LeasePak | $HOME | leasepak_error.log | GS++ |
| | | lxcommand_PID.log | XQ+ |
| | | lxbuffer_PID.log | XQ+ |
| | | lxproc_PID.log | XQ+ |
| | | lpadbdrvr_PID.log | XQ+ |
| | | msi_xsqlPID.log | XQ++ |
| init services | $NSTDIR/log | nst_dbora_startup_shutdown.log | ~ |
| | | nst_dbsyb_startup_shutdown.log | ~ |
| | | nst_qm_ID99X.log | ~ |
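When trimming any of these logs while the owning daemon is still running, note that removing the file does not free disk space until the daemon closes it; truncating in place does. A minimal sketch (the function name is ours):

```shell
# Sketch: shrink a log that a process may still hold open. rm would
# leave the writer attached to an unlinked inode, so the space would
# not be freed until the process exits; truncating in place avoids
# that. (A writer not in append mode may leave a sparse gap.)
truncate_log() {
    : > "$1"
}
```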