Queue Manager – Queue Definitions
Queue Manager

Queue Definitions

IPC


msg_queues and shmem

Topics

  • Queue Manager
    • service script
    • nst_qm_{ID}76a
    • init.d
      • rc1.d
      • rc3.d
    • $INITDIR
    • $QMTMPDIR
    • $QMSPOOLDIR
    • $QMJOB_LIM_FILE
    • $QLIM
    • Sector7
      • jspctl
      • qmgr
    • DCL
      • command introducer
      • command qualifier
      • DCL Commands
        • init
        • wait
        • set
        • define
        • if
      • com-files
        • start_qmgr.com
        • start_queues.com
        • stop_queues.com
    • Inter-Process Communication
      • shared memory
      • message queue
      • key
      • ipcs
      • ipcrm
The Queue Manager uses the OS inter-process communication (IPC) facilities to manage the queues and their workload, and to manage the pseudo-devices with which the Queue Manager interacts.
The pseudo-devices and jobs are handled using shared memory. Both database systems' server configurations also require shared memory, which creates a potential conflict with the Queue Manager's requirements. If the database server and the Queue Manager reside on the same host (called a combined host), then care must be taken to reserve adequate shared memory for the Queue Manager, because the database servers prefer to consume all available shared memory. When configuring one of the database servers, it is important to leave some of the combined host's shared memory resource for the Queue Manager. Possibly the best strategy is to configure and activate the queues before configuring the database server, so that the Queue Manager is not left with too little resource.
If the database server and Queue Manager reside on different hosts (split hosts), then there will be no conflict over shared memory between the database system and the Queue Manager.
The individual queues are managed by the system spooler using a form of IPC called message queues.
IMPORTANT NOTE
The Queue Manager queues and the message queues of the IPC system are not the same!
They are both formally "queues", as in "first-in/first-out" data structures, but they are otherwise completely separate and distinct. True, the Queue Manager does use message queues to manage its job and print queues, but there is no one-to-one or intrinsic relationship.
Both types of IPC, shared memory and message queues, are identified by numeric keys: arbitrary numbers that cooperating applications share among themselves so that they can attach to the same shared resource. Thus each LeasePak driver running within a LeasePak instance on a single application host must know the key to the shared memory that it shares with all the other drivers in the same instance.
This key is determined by the SETUP program when LeasePak is installed. The administrator is prompted for the key by the configuration prompt, Shared memory key for Queue Mgr IPC [76000]:. It is recommended that the administrator accept the offered default value unless it is known that this value will conflict with other users of the resource.
The key input at installation is generally available in the environment variable $QMSHMEMKEY, and is stored in the Queue Manager configuration in the Config file located in the $QMDIR/library directory. The parameter name in the Config file is SYSTEM.
The actual keys used by the Queue Manager for shared memory are $QMSHMEMKEY and $QMSHMEMKEY + 1; the key used for the message queues is $QMSHMEMKEY.
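As an illustration, the following one-liner (a sketch only, assuming a POSIX shell with $QMSHMEMKEY set in the environment and $AWK pointing to a working awk, as elsewhere in this document) prints all three keys in the hexadecimal form that ipcs displays:
  echo $QMSHMEMKEY $(($QMSHMEMKEY + 1)) | $AWK '{printf "shmem keys: 0x%08x 0x%08x; msg queue key: 0x%08x\n", $1, $2, $1}'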
Use the ipcs command to display the status of the various IPC resources in use on the application host. Use the ipcrm command to remove any unneeded or obsolete IPC instances. ipcrm should be used sparingly if at all. It is often better to reboot the application host than to use ipcrm.
The ipcs command may be executed by any user with shell access to the application host. The ipcrm command may be executed only by root on the application host.
These commands require no services of the database server, nor of the Queue Manager. All LeasePak queues and drivers should be terminated before using the ipcrm command to remove any Queue Manager resources. The ipcs command may be run at any time.

Common Usage

There are two primary OS commands for handling the IPC resources: ipcs and ipcrm. The most common ways to use ipcs are:

[nsadm76a:~]  ipcs -mbco

where

  • -mbco - these options tell ipcs to report on shared memory (-m), including maximum sizes (-b), creator information (-c), and outstanding usage such as the number of attached processes (-o)

and:

[nsadm76a:~]  ipcs -q

where

  • -q - this option tells ipcs to report on message queues

ipcs shows the status of the various IPC resources in use. To determine which ones belong to the Queue Manager, one needs to find the keys in use. The sizes of the resources are also shown by ipcs. Unfortunately, the key is displayed in hexadecimal notation, while the Queue Manager configuration in Config stores it in decimal notation.

To convert the decimal notation key to hexadecimal notation, the operator may use the following small script at the command line:

[lpuser:~]  $AWK '{printf "%08x\n", $1}'

This little program waits for operator input. When the operator types the key (the SYSTEM value from the Config file) and presses ENTER, the script prints the value of the key in hexadecimal notation, then waits for further input. Press CTRL-D when done.
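
For a one-shot, non-interactive conversion, the key can be piped in instead. For example, the default key of 76000 converts as follows (same assumptions as the script above):

[lpuser:~]  echo 76000 | $AWK '{printf "%08x\n", $1}'
000128e0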

The most common way to use ipcrm is:

#  ipcrm -opt key [-opt key] ...

where

  • opt is either -M for shared memory segments or -Q for message queues.
  • key is the IPC key that the administrator wishes to remove. Multiple keys may be listed, each preceded by an option.
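
For example, to remove both of an instance's shared memory segments and its message queue in a single invocation (the keys shown are the illustrative values from the screen prints below; the message queue key is $QMSHMEMKEY itself, as noted above):

#  ipcrm -M 0x000119A4 -M 0x000119A5 -Q 0x000119A4
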
To determine the amount of shared memory the Queue Manager will require, one may attempt to calculate it from the two determining factors in the Config file, MAXDEV and MAXJOB. This is notoriously unreliable, however, because the Queue Manager package silently makes adjustments; the actual amount of shared memory required can really only be determined by using ipcs, which prints the segment sizes actually allocated by the Queue Manager under the SEGSZ column.
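
As a convenience, the following fragment (a sketch only, under the same shell assumptions as above) filters the ipcs report down to the Queue Manager's two segments so that their SEGSZ values can be read and summed:

  K1=$(echo $QMSHMEMKEY | $AWK '{printf "0x%08x", $1}')
  K2=$(echo $(($QMSHMEMKEY + 1)) | $AWK '{printf "0x%08x", $1}')
  ipcs -mbco | grep -iE "$K1|$K2"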

Calculating Device and Job Slots

Generally the default Config:MAXDEV parameter is sufficient unless there are a great many (> 2000) terminals to support. The Config:MAXJOB parameter is calculated via the following formula:

Calculation of Queue Manager Config:MAXJOB

MAXJOB = (2 x CU) + (4 x JL x NQ) + PQ

where:

  • CU = concurrent users
  • JL = queue Job Limit
  • NQ = number of Batch Queues
  • PQ = number of Print Queues
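
For example, with purely illustrative values of 25 concurrent users, a Job Limit of 10, 3 Batch Queues, and 2 Print Queues, the formula gives:

  MAXJOB = (2 x 25) + (4 x 10 x 3) + 2 = 50 + 120 + 2 = 172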

The value derived from this formula should be taken as a starting place for tuning the Queue Manager instance. It is advisory only.

Running ipcs

Log on to the application host as $NSTADMIN.

Run ipcs as follows to display shared memory:

[nsadm76a:~]  ipcs -mbco

The command will produce a display similar to the screen print below.

  [nsadm76a:~] ipcs -mbco
  IPC status from /dev/kmem as of Fri Oct  2 09:24:42 2011
  T    ID     KEY        MODE        OWNER   GROUP   CREATOR  CGROUP NATTCH   SEGSZ
  Shared Memory:
  m  1548 0x652821f4 --rw-------    sybase  sybase    sybase  sybase    6 554074112
  m   525 0x000119A4 --rw-rw-rw-  nsadm76a     nst  nsadm76a     nst   13    307808
  m  4624 0x000119A5 --rw-rw-rw-  nsadm76a     nst  nsadm76a     nst   13     47112
  m  4625 0x00000000 D-rw-------    sybase  sybase    sybase  sybase    2     65540
  m  4626 0x00011D28 --rw-rw-rw- jstettner     nst jstettner     nst    0    307808
  m  4627 0x00011D29 --rw-rw-rw- jstettner     nst jstettner     nst    0     47112
  [nsadm76a:~] 
Determining the size of the Queue Manager's shared memory segments

Key                      Config:SYSTEM   Release   SEGSZ
0x000119A4; 0x000119A5   75000           v75a      307,808 + 47,112 = 354,920
0x00011D28; 0x00011D29   66000           v66a      307,808 + 47,112 = 354,920

Also, under NATTCH, the operator can determine how many application connections there are to each resource. Here 13 happens to be the number of queues plus the system spooler, so the operator can see that the queues of the first instance are likely running; the segments showing NATTCH 0 belong to an instance whose queues are not currently attached.

Run ipcs as follows to display message queue information:

[nsadm76a:~]  ipcs -q

The command will produce a display similar to the screen print below.

  [nsadm76a:~] ipcs -q
  IPC status from /dev/kmem as of Fri Oct  2 09:26:45 2011
  T ID     KEY        MODE         OWNER GROUP
  Message Queues:             
  q  0 0x3c300734 -Rrw--w--w-       root  root
  q  1 0x3e300734 --rw-r--r--       root  root
  q  4 0x000119A4 --rw-rw-rw-       root  root
  q  5 0x000119A5 --rw-rw-rw-       root  root
  q  7 0x00011D28 --rw-rw-rw-  jstettner   nst
  q  8 0x00011D29 --rw-rw-rw-  jstettner   nst
  [nsadm76a:~] 

IMPORTANT NOTE

The ipcrm command is given here for the sake of thoroughness. NetSol does not know of any particular circumstance where its use is indicated in the maintenance of LeasePak. The administrator should use this command only if there is a specific known benefit to its use.

Running ipcrm

  1. Log on to the application host as root.
  2. Execute ipcs -mboc as explained above.
  3. Run ipcrm as follows to remove shared memory keys 0x000119A4 and 0x000119A5:
    #   ipcrm -M 0x000119A4 -M 0x000119A5
  4. Re-execute the ipcs -mboc command.

Items 2 through 4 above produce output similar to the following:

  # ipcs -mboc
  IPC status from /dev/kmem as of Mon Oct  5 00:14:08 2011                                     
  T    ID     KEY        MODE        OWNER   GROUP   CREATOR  CGROUP NATTCH   SEGSZ
  Shared Memory:                                                                      
  m     0 0x413007ba --rw-rw-rw-      root    root      root    root    0       348
  m     1 0x4e0c0002 --rw-rw-rw-      root    root      root    root    1     61760
  m     2 0x4134487f --rw-rw-rw-      root    root      root    root    1      8192
  m   525 0x000119A4 --rw-rw-rw-  nsadm75a     nst  nsadm75a     nst   13    307808
  m  4624 0x000119A5 --rw-rw-rw-  nsadm75a     nst  nsadm75a     nst   13     47112
  m  4625 0x00000000 D-rw-------    sybase  sybase    sybase  sybase    2     65540
  m  4626 0x00011D28 --rw-rw-rw- jstettner     nst jstettner     nst    0    307808
  m  4627 0x00011D29 --rw-rw-rw- jstettner     nst jstettner     nst    0     47112 
  # ipcrm -M 0x000119A4 -M 0x000119A5 
  # ipcs -mboc
  IPC status from /dev/kmem as of Mon Oct  5 00:15:51 2011                                     
  T    ID     KEY        MODE        OWNER   GROUP   CREATOR  CGROUP NATTCH   SEGSZ
  Shared Memory:                                                                      
  m     0 0x413007ba --rw-rw-rw-      root    root      root    root    0       348
  m     1 0x4e0c0002 --rw-rw-rw-      root    root      root    root    1     61760
  m     2 0x4134487f --rw-rw-rw-      root    root      root    root    1      8192
  m 26115 0x5e2c003a --rw-------      root    root      root    root    1       512
  m  4626 0x00011D28 --rw-rw-rw- jstettner     nst jstettner     nst    0    307808
  m  4627 0x00011D29 --rw-rw-rw- jstettner     nst jstettner     nst    0     47112 
Why aren't the Queue Manager's shared memory segments consistently owned?

In one screen print from ipcs above, we see the shared memory for a couple of LeasePak release versions. The owner of each release's segments, though, is different: one set of segments is owned by the LeasePak administrator account, the other by a regular user.

The reason is that the shared memory segments are owned by the process that first executed a Queue Manager task since the last time shared memory was cleared, usually when the application host was rebooted.

  • owned by root - probably started at boot by an init script
  • owned by $NSTADMIN - probably started by $NSTADMIN, possibly using cleanse_s7
  • owned by regular user - probably started by user launching LeasePak

About Batch Queue Definitions


Batch and Job Queues

Batch queues have the following features:

  • Job limit - the maximum number of concurrent jobs allowed to execute on the queue.
  • Queue name - the name by which the queue is known to the Queue Manager, and to the Queue Manager's clients, such as LeasePak.
  • Batch Queues are defined entirely within $QMDIR/com/start_queues.com.
  • Job Queue is synonymous with Batch Queue; they are to be distinguished from a Print Queue.
The Queue Manager, as installed on the application host by SETUP, contains fully configured batch queues targeting hardware and operations at NetSol's LeasePak laboratories. All of these entries are examples for the administrator's use in configuring entries for his or her site; they should not be used in their installed form.

IMPORTANT NOTE

The System Administrator must complete the Queue Manager configuration

The System Administrator must configure the Batch queues. The example values in the Queue Manager configuration files at install are inappropriate for site use.

LeasePak will not function properly using the installed configuration.

DCL Batch init - Batch Queue Definitions

The DCL init command has two forms as used in LeasePak. The first form, discussed here, is for creating the batch queues. It is:

$ init/que/start/job_limit=n/batch batch-queue-name

where

  • $ - the DCL command introducer
  • init - the actual command name
  • in DCL, /-characters introduce command qualifiers, which are like "-"-options in Unix; they are not file path component separators.
  • /que - indicates that a queue is to be created by init
  • /start - indicates that the individual manager process, qmgr, should be started
  • job_limit= - controls the maximum number of jobs that can run on the queue concurrently
  • n - indicates the value of job_limit
  • /batch - indicates the kind of queue being created
  • batch-queue-name - the name by which the queue will be known to the Queue Manager and its clients.
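
For example, the following entry creates and starts a batch queue named lp$eop01 (the conventional portfolio-queue name) that can run up to 10 jobs concurrently:

$ init/que/start/job_limit=10/batch lp$eop01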

start_queues.com


Defining and Starting the queues
The DCL init command both defines a queue and starts it. Thus, the queue has no existence separate from the processes that implement it.

Common Usage

The com-file start_queues.com has three distinct but related parts: the form definition section (first), the batch queue definition section (second), and the print queue definition section (third). The form definitions are used by the print queue definitions.

It should be noted that a com-file is a type of script, and so contains programming language and constructs.

It is essential that each com-file begin, before any other statements, with the DCL statement set noon. This sets up default error handling.

The syntax of the DCL com-file in general is important to note, especially:

  • the command introducer $ which must precede every command
  • the command continuation - which must terminate every command line that wraps to the next line
  • the comment character, !, which hides everything up to the end of the line, or until the next !

After every three queue-defining init commands, there should be a wait command to slow down the process for about 3-5 seconds, so that the system spooler can catch up. See Example below.

IMPORTANT NOTE

How many batch queues should there be?

LeasePak requires a queue named sys$batch. NetSol recommends that the administrator create a batch queue for each portfolio. The conventional name for the portfolio queues is lp$eopnn, where nn is the two-digit portfolio number.

Batch Queue Initialization Worksheet

Note the following values for writing batch queue initializations:

Name           Description                      Your Value          Notes
               basic command for batch queues   $ init/que/start    Combine this with the values below to make entries in start_queues.com
/job_limit=    maximum concurrent jobs          /job_limit=10       a number between 1 and 255; see the discussion here
/batch         queue name                       /batch lp$eopnn     lower-case alphanumeric, or $; conventionally lp$eopnn
etc.

Writing the Queue Initialization entries

Log on to the application host as $NSTADMIN.

Use vi to create a new queue definition file:

[nsadm76a:~]  vi qdef.new
  $ set noon
  $ init/que/start/job_limit=10/batch SYS$BATCH
  $ init/que/start/job_limit=10/batch lp$eop01
  $ init/que/start/job_limit=10/batch lp$eop02
  $ wait 00:00:03 

and so on for the remainder of the batch queues to be initialized.

In earlier versions of LeasePak, the wait commands used an abbreviated format for the time to delay, allowing commands such as wait 3 to delay 3 seconds. There have been reports that using this format can result in very long delays, as if the 3 were being taken to mean 3 hours. It is best to use the prescribed HH:MM:SS syntax.

Job Limit


Parallel Processing

CRITICAL NOTE

Batch Queue Job Limit Impacts EOP Performance

Possibly no other single setting can affect EOP performance as much as the setting of the queues' job limit parameter. NetSol recommends that the administrator set the job limit, a SETUP parameter, according to the following guidelines:

Guidelines for setting the Batch Queue Job Limit

The Batch Queue Job Limit is the maximum number of jobs that can run concurrently on that Batch Queue. Possible values for the Job Limit range from 1 to 255. The minimum recommended setting for the Job Limit is 10, though if circumstances warrant, it could be as low as 6. In many cases, the Job Limit should be set to a value greater than 10.

The optimal value for the Job Limit depends on multiple factors, such as:

  • system hardware configuration (number of CPUs, amount of main memory, etc.)
  • Database system configuration (for example, the Oracle buffer cache size)
  • total number of leases to be processed during EOP
  • LeasePak modules selected to run during EOP
  • number of portfolios running EOP
  • number of Batch Queues used by EOP
  • number of portfolios assigned to each EOP Batch Queue (A separate EOP Batch Queue for each portfolio is recommended)

The goal is to set the Job Limit high enough to effectively make use of the available hardware and database system resources, without overstressing those resources.

If the Job Limit is set too low, an insufficient number of jobs will run in parallel, leaving resources idle, thus decreasing the performance of EOP.

If the Job Limit is set too high, an excessive number of jobs will run in parallel, competing for hardware and database system resources; jobs then sit idle waiting for those resources, and EOP performance decreases.

There is an optimal Job Limit setting where the number of jobs running in parallel maximizes the performance of EOP. For each particular system, experimentation with different settings for the Job Limit is recommended to determine the optimal Job Limit setting.

How to set the job_limit

Working backwards from the point where the job limit becomes a parameter to start_queues.com: the sample start_queues.com shipped with LeasePak takes an optional job limit parameter. In the absence of the parameter, the job limit defaults to a reasonable level, and the com-file issues a warning that the administrator has not set a job limit. The administrator is, of course, fully in charge of the contents of the com-file, and has a range of choices for how to implement the job limit:

  • The administrator can hard code the job limits and even set different job limits for different queues within start_queues.com.
  • The administrator can hard code the job limit as a local variable within start_queues.com, and set all the queue job limits uniformly to that single value, or a set of local variables and set particular queues with particular variables.
  • The administrator can use start_queues.com as it is presently shipped, but make his or her own provisions for passing a job limit parameter to the queue startup.
  • The administrator can use start_queues.com and nst_qm_{ID}99x (such as nst_qm_{ID}76a) as they are shipped, but make other arrangements for setting up the contents of the $QMJOB_LIM_FILE.
  • Finally, the administrator can use start_queues.com and nst_qm_{ID}99x (such as nst_qm_{ID}76a) as they are shipped and rely on the default mechanism for providing the job limit:
    • SETUP queries the operator for the job limit, which was determined before SETUP is run.
    • Then SETUP creates an assignment in the file $QMDIR/library/qjob_limit.
    • The assignment sets the environment variable QLIM to the job limit from the interview:
      QLIM=10
      The assignment in the qjob_limit file is evaluated by nst_qm_{ID}99x and by the cleanse_s7 utility, and passed to start_queues.com when it is called to start the individual queues.
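As an illustration only (the shipped scripts may differ in detail), a shell fragment in the init script could evaluate the assignment and pass it along like this:

  # hypothetical sketch of how nst_qm_{ID}76a might consume qjob_limit
  . $QMDIR/library/qjob_limit          # evaluates the assignment, e.g. QLIM=10
  echo "Starting queues with job limit $QLIM"
  # ...the script would then pass $QLIM to start_queues.com as its P1 parameter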

The following code has been added to start_queues.com to handle the job limit parameter passed to it:
$ if P1 .eqs. "" then -
  write sys$output -
  "WARNING: Job Limit Parameter is empty; defaulting Queue Job Limit to 6"
$ if P1 .eqs. "" then P1 = 6
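
The effect is that when start_queues.com is invoked without a job limit (P1 empty), the first statement writes the warning to sys$output and the second assigns the default of 6 to P1, presumably for use as the /job_limit value by the init commands that follow.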

How to adjust the job_limit

Using vi, or even just echoing from the shell, the administrator can change the value used by nst_qm_{ID}99x and by the cleanse_s7 utility in order to fine-tune the value in $QMJOB_LIM_FILE. The file is owned by $NSTADMIN but is read-only; see Handling read-only files with vi to edit the file, or make it writable by $NSTADMIN (chmod 644 $QMDIR/library/$QMJOB_LIM_FILE) until the tuning effort has been completed.

Simply change the value assigned to QLIM to the desired number, write, quit, and test. To test, restart the queues with service nst_qm_{ID}99x restart, then check the job limit with service nst_qm_{ID}99x status, which should show the new value.
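
For the shell route, a minimal sketch (assuming the job limit file is $QMDIR/library/qjob_limit as described above, using the 76a instance as an example, and running the service commands with sufficient privilege):

  chmod 644 $QMDIR/library/qjob_limit          # make writable during tuning
  echo 'QLIM=12' > $QMDIR/library/qjob_limit   # 12 is an illustrative value
  service nst_qm_{ID}76a restart
  service nst_qm_{ID}76a status                # should report the new job limit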

IMPORTANT NOTE

Print queues have an inherent, immutable job limit of one (1)
This is because a single printer device cannot print more than one job at a time. Consequently, the job_limit qualifier is not valid with the DCL init commands that create printer queues.