queue(1) GNU Queue Version 1.20.1 www.gnuqueue.org queue(1)
NAME
queue and qsh - farm and batch-process jobs out on the local network
SYNOPSIS
queue [-h hostname|-H hostname] [-i|-q] [-d spooldir] [-o|-p|-n]
[-w|-r] -- command command.options
qsh [-l ignored] [-d spooldir] [-o|-p|-n] [-w|-r] hostname command
command.options
DESCRIPTION
This documentation is no longer being maintained and may be inaccurate
or incomplete. The Info documentation is now the authoritative
source.
This manual page documents the GNU Queue load-balancing/batch-processing
system and local rsh replacement.
The defaults for queue invoked with only a -- are immediate execution
(-i), wait for output (-w), and full-pty emulation (-p).
The defaults for qsh are slightly different: no-pty emulation is the
default, and a hostname argument is required. A plus (+) is the
wildcard hostname; specifying + in place of a valid hostname is the
same as not using an -h or -H option with queue. qsh is envisioned as
an rsh compatibility mode for use with software that expects an rsh-like
syntax. This is useful with some MPI implementations; see the section
MPI in the Info file.
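For example (the hostnames are placeholders for entries in your Host
Access Control File):
     qsh fast_host hostname
     qsh + hostname
The first runs the hostname command on fast_host; the second uses the
+ wildcard to let queue choose the host.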
The options are:
-h hostname
--host hostname
Force queue to run the job on hostname.
-H hostname
--robust-host hostname
Run job on hostname if it is up.
-i|-q
--immediate|--queue
Shorthand for the now spool directory (immediate execution) and the
wait spool directory (queued execution), respectively.
-d spooldir
--spooldir spooldir
Specifies the name of the batch-processing directory under the
spool directory, e.g., mlab.
-o|-p|-n
--half-pty|--full-pty|--no-pty
Toggle between half-pty emulation, full-pty emulation (default),
and the more efficient no-pty emulation.
-w|-r
--wait|--batch
Toggle between wait (stub daemon; default) and return (mail
batch) mode.
-v
--version
Print version information.
--help
Print the list of options.
GNU Queue is a UNIX process network load-balancing system that
features an innovative 'stub daemon' mechanism which allows users to
control their remote jobs in a nearly seamless and transparent
fashion. When an interactive remote job is launched, such as EMACS
interfacing with Allegro Lisp, a stub daemon runs on the local end. By
sending signals to the stub - including hitting the suspend key -
the process on the remote end may be controlled. Resuming the stub
resumes the remote job. The user's environment is almost completely
replicated: not only environment variables, but also nice
values, rlimits, and terminal settings are replicated on the remote
end. Together with MIT-MAGIC-COOKIE-1 (or xhost +) the system is X-
Windows transparent as well, provided the user's local DISPLAY variable
is set to the fully qualified hostname of the local machine.
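As a hedged illustration (mymachine.example.com stands in for your
machine's fully qualified name, and xterm for any X client):
     DISPLAY=mymachine.example.com:0.0
     export DISPLAY
     queue -i -w -p -- xterm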
One of the most appealing features of the stub system, even for
experienced users, is that asynchronous job control of remote jobs by
the shell is possible and intuitive. One simply runs the stub in the
background under the local shell; the shell notifies the user when the
remote job has a change in status by monitoring the stub daemon.
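For example (big_sim is a hypothetical command):
     queue -i -w -p -- big_sim &
     jobs     # the stub appears as an ordinary background job
     fg %1    # foregrounding the stub foregrounds the remote job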
When the remote process has terminated, the stub returns the exit
value to the shell; otherwise, the stub simulates death by the same
signal as that which terminated or suspended the remote job. In this
way, control of the remote process is intuitive even to novice users,
as it is just like controlling a local job from the shell. Many of my
original users had to be reminded that their jobs were, in fact,
running remotely.
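A quick sketch of the exit-value propagation:
     queue -i -w -n -- false
     echo $?    # prints 1, just as if false had run locally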
Queue also features a more traditional distributed batch
processing environment, with results returned to the user via email.
In addition, traditional batch processing limitations may be placed on
jobs running in either environment (stub or with the email mechanism)
such as suspension of jobs if the system exceeds a certain load
average, limits on CPU time, disk free requirements, limits on the
times in which jobs may run, etc. (These are documented in the sample
profile file included.)
In order to use queue to farm out jobs onto the network, queued
must be running on every host in your cluster, as defined in the Host
Access Control File (default: /usr/local/share/qhostsfile).
Once queued is running, jobs may normally be farmed out to other hosts
within the homogeneous cluster. For example, try something like
queue -i -w -p -- emacs -nw. You should be able to background and
foreground the remote EMACS process from the local shell just as if it
were running as a local copy.
Another example command is queue -i -w -n -- hostname, which should
return the best host to run a job on, as controlled by options in the
profile file (see below).
The options to queue need further explanation:
-i specifies immediate execution mode, placing the job in the now
spool. This is the default. Alternatively, you may specify either the
-q option, which is shorthand for the wait spool, or use the -d
spooldir option to place the job under the control of the profile file
in the spooldir subdirectory of the spool directory, which must
previously have been created by the Queue administrator.
In any case, execution of the job will wait until it satisfies the
conditions of the profile file for that particular spool directory,
which may include waiting for a slot to become free. This method of
batch processing is completely compatible with the stub mechanism,
although it may disorient users to use it in this way as they may be
unknowingly forced to wait until a slot on a remote machine becomes
available.
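For instance (the sim spool directory is hypothetical and must have
been created by the Queue administrator):
     queue -d sim -w -p -- matlab
This blocks until the sim profile grants a job slot, then runs matlab
under the stub as usual.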
-w activates the stub mechanism, which is the default. The queue stub
process will terminate when the remote process terminates; you may
send signals and suspend/resume the remote process by doing the same
to the stub process. Standard input/output will be that of the 'queue'
stub process. -r deactivates the stub process; standard input/output
will be returned via email to the user, and the queue process will
return immediately.
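A sketch of controlling a remote job through its stub (crunch is a
placeholder command):
     queue -i -w -n -- crunch &
     kill -STOP %1    # suspends the remote crunch process
     kill -CONT %1    # resumes it
     kill -TERM %1    # terminates it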
-p or -n specifies whether a virtual tty should be allocated at
the remote end, or whether the system should merely use the more
efficient socket mechanism. Many interactive processes, such as EMACS
or Matlab, require a virtual tty to be present, so the -p option is
required for these. Other processes, such as a simple hostname, do not
require a tty and so may be run without the default -p. Note that
queue is intelligent and will override the -p option if it detects that
both stdin and stdout have been redirected to a non-terminal; this
feature is useful in facilitating system administration scripts that
allow users to execute jobs. [At some point we may wish to change the
default to -p, as the system automatically detects when -n will
suffice.] Simple, non-interactive jobs such as hostname do not need
the less efficient pty/tty mechanism and so should be run with the -n
option. The -n option is the default when queue is invoked in rsh
compatibility mode with qsh.
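A hedged illustration of this detection:
     queue -i -w -p -- hostname < /dev/null > out.log
Since both standard input and standard output are non-terminals here,
queue overrides -p and falls back to the more efficient no-pty
mechanism.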
The -- with queue specifies `end of queue options'; everything
beyond this point is interpreted as the command, or arguments to be
given to the command. Consequently, user arguments (e.g., when invoking
queue through a script front end) may be placed here:
#!/bin/sh
exec queue -i -w -p -- big_job $*
or
#!/bin/sh
exec queue -q -w -p -d big_job_queue -- big_job $*
for example. This places queue in immediate mode following
instructions in the now spool subdirectory (first example) or in
batch-processing mode using the big_job_queue spool subdirectory
(second example), provided it has been created by the administrator.
In both cases, stubs are being used, which will not terminate until
the big_job process terminates on the remote end.
In both cases, pty/ttys will be allocated unless the user redirects
both the standard input and standard output of the invoking
script. Invoking queue through these scripts has the additional
advantage that the process name will be that of the script, clarifying
what the process is. For example, the script might be called big_job
or big_job.remote, causing queue to appear this way in the user's
process list.
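For instance, once the first wrapper above is installed as
big_job.remote (the name is hypothetical):
     big_job.remote dataset1 &
The background job then shows up as big_job.remote in the user's
process list.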
queue can be used for batch processing by using the -q -r -n options,
e.g.,
#!/bin/sh
exec queue -q -r -n -d big_job -- big_job $*
would run big_job in batch mode. The -q and -d big_job options force
Queue to follow instructions in the big_job/profile file under Queue's spool
directory and wait for the next available job slot. -r activates
batch-processing mode, causing Queue to exit immediately and return
results (including stdout and stderr output) via email.
The final option, -n, disables allocation of a pty on
the remote end; it is unnecessary in this case (batch mode disables
ptys anyway) but is shown here to demonstrate how it might be used in a
-i -w -n or -q -w -n invocation.
Under /usr/spool/queue you may create several directories for batch
jobs, each identified with the class of the batch job (e.g., big_job
or small_job). You may then place restrictions on that class, such as
maximum number of jobs running, or total CPU time, by placing a
profile file like this one in that directory.
However, the now queue is mandatory; it is the directory used by the
-i (immediate mode) option of queue to launch jobs over the network
immediately rather than as batch jobs.
Specify that this queue is turned on:
exec on
The next two lines in profile may be set to an email address rather
than a file; the leading / identifies them as file logs. Files
beginning with cf, of, or ef are ignored by queued:
mail /usr/local/com/queue/now/mail_log
supervisor /usr/local/com/queue/now/mail_log2
Note that /usr/local/com/queue is our spool directory, and now is the
job batch directory for the special now queue (run via the -i or
immediate-mode flag to the queue executable), so these files may
reside in the job batch directories.
The pfactor command is used to control the likelihood of a job being
executed on a given machine. Typically, this is done in conjunction
with the host command, which specifies that the option on the rest of
the line be honored on that host only.
In this example, pfactor is set to the relative MIPS of each machine,
for example:
host fast_host pfactor 100
host slow_host pfactor 50
where fast_host and slow_host are the hostnames of the respective
machines.
This is useful for controlling load balancing. Each queue on each
machine reports back an `apparent load average' calculated as follows:
1-minute load average / ((max(0, vmaxexec - maxexec) + 1) * pfactor)
The machine with the lowest apparent load average for that queue is
the one most likely to get the job.
Consequently, a more powerful pfactor proportionally reduces the load
average that is reported back for this queue, indicating a more
powerful system.
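A hedged worked example: assume both hosts report a 1-minute load
average of 1.0 and the max(0, ...) term is 0 (as it is whenever
vmaxexec does not exceed maxexec). Then:
     fast_host: 1.0 / ((0 + 1) * 100) = 0.010
     slow_host: 1.0 / ((0 + 1) *  50) = 0.020
fast_host reports the lower apparent load average and so is the more
likely to receive the job.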
vmaxexec is the ``apparent maximum'' number of jobs allowed to execute
in this queue; it is simply equal to maxexec if not set. The
default value of these variables is a large value treated by the system
as infinity.
host fast_host vmaxexec 2
host slow_host vmaxexec 1
maxexec 3
The purpose of vmaxexec is to make the system appear fully loaded at
some point before the maximum number of jobs is actually running, so
that the likelihood of the machine being used tapers off sharply after
vmaxexec slots are filled.
Below vmaxexec jobs, the system aggressively discriminates against
hosts already running jobs in this Queue.
In job queues running above vmaxexec jobs, hosts appear more equal to
the system, and only the load average and pfactor are used to assign
jobs. The theory here is that above vmaxexec jobs, the hosts are fully
saturated, and the load average is a better indicator than the simple
number of jobs running in a job queue of where to send the next job.
Thus, under lightly-loaded situations, the system routes jobs around
hosts already running jobs in this job queue. In more heavily loaded
situations, load-averages and pfactors are used in determining where
to run jobs.
Additional options in profile
exec
on, off, or drain; drain allows running jobs to finish while no
new jobs are launched.
minfree
disk space on specified device must be at least this free.
maxexec
maximum number of jobs allowed to run in this queue.
loadsched
1 minute load average must be below this value to launch new
jobs.
loadstop
if 1 minute load average exceeds this, jobs in this queue are
suspended until it drops again.
timesched
Jobs are only scheduled during these times.
timestop
Running jobs are suspended outside of these times.
nice
Jobs run at least at this nice value.
rlimitcpu
maximum CPU time for a job in this queue
rlimitdata
maximum data memory size for a job
rlimitstack
maximum stack size
rlimitfsize
maximum file size
rlimitrss
maximum resident set size
rlimitcore
maximum size of core dump
These options, if present, will only override the user's values (set
via queue) for these limits if they are lower than what the user has
set (or higher, in the case of nice).
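A hedged sketch of a complete profile combining these options (the
numeric values are illustrative only; consult the Info documentation
for the exact syntax of each option):
     exec on
     host fast_host pfactor 100
     host slow_host pfactor 50
     maxexec 3
     vmaxexec 2
     loadsched 2.5
     loadstop 5.0
     nice 10
     rlimitcpu 3600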
FILES
These are the default file paths. PREFIX is typically
'/usr/local'.
PREFIX/share/qhostsfile Host Access Control List File
PREFIX/com/queue spool directory
PREFIX/com/queue/now spool directory for immediate execution
PREFIX/com/queue/wait spool directory for the '-q' shorthand
SPOOLDIR/profile control file for the SPOOLDIR job queue
PREFIX/com/queue/now/profile control file for immediate jobs
PREFIX/var/queue_pid_hostname temporary file
COPYING
Copyright 1998-2000 W. G. Krebs <wkrebs@gnu.org>
Permission is granted to make and distribute verbatim copies of this
manpage provided the copyright notice and this permission notice are
preserved on all copies.
BUGS
Bug reports to <bug-queue@gnu.org>
AUTHORS
W. G. Krebs <wkrebs@gnu.org> is the primary author of GNU Queue.