You've come here because you have either perpetrated one of the classic design mistakes in your Unix dæmon, or asked a question similar to the following:
What mistakes should I avoid making when designing how my Unix dæmon program operates?
This is the Frequently Given Answer to that question.
(You can find a different approach to this answer on one of Chris's Scribbles.)
Don't fork() in order to "put the dæmon into the background".
This whole idea is wrongheaded. The concepts of "foreground" and "background" don't apply to dæmons. They apply to user interfaces. They apply to processes that have controlling terminals. There is a "foreground process group" for controlling terminals, for example. They apply to processes that present graphical user interfaces. The window with the distinctive highlighting is conventionally considered to be "in the foreground", for example. Dæmons don't have controlling terminals and don't present textual/graphical user interfaces, and so the concepts of "foreground" and "background" simply don't apply at all.
When people talk about
fork()ing "in order to put the
dæmon in the background" they don't actually mean "background" at
all. They mean that the dæmon is executed asynchronously by the
shell, i.e. the shell does not wait for the dæmon process to
terminate before proceeding to the next command. However, Unix shells are
perfectly capable of arranging this. No code is required in your
dæmon for doing so.
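The shell's ordinary asynchronous-execution operator already does everything that the in-dæmon fork() was supposed to achieve. A minimal sketch (sleep stands in for a hypothetical dæmon binary):

```shell
# The shell launches the child asynchronously and carries straight on;
# no fork() inside the dæmon itself is required for this.
sleep 2 &          # stand-in for the dæmon process
child=$!           # the shell even knows the child's PID immediately
echo "shell continued at once; child PID is $child"
kill "$child" 2>/dev/null
```

Note that `$!` gives the invoker the process ID for free, which matters again later when it comes to PID files.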
Let the invoker of your dæmon decide whether xe wants it to run synchronously or asynchronously. Don't assume that xe will only ever want your dæmon to run asynchronously. Indeed, on modern systems, administrators want dæmons to run synchronously, as standard behaviour.
Many dæmon supervisors assume (quite reasonably) that if their child process exits then the dæmon has died and can be restarted. (Conversely, they quite reasonably assume that they can do things like stop the dæmon cleanly by sending their child process a signal.)
The traditional BSD and System V init programs do this. So, too, do:
supervise from Dan Bernstein's daemontools,
runsv from Gerrit Pape's runit,
the Service Access Controller in Solaris 9,
the Service Management Facility in Solaris 10,
the System Resource Controller in AIX,
launchd in Mac OS 10, and
init from Scott James Remnant's upstart.
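Supervisors of the daemontools/runit kind start the dæmon from a small run script that execs it synchronously. A minimal sketch, assuming a hypothetical ./mydaemon binary (the script here is generated and syntax-checked so that the example is self-contained):

```shell
# Write a daemontools/runit-style "run" script.  The supervisor runs
# this synchronously and restarts the dæmon if it ever exits, so the
# dæmon must NOT fork() itself away.
cat > ./run <<'EOF'
#!/bin/sh
exec 2>&1          # log lines go to the supervisor's log service
exec ./mydaemon    # replace the shell; stay in the "foreground"
EOF
chmod +x ./run
sh -n ./run && echo "run script parses"
```

The two execs are the whole trick: there is no child left behind for the supervisor to lose track of.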
Forking to "put the dæmon into the background" entirely defeats such tools; but it is ironic that it does so to no good end, because, even without fork()ing, dæmons invoked by such supervisors are already "in the background". The dæmon supervisors are already running asynchronously from interactive shells, without controlling terminals, and without any interactive shells as their parent processes.
If some novice system administrator, running your dæmon from an rc script (rather than from init or under the aegis of the SAC, the SMF, or the SRC) then asks you how to persuade your dæmon to run asynchronously so that the script does not wait for it to finish, point them in the direction of the documentation for the shell's & operator.
Running a dæmon synchronously doesn't necessarily mean that vast
quantities of debugging information are required. If (because you haven't
bitten the bullet and eliminated the totally unnecessary
fork()ing from your code) you have a command option that
switches your dæmon between running "in the foreground" and running
"in the background", do not make that command option do double
duty. Running the dæmon without
fork()ing should have
no bearing upon whether or not debugging output from your program is
enabled or disabled.
In general, don't conflate options that affect the actual operation of your program with options that affect the output of log or debug information.
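Keeping those concerns separate is straightforward. A sketch of hypothetical option handling, in which -v raises log verbosity and does nothing else (it is never overloaded to also mean "don't fork", because the dæmon never forks in the first place):

```shell
# Hypothetical option parsing for a dæmon.  -v affects only the
# volume of log output, not how the program runs.
verbose=0
set -- -v                  # simulate the command line for this example
while getopts v opt; do
    case $opt in
    v) verbose=1 ;;        # logging verbosity only; no other effect
    esac
done
[ "$verbose" -eq 1 ] && echo "verbose logging enabled" >&2
```

Options that change behaviour (listening address, configuration file, and so on) would be parsed alongside, but none of them would touch the logging switch and vice versa.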
syslog() is a very poor logging mechanism. Don't use it. Amongst its many faults and disadvantages: log data can be lost or reordered, the output of unrelated dæmons is intermingled, and a log socket has to be provided inside every chroot() jail.
Write your log output to standard error, just like all other Unix programs
do. The logging output from your dæmon will as a consequence automatically
be entirely separate from the logging output of other dæmons. (If a system
administrator wishes to combine them, xe can always send your dæmon's
standard error through a pipe to
logger.) You won't need any extra files in the
chroot() jail at all. And log data will not be lost and will
be recorded in the same order that they were generated.
You'll find your dæmon easier to write, to boot. Code using fprintf(stderr,…) is generally easier to maintain than code that calls syslog().
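The standard-error approach is trivially simple. A sketch (the log-line content and the mydaemon tag are illustrative):

```shell
# Logging to standard error, the ordinary Unix way.  No syslog(),
# and no extra device nodes needed inside a chroot() jail.
log() {
    printf '%s: %s\n' "mydaemon" "$*" >&2
}
log "listening for connections"
# An administrator who still wants the records in syslog can arrange
# that externally, e.g.:   ./mydaemon 2>&1 | logger -t mydaemon
```

The dæmon itself stays ignorant of where its standard error ultimately leads; that decision belongs to whoever invokes it.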
Let programs such as tcpserver (from Dan Bernstein's UCSPI-TCP) or tcpsvd (from Gerrit Pape's ipsvd)
deal with the nitty gritty of opening, listening on, and accepting
connections on sockets. All that your program needs to do is read from
its standard input and write to its standard output. Then if someone
comes along wanting to connect your program to some other form of stream,
xe can do so easily.
Make your program into an application that is suitable for being spawned
from a UCSPI server. If you
really do need to have access to TCP-specific information, such as
socket addresses, don't call
getpeername() and so forth
directly. Parse the TCP/IP local and remote information that should be
provided by the UCSPI server in the
TCP environment variables.
(For one thing, a system administrator will find it a lot easier to test an access-control mechanism that is based upon environment variables than to test one that is based upon getpeername().)
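Reading the connection details from the environment takes one line. A sketch, in which the variables are set by hand purely to simulate what tcpserver or tcpsvd would set before spawning the program:

```shell
# Under a UCSPI-TCP server the peer's address arrives in environment
# variables; no getpeername() call is needed.  The assignments below
# merely simulate the server for this self-contained example.
TCPREMOTEIP=192.0.2.7
TCPREMOTEPORT=40123
peer="${TCPREMOTEIP:-unknown}:${TCPREMOTEPORT:-unknown}"
echo "connection from $peer" >&2
```

In real use the program would simply be invoked under the server, along the lines of tcpsvd 0 8080 ./mydaemon, and the variables would already be present.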
This design allows you to more closely follow the Principle of Least
Privilege, too. If your program were to handle the TCP sockets itself and
the number of the port that it used was in the reserved range (1023 and
below), it would need to be run as the superuser. All of the code of your
program would need to be audited to ensure that it had no loopholes
through which one could gain superuser access. If, however, your program relied upon tcpsvd to perform all of the socket control, it could be invoked under a non-superuser UID and GID from the start (via tcpsvd's -u option, for example).
Loopholes in your program would only allow an attacker to do things that
that UID and GID could do. If the UID and GID did not own and had no
write access to any files or directories on the filesystem, for example,
and no other processes ran under the same UID, then an attacker who
compromised your program could do very little (apart from disrupt the
operation of your program itself).
If your program is one of the (exceedingly) rare cases where you do need to create, listen on, and accept connections on sockets yourself, allow the system administrator full control over the IP addresses (and port numbers) that your program will use.
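Even then, the addresses and ports should be parameters, never constants baked into the program. A sketch of hypothetical configuration (the MYDAEMON_* variable names and the defaults are illustrative):

```shell
# The administrator, not the program, chooses the listening address
# and port -- here via environment variables with documented defaults.
unset MYDAEMON_ADDR MYDAEMON_PORT   # demonstrate the defaults
addr=${MYDAEMON_ADDR:-127.0.0.1}
port=${MYDAEMON_PORT:-8080}
echo "would listen on $addr:$port" >&2
```

Command-line options work just as well as environment variables; the point is only that the choice rests with the invoker.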
Don't create PID files in /var/run (or anywhere else).
Creating PID files in /var/run has all sorts of flaws and disadvantages. (Consider, for one, what a stale or reused PID file does to a shutdown script that runs kill `head -1 /var/run/pidfile`: it signals whatever innocent process now happens to have that process ID.)
Let the system administrator use whatever dæmon supervisor is invoking your dæmon to handle killing the correct process. Dæmon
supervisors don't need PID files. They know what process IDs to use because
they remember it from when they
fork()ed the dæmon process
in the first place.
init doesn't need a PID file in
/var/run to tell it which of its children to kill when the run
level changes, for example.
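The shell's own job control demonstrates the principle: the parent that spawned the process already holds its PID, so nothing ever needs to be written to, or read back from, a file:

```shell
# A supervisor does not need a PID file: the process that fork()ed
# the dæmon remembered its PID from the start.  $! shows the idea.
sleep 30 &                 # stand-in for the dæmon process
pid=$!
kill -TERM "$pid"          # clean shutdown, by remembered PID;
wait "$pid" 2>/dev/null    # no /var/run/pidfile consulted anywhere
echo "stopped process $pid"
```

This is exactly what supervise, runsv, and init do internally; the control commands they expose (svc, runsvctrl) are thin front ends to it.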
The daemontools approach has been described as "/var/run done the right way", and runit uses the same approach.
With both, there is no need for PID files. Dæmons are controlled with svc or runsvctrl commands, which are what shutdown scripts should use instead of kill, pkill, and the like.