To serve DNS data from multiple peer content DNS servers, it is necessary to replicate the DNS database across all servers.
Much DNS literature talks of using the "zone transfer" protocol (the realm of the DNSGETZ command in the Internet Utilities) in order to do this. Such literature is BIND-centric. Other DNS server softwares, including the Internet Utilities, have different replication mechanisms available. Some, such as Microsoft's DNS server (which uses Active Directory for its database replication mechanism), have been around for several years, but the writers of the DNS literature have yet to understand that the world is not BIND any more.
The zone transfer protocol is an inferior protocol for DNS database replication. It doesn't preserve internal database metadata, such as server-side aliases or explicit bidirectional mappings. (All such metadata are "flattened" by the protocol.) It is not recommended for use on production systems.
Assuming that all peer content DNS servers that need to replicate data are
using the Internet Utilities, one very simple database replication
mechanism is to replicate the DNS database source file. This can
be done easily by serving a master copy of the source file up with
the content HTTP server
and downloading slave copies of it with the
HTTPGET command.
(There are, of course, many tools for the ordinary task of copying
files from machine to machine over a network. HTTPGET is used simply
as an example here. One could just as easily use rsync,
ftp, scp, cvs, or even, presuming
that appropriate LAN access has been set up, the plain old
copy command. The Internet Utilities does not mandate
HTTPGET, or any other file copying mechanism.)
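Whatever copying tool is chosen, it should preserve the source file's last-modification datestamp, since the make-based rebuild described next relies upon it. The following sketch shows some common datestamp-preserving invocations; the host and path names are merely placeholders, not anything that the Internet Utilities prescribe.

```shell
# Datestamp-preserving copies with common tools (hypothetical host/paths):
#
#   rsync -t master:/dns/source/example.com source/example.com
#   scp -p master:/dns/source/example.com source/example.com
#
# Locally, "cp -p" preserves the datestamp in the same way:
mkdir -p source
touch -t 202001010000 master-copy
cp -p master-copy source/example.com
```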
Rebuilding the database is then a simple matter of running a tool such as
make on a regular basis, using a Makefile similar to the
following example:
source/example.com:
	httpget // $(MASTER) /source/example.com $@.tmp
	move $@.tmp $@

data/@example.com: source/example.com
	dnszcomp example.com source/example.com $@
Because HTTPGET will preserve the last modification datestamp on the
source/example.com file, which make checks,
DNSZCOMP will only be invoked to re-compile (the example.com.
part of) the DNS database when the source file is actually modified.
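The effect of the datestamp check can be seen with a toy Makefile. Here "cat" is merely a placeholder standing in for the real database compiler, and the record syntax is likewise only illustrative.

```shell
# A toy illustration of make's datestamp check.
mkdir -p source data
printf '+www.example.com:192.0.2.1\n' > source/example.com
printf 'data/@example.com: source/example.com\n\tcat source/example.com > $@\n' > Makefile
make    # first run: the target is missing, so the "compiler" runs
make    # second run: the source is unchanged, so make does nothing
```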
Replicating the source file in this way means that all metadata are preserved, especially metadata that only exist in source form, such as client-side aliases and explicit bidirectional mappings.
Furthermore, any shorthand schemes that one might employ can be preserved in a similar manner. If the source file is itself automatically generated, by (say) using a pre-processor to generate hundreds of similar database records from a template, then one again applies the "replicate the source file" principle and replicates the original, short, template, with (for example) a Makefile such as:
template/example.com:
	httpget // $(MASTER) /template/example.com $@.tmp
	move $@.tmp $@

source/example.com: template/example.com
	preprocess template/example.com $@

data/@example.com: source/example.com
	dnszcomp example.com source/example.com $@
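As an illustration of how small such a pre-processor can be, here is a toy one written as a shell loop. The template format with printf-style placeholders and the expansion step are purely hypothetical, not a scheme that the Internet Utilities define.

```shell
# A hypothetical one-line template with printf-style placeholders:
printf '+host%%d.example.com:192.0.2.%%d\n' > template.example.com

# A toy stand-in for the "preprocess" step, expanding the template into
# one hundred records:
i=1
while [ "$i" -le 100 ]
do
	printf "$(cat template.example.com)\n" "$i" "$i"
	i=$((i+1))
done > source.example.com
```

A real pre-processor would of course be more capable, but the principle is the same: the hundred-record source file is derived, and only the one-line template need be replicated.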
There are of course further ways in which one can get creative in this fashion (not least generalizing such a Makefile to handle multiple parts of the database in the same fashion with pattern rules). They are beyond the scope of this documentation. The above should provide an idea of where to start.
Note: Some other DNS server softwares have a "replicate the compiled database file" philosophy. Those softwares can employ such a maxim because their database files are constant database files, which are never written to again once first created. In the Internet Utilities, a DNS database file may be written to multiple times during its lifetime by the database compiler, and replicating a database file partway through compilation will not yield a complete database, since the database compiler record-locks parts of the file as it updates them. Thus the Internet Utilities' maxim of "replicate the database source file".
This is a general issue, not confined to the Internet Utilities. One cannot go around copying the database files for, say, an SQL database server, for much the same reasons. Database servers use locking, and live and active databases are not necessarily ever in a consistent state where they can be simply copied using file copy utilities.