You've come to this page because you've made a statement similar to the following:
In modern computers with gigabinarybytes of RAM, you don't really need a swap-file (a.k.a. paging file) any more. In fact, it's really unwanted, as it slows the system down tremendously.
This is the frequently given answer to such statements.
(You can read a different approach to this answer by Daniel Rutter.)
That frequently expressed sentiment is completely wrong on both of its points. Paging files are still required. Thinking that they aren't is the result of poor analysis. They do not slow the system down tremendously. In fact, paging files have multiprogramming benefits even when there are large amounts of physical RAM available, and can actually speed the system up.
Some people have the misconception that paging files by their very existence affect system performance, or that performance degrades when page data are held in paging files rather than resident in physical RAM. Neither is the case.
What affects system performance is paging I/O. In other words, system performance is affected when a non-resident page needs to be paged in from a paging file, or when a dirty page needs to be paged out to a paging file. If there isn't actually any paging I/O, even if some pages are currently stored in a paging file, then system performance isn't affected.
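One can observe this distinction directly. The following is a minimal sketch, assuming a Linux system (an operating system not discussed above, used here purely for illustration), where `/proc/vmstat` exposes cumulative `pswpin`/`pswpout` counters that only advance when pages actually move between physical RAM and swap; pages merely sitting in a paging file leave them untouched:

```python
def swap_io_counters(vmstat_text):
    """Extract the cumulative swap-in/swap-out page counts
    from the text of /proc/vmstat."""
    counters = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name in ("pswpin", "pswpout"):
            counters[name] = int(value)
    return counters

if __name__ == "__main__":
    # On a real Linux system one would read open("/proc/vmstat").read()
    # at two points in time and compare; a static sample is used here.
    sample = "nr_free_pages 102400\npswpin 12\npswpout 34\n"
    print(swap_io_counters(sample))  # {'pswpin': 12, 'pswpout': 34}
```

If the counters do not change between two samples, no paging I/O occurred in the interval, however much data happened to be stored in the paging file.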
In most operating systems, the read-only executable code and data pages for a process are paged from its image file(s), and the (tainted) copy-on-write code and read-write data pages are paged to and from a paging file.
When a process' resident set comprises a mix of code and data, preventing the data from being paged out to a paging file has the consequence that when the operating system's virtual memory manager comes to trim the resident set of the process, as virtual memory managers do on a regular basis, the only pages that it can evict are code pages. Preventing read-write pages from being paged out, by eliminating paging files, puts greater eviction pressure on read-only pages. One hasn't eliminated paging. One has merely restricted what the system can choose to page out.
As a result, greater thrashing occurs, because the process has fewer code pages in its resident set, and the program executes more slowly.
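The squeeze on code pages can be sketched with a toy LRU simulation (an illustration only, not a model of any real virtual memory manager). Dirty data pages that cannot be written to a paging file are treated as pinned in their frames, shrinking the space left for the program's code pages:

```python
from collections import OrderedDict

def count_faults(accesses, frames, pinned):
    """Count page faults under LRU replacement when the pages in
    `pinned` can never be evicted (they permanently occupy frames)."""
    free = frames - len(pinned)
    resident = OrderedDict()              # LRU order for evictable pages
    faults = 0
    for page in accesses:
        if page in pinned:
            continue                      # always resident, never faults
        if page in resident:
            resident.move_to_end(page)    # mark most recently used
        else:
            faults += 1
            if len(resident) >= free:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = True
    return faults

# A program that loops over six code pages, on a machine with eight frames.
code_pages = ["code%d" % i for i in range(6)]
accesses = code_pages * 50

# With a paging file: idle dirty data pages were paged out, all 8 frames free.
with_pf = count_faults(accesses, frames=8, pinned=set())
# Without one: four dirty data pages are stuck in RAM, squeezing the code.
without_pf = count_faults(accesses, frames=8, pinned={"d0", "d1", "d2", "d3"})
print(with_pf, without_pf)  # 6 300
```

With the paging file the six code pages fault in once each and then stay resident; without it, only four frames remain for a six-page loop, and under LRU every single access faults.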
Posit a long-running daemon process that only wakes up and does work once per day. On a system with a paging file its read-write pages live in the paging file for most of the time, leaving physical memory free for use by other processes.
However, on a system without a paging file, its read-write pages have to be resident throughout the entire day, even though the process is idle for almost all of the time. Thus other, more active, processes are denied physical RAM, and larger resident sets, for the benefit of a process that spends most of its time asleep doing nothing. The system slows down.
Solaris writes crashdumps to its paging file. Windows NT writes complete, kernel, and small dumps to the paging file that is located on the system volume. When the systems are next brought up, they copy the dump data out of the paging file into another file. (On Windows NT this is done by the savedump utility, for example.)
Without a paging file, neither operating system will be able to perform dumps.
Posit a case where all resident pages are dirty, but in order to enable any thread (apart from an idle thread) to actually proceed, the operating system needs to page in an additional read-only code or data page from a program image.
If there is no paging, the system is deadlocked. It cannot evict any of the dirty pages, and there is nowhere for the additional page to be paged in to.
With paging, however, the system is not deadlocked. The system simply pages out one of the dirty pages to the paging file to make room for the page that needs to be paged in.
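The difference can be sketched as a toy frame-reclaim routine (hypothetical names; no real kernel is this simple). A clean page can always be discarded, because it can be re-read from the program image; a dirty page can only be reclaimed if there is a paging file to write it to:

```python
def bring_in_page(frames, paging_file_available):
    """Try to free a frame for an incoming read-only page.
    `frames` is a list of 'clean'/'dirty' states, all in use.
    Returns the index of the frame to reclaim, or None on deadlock."""
    # A clean frame can always be reclaimed: its contents can be
    # re-read from the program image file later if needed.
    for i, state in enumerate(frames):
        if state == "clean":
            return i
    # Every frame is dirty.  Writing one out requires a paging file.
    if paging_file_available:
        return 0          # page frame 0's contents out, then reuse it
    return None           # nowhere to put the dirty data: deadlock

all_dirty = ["dirty"] * 4
print(bring_in_page(all_dirty, paging_file_available=True))   # 0
print(bring_in_page(all_dirty, paging_file_available=False))  # None
```

When every frame is dirty and there is no paging file, the routine has no legal move, which is precisely the deadlock described above.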
© Copyright 2007
Jonathan de Boyne Pollard.
"Moral" rights asserted.
Permission is hereby granted to copy and to distribute this web page in its original, unmodified form as long as its last modification datestamp is preserved.