NFS client patches for Linux 2.6.18-rc1

The following set of patches fix known issues with the 2.6.18-rc1 NFS client code, and significantly enhance the support for NFSv4.

linux-2.6.18-001-materialise-dentry.dif:

From: David Howells <dhowells@redhat.com>

NFS: Add dentry materialisation op

The attached patch adds a new directory cache management function that prepares a disconnected anonymous dentry for connection into the dentry tree. The anonymous dentry takes over the name and parentage of another dentry.

The following changes were made in [try #2]:

(*) d_materialise_dentry() now switches the parentage of the two nodes around correctly when one or other of them is self-referential.

The following changes were made in [try #7]:

(*) d_instantiate_unique() has had the interior part split out as function __d_instantiate_unique(). Callers of this latter function must be holding the appropriate locks.

(*) _d_rehash() has been added as a wrapper around __d_rehash() to call it with the most obvious hash list (the one from the name). d_rehash() now calls _d_rehash().

(*) d_materialise_dentry() is now __d_materialise_dentry() and is static.

(*) d_materialise_unique() added to perform the combination of d_find_alias(), d_materialise_dentry() and d_add_unique() that the NFS client was doing twice, all within a single dcache_lock critical section. This reduces the number of times two different spinlocks were being accessed.

The following further changes were made:

(*) Add the dentries onto their parents' d_subdirs lists.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-002-nfs-split-inode.c.dif:

From: David Howells <dhowells@redhat.com>

NFS: Fix up split of fs/nfs/inode.c

Fixups for the splitting of the superblock stuff out of fs/nfs/inode.c, including:

(*) Move the callback tcpport module param into callback.c.

(*) Move the idmap cache timeout module param into idmap.c.

(*) Changes to internal.h:

(*) namespace-nfs4.c was renamed to nfs4namespace.c.

(*) nfs_stat_to_errno() is in nfs2xdr.c, not nfs4xdr.c.

(*) nfs4xdr.c is contingent on CONFIG_NFS_V4.

(*) nfs4_path() is only used if CONFIG_NFS_V4 is set.

Plus also:

(*) The sec_flavours[] table should really be const.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-003-nfs-disambiguate-nfs_stat_to_errno.dif:

From: David Howells <dhowells@redhat.com>

NFS: Disambiguate nfs_stat_to_errno()

Rename the NFS4 version of nfs_stat_to_errno() so that it doesn't conflict with the common one used by NFS2 and NFS3.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-004-nfs-callback-prototypes.dif:

From: David Howells <dhowells@redhat.com>

NFS: Fix NFS4 callback up/down prototypes

Make the nfs_callback_up()/nfs_callback_down() prototypes do nothing if NFS4 is not enabled. Also give the down function a void return type, since we can't really do anything if it fails.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-005-nfs-rename-nfs4_client.dif:

From: David Howells <dhowells@redhat.com>

NFS: Rename struct nfs4_client to struct nfs_client

Rename struct nfs4_client to struct nfs_client so that it can become the basis for a general client record for NFS2 and NFS3 in addition to NFS4.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-006-nfs-rename-nfs4_state.dif:

From: David Howells <dhowells@redhat.com>

NFS: Rename nfs_server::nfs4_state

Rename nfs_server::nfs4_state to nfs_client as it will be used to represent the client state for NFS2 and NFS3 also.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-007-nfs-idmap-return-error.dif:

From: David Howells <dhowells@redhat.com>

NFS: Return an error when starting the idmapping pipe

Return an error when starting the idmapping pipe so that we can detect it failing.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-008-nfs-lookupfh-op.dif:

From: David Howells <dhowells@redhat.com>

NFS: Add a lookupfh NFS RPC op

Add a lookup filehandle NFS RPC op so that a file handle can be looked up without requiring dentries and inodes and other VFS stuff when doing an NFS4 pathwalk during mounting.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-009-nfs-setcap-op.dif:

From: David Howells <dhowells@redhat.com>

NFS: Add a set_capabilities NFS RPC op

Add a set_capabilities NFS RPC op so that the server capabilities can be set.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-010-nfs-generalise-nfs_client.dif:

From: David Howells <dhowells@redhat.com>

NFS: Generalise the nfs_client structure

Generalise the nfs_client structure by:

(1) Moving nfs_client to a more general place (nfs_fs_sb.h).

(2) Renaming its maintenance routines to be non-NFS4 specific.

(3) Moving those maintenance routines to a new non-NFS4-specific file (client.c) and moving the declarations to internal.h.

(4) Making nfs_find/get_client() take a full sockaddr_in to include the port number (will be required for NFS2/3).

(5) Making nfs_find/get_client() take the NFS protocol version (again, required to differentiate NFS2, 3 & 4 client records).

Also:

(6) Make nfs_client construction proceed akin to inodes, marking them as under construction and providing a function to indicate completion.

(7) Make nfs_get_client() wait interruptibly if it finds a client that it can share, but that client is currently being constructed.

(8) Make nfs4_create_client() use (6) and (7) instead of locking cl_sem.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-011-nfs-unify-sb.dif:

From: David Howells <dhowells@redhat.com>

NFS: Share NFS superblocks per-protocol per-server per-FSID

The attached patch makes NFS share superblocks between mounts from the same server and FSID over the same protocol.

It does this by creating each superblock with a false root and returning the real root dentry in the vfsmount presented by get_sb(). The root dentry set starts off as an anonymous dentry if we don't already have the dentry for its inode, otherwise it simply returns the dentry we already have.

We may thus end up with several trees of dentries in the superblock, and if at some later point one of the anonymous tree roots is discovered by normal filesystem activity to be located in another tree within the superblock, the anonymous root is named and materialised attached to the second tree at the appropriate point.

Why do it this way? Why not pass an extra argument to the mount() syscall to indicate the subpath and then pathwalk from the server root to the desired directory? You can't guarantee this will work for two reasons:

(1) The root and intervening nodes may not be accessible to the client.

With NFS2 and NFS3, for instance, mountd is called on the server to get the filehandle for the tip of a path. mountd won't give us handles for anything we don't have permission to access, and so we can't set up NFS inodes for such nodes, and so can't easily set up dentries (we'd have to have ghost inodes or something).

With this patch we don't actually create dentries until we get handles from the server that we can use to set up their inodes, and we don't actually bind them into the tree until we know for sure where they go.

(2) Inaccessible symbolic links.

If we're asked to mount two exports from the server, e.g.:

mount warthog:/warthog/aaa/xxx /mmm
mount warthog:/warthog/bbb/yyy /nnn

We may not be able to access anything nearer the root than xxx and yyy, but we may find out later that /mmm/www/yyy, say, is actually the same directory as the one mounted on /nnn. What we might then find out, for example, is that /warthog/bbb was actually a symbolic link to /warthog/aaa/xxx/www, but we can't actually determine that by talking to the server until /warthog is made available by NFS.

This would lead to having constructed an erroneous dentry tree which we can't easily fix. We can end up with a dentry marked as a directory when it should actually be a symlink, or we could end up with an apparently hardlinked directory.

With this patch we need not make assumptions about the type of a dentry for which we can't retrieve information, nor need we assume we know its place in the grand scheme of things until we actually see that place.

This patch reduces the possibility of aliasing in the inode and page caches for inodes that may be accessed by more than one NFS export. It also reduces the number of superblocks required for NFS where there are many NFS exports being used from a server (home directory server + autofs for example).

This in turn makes it simpler to do local caching of network filesystems, as it can then be guaranteed that there won't be links from multiple inodes in separate superblocks to the same cache file.

Obviously, cache aliasing between different levels of NFS protocol could still be a problem, but at least that gives us another key to use when indexing the cache.

This patch makes the following changes:

(1) The server record construction/destruction has been abstracted out into its own set of functions to make things easier to get right. These have been moved into fs/nfs/client.c.

All the code in fs/nfs/client.c has to do with the management of connections to servers, and doesn't touch superblocks in any way; the remaining code in fs/nfs/super.c has to do with VFS superblock management.

(2) The sequence of events undertaken by NFS mount is now reordered:

(a) A volume representation (struct nfs_server) is allocated.

(b) A server representation (struct nfs_client) is acquired. This may be allocated or shared, and is keyed on server address, port and NFS version.

(c) If allocated, the client representation is initialised. The state member variable of nfs_client is used to prevent a race between two mounts during initialisation.

(d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find the root filehandle for the mount (fs/nfs/getroot.c). For NFS2/3 we are given the root FH in advance.

(e) The volume FSID is probed for on the root FH.

(f) The volume representation is initialised from the FSINFO record retrieved on the root FH.

(g) sget() is called to acquire a superblock. This may be allocated or shared, keyed on client pointer and FSID.

(h) If allocated, the superblock is initialised.

(i) If the superblock is shared, then the new nfs_server record is discarded.

(j) The root dentry for this mount is looked up from the root FH.

(k) The root dentry for this mount is assigned to the vfsmount.

(3) nfs_readdir_lookup() creates dentries for each of the entries readdir() returns; this function now attaches disconnected trees from alternate roots that happen to be discovered attached to a directory being read (in the same way nfs_lookup() is made to do for lookup ops).

The new d_materialise_unique() function is now used to do this, thus permitting the whole thing to be done under one set of locks, and thus avoiding any race between mount and lookup operations on the same directory.

(4) The client management code uses a new debug facility: NFSDBG_CLIENT, which is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.

(5) Clone mounts are now called xdev mounts.

(6) Use the dentry passed to the statfs() op as the handle for retrieving fs statistics rather than the root dentry of the superblock (which is now a dummy).

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-012-nfs-procfs-lists.dif:

From: David Howells <dhowells@redhat.com>

NFS: Add server and volume lists to /proc

Make two new proc files available:

/proc/fs/nfsfs/servers
/proc/fs/nfsfs/volumes

The first lists the servers with which we are currently dealing (struct nfs_client), and the second lists the volumes we have on those servers (struct nfs_server).

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-013-fsmisc.dif:

From: David Howells <dhowells@redhat.com>

FS-Cache: Provide a filesystem-specific sync'able page bit

The attached patch provides a filesystem-specific page bit that a filesystem can synchronise upon. This can be used, for example, by a netfs to synchronise with CacheFS writing its pages to disk.

The PG_checked bit is replaced with PG_fs_misc, and various operations are provided based upon that. The *PageChecked() macros still exist, though now they just convert to *PageFsMisc() macros. The name of the "checked" macros seems appropriate as they're used for metadata page validation by various filesystems.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-014-kernel-file.dif:

From: David Howells <dhowells@redhat.com>

FS-Cache: Avoid ENFILE checking for kernel-specific open files

Make it possible to avoid ENFILE checking for kernel-specific open files, such as are used by the CacheFiles module.

After, for example, tarring up a kernel source tree over the network, the CacheFiles module may easily have 20000+ files open in the backing filesystem, thus causing all non-root processes to be given error ENFILE when they try to open a file, socket, pipe, etc..

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-015-fscache.dif:

From: David Howells <dhowells@redhat.com>

FS-Cache: Generic filesystem caching facility

The attached patch adds a generic intermediary (FS-Cache) by which filesystems may call on local caching capabilities, and by which local caching backends may make caches available:

+---------+
|         |                        +--------------+
|   NFS   |--+                     |              |
|         |  |                 +-->|   CacheFS    |
+---------+  |                 |   |  /dev/hda5   |
             |   +----------+  |   +--------------+
+---------+  +-->|          |  |
|         |      |          |--+
|   AFS   |----->| FS-Cache |
|         |      |          |--+
+---------+  +-->|          |  |
             |   +----------+  |   +--------------+
+---------+  |                 |   |              |
|         |  |                 +-->|  CacheFiles  |
|  ISOFS  |--+                     |  /var/cache  |
|         |                        +--------------+
+---------+

The patch also documents the netfs interface and the cache backend interface provided by the facility.

There are a number of reasons why I'm not using i_mapping to do this. These have been discussed a lot on the LKML and CacheFS mailing lists, but to summarise the basics:

(1) Most filesystems don't do hole reportage. Holes in files are treated as blocks of zeros and can't be distinguished otherwise, making it difficult to distinguish blocks that have been read from the network and cached from those that haven't.

(2) The backing inode must be fully populated before being exposed to userspace through the main inode because the VM/VFS goes directly to the backing inode and does not interrogate the front inode on VM ops.

Therefore:

(a) The backing inode must fit entirely within the cache.

(b) All backed files currently open must fit entirely within the cache at the same time.

(c) A working set of files in total larger than the cache may not be cached.

(d) A file may not grow larger than the available space in the cache.

(e) A file that's open and cached, and remotely grows larger than the cache is potentially stuffed.

(3) Writes go to the backing filesystem, and can only be transferred to the network when the file is closed.

(4) There's no record of what changes have been made, so the whole file must be written back.

(5) The pages belong to the backing filesystem, and all metadata associated with that page are relevant only to the backing filesystem, and not anything stacked atop it.

The attached patch adds a generic core to which both networking filesystems and caches may bind. It transfers requests from networking filesystems to appropriate caches if possible, or else gracefully denies them.

If this facility is disabled in the kernel configuration, then all its operations will be trivially reducible to nothing by the compiler.

FS-Cache provides the following facilities:

(1) Caches can be added / removed at any time, even whilst in use.

(2) Adds a facility by which tags can be used to refer to caches, even if they're not mounted yet.

(3) More than one cache can be used at once. Caches can be selected explicitly by use of tags.

(4) The netfs is provided with an interface that allows either party to withdraw caching facilities from a file (required for (1)).

(5) A netfs may annotate cache objects that belong to it.

(6) Cache objects can be pinned and reservations made.

(7) The interface to the netfs returns as few errors as possible, preferring rather to let the netfs remain oblivious.

(8) Cookies are used to represent indices, files and other objects to the netfs. The simplest cookie is just a NULL pointer - indicating nothing cached there.

(9) The netfs is allowed to propose - dynamically - any index hierarchy it desires, though it must be aware that the index search function is recursive, stack space is limited, and indices can only be children of indices.

(10) Indices can be used to group files together to reduce key size and to make group invalidation easier. The use of indices may make lookup quicker, but that's cache dependent.

(11) Data I/O is effectively done directly to and from the netfs's pages. The netfs indicates that page A is at index B of the data-file represented by cookie C, and that it should be read or written. The cache backend may or may not start I/O on that page, but if it does, a netfs callback will be invoked to indicate completion. The I/O may be either synchronous or asynchronous.

(12) Cookies can be "retired" upon release. At this point FS-Cache will mark them as obsolete and the index hierarchy rooted at that point will get recycled.

(13) The netfs provides a "match" function for index searches. In addition to saying whether a match was made or not, this can also specify that an entry should be updated or deleted.

FS-Cache maintains a virtual indexing tree in which all indices, files, objects and pages are kept. Bits of this tree may actually reside in one or more caches.

                                          FSDEF
                                            |
                       +------------------------------------+
                       |                                    |
                      NFS                                  AFS
                       |                                    |
          +--------------------------+                +-----------+
          |                          |                |           |
       homedir                     mirror          afs.org   redhat.com
          |                          |                            |
    +------------+           +---------------+               +----------+
    |            |           |               |               |          |
  00001        00002       00007           00125        vol00001   vol00002
    |            |           |               |                          |
+---+---+     +-----+      +---+     +------+------+             +-----+----+
|   |   |     |     |      |   |     |      |      |             |     |    |
PG0 PG1 PG2  PG0 XATTR   PG0 PG1  DIRENT DIRENT DIRENT          R/W   R/O  Bak
                 |                                                     |
                PG0                                                +-------+
                                                                   |       |
                                                                 00001   00003
                                                                           |
                                                                       +---+---+
                                                                       |   |   |
                                                                      PG0 PG1 PG2

In the example above, you can see two netfs's being backed: NFS and AFS. These have different index hierarchies:

(*) The NFS primary index will probably contain per-server indices. Each server index is indexed by NFS file handles to get data file objects. Each data file object can have an array of pages, but may also have further child objects, such as extended attributes and directory entries. Extended attribute objects themselves have page-array contents.

(*) The AFS primary index contains per-cell indices. Each cell index contains per-logical-volume indices. Each volume index contains up to three indices for the read-write, read-only and backup mirrors of those volumes. Each of these contains vnode data file objects, each of which contains an array of pages.

The very top index is the FS-Cache master index in which individual netfs's have entries.

Any index object may reside in more than one cache, provided it only has index children. Any index with non-index object children will be assumed to only reside in one cache.

The FS-Cache overview can be found in:

Documentation/filesystems/caching/fscache.txt

The netfs API to FS-Cache can be found in:

Documentation/filesystems/caching/netfs-api.txt

The cache backend API to FS-Cache can be found in:

Documentation/filesystems/caching/backend-api.txt

Further changes [try #11] that have been made:

(*) FSCACHE_NEGATIVE_COOKIE has been removed. NULL should be used instead.

(*) Add read/write context maintenance in case the cache operation fails.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-016-release-page.dif:

From: David Howells <dhowells@redhat.com>

FS-Cache: Release page->private in failed readahead

The attached patch causes read_cache_pages() to release page-private data on a page for which add_to_page_cache() fails or the filler function fails. This permits pages with caching references associated with them to be cleaned up.

Further changes [try #9] that have been made:

(*) try_to_release_page() is called instead of calling the releasepage() op directly.

(*) The page is locked before try_to_release_page() is called.

(*) The calls to try_to_release_page() and page_cache_release() have been abstracted out into a helper function, as this bit of code occurs twice.

Further changes [try #10] that have been made:

(*) The comment header on the helper function is much expanded. This states why there's a need to call the releasepage() op in the event of an error.

(*) BUG() if the page is already locked when we try to lock it.

(*) Don't set the page mapping pointer until we've locked the page.

(*) The page is unlocked after try_to_release_page() is called.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-017-fscache-afs.dif:

From: David Howells <dhowells@redhat.com>

FS-Cache: Make kAFS use FS-Cache

The attached patch makes the kAFS filesystem in fs/afs/ use FS-Cache, and through it any attached caches. The kAFS filesystem will use caching automatically if it's available.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-018-cachefiles.dif:

From: David Howells <dhowells@redhat.com>

FS-Cache: CacheFiles: A cache that backs onto a mounted filesystem

Add a cache backend that permits a mounted filesystem to be used as a backing store for the cache.

CacheFiles uses a userspace daemon to do some of the cache management - such as reaping stale nodes and culling. This is called cachefilesd and lives in /sbin. The source for the daemon can be downloaded from:

http://people.redhat.com/~dhowells/cachefs/cachefilesd.c

And an example configuration from:

http://people.redhat.com/~dhowells/cachefs/cachefilesd.conf

The filesystem and data integrity of the cache are only as good as those of the filesystem providing the backing services. Note that CacheFiles does not attempt to journal anything since the journalling interfaces of the various filesystems are very specific in nature.

CacheFiles creates a proc-file - "/proc/fs/cachefiles" - that is used for communication with the daemon. Only one thing may have this open at once, and whilst it is open, a cache is at least partially in existence. The daemon opens this and sends commands down it to control the cache.

CacheFiles is currently limited to a single cache.

CacheFiles attempts to maintain at least a certain percentage of free space on the filesystem, shrinking the cache by culling the objects it contains to make space if necessary - see the "Cache Culling" section. This means it can be placed on the same medium as a live set of data, and will expand to make use of spare space and automatically contract when the set of data requires more space.

Further changes [try #11] that have been made:

(*) Make the calls to the statfs() superblock op supply a dentry not a vfsmount.

(*) CONFIG_CACHEFILES_DEBUG permits _enter(), _debug() and _exit() to be enabled dynamically.

(*) The debugging macros are checked by gcc for printf format compliance even when completely disabled.

============ REQUIREMENTS ============

The use of CacheFiles and its daemon requires the following features to be available in the system and in the cache filesystem:

- dnotify.

- extended attributes (xattrs).

- openat() and friends.

- bmap() support on files in the filesystem (FIBMAP ioctl).

- The use of bmap() to detect a partial page at the end of the file.

It is strongly recommended that the "dir_index" option is enabled on Ext3 filesystems being used as a cache.

============= CONFIGURATION =============

The cache is configured by a script in /etc/cachefilesd.conf. These commands set up the cache ready for use. The following script commands are available:

(*) brun <N>%
(*) bcull <N>%
(*) bstop <N>%

Configure the culling limits. Optional. See the section on cache culling. The defaults are 7%, 5% and 1% respectively.

(*) dir <path>

Specify the directory containing the root of the cache. Mandatory.

(*) tag <name>

Specify a tag to FS-Cache to use in distinguishing multiple caches. Optional. The default is "CacheFiles".

(*) debug <mask>

Specify a numeric bitmask to control debugging in the kernel module. Optional. The default is zero (all off).
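
For illustration, a minimal /etc/cachefilesd.conf combining the commands above might look like this (the cache directory path is just an example; only "dir" is mandatory):

```
dir /var/fscache
tag CacheFiles
brun 7%
bcull 5%
bstop 1%
debug 0
```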

================== STARTING THE CACHE ==================

The cache is started by running the daemon. The daemon opens the cache proc file, configures the cache and tells it to begin caching. At that point the cache binds to fscache and the cache becomes live.

The daemon is run as follows:

/sbin/cachefilesd [-d]* [-s] [-n] [-f <configfile>]

The flags are:

(*) -d

Increase the debugging level. This can be specified multiple times and is cumulative with itself.

(*) -s

Send messages to stderr instead of syslog.

(*) -n

Don't daemonise and go into background.

(*) -f <configfile>

Use an alternative configuration file rather than the default one.

=============== THINGS TO AVOID ===============

Do not mount other things within the cache as this will cause problems. The kernel module contains its own very cut-down path walking facility that ignores mountpoints, but the daemon can't avoid them.

Do not create, rename or unlink files and directories in the cache whilst the cache is active, as this may cause the state to become uncertain.

Renaming files in the cache might make objects appear to be other objects (the filename is part of the lookup key).

Do not change or remove the extended attributes attached to cache files by the cache as this will cause the cache state management to get confused.

Do not create files or directories in the cache, lest the cache get confused or serve incorrect data.

Do not chmod files in the cache. The module creates things with minimal permissions to prevent random users being able to access them directly.

============= CACHE CULLING =============

The cache may need culling occasionally to make space. This involves discarding objects from the cache that have been used less recently than anything else. Culling is based on the access time of data objects. Empty directories are culled if not in use.

Cache culling is done on the basis of the percentage of blocks available in the underlying filesystem. There are three "limits":

(*) brun

If the amount of available space in the cache rises above this limit, then culling is turned off.

(*) bcull

If the amount of available space in the cache falls below this limit, then culling is started.

(*) bstop

If the amount of available space in the cache falls below this limit, then no further allocation of disk space is permitted until culling has raised the amount above this limit again.

These must be configured thusly:

0 <= bstop < bcull < brun < 100

Note that these are percentages of available space, and do _not_ appear as 100 minus the percentage displayed by the "df" program.

The userspace daemon scans the cache to build up a table of cullable objects. These are then culled in least recently used order. A new scan of the cache is started as soon as space is made in the table. Objects will be skipped if their atimes have changed or if the kernel module says it is still using them.

=============== CACHE STRUCTURE ===============

The CacheFiles module will create two directories in the directory it was given:

(*) cache/

(*) graveyard/

The active cache objects all reside in the first directory. The CacheFiles kernel module moves any retired or culled objects that it can't simply unlink to the graveyard from which the daemon will actually delete them.

The daemon uses dnotify to monitor the graveyard directory, and will delete anything that appears therein.

The module represents index objects as directories with the filename "I..." or "J...". Note that the "cache/" directory is itself a special index.

Data objects are represented as files if they have no children, or directories if they do. Their filenames all begin "D..." or "E...". If represented as a directory, data objects will have a file in the directory called "data" that actually holds the data.

Special objects are similar to data objects, except their filenames begin "S..." or "T...".

If an object has children, then it will be represented as a directory. Immediately in the representative directory are a collection of directories named for hash values of the child object keys with an '@' prepended. Into this directory, if possible, will be placed the representations of the child objects:

INDEX     INDEX      INDEX                             DATA FILES
========= ========== ================================= ================
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...DB1ry
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...N22ry
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...FP1ry

If the key is so long that it exceeds NAME_MAX with the decorations added on to it, then it will be cut into pieces, the first few of which will be used to make a nest of directories, and the last one of which will be the objects inside the last directory. The names of the intermediate directories will have '+' prepended:

J1223/@23/+xy...z/+kl...m/Epqr

Note that keys are raw data, and not only may they exceed NAME_MAX in size, they may also contain things like '/' and NUL characters, and so they may not be suitable for turning directly into a filename.

To handle this, CacheFiles will use a suitably printable filename directly and "base-64" encode ones that aren't directly suitable. The two versions of object filenames indicate the encoding:

OBJECT TYPE     PRINTABLE       ENCODED
=============== =============== ===============
Index           "I..."          "J..."
Data            "D..."          "E..."
Special         "S..."          "T..."

Intermediate directories are always "@" or "+" as appropriate.

Each object in the cache has an extended attribute label that holds the object type ID (required to distinguish special objects) and the auxiliary data from the netfs. The latter is used to detect stale objects in the cache and update or retire them.

Note that CacheFiles will erase from the cache any file it doesn't recognise or any file of an incorrect type (such as a FIFO file or a device file).

This documentation is added by the patch to:

Documentation/filesystems/caching/cachefiles.txt

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-019-fscache-nfs.dif:

From: David Howells <dhowells@redhat.com>

NFS: Use local caching

The attached patch makes it possible for the NFS filesystem to make use of the network filesystem local caching service (FS-Cache).

To be able to use this, an updated mount program is required. This can be obtained from:

http://people.redhat.com/steved/cachefs/util-linux/

To mount an NFS filesystem to use caching, add an "fsc" option to the mount:

mount warthog:/ /a -o fsc

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-020-autofs-dcache.dif:

From: David Howells <dhowells@redhat.com>

AUTOFS: Make sure all dentries refs are released before calling kill_anon_super()

Make sure all dentry refs are released before calling kill_anon_super(), so that generic_shutdown_super()'s assumption that it can completely destroy the dentry tree, since there will be no external references, holds true.

What was being done in the put_super() superblock op is now done in the kill_sb() filesystem op instead, prior to calling kill_anon_super().

This makes the struct autofs_sb_info::root member variable redundant (since sb->s_root is still available), and so that is removed. The calls to shrink_dcache_sb() are also removed since they're also redundant as shrink_dcache_for_umount() will now be called after the cleanup routine.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

linux-2.6.18-021-dcache-crunch.dif:

From: David Howells <dhowells@redhat.com>

VFS: Destroy the dentries contributed by a superblock on unmounting

The attached patch destroys all the dentries attached to a superblock in one go by:

(1) Destroying the tree rooted at s_root.

(2) Destroying every entry in the anon list, one at a time.

(3) Each entry in the anon list has its subtree consumed from the leaves inwards.

This reduces the amount of work generic_shutdown_super() does, and avoids iterating through the dentry_unused list.

Note that locking is almost entirely absent in the shrink_dcache_for_umount*() functions added by this patch. This is because:

(1) at the point the filesystem calls generic_shutdown_super(), it is not permitted to further touch the superblock's set of dentries, and nor may it remove aliases from inodes;

(2) the dcache memory shrinker now skips dentries that are being unmounted; and

(3) the superblock no longer has any external references through which the VFS can reach it.

Given these points, the only locking we need to do is when we remove dentries from the unused list and the name hashes, which we do a directory's worth at a time.

We also don't need to guard against reference counts going to zero unexpectedly and removing bits of the tree we're working on as nothing else can call dput().

A cut-down version of dentry_iput() has been folded into the shrink_dcache_for_umount_subtree() function. Apart from not needing to unlock things, it also doesn't need to check for inotify watches.

In this version of the patch, the complaint about a dentry still being in use has been expanded from a single BUG_ON() and now gives much more information.

Signed-Off-By: David Howells <dhowells@redhat.com>

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

Acked-by: NeilBrown <neilb@suse.de>

linux-2.6.18-NFS_ALL.dif:

All of the above
