Administration Guide




Administering Client Machines and the Cache Manager

This chapter describes how to administer an AFS client machine, which is any machine from which users can access the AFS filespace and communicate with AFS server processes. (A client machine can simultaneously function as an AFS server machine if appropriately configured.) An AFS client machine has the following characteristics:

To learn how to install the client functionality on a machine, see the IBM AFS Quick Beginnings.


Summary of Instructions

This chapter explains how to perform the following tasks by using the indicated commands:

Display cache size set at reboot cat /usr/vice/etc/cacheinfo
Display current cache size and usage fs getcacheparms
Change disk cache size without rebooting fs setcachesize
Initialize Cache Manager afsd
Display contents of CellServDB file cat /usr/vice/etc/CellServDB
Display list of database server machines from kernel memory fs listcells
Change list of database server machines in kernel memory fs newcell
Check cell's status regarding setuid fs getcellstatus
Set cell's status regarding setuid fs setcell
Set server probe interval fs checkservers -interval
Display machine's cell membership cat /usr/vice/etc/ThisCell
Change machine's cell membership Edit /usr/vice/etc/ThisCell
Flush cached file/directory fs flush
Flush everything cached from a volume fs flushvolume
Update volume-to-mount-point mappings fs checkvolumes
Display Cache Manager's server preference ranks fs getserverprefs
Set Cache Manager's server preference ranks fs setserverprefs
Display client machine addresses to register fs getclientaddrs
Set client machine addresses to register fs setclientaddrs
Control the display of warning and status messages fs messages
Display and change machine's system type fs sysname
Enable asynchronous writes fs storebehind

Overview of Cache Manager Customization

An AFS client machine's kernel includes a set of modifications, commonly referred to as the Cache Manager, that enable access to AFS files and directories and communications with AFS server processes. It is common to speak of the Cache Manager as a process or program, and in regular usage it appears to function like one. When configuring it, though, it is helpful to keep in mind that this usage is not strictly accurate.

The Cache Manager mainly fetches files on behalf of application programs running on the machine. When an application requests an AFS file, the Cache Manager contacts the Volume Location (VL) Server to obtain a list of the file server machines that house the volume containing the file. The Cache Manager then translates the application program's system call requests into remote procedure calls (RPCs) to the File Server running on the appropriate machine. When the File Server delivers the file, the Cache Manager stores it in a local cache before delivering it to the application program.

The File Server delivers a data structure called a callback along with the file. (To be precise, it delivers a callback for each file fetched from a read/write volume, and a single callback for all data fetched from a read-only volume.) A valid callback indicates that the Cache Manager's cached copy of a file matches the central copy maintained by the File Server. If an application on another AFS client machine changes the central copy, the File Server breaks the callback, and the Cache Manager must retrieve the new version when an application program on its machine next requests data from the file. As long as the callback is unbroken, however, the Cache Manager can continue to provide the cached version of the file to applications on its machine, which eliminates unnecessary network traffic.

The indicated sections of this chapter explain how to configure and customize the following Cache Manager features. All but the first (choosing disk or memory cache) are optional, because AFS sets suitable defaults for them.

You must make all configuration changes on the client machine itself (at the console or over a direct connection such as a telnet connection). You cannot configure the Cache Manager remotely. You must be logged in as the local superuser root to issue some commands, whereas others require no privilege. All files mentioned in this chapter must actually reside on the local disk of each AFS client machine (they cannot, for example, be symbolic links to files in AFS).

AFS's package program can simplify other aspects of client machine configuration, including those normally set in the machine's AFS initialization file. See Configuring Client Machines with the package Program.


Configuration and Cache-Related Files on the Local Disk

This section briefly describes the client configuration files that must reside in the local /usr/vice/etc directory on every client machine. If the machine uses a disk cache, there must be a partition devoted to cache files; by convention, it is mounted at the /usr/vice/cache directory.

Note for Windows users: Some files described in this document might not exist on machines that run a Windows operating system. Also, Windows uses a backslash ( \ ) rather than a forward slash ( / ) to separate the elements in a pathname.

Configuration Files in the /usr/vice/etc Directory

The /usr/vice/etc directory on a client machine's local disk must contain certain configuration files for the Cache Manager to function properly. They control the most basic aspects of Cache Manager configuration.

If it is important that the client machines in your cell perform uniformly, it is most efficient to update these files from a central source. The following descriptions include pointers to sections that discuss how best to maintain the files.

afsd
The binary file for the program that initializes the Cache Manager. It must run each time the machine reboots in order for the machine to remain an AFS client machine. The program also initializes several daemons that improve Cache Manager functioning, such as the process that handles callbacks.

cacheinfo
A one-line file that sets the cache's most basic configuration parameters: the local directory at which the Cache Manager mounts the AFS filespace, the local disk directory to use as the cache, and how many kilobytes to allocate to the cache.

The IBM AFS Quick Beginnings explains how to create this file as you install a client machine. To change the cache size on a machine that uses a memory cache, edit the file and reboot the machine. On a machine that uses a disk cache, you can change the cache size without rebooting by issuing the fs setcachesize command. For instructions, see Determining the Cache Type, Size, and Location.
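As a quick illustration, the three colon-separated fields can be split with ordinary shell parameter expansion. The sample line below is hypothetical; on a real client the file is /usr/vice/etc/cacheinfo:

```shell
# Hypothetical cacheinfo contents; the real file is /usr/vice/etc/cacheinfo.
line="/afs:/usr/vice/cache:50000"
mount_dir=${line%%:*}     # directory at which to mount AFS
rest=${line#*:}
cache_dir=${rest%%:*}     # local disk directory to use as the cache
cache_kb=${rest#*:}       # cache size in kilobyte blocks
echo "mount=$mount_dir cache=$cache_dir size=${cache_kb} KB"
```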

CellServDB
This ASCII file names the database server machines in the local cell and in any foreign cell to which you want to enable access from this machine. (Database server machines are the machines in a cell that run the Authentication, Backup, Protection, and VL Server processes; see Database Server Machines.)

The Cache Manager must be able to reach a cell's database server machines to fetch files from its filespace. Incorrect or missing information in the CellServDB file can slow or completely block access. It is important to update the file whenever a cell's database server machines change.

As the afsd program initializes the Cache Manager, it loads the contents of the file into kernel memory. The Cache Manager does not read the file between reboots, so to incorporate changes to the file into kernel memory, you must reboot the machine. Alternatively, you can issue the fs newcell command to insert the changes directly into kernel memory without changing the file. It can also be convenient to upgrade the file from a central source. For instructions, see Maintaining Knowledge of Database Server Machines.

(The CellServDB file on client machines is not the same as the one kept in the /usr/afs/etc directory on server machines, which lists only the local cell's database server machines. For instructions on maintaining the server CellServDB file, see Maintaining the Server CellServDB File.)

NetInfo
This optional ASCII file lists one or more of the network interface addresses on the client machine. If it exists when the Cache Manager initializes, the Cache Manager uses it as the basis for the list of interfaces that it registers with File Servers. See Managing Multihomed Client Machines.

NetRestrict
This optional ASCII file lists one or more network interface addresses. If it exists when the Cache Manager initializes, the Cache Manager removes the specified addresses from the list of interfaces that it registers with File Servers. See Managing Multihomed Client Machines.

ThisCell
This ASCII file contains a single line that specifies the complete domain-style name of the cell to which the machine belongs. Examples are abc.com and stateu.edu. This value defines the default cell in which the machine's users become authenticated, and in which the command interpreters (for example, the bos command) contact server processes.

The IBM AFS Quick Beginnings explains how to create this file as you install the AFS client functionality. To learn about changing a client machine's cell membership, see Setting a Client Machine's Cell Membership.

In addition to these files, the /usr/vice/etc directory also sometimes contains the following types of files and subdirectories:

Cache-Related Files

A client machine that uses a disk cache must have a local disk directory devoted to the cache. The conventional mount point is /usr/vice/cache, but you can use another partition that has more available space.

Do not delete or directly modify any of the files in the cache directory. Doing so can cause a kernel panic, from which the only way to recover is to reboot the machine. By default, only the local superuser root can read the files directly, by virtue of owning them.

A client machine that uses a memory cache keeps all of the information stored in these files in machine memory instead.

CacheItems
A binary-format file in which the Cache Manager tracks the contents of cache chunks (the V files in the directory, described just below), including the file ID number (fID) and the data version number.

VolumeItems
A binary-format file in which the Cache Manager records the mapping between mount points and the volumes from which it has fetched data. The Cache Manager uses the information when responding to the pwd command, among others.

Vn
A cache chunk file, which expands to a maximum size (by default, 64 KB) to house data fetched from AFS files. The number of Vn files in the cache depends on the cache size among other factors. The n is the index assigned to each file; they are numbered sequentially, but the Cache Manager does not necessarily use them in order or contiguously. If an AFS file is larger than the maximum size for Vn files, the Cache Manager divides it across multiple Vn files.
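Because a file larger than the chunk size is divided across multiple Vn files, the number of chunk files a given AFS file occupies is a simple round-up. The figures below are hypothetical:

```shell
# Illustrative arithmetic (not an AFS command): how many 64 KB chunk
# files a hypothetical 200 KB file occupies, rounding up.
file_kb=200            # hypothetical file size in KB
chunk_kb=64            # default chunk size for a disk cache
chunks=$(( (file_kb + chunk_kb - 1) / chunk_kb ))
echo "A ${file_kb} KB file spans ${chunks} chunk files"
```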

Determining the Cache Type, Size, and Location

This section explains how to configure a memory or disk cache, how to display and set the size of either type of cache, and how to set the location of the cache directory for a disk cache.

The Cache Manager uses a disk cache by default, and it is the preferred type of caching. To configure a memory cache, include the -memcache flag on the afsd command, which is normally invoked in the machine's AFS initialization file. If configured to use a memory cache, the Cache Manager does no disk caching, even if the machine has a disk.

Choosing the Cache Size

Cache size influences the performance of a client machine more directly than perhaps any other cache parameter. The larger the cache, the faster the Cache Manager is likely to deliver files to users. A small cache can impair performance because it increases the frequency at which the Cache Manager must discard cached data to make room for newly requested data. When an application asks for data that has been discarded, the Cache Manager must request it from the File Server, and fetching data across the network is almost always slower than fetching it from the local disk. The Cache Manager never discards data from a file that has been modified locally but not yet stored back to the File Server. If the cache is very small, the Cache Manager possibly cannot find any data to discard. For more information about the algorithm it uses when discarding cached data, see How the Cache Manager Chooses Data to Discard.

The amount of disk or memory you devote to caching depends on several factors. The amount of space available in memory or on the partition housing the disk cache directory imposes an absolute limit. In addition, you cannot allocate more than 95% of the space available on the cache directory's partition to a disk cache. The afsd program exits without starting the Cache Manager and prints an appropriate message to the standard output stream if you violate this restriction. For a memory cache, you must leave enough memory for other processes and applications to run. If you try to allocate more memory than is actually available, the afsd program exits without initializing the Cache Manager and produces the following message on the standard output stream:

   afsd: memCache allocation failure at number KB

where number is how many kilobytes were allocated just before the failure.

Within these hard limits, the factors that determine appropriate cache size include the number of users working on the machine, the size of the files with which they usually work, and (for a memory cache) the number of processes that usually run on the machine. The higher the demand from these factors, the larger the cache needs to be to maintain good performance.

Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on the factors mentioned previously, and is difficult to predict.

Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on memory from other sources on the machine (number of users and processes). Machines running only a few processes can sometimes use a smaller memory cache.

AFS imposes an absolute limit on cache size in some versions. See the IBM AFS Release Notes for the version you are using.

Displaying and Setting the Cache Size and Location

The Cache Manager determines how big to make the cache by reading the /usr/vice/etc/cacheinfo file as it initializes. As directed in the IBM AFS Quick Beginnings, you must create the file before running the afsd program. The file also defines the directory on which to mount AFS (by convention, /afs), and the local disk directory to use for a cache directory.

To change any of the values in the file, log in as the local superuser root. You must reboot the machine to have the new value take effect. For instructions, see To edit the cacheinfo file.

To change the cache size at reboot without editing the cacheinfo file, include the -blocks argument to the afsd command; see the command's reference page in the IBM AFS Administration Reference.

For a disk cache, you can also use the fs setcachesize command to reset the cache size without rebooting. The value you set persists until the next reboot, at which time the cache size returns to the value specified in the cacheinfo file or by the -blocks argument to the afsd command. For instructions, see To change the disk cache size without rebooting.

To display the current cache size and the amount of space the Cache Manager is using at the moment, use the fs getcacheparms command as detailed in To display the current cache size.

To display the cache size set at reboot

  1. Use a text editor or the cat command to display the contents of the /usr/vice/etc/cacheinfo file.
       % cat /usr/vice/etc/cacheinfo
    

To display the current cache size

  1. Issue the fs getcacheparms command on the client machine.
       % fs getcacheparms
    

    where getca is the shortest acceptable abbreviation of getcacheparms.

    The output shows the number of kilobyte blocks the Cache Manager is using as a cache at the moment the command is issued, and the current size of the cache. For example:

       AFS using 13709 of the cache's available 15000 1K byte blocks.
    
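If you need the usage as a percentage (for a monitoring script, say), the two numbers can be pulled from the output with awk. This sketch parses the sample line shown above rather than running the command:

```shell
# Sample fs getcacheparms output; compute the percentage of the
# cache currently in use ($3 = blocks used, $8 = total blocks).
sample="AFS using 13709 of the cache's available 15000 1K byte blocks."
pct=$(printf '%s\n' "$sample" | awk '{ printf "%d", $3 * 100 / $8 }')
echo "cache is ${pct}% full"
```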

To edit the cacheinfo file

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Use a text editor to edit the /usr/vice/etc/cacheinfo file, which has three fields, separated by colons: the directory at which to mount AFS, the local disk directory to use for the cache, and the cache size in kilobyte blocks.

    The following example mounts the AFS filespace at the /afs directory, names /usr/vice/cache as the cache directory, and sets cache size to 50,000 KB:

       /afs:/usr/vice/cache:50000
    

To change the disk cache size without rebooting

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs setcachesize command to set a new disk cache size.
    Note:This command does not work for a memory cache.
       # fs setcachesize <size in 1K byte blocks (0 => reset)>
    

    where

    setca
    Is the shortest acceptable abbreviation of setcachesize.

    size in 1K byte blocks (0 => reset)
    Sets the number of kilobyte blocks to be used for the cache. Specify a positive integer (1024 equals 1 MB), or 0 (zero) to reset the cache size to the value specified in the cacheinfo file.
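Because 1024 blocks equal 1 MB, converting a desired size in megabytes into the block count the command expects is simple arithmetic. This sketch only prints the command it would issue, with a hypothetical 50 MB target:

```shell
# Convert a desired cache size in MB to the 1 KB blocks that
# fs setcachesize expects (1024 blocks = 1 MB). The 50 MB target
# is hypothetical; the command is printed, not run.
desired_mb=50
blocks=$(( desired_mb * 1024 ))
echo "fs setcachesize $blocks   # would set a ${desired_mb} MB cache"
```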

To reset the disk cache size to the default without rebooting

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs setcachesize command to reset the size of the local disk cache (the command does not work for a memory cache). Choose one of the two following options:
       # fs setcachesize 0
       # fs setcachesize -reset

    where

    setca
    Is the shortest acceptable abbreviation of setcachesize.

    0
    Resets the disk cache size to the value in the third field of the /usr/vice/etc/cacheinfo file.

    -reset
    Resets the cache size to the value set at the last reboot.

How the Cache Manager Chooses Data to Discard

When the cache is full and application programs request more data from AFS, the Cache Manager must flush out cache chunks to make room for the data. The Cache Manager considers two factors:

  1. How recently an application last accessed the data.

  2. Whether the chunk is dirty. A dirty chunk contains changes to a file that have not yet been saved back to the permanent copy stored on a file server machine.

The Cache Manager first checks the least-recently used chunk. If it is not dirty, the Cache Manager discards the data in that chunk. If the chunk is dirty, the Cache Manager moves on to check the next least recently used chunk. It continues in this manner until it has created a sufficient number of empty chunks.

Chunks that contain data fetched from a read-only volume are by definition never dirty, so the Cache Manager can always discard them. Normally, the Cache Manager can also find chunks of data fetched from read/write volumes that are not dirty, but a small cache makes it difficult to find enough eligible data. If the Cache Manager cannot find any data to discard, it must return I/O errors to application programs that request more data from AFS. Application programs usually have a means for notifying the user of such errors, but not for revealing their cause.
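The scan just described can be sketched in shell. The chunk names and dirty flags below are invented for illustration; this is not actual Cache Manager code:

```shell
# Illustrative sketch of the discard pass: walk chunks from least- to
# most-recently used and discard the first one that is not dirty.
# Entries are "name:state", ordered LRU-first (hypothetical data).
chunks="V3:dirty V7:clean V1:dirty V5:clean"
victim=""
for entry in $chunks; do
  name=${entry%%:*}
  state=${entry##*:}
  if [ "$state" = "clean" ]; then
    victim=$name     # first non-dirty chunk in LRU order
    break
  fi
done
echo "discard $victim"
```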


Setting Other Cache Parameters with the afsd program

There are only three cache configuration parameters you must set: the mount directory for AFS, the location of the disk cache directory, and the cache size. They correspond to the three fields in the /usr/vice/etc/cacheinfo file, as discussed in Determining the Cache Type, Size, and Location. However, if you want to experiment with fine-tuning cache performance, you can use the arguments on the afsd command to control several other parameters. This section discusses a few of these parameters that have the most direct effect on cache performance. To learn more about the afsd command's arguments, see its reference page in the IBM AFS Administration Reference.

In addition, the AFS initialization script included in the AFS distribution for each system type includes several variables that set several afsd arguments in a way that is suitable for client machines of different sizes and usage patterns. For instructions on using the script most effectively, see the section on configuring the Cache Manager in the IBM AFS Quick Beginnings.

Setting Cache Configuration Parameters

The cache configuration parameters with the most direct effect on cache performance include the following:

For a description of how the Cache Manager determines defaults for number of chunks, chunk size, and number of dcache entries in a disk cache, see Configuring a Disk Cache; for a memory cache, see Controlling Memory Cache Configuration. The instructions also explain how to use the afsd command's arguments to override the defaults.

Configuring a Disk Cache

The default number of cache chunks (Vn files) in a disk cache is calculated by the afsd command to be the greatest of the following:

You can override this value by specifying a positive integer with the -files argument. Consider increasing this value if more than 75% of the Vn files are already used soon after the Cache Manager finishes initializing. Consider decreasing it if only a small percentage of the chunks are used at that point. In any case, never specify a value less than 100, because a smaller value can cause performance problems.

The following example sets the number of Vn files to 2,000:

   /usr/vice/etc/afsd -files 2000
Note:It is conventional to place the afsd command in a machine's AFS initialization file, rather than entering it in a command shell. Furthermore, the values specified in this section are examples only, and are not necessarily suitable for a specific machine.

The default chunk size for a disk cache is 64 KB. In general, the only reason to change it is to adjust to exceptionally slow or fast networks; see Setting Cache Configuration Parameters. You can use the -chunksize argument to override the default. Chunk size must be a power of 2, so provide an integer between 0 (zero) and 30 to be used as an exponent of 2. For example, a value of 10 sets chunk size to 1 KB (2^10 = 1024); a value of 16 equals the default for disk caches (2^16 = 64 KB). Specifying a value of 0 (zero) or greater than 30 returns chunk size to the default. Values less than 10 (1 KB) are not recommended. The following example sets chunk size to 16 KB (2^14):

   /usr/vice/etc/afsd -chunksize 14
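Since the argument is an exponent of 2, it is easy to check what a given value yields before putting it in the initialization file. This sketch assumes only standard shell arithmetic:

```shell
# The -chunksize argument is an exponent of 2; check a few values.
kb() { echo $(( (1 << $1) / 1024 )); }
default_disk=$(kb 16)    # 64 KB, the disk cache default
example=$(kb 14)         # 16 KB, as in the example above
minimum=$(kb 10)         # 1 KB, the smallest recommended size
echo "exponent 16 -> ${default_disk} KB, 14 -> ${example} KB, 10 -> ${minimum} KB"
```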

For a disk cache, the default number of dcache entries duplicated in memory is one-half the number of chunks specified with the -files argument, to a maximum of 2,000 entries. You can use the -dcache argument to change the default, even exceeding 2,000 if you wish. Duplicating more than half the dcache entries in memory is not usually necessary, but sometimes improves performance slightly, because access to memory is faster than access to disk. The following example sets the number to 750:

   /usr/vice/etc/afsd -dcache 750

When configuring a disk cache, you can combine the afsd command's arguments in any way. The main reason for this flexibility is that the setting you specify for disk cache size (in the cacheinfo file or with the -blocks argument) is an absolute maximum limit. You cannot override it by specifying higher values for the -files or -chunksize arguments, alone or in combination. A related reason is that the Cache Manager does not have to reserve a set amount of space on the disk. Vn files (the chunks in a disk cache) are initially zero-length, but can expand up to the specified chunk size and shrink again, as needed. If you set the number of Vn files to such a large value that expanding all of them to the full allowable size exceeds the total cache size, they simply never grow to full size.
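To see whether a given combination over-commits the cache, compare the total potential expansion of the Vn files with the cache size. The numbers below are hypothetical:

```shell
# Illustrative check: with -files 2000 and the default 64 KB chunk
# size, can every Vn file grow to full size inside a 50000 KB cache?
# (All values are hypothetical.)
files=2000
chunk_kb=64
cache_kb=50000
potential=$(( files * chunk_kb ))
if [ "$potential" -gt "$cache_kb" ]; then
  echo "potential ${potential} KB exceeds the ${cache_kb} KB cache; not all chunks can reach full size"
fi
```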

Controlling Memory Cache Configuration

Configuring a memory cache differs from configuring a disk cache in that not all combinations of the afsd command's arguments are allowed. This limitation results from the greater interaction between the configuration parameters in a memory cache than a disk cache. If all combinations were allowed, it would be possible to set the parameters in an inconsistent way. A list of the acceptable and unacceptable combinations follows a discussion of default values.

The default chunk size for a memory cache is 8 KB. In general, the only reason to change it is to adjust to exceptionally slow or fast networks; see Setting Cache Configuration Parameters.

There is no predefined default for number of chunks in a memory cache. The Cache Manager instead calculates the correct number by dividing the total cache size by the chunk size. Recall that for a memory cache, all dcache entries must be in memory. This implies that the number of chunks equals the number of dcache entries in memory, and that there is no default for number of dcache entries (like the number of chunks, it is calculated by dividing the total size by the chunk size).
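The derivation is simple division; the cache size below is a hypothetical -blocks value:

```shell
# For a memory cache the Cache Manager derives the number of chunks
# (and so the number of dcache entries) itself: total size / chunk size.
cache_kb=40000      # hypothetical -blocks value
chunk_kb=8          # memory cache default chunk size
chunks=$(( cache_kb / chunk_kb ))
echo "$chunks chunks, each with a dcache entry in memory"
```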

The following are acceptable combinations of the afsd command's arguments when configuring a memory cache:

The following arguments or combinations explicitly set the number of chunks and dcache entries. It is best not to use them, because they set the cache size indirectly, forcing you to perform a hand calculation to determine the size of the cache. Instead, set the -blocks and -chunksize arguments alone or in combination; in those cases, the Cache Manager determines the number of chunks and dcache entries itself. Because the following combinations are not recommended, no examples are included.

Do not use the following arguments for a memory cache:


Maintaining Knowledge of Database Server Machines

For the users of an AFS client machine to access a cell's AFS filespace and other services, the Cache Manager and other client-side agents must have an accurate list of the cell's database server machines. The affected functions include the following:

To enable a machine's users to access a cell, you must list the names and IP addresses of its database server machines in the /usr/vice/etc/CellServDB file on the machine's local disk. In addition to the machine's home cell, you can list any foreign cells that you want to enable users to access. (To enable access to a cell's filespace, you must also mount its root.cell volume in the local AFS filespace; the conventional location is just under the AFS root directory, /afs. For instructions, see the IBM AFS Quick Beginnings.)

How Clients Use the List of Database Server Machines

As the afsd program runs and initializes the Cache Manager, it reads the contents of the CellServDB file into kernel memory. The Cache Manager does not consult the file again until the machine next reboots. In contrast, the command interpreters for the AFS command suites (such as fs and pts) read the CellServDB file each time they need to contact a database server process.

When a cell's list of database server machines changes, you must change both the CellServDB file and the list in kernel memory to preserve consistent client performance; some commands probably fail if the two lists of machines disagree. One possible method for updating both the CellServDB file and kernel memory is to edit the file and reboot the machine. To avoid needing to reboot, you can instead perform both of the following steps:

  1. Issue the fs newcell command to alter the list in kernel memory directly, making the changes available to the Cache Manager.

  2. Edit the CellServDB file to make the changes available to command interpreters. For a description of the file's format, see The Format of the CellServDB file.
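When adding a cell, the server addresses you pass to fs newcell are the same ones you put in the file, so the command can be derived from a CellServDB-format entry. This sketch only prints the command (running fs newcell requires root on an actual AFS client), and the sample entry repeats the stateu.edu example used elsewhere in this chapter:

```shell
# Pull one cell's server addresses out of a CellServDB-format entry
# (hypothetical sample below) in the form fs newcell expects.
sample='>stateu.edu    #State University cell
138.255.68.93     #serverA.stateu.edu
138.255.68.72     #serverB.stateu.edu'
servers=$(printf '%s\n' "$sample" | awk '!/^>/ { printf "%s%s", sep, $1; sep=" " }')
echo "fs newcell -name stateu.edu -servers $servers"
```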

The consequences of missing or incorrect information in the CellServDB file or kernel memory are as follows:

The Format of the CellServDB file

When editing the /usr/vice/etc/CellServDB file, you must use the correct format for cell and machine entries. Each cell has a separate entry. The first line has the following format:

   >cell_name      #organization

where cell_name is the cell's complete Internet domain name (for example, abc.com) and organization is an optional field that follows any number of spaces and the number sign (#) and can name the organization to which the cell corresponds (for example, the ABC Corporation). After the first line comes a separate line for each database server machine. Each line has the following format:

   IP_address   #machine_name

where IP_address is the machine's IP address in dotted decimal format (for example, 192.12.105.3). Following any number of spaces and the number sign (#) is machine_name, the machine's fully-qualified hostname (for example, db1.abc.com). In this case, the number sign does not indicate a comment: machine_name is a required field.

The order in which the cells appear is not important, but it is convenient to put the client machine's home cell first. Do not include any blank lines in the file, not even after the last entry.

The following example shows entries for two cells, each of which has three database server machines:

   >abc.com       #ABC Corporation (home cell)
   192.12.105.3      #db1.abc.com
   192.12.105.4      #db2.abc.com
   192.12.105.55     #db3.abc.com
   >stateu.edu    #State University cell
   138.255.68.93     #serverA.stateu.edu
   138.255.68.72     #serverB.stateu.edu
   138.255.33.154    #serverC.stateu.edu
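A quick sanity check of the format rules above (cell lines begin with >, server lines carry the required #hostname field, and blank lines are forbidden) can be scripted with awk. This is an illustrative helper, not an AFS tool:

```shell
# Minimal format check for CellServDB-style input: every line must be
# a cell line (">name  #org") or a server line ("IP  #hostname");
# anything else, including a blank line, fails.
check_cellservdb() {
  awk '
    /^>/                      { next }   # cell entry
    /^[0-9.]+[ \t]+#[^ \t]/   { next }   # server entry with required #hostname
                              { bad++ }
    END { exit bad ? 1 : 0 }
  '
}
sample='>abc.com       #ABC Corporation
192.12.105.3      #db1.abc.com
192.12.105.4      #db2.abc.com'
if printf '%s\n' "$sample" | check_cellservdb; then
  result=OK
else
  result=BAD
fi
echo "sample file: $result"
```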

Maintaining the Client CellServDB File

Because a correct entry in the CellServDB file is vital for consistent client performance, you must also update the file on each client machine whenever a cell's list of database server machines changes (for instance, when you follow the instructions in the IBM AFS Quick Beginnings to add or remove a database server machine). To facilitate the client updates, you can use the package program, which copies files from a central source in AFS to the local disk of client machines. It is conventional to invoke the package program in a client machine's AFS initialization file so that it runs as the machine reboots, but you can also issue the package command at any time. For instructions, see Running the package program.

If you use the package program, the conventional location for your cell's central source CellServDB file is /afs/cell_name/common/etc/CellServDB, where cell_name is your cell name.

Creating a symbolic or hard link from /usr/vice/etc/CellServDB to a central source file in AFS is not a viable option. The afsd program reads the file into kernel memory before the Cache Manager is completely initialized and able to access AFS.

Because every client machine has its own copy of the CellServDB file, you can in theory make the set of accessible cells differ on various machines. In most cases, however, it is best to maintain consistency between the files on all client machines in the cell: differences between machines are particularly confusing if users commonly use a variety of machines rather than just one.

The AFS Product Support group maintains a central CellServDB file that includes all cells that have agreed to make their database server machines accessible to other AFS cells. It is advisable to check this file periodically for updated information. See Making Your Cell Visible to Others.

An entry in the local CellServDB is one of the two requirements for accessing a cell. The other is that the cell's root.cell volume is mounted in the local filespace, by convention as a subdirectory of the /afs directory. For instructions, see To create a cellular mount point.

Note:The /usr/vice/etc/CellServDB file on a client machine is not the same as the /usr/afs/etc/CellServDB file on the local disk of a file server machine. The server version lists only the database server machines in the server machine's home cell, because server processes never need to contact foreign cells. It is important to update both types of CellServDB file on all machines in the cell whenever there is a change to your cell's database server machines. For more information about maintaining the server version of the CellServDB file, see Maintaining the Server CellServDB File.

To display the /usr/vice/etc/CellServDB file

  1. Use a text editor or the cat command to display the contents of the /usr/vice/etc/CellServDB file. By default, the mode bits on the file permit anyone to read it.
       % cat /usr/vice/etc/CellServDB
    
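As a sketch of what the file contains (all cell names and addresses here are hypothetical), the fragment below writes a sample in CellServDB format and extracts the cell names from it; a real /usr/vice/etc/CellServDB file can be inspected the same way:

```shell
#!/bin/sh
# Write a hypothetical CellServDB excerpt to a scratch file; real entries
# live in /usr/vice/etc/CellServDB. A cell entry begins with '>' followed
# immediately by the cell name, then one line per database server machine.
cat > /tmp/CellServDB.sample <<'EOF'
>abc.com            #Example Corporation cell (hypothetical)
192.12.105.3        #db1.abc.com
192.12.105.4        #db2.abc.com
>stateu.edu         #State University cell (hypothetical)
138.255.68.93       #serverA.stateu.edu
EOF

# Print just the cell names: strip the leading '>' and take the first field.
awk '/^>/ { sub(/^>/, ""); print $1 }' /tmp/CellServDB.sample
```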

To display the list of database server machines in kernel memory

  1. Issue the fs listcells command.
       % fs listcells [&] 
    

    where listc is the shortest acceptable abbreviation of listcells.

    To have your shell prompt return immediately, include the ampersand (&), which makes the command run in the background. It can take a while to generate the complete output because the kernel stores database server machines' IP addresses only, and the fs command interpreter has the cell's name resolution service (such as the Domain Name Service or a local host table) translate them into hostnames. You can halt the command at any time by issuing an interrupt signal such as Ctrl-c.

    The output includes a single line for each cell, in the following format:

       Cell cell_name on hosts list_of_hostnames.
    

    The name service sometimes returns hostnames in uppercase letters, and if it cannot resolve a name at all, it returns its IP address. The following example illustrates all three possibilities:

       % fs listcells
          .
          .
       Cell abc.com on hosts db1.abc.com db2.abc.com db3.abc.com
       Cell stateu.edu on hosts SERVERA.STATEU.EDU SERVERB.STATEU.EDU 
    			    SERVERC.STATEU.EDU
       Cell ghi.org on hosts 191.255.64.111 191.255.64.112
          .
          .
    

To change the list of a cell's database server machines in kernel memory

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. If you use a central copy of the CellServDB file as a source for client machines, verify that its directory's ACL grants you the l (lookup), r (read), and w (write) permissions. The conventional directory is /afs/cell_name/common/etc. If necessary, issue the fs listacl command, which is fully described in Displaying ACLs.
       # fs listacl [<dir/file path>]
    

  3. Issue the fs newcell command to add or change a cell's entry in kernel memory. Repeat the command for each cell.
    Note:You cannot use this command to remove a cell's entry completely from kernel memory. In the rare cases when you urgently need to prevent access to a specific cell, you must edit the CellServDB file and reboot the machine.
       # fs newcell <cell name> <primary servers>+ \
                    [-linkedcell <linked cell name>]
    

    where

    n
    Is the shortest acceptable abbreviation of newcell.

    cell name
    Specifies the complete Internet domain name of the cell for which to record a new list of database server machines.

    primary servers
    Specifies the fully-qualified hostname or IP address in dotted-decimal format for each database server machine in the cell. The list you provide completely replaces the existing list.

    -linkedcell
    Specifies the complete Internet domain name of the AFS cell to link to a DCE cell for the purposes of DFS fileset location. You can use this argument if the machine's AFS users access DFS via the AFS/DFS Migration Toolkit Protocol Translator. For instructions, see the IBM AFS/DFS Migration Toolkit Administration Guide and Reference.

  4. Add or edit the cell's entry in the local /usr/vice/etc/CellServDB file, using one of the following three methods. In each case, be sure to obey the formatting requirements described in The Format of the CellServDB file.
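Because steps 3 and 4 must stay in agreement, one approach is to derive the fs newcell commands from the file itself. The following sketch uses a hypothetical sample standing in for your central source copy (conventionally /afs/cell_name/common/etc/CellServDB); it only prints the commands, which you can review and then run as root:

```shell
#!/bin/sh
# Sketch: regenerate "fs newcell" commands from a CellServDB-format file.
# The sample below stands in for the central copy; addresses are hypothetical.
cat > /tmp/CellServDB.central <<'EOF'
>abc.com            #Example Corporation cell
192.12.105.3        #db1.abc.com
192.12.105.4        #db2.abc.com
EOF

# Accumulate server addresses under each '>cell' line, then print one
# "fs newcell cell addr addr ..." command per cell.
awk '
    /^>/     { if (cell != "") print "fs newcell " cell servers
               sub(/^>/, ""); cell = $1; servers = "" }
    /^[0-9]/ { servers = servers " " $1 }
    END      { if (cell != "") print "fs newcell " cell servers }
' /tmp/CellServDB.central
```

Pointing the script at the real central file and piping its output to sh (as root) loads the entries into kernel memory; installing the same file as /usr/vice/etc/CellServDB keeps the on-disk copy consistent.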

Determining if a Client Can Run Setuid Programs

A setuid program is one whose binary file has the UNIX setuid mode bit turned on. While a setuid program runs, the user who initialized it assumes the local identity (UNIX UID) of the binary file's owner, and so is granted the permissions in the local file system that pertain to the owner. Most commonly, the issuer's assumed identity (often referred to as effective UID) is the local superuser root.

AFS does not recognize effective UID: if a setuid program accesses AFS files and directories, it uses the current AFS identity of the user who initialized the program, not of the program's owner. Nevertheless, it can be useful to store setuid programs in AFS for use on more than one client machine. AFS enables a client machine's administrator to determine whether the local Cache Manager allows setuid programs to run or not.

By default, the Cache Manager allows programs from its home cell to run with setuid permission, but denies setuid permission to programs from foreign cells. A program belongs to the same cell as the file server machine that houses the volume in which the file resides, as specified in the file server machine's /usr/afs/etc/ThisCell file. The Cache Manager determines its own home cell by reading the /usr/vice/etc/ThisCell file at initialization.

To change a cell's setuid status with respect to the local machine, become the local superuser root and issue the fs setcell command. To determine a cell's current setuid status, use the fs getcellstatus command.

When you issue the fs setcell command, you directly alter a cell's setuid status as recorded in kernel memory, so rebooting the machine is not necessary. However, nondefault settings do not persist across reboots of the machine unless you add the appropriate fs setcell command to the machine's AFS initialization file.

Only members of the system:administrators group can turn on the setuid mode bit on an AFS file or directory. When the setuid mode bit is turned on, the UNIX ls -l command displays the third user mode bit as an s instead of an x, but for an AFS file or directory, the s appears only if setuid permission is enabled for the cell in which the file resides.

To determine a cell's setuid status

  1. Issue the fs getcellstatus command to check the setuid status of each desired cell.
       % fs getcellstatus <cell name>
    

    where

    getce
    Is the shortest acceptable abbreviation of getcellstatus.

    cell name
    Names each cell for which to report setuid status. Provide the complete Internet domain name or a shortened form that distinguishes it from the other cells listed in the local /usr/vice/etc/CellServDB file.

The output reports the setuid status of each cell:

   Cell cell_name status: setuid allowed
   Cell cell_name status: no setuid allowed

To change a cell's setuid status

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs setcell command to change the setuid status of the cell.
       # fs setcell <cell name>+ [-suid] [-nosuid]
    

    where

    setce
    Is the shortest acceptable abbreviation of setcell.

    cell name
    Names each cell for which to change setuid status as specified by the -suid or -nosuid flag. Provide each cell's complete Internet domain name or a shortened form that distinguishes it from the other cells listed in the local /usr/vice/etc/CellServDB file.

    -suid
    Enables programs from each specified cell to execute with setuid permission. Provide this flag or the -nosuid flag, or omit both to disable setuid permission for each cell.

    -nosuid
    Prevents programs from each specified cell from executing with setuid permission. Provide this flag or the -suid flag, or omit both to disable setuid permission for each cell.

Setting the File Server Probe Interval

The Cache Manager periodically sends a probe to server machines to verify that they are still accessible. Specifically, it probes the database server machines in its cell and those file servers that house data it has cached.

If a server process does not respond to a probe, the client machine assumes that it is inaccessible. By default, the interval between probes is three minutes, so it can take up to three minutes for a client to recognize that a server process is once again accessible after it was inaccessible.

To adjust the probe interval, include the -interval argument to the fs checkservers command while logged in as the local superuser root. The new interval setting persists until you again issue the command or reboot the machine, at which time the setting returns to the default. To preserve a nondefault setting across reboots, include the appropriate fs checkservers command in the machine's AFS initialization file.

To set a client's file server probe interval

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs checkservers command with the -interval argument.

       # fs checkservers -interval <seconds between probes>
    

    where

    checks
    Is the shortest acceptable abbreviation of checkservers.

    -interval
    Specifies the number of seconds between probes. Provide an integer value greater than zero.

Setting a Client Machine's Cell Membership

Each client machine belongs to a particular cell, as named in the /usr/vice/etc/ThisCell file on its local disk. The machine's cell membership determines three defaults important to users of the machine:

To display a client machine's cell membership

  1. Use a text editor or the cat command to display the contents of the /usr/vice/etc/ThisCell file.
       % cat /usr/vice/etc/ThisCell
    

To set a client machine's cell membership

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Using a text editor, replace the cell name in the /usr/vice/etc/ThisCell file.

  3. (Optional.) Reboot the machine to enable the Cache Manager to use the new cell name immediately; the appropriate command depends on the machine's system type. The klog program, AFS-modified login utilities, and the AFS command interpreters use the new cell name the next time they are invoked, so no reboot is necessary on their account.
       # sync
       
       # shutdown
    

Forcing the Update of Cached Data

AFS's callback mechanism normally guarantees that the Cache Manager provides the most current version of a file or directory to the application programs running on its machine. However, you can force the Cache Manager to discard (flush) cached data so that the next time an application program requests it, the Cache Manager fetches the latest version available at the File Server.

You can control how many file system elements to flush at a time:

In addition to callbacks, the Cache Manager has a mechanism for tracking other kinds of possible changes, such as changes in a volume's location. If a volume moves and the Cache Manager has not accessed any data in it for a long time, the Cache Manager's volume location record can be wrong. To resynchronize it, use the fs checkvolumes command. When you issue the command, the Cache Manager creates a new table of mappings between volume names, ID numbers, and locations. This forces the Cache Manager to reference newly relocated and renamed volumes before it can provide data from them.

It is also possible for information about mount points to become corrupted in the cache. Symptoms of a corrupted mount point include garbled output from the fs lsmount command, and failed attempts to change directory to or list the contents of a mount point. Use the fs flushmount command to discard a corrupted mount point. The Cache Manager must refetch the mount point the next time it crosses it in a pathname. (The Cache Manager periodically refreshes cached mount points, but the only other way to discard them immediately is to reinitialize the Cache Manager by rebooting the machine.)

To flush certain files or directories

  1. Issue the fs flush command.
       % fs flush [<dir/file path>+]
    

    where

    flush
    Must be typed in full.

    dir/file path
    Names each file or directory structure to flush from the cache. Omit this argument to flush the current working directory. Flushing a directory structure does not flush any files or subdirectories cached from it.

To flush all data from a volume

  1. Issue the fs flushvolume command.
      % fs flushvolume [<dir/file path>+]
    

    where

    flushv
    Is the shortest acceptable abbreviation of flushvolume.

    dir/file path
    Names a file or directory from each volume to flush from the cache. The Cache Manager flushes everything in the cache that it has fetched from the same volume. Omit this argument to flush all cached data fetched from the volume that contains the current working directory.

To force the Cache Manager to notice other volume changes

  1. Issue the fs checkvolumes command.
       % fs checkvolumes
    

    where checkv is the shortest acceptable abbreviation of checkvolumes.

The following message confirms that the command completed successfully:

   All volumeID/name mappings checked.

To flush one or more mount points

  1. Issue the fs flushmount command.
       % fs flushmount [<dir/file path>+]
    

    where

    flushm
    Is the shortest acceptable abbreviation of flushmount.

    dir/file path
    Names each mount point to flush from the cache. Omit this argument to flush the current working directory. Files or subdirectories cached from the associated volume are unaffected.

Maintaining Server Preference Ranks

As mentioned in the introduction to this chapter, AFS uses client-side data caching and callbacks to reduce the amount of network traffic in your cell. The Cache Manager also tries to make its use of the network as efficient as possible by assigning preference ranks to server machines based on their network proximity to the local machine. The ranks bias the Cache Manager to fetch information from the server machines that are on its own subnetwork or network rather than on other networks, if possible. Reducing the network distance that data travels between client and server machine tends to reduce network traffic and speed the Cache Manager's delivery of data to applications.

The Cache Manager stores two separate sets of preference ranks in kernel memory. The first set of ranks applies to machines that run the Volume Location (VL) Server process, hereafter referred to as VL Server machines. The second set of ranks applies to machines that run the File Server process, hereafter referred to as file server machines. This section explains how the Cache Manager sets default ranks, how to use the fs setserverprefs command to change the defaults or set new ranks, and how to use the fs getserverprefs command to display the current set of ranks.

How the Cache Manager Sets Default Ranks

As the afsd program initializes the Cache Manager, it assigns a preference rank of 10,000 to each of the VL Server machines listed in the local /usr/vice/etc/CellServDB file. It then randomizes the ranks by adding an integer randomly chosen from the range 0 (zero) to 126. It avoids assigning the same rank to machines in one cell, but it is possible for machines from different cells to have the same rank. This does not present a problem in use, because the Cache Manager compares the ranks of only one cell's database server machines at a time. Although AFS supports the use of multihomed database server machines, the Cache Manager only uses the single address listed for each database server machine in the local /usr/vice/etc/CellServDB file. Only Ubik can take advantage of a multihomed database server machine's multiple interfaces.

The Cache Manager assigns preference ranks to a file server machine when it obtains the server's VLDB record from the VL Server, the first time that it accesses a volume that resides on the machine. If the machine is multihomed, the Cache Manager assigns a distinct rank to each of its interfaces (up to the number of interfaces that the VLDB can store for each machine, which is specified in the IBM AFS Release Notes). The Cache Manager compares the interface's IP address to the local machine's address and applies the following algorithm:

    If the local machine is itself the file server machine, the base rank for that interface is 5,000.
    If the server interface is on the same subnetwork as the local machine, its base rank is 20,000.
    If the server interface is on the same network as the local machine, its base rank is 30,000.
    If the server interface is on a different network than the local machine, or the Cache Manager cannot obtain network information about it, its base rank is 40,000.

If the client machine has only one interface, the Cache Manager compares it to the server interface's IP address and sets a rank according to the algorithm. If the client machine is multihomed, the Cache Manager compares each of the local interface addresses to the server interface, and assigns to the server interface the lowest rank that results from comparing it to all of the client interfaces.

After assigning a base rank to a file server machine interface, the Cache Manager adds to it a number randomly chosen from the range 0 (zero) to 15. As an example, a file server machine interface in the same subnetwork as the local machine receives a base rank of 20,000, but the Cache Manager records the actual rank as an integer between 20,000 and 20,015. This process reduces the number of interfaces that have exactly the same rank. As with VL Server machine ranks, it is possible for file server machine interfaces from foreign cells to have the same rank as interfaces in the local cell, but this does not present a problem. Only the relative ranks of the interfaces that house a specific volume are relevant, and AFS supports storage of a volume in only one cell at a time.

How the Cache Manager Uses Preference Ranks

Each preference rank pairs an interface's IP address with an integer that can range from 1 to 65,534. A lower rank (lower number) indicates a stronger preference. Once set, a rank persists until the machine reboots, or until you use the fs setserverprefs command to change it.

The Cache Manager uses VL Server machine ranks when it needs to fetch volume location information from a cell. It compares the ranks for the cell's VL Server machines and attempts to contact the VL Server process on the machine with the best (lowest integer) rank. If it cannot reach that VL Server, it tries to contact the VL Server with the next best rank, and so on. If all of a cell's VL Server machines are inaccessible, the Cache Manager cannot fetch data from the cell.

Similarly, when the Cache Manager needs to fetch data from a volume, it compares the ranks for the interfaces of machines that house the volume, and attempts to contact the interface that has the best rank. If it cannot reach the fileserver process via that interface, it tries to contact the interface with the next best integer rank, and so on. If it cannot reach any of the interfaces for machines that house the volume, it cannot fetch data from the volume.

Displaying and Setting Preference Ranks

To display the file server machine ranks that the Cache Manager is using, use the fs getserverprefs command. Include the -vlservers flag to display VL Server machine ranks instead. By default, the output appears on the standard output stream (stdout), but you can write it to a file instead by including the -file argument.

The Cache Manager stores IP addresses rather than hostnames in its kernel list of ranks, but by default the output identifies interfaces by hostname after calling a translation routine that refers to either the cell's name service (such as the Domain Name Server) or the local host table. If an IP address appears in this case, it is because the translation attempt failed. To bypass the translation step and display IP addresses rather than hostnames, include the -numeric flag. This can significantly speed up the output.

You can use the fs setserverprefs command to reset an existing preference rank, or to set the initial rank of a file server machine interface or VL Server machine for which the Cache Manager has no rank. The ranks you set persist until the machine reboots or until you issue the fs setserverprefs command again. To make a rank persist across a reboot, place the appropriate fs setserverprefs command in the machine's AFS initialization file.

As with default ranks, the Cache Manager adds a randomly chosen integer to each rank range that you assign. For file server machine interfaces, the randomizing number is from the range 0 (zero) to 15; for VL Server machines, it is from the range 0 (zero) to 126. For example, if you assign a rank of 15,000 to a file server machine interface, the Cache Manager stores an integer between 15,000 to 15,015.
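The arithmetic can be sketched in shell (an illustration of the rule above, not actual Cache Manager code; the 0 to 15 offset range applies to file server machine interfaces):

```shell
#!/bin/sh
# Illustrate how an assigned base rank maps to the rank actually stored.
BASE=15000                                                # rank given to fs setserverprefs
OFFSET=$(awk 'BEGIN { srand(); print int(rand() * 16) }') # random integer, 0 through 15
STORED=$((BASE + OFFSET))
echo "stored rank: $STORED"                               # somewhere in 15000-15015
```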

To assign VL Server machine ranks, list them after the -vlserver argument to the fs setserverprefs command.

To assign file server machine ranks, use one or more of the following three methods:

  1. List them after the -servers argument on the command line.

  2. Record them in a file and name it with the -file argument. You can easily generate a file with the proper format by including the -file argument to the fs getserverprefs command.

  3. Provide them via the standard input stream, by including the -stdin flag. This enables you to feed in values directly from a command or script that generates preferences using an algorithm appropriate for your cell. It must generate them in the proper format, with one or more spaces between each pair and between the two parts of the pair. The AFS distribution does not include such a script, so you must write one if you want to use this method.

You can combine any of the -servers, -file, and -stdin options on the same command line if you wish. If more than one of them specifies a rank for the same interface, the one assigned with the -servers argument takes precedence. You can also provide the -vlservers argument on the same command line to set VL Server machine ranks at the same time as file server machine ranks.

The fs command interpreter does not verify hostnames or IP addresses, and so willingly stores ranks for hostnames and addresses that don't actually exist. The Cache Manager never uses such ranks unless the VLDB record for a server machine records the same incorrect information.
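Such a generator can be very simple. The hypothetical sketch below ranks interfaces on one local subnet at 20,000 and all others at 40,000 (the subnet prefix, sample addresses, and policy are assumptions; adapt them to your cell's topology):

```shell
#!/bin/sh
# Emit "interface rank" pairs in the format fs setserverprefs -stdin expects.
LOCAL_NET=192.12.106      # hypothetical local subnet prefix

# Sample input; in practice, feed the real list of server interface addresses.
printf '192.12.106.120\n191.255.64.111\n' |
while read -r addr; do
    case "$addr" in
        "$LOCAL_NET".*) echo "$addr 20000" ;;   # same subnetwork: prefer
        *)              echo "$addr 40000" ;;   # everything else
    esac
done
```

A script built on this pattern could then be piped to the command, for example: genprefs | fs setserverprefs -stdin (run as the local superuser root).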

To display server preference ranks

  1. Issue the fs getserverprefs command to display the Cache Manager's preference ranks for file server machines or VL Server machines.
       % fs getserverprefs [-file <output to named file>] [-numeric] [-vlservers]
    

    where

    gp
    Is an acceptable alias for getserverprefs (gets is the shortest acceptable abbreviation).

    -file
    Specifies the pathname of the file to which to write the list of ranks. Omit this argument to display the list on the standard output stream (stdout).

    -numeric
    Displays the IP address, rather than the hostname, of each ranked machine interface. Omit this flag to have the addresses translated into hostnames, which takes longer.

    -vlservers
    Displays ranks for VL Server machines rather than file server machines.

    The following example displays file server machine ranks. The -numeric flag is not used, so the appearance of an IP address indicates that it is not currently possible to translate it to a hostname.

       % fs gp
       fs5.abc.com         20000
       fs1.abc.com         30014
       server1.stateu.edu  40011
       fs3.abc.com         20001
       fs4.abc.com         30001
       192.12.106.120      40002
       192.12.106.119      40001
          .
          .
    

To set server preference ranks

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs setserverprefs command to set the Cache Manager's preference ranks for one or more file server machines or VL Server machines.
       # fs setserverprefs [-servers <fileserver names and ranks>+]  \
                           [-vlservers <VL server names and ranks>+]  \
                           [-file <input from named file>] [-stdin]
    

    where

    sp
    Is an acceptable alias for setserverprefs (sets is the shortest acceptable abbreviation).

    -servers
    Specifies one or more pairs of file server machine interface and rank. Identify each interface by its fully-qualified hostname or IP address in dotted decimal format. Acceptable ranks are the integers from 1 to 65534. Separate the parts of a pair, and the pairs from one another, with one or more spaces.

    -vlservers
    Specifies one or more pairs of VL Server machine and rank. Identify each machine by its fully-qualified hostname or IP address in dotted decimal format. Acceptable ranks are the integers from 1 to 65534.

    -file
    Specifies the pathname of a file that contains one or more pairs of file server machine interface and rank. Place each pair on its own line in the file. Use the same format for interfaces and ranks as with the -servers argument.

    -stdin
    Indicates that pairs of file server machine interface and rank are being provided via the standard input stream (stdin). The program or script that generates the pairs must format them in the same manner as for the -servers argument.

Managing Multihomed Client Machines

The File Server can choose the interface to which to send a message when it initiates communication with the Cache Manager on a multihomed client machine (one with more than one network interface and IP address). If that interface is inaccessible, it automatically switches to an alternate. This improves AFS performance, because it means that the outage of an interface does not interrupt communication between File Server and Cache Manager.

The File Server can choose the client interface when it sends two types of messages:

(The File Server does not choose which client interface to respond to when filling a Cache Manager's request for AFS data. In that case, it always responds to the client interface via which the Cache Manager sent the request.)

The Cache Manager compiles the list of eligible interfaces on its client machine automatically as it initializes, and records them in kernel memory. When the Cache Manager first establishes a connection with the File Server, it sends along the list of interface addresses. The File Server records the addresses, and uses the one at the top of the list when it needs to break a callback or send a ping to the Cache Manager. If that interface is inaccessible, the File Server simultaneously sends a message to all of the other interfaces in the list. Whichever interface replies first is the one to which the File Server sends future messages.

You can control which addresses the Cache Manager registers with File Servers by listing them in two files in the /usr/vice/etc directory on the client machine's local disk: NetInfo and NetRestrict. If the NetInfo file exists when the Cache Manager initializes, the Cache Manager uses its contents as the basis for the list of interfaces. Otherwise, the Cache Manager uses the list of interfaces configured with the operating system. It then removes from the list any addresses that appear in the /usr/vice/etc/NetRestrict file, if it exists. The Cache Manager records the resulting list in kernel memory.

You can also use the fs setclientaddrs command to change the list of addresses stored in the Cache Manager's kernel memory, without rebooting the client machine. The list of addresses you provide on the command line completely replaces the current list in kernel memory. The changes you make persist only until the client machine reboots, however. To preserve the revised list across reboots, list the interfaces in the NetInfo file (and if appropriate, the NetRestrict file) in the local /usr/vice/etc directory. (You can also place the appropriate fs setclientaddrs command in the machine's AFS initialization script, but that is less efficient: by the time the Cache Manager reads the command in the script, it has already compiled a list of interfaces.)

To display the list of addresses that the Cache Manager is currently registering with File Servers, use the fs getclientaddrs command.

Keep the following in mind when you change the NetInfo or NetRestrict file, or issue the fs getclientaddrs or fs setclientaddrs commands:

To create or edit the client NetInfo file

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Using a text editor, open the /usr/vice/etc/NetInfo file. Place one IP address in dotted decimal format (for example, 192.12.107.33) on each line. On the first line, put the address that you want each File Server to use initially. The order of the remaining machines does not matter, because if an RPC to the first interface fails, the File Server simultaneously sends RPCs to all of the other interfaces in the list. Whichever interface replies first is the one to which the File Server then sends pings and RPCs to break callbacks.

  3. If you want the Cache Manager to start using the revised list immediately, either reboot the machine, or use the fs setclientaddrs command to create the same list of addresses in kernel memory directly.

To create or edit the client NetRestrict file

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Using a text editor, open the /usr/vice/etc/NetRestrict file. Place one IP address in dotted decimal format on each line. The order of the addresses is not significant. Use the value 255 as a wildcard that represents all possible addresses in that field. For example, the entry 192.12.105.255 indicates that the Cache Manager does not register any of the addresses in the 192.12.105 subnet.

  3. If you want the Cache Manager to start using the revised list immediately, either reboot the machine, or use the fs setclientaddrs command to set a list of addresses that does not include the prohibited ones.
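As an illustration, suppose a client machine has interfaces 192.12.107.33, 192.12.108.57, and 192.12.105.4 (all hypothetical), and you want File Servers to try 192.12.107.33 first while never registering anything on the 192.12.105 subnet. The /usr/vice/etc/NetInfo file would contain:

```
192.12.107.33
192.12.108.57
```

and the /usr/vice/etc/NetRestrict file would contain:

```
192.12.105.255
```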

To display the list of addresses from kernel memory

  1. Issue the fs getclientaddrs command.
       % fs getclientaddrs 
    

    where gc is an acceptable alias for getclientaddrs (getcl is the shortest acceptable abbreviation).

The output lists each IP address on its own line, in dotted decimal format.

To set the list of addresses in kernel memory

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs setclientaddrs command to replace the list of addresses currently in kernel memory with a new list.
       # fs setclientaddrs [-address <client network interfaces>+]
    

    where

    sc
    Is an acceptable alias for setclientaddrs (setcl is the shortest acceptable abbreviation).

    -address
    Specifies one or more IP addresses in dotted decimal format (hostnames are not acceptable). Separate each address with one or more spaces.

Controlling the Display of Warning and Informational Messages

By default, the Cache Manager generates two types of warning and informational messages:

You can use the fs messages command to control whether the Cache Manager displays either type of message, both types, or neither. It is best not to disable messages completely, because they provide useful information.

If you want to monitor Cache Manager status and performance more actively, you can use the afsmonitor program to collect an extensive set of statistics (it also gathers File Server statistics). If you experience performance problems, you can use the fstrace suite of commands to gather a low-level trace of Cache Manager operations, which the AFS Support and Development groups can analyze to help solve your problem. To learn about both utilities, see Monitoring and Auditing AFS Performance.

To control the display of warning and status messages

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs messages command, using the -show argument to specify the type of messages to be displayed.
       # fs messages -show <user|console|all|none>
    

    where

    me
    Is the shortest acceptable abbreviation of messages.

    -show
    Specifies the types of messages to display. Choose one of the following values:

    user
    Sends user messages to user screens.

    console
    Sends console messages to the console.

    all
    Sends user messages to user screens and console messages to the console (the default if the -show argument is omitted).

    none
    Disables messages completely.

Displaying and Setting the System Type Name

The Cache Manager stores the system type name of the local client machine in kernel memory. It reads in the default value from a hardcoded definition in the AFS client software.

The Cache Manager uses the system name as a substitute for the @sys variable in AFS pathnames. The variable is useful when creating a symbolic link from the local disk to an AFS directory that houses binaries for the client machine's system type. Because the @sys variable automatically steers the Cache Manager to the appropriate directory, you can create the same symbolic link on client machines of different system types. (You can even automate the creation operation by using the package utility described in Configuring Client Machines with the package Program.) The link also remains valid when you upgrade the machine to a new system type.

Configuration is simplest if you use the system type names that AFS assigns. For a list, see the IBM AFS Release Notes.
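As a rough illustration of the substitution, the following simulates how a pathname containing @sys resolves on a machine whose system name is sun4x_56. The cell name and paths are hypothetical, and in practice the Cache Manager performs this expansion inside the kernel, not in a shell.

```shell
# Illustrative only: simulate the Cache Manager's @sys substitution.
sysname="sun4x_56"                       # hypothetical value reported by `fs sysname`
path="/afs/example.com/@sys/usr/afsws"   # hypothetical pathname in a cell

expanded=$(printf '%s\n' "$path" | sed "s/@sys/$sysname/")
echo "$expanded"
```

A symbolic link such as /usr/afsws pointing at the path above therefore resolves to the sun4x_56 binaries on this machine, and to a different directory on a machine of another system type.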

To display the system name stored in kernel memory, use the sys or fs sysname command. To change the name, add the latter command's -newsys argument.

To display the system type name

  1. Issue the fs sysname or sys command.
       % fs sysname 
       
       % sys
    

The output of the fs sysname command has the following format:

   Current sysname is 'system_name'

The sys command displays the system_name string with no other text.
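When scripting against the fs sysname output format, the quoted name can be extracted with sed, as in the following sketch. The output line is a hypothetical captured value.

```shell
# Parse the machine's system type out of the `fs sysname` output format.
line="Current sysname is 'sun4x_56'"    # hypothetical output captured from the command

name=$(printf '%s\n' "$line" | sed "s/^Current sysname is '\(.*\)'$/\1/")
echo "$name"
```

The sys command needs no such parsing, since it prints the bare name.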

To change the system type name

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs sysname command, using the -newsys argument to specify the new name.
       # fs sysname <new sysname>
    

    where

    sys
    Is the shortest acceptable abbreviation of sysname.

    new sysname
    Specifies the new system type name.

Enabling Asynchronous Writes

By default, the Cache Manager writes all data to the File Server immediately and synchronously when an application program closes a file. That is, the close system call does not return until the Cache Manager has actually written all of the cached data from the file back to the File Server. You can enable the Cache Manager to write files asynchronously by specifying the number of kilobytes of a file that can remain to be written to the File Server when the Cache Manager returns control to the application.

Enabling asynchronous writes can be helpful to users who commonly work with very large files, because it usually means that the application appears to perform faster. However, it introduces some complications. It is best not to enable asynchronous writes unless the machine's users are sophisticated enough to understand the potential problems and how to avoid them. The complications include the following:

When you enable asynchronous writes by issuing the fs storebehind command, you set the number of kilobytes of a file that can still remain to be written to the File Server when the Cache Manager returns control to the application program. You can apply the setting either to all files manipulated by applications running on the machine, or only to certain files:

To set the default store asynchrony

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the fs storebehind command with the -allfiles argument.
       # fs storebehind -allfiles  <new default (KB)> [-verbose]
    

    where

    st
    Is the shortest acceptable abbreviation of storebehind.

    -allfiles
    Sets the number of kilobytes of data that can remain to be written to the File Server when the Cache Manager returns control to the application that closed a file.

    -verbose
    Produces a message that confirms the new setting.
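To make the semantics of the setting concrete: with an asynchrony of n kilobytes, the close system call returns once all but (at most) the final n kilobytes of the file have been written to the File Server. The following sketch computes the synchronously written portion for hypothetical values.

```shell
# With a hypothetical -allfiles value of 1024 KB, a close on a 10 MB file
# returns after all but the last 1024 KB has been written synchronously.
asynchrony_kb=1024
file_kb=10240

sync_kb=$(( file_kb > asynchrony_kb ? file_kb - asynchrony_kb : 0 ))
echo "$sync_kb"
```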

To set the store asynchrony for one or more files

  1. Verify that you have the w (write) permission on the access control list (ACL) of each file for which you are setting the store asynchrony, by issuing the fs listacl command, which is described fully in Displaying ACLs.
       % fs listacl dir/file path
    

    Alternatively, become the local superuser root on the client machine, if you are not already, by issuing the su command.

       % su root
       Password: root_password
    

  2. Issue the fs storebehind command with the -kbytes and -files arguments.
       # fs storebehind -kbytes <asynchrony for specified names> \
                        -files <specific pathnames>+  \
                        [-verbose]
    

    where

    st
    Is the shortest acceptable abbreviation of storebehind.

    -kbytes
    Sets the number of kilobytes of data that can remain to be written to the File Server when the Cache Manager returns control to the application that closed a file named by the -files argument.

    -files
    Specifies each file for which to set a store asynchrony that overrides the default. Partial pathnames are interpreted relative to the current working directory.

    -verbose
    Produces a message that confirms the new setting.

To display the default store asynchrony

  1. Issue the fs storebehind command with no arguments, or with the -verbose flag only.
       % fs storebehind  [-verbose]
    

    where

    st
    Is the shortest acceptable abbreviation of storebehind.

    -verbose
    Produces output that reports the default store asynchrony.

To display the store asynchrony for one or more files

  1. Issue the fs storebehind command with the -files argument only.
       % fs storebehind -files <specific pathnames>+ 
    

    where

    st
    Is the shortest acceptable abbreviation of storebehind.

    -files
    Specifies each file for which to display the store asynchrony. Partial pathnames are interpreted relative to the current working directory.

The output lists each file separately. If a value has previously been set for the specified files, the output reports the following:

   Will store up to y kbytes of file asynchronously.
   Default store asynchrony is x kbytes.

If the default store asynchrony applies to a file (because you have not set a -kbytes value for it), the output reports the following:

   Will store file according to default.
   Default store asynchrony is x kbytes.
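The numeric values in these report lines sit at fixed word positions, so they can be pulled out with awk when scripting, as in this sketch. Both output lines are hypothetical captured examples.

```shell
# Extract the kilobyte values from hypothetical `fs storebehind` output lines.
file_line="Will store up to 1024 kbytes of file asynchronously."
default_line="Default store asynchrony is 0 kbytes."

file_kb=$(printf '%s\n' "$file_line" | awk '{print $5}')       # per-file value
default_kb=$(printf '%s\n' "$default_line" | awk '{print $5}') # machine default
echo "$file_kb $default_kb"
```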

[Return to Library] [Contents] [Previous Topic] [Top of Topic] [Next Topic] [Index]



© IBM Corporation 2000. All Rights Reserved