Administration Guide




Issues in Cell Configuration and Administration

This chapter discusses many of the issues to consider when configuring and administering a cell, and directs you to detailed related information available elsewhere in this guide. It is assumed you are already familiar with the material in An Overview of AFS Administration.

It is best to read this chapter before installing your cell's first file server machine or performing any other administrative task.


Differences between AFS and UNIX: A Summary

AFS behaves like a standard UNIX file system in most respects, while also making file sharing easy within and between cells. This section describes some differences between AFS and the UNIX file system, referring you to more detailed information as appropriate.

Differences in File and Directory Protection

AFS augments the standard UNIX file protection mechanism in two ways: it associates an access control list (ACL) with each directory, and it enables users to define a large number of their own groups, which can be placed on ACLs.

AFS uses ACLs to protect files and directories, rather than relying exclusively on the mode bits. This has several implications, which are discussed further in the indicated sections:

AFS enables users to define their own groups of other users. Placing these groups on ACLs extends the same permissions to a number of exactly specified users at the same time, which is much more convenient than placing the individuals on the ACLs directly. See Administering the Protection Database.

There are also system-defined groups, system:anyuser and system:authuser, whose presence on an ACL extends access to a wide range of users at once. See The System Groups and Using Groups on ACLs.

Differences in Authentication

Just as the AFS filespace is distinct from each machine's local file system, AFS authentication is separate from local login. This has two practical implications, which are discussed further in Using an AFS-modified login Utility.

Differences in the Semantics of Standard UNIX Commands

This section summarizes how AFS modifies the functionality of some UNIX commands.

The chmod command
Only members of the system:administrators group can use this command to turn on the setuid, setgid or sticky mode bits on AFS files. For more information, see Determining if a Client Can Run Setuid Programs.

The chown command
Only members of the system:administrators group can issue this command on AFS files.

The chgrp command
Only members of the system:administrators group can issue this command on AFS files and directories.

The ftpd daemon
The AFS-modified version of this daemon attempts to authenticate remote issuers of the ftp command with the local AFS authentication service. See Using UNIX Remote Services in the AFS Environment.

The groups command
If the user's AFS tokens are associated with a process authentication group (PAG), the output of this command sometimes includes two large numbers. To learn about PAGs, see Identifying AFS Tokens by PAG.
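For example, the output might resemble the following for a user whose tokens are associated with a PAG (the group names and PAG numbers here are illustrative):

   % groups
   staff wheel 33536 36022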

The inetd daemon
The AFS-modified version of this daemon authenticates remote issuers of the AFS-modified rcp and rsh commands with the local AFS authentication service. See Using UNIX Remote Services in the AFS Environment.

The login utility
AFS-modified login utilities both log the issuer into the local file system and authenticate the user with the AFS authentication service. See Using an AFS-modified login Utility.

The ln command
This command cannot create hard links between files in different AFS directories. See Creating Hard Links.

The rcp command
The AFS-modified version of this command enables the issuer to access files on the remote machine as an authenticated AFS user. See Using UNIX Remote Services in the AFS Environment.

The rlogind daemon
The AFS-modified version of this daemon authenticates remote issuers of the rlogin command with the local AFS authentication service. See Using UNIX Remote Services in the AFS Environment.

The AFS distribution for some system types does not necessarily include a modified rlogind program. See the IBM AFS Release Notes.

The remsh or rsh command
The AFS-modified version of this command enables the issuer to execute commands on the remote machine as an authenticated AFS user. See Using UNIX Remote Services in the AFS Environment.

The AFS version of the fsck Command

Never run the standard UNIX fsck command on an AFS file server machine. It does not understand how the File Server organizes volume data on disk, and so moves all AFS data into the lost+found directory on the partition.

Instead, use the version of the fsck program that is included in the AFS distribution. The IBM AFS Quick Beginnings explains how to replace the vendor-supplied fsck program with the AFS version as you install each server machine.

The AFS version functions like the standard fsck program on data stored on both UFS and AFS partitions. The appearance of a banner like the following as the fsck program initializes confirms that you are running the correct one:

   --- AFS (R) version fsck ---

where version is the AFS version. For correct results, it must match the AFS version of the server binaries in use on the machine.

If you ever accidentally run the standard version of the program, contact AFS Product Support immediately. It is sometimes possible to recover volume data from the lost+found directory.

Creating Hard Links

AFS does not allow hard links (created with the UNIX ln command) between files that reside in different directories, because it is then unclear which directory's ACL to associate with the link.

AFS also does not allow hard links to directories, in order to keep the file system organized as a tree.

It is possible to create symbolic links (with the UNIX ln -s command) between elements in two different AFS directories, or even between an element in AFS and one in a machine's local UNIX file system. Do not create a symbolic link to a file whose name begins with either a number sign (#) or a percent sign (%), however. The Cache Manager interprets such links as a mount point to a regular or read/write volume, respectively.
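For example, the following commands (with illustrative pathnames) create permissible symbolic links, where the equivalent hard links are not allowed:

   % ln -s /afs/abc.com/usr/pat/doc/memo /tmp/memo
   % ln -s /afs/stateu.edu/common/etc/motd ~/stateu.motd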

AFS Implements Save on Close

When an application issues the UNIX close system call on a file, the Cache Manager performs a synchronous write of the data to the File Server that maintains the central copy of the file. It does not return control to the application until the File Server has acknowledged receipt of the data. For the fsync system call, control does not return to the application until the File Server indicates that it has written the data to non-volatile storage on the file server machine.

When an application issues the UNIX write system call, the Cache Manager writes modifications to the local AFS client cache only. If the local machine crashes or an application program exits without issuing the close system call, it is possible that the modifications are not recorded in the central copy of the file maintained by the File Server. The Cache Manager does sometimes write this type of modified data from the cache to the File Server without receiving the close or fsync system call, for example if it needs to free cache chunks for new data. However, it is not generally possible to predict when the Cache Manager transfers modified data to the File Server in this way.

The implication is that if an application's Save option invokes the write system call rather than close or fsync, the changes are not necessarily stored permanently on the File Server machine. Most application programs issue the close system call for save operations, as well as when they finish handling a file and when they exit.

Setuid Programs

Set the UNIX setuid bit only for the local superuser root. This does not present an automatic security risk, because the local superuser has no special privilege in AFS, but only in the local machine's UNIX file system and kernel.

Any file can be marked with the setuid bit, but only members of the system:administrators group can issue the chown system call or the /etc/chown command.

The fs setcell command determines whether setuid programs that originate in a foreign cell can run on a given client machine. See Determining if a Client Can Run Setuid Programs.
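For example, the following command (the cell name is illustrative) prevents programs from the stateu.edu cell from running with setuid permission on the local client machine:

   % fs setcell -cell stateu.edu -nosuid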


Choosing a Cell Name

This section explains how to choose a cell name and why choosing an appropriate cell name is important.

Your cell name must distinguish your cell from all others in the AFS global namespace. By convention, the cell name is the second element in any AFS pathname; therefore, a unique cell name guarantees that every AFS pathname uniquely identifies a file, even if cells use the same directory names at lower levels in their local AFS filespace. For example, both the ABC Corporation cell and the State University cell can have a home directory for the user pat, because the pathnames are distinct: /afs/abc.com/usr/pat and /afs/stateu.edu/usr/pat.

By convention, cell names follow the ARPA Internet Domain System conventions for site names. If you are already an Internet site, then it is simplest to choose your Internet domain name as the cell name.

If you are not an Internet site, it is best to choose a unique Internet-style name, particularly if you plan to connect to the Internet in the future. AFS Product Support is available for help in selecting an appropriate name. There are a few constraints on AFS cell names:

Other suffixes are available if none of these are appropriate. You can learn about suffixes by calling the Defense Data Network [Internet] Network Information Center in the United States at (800) 235-3155. The NIC can also provide you with the forms necessary for registering your cell name as an Internet domain name. Registering your name prevents another Internet site from adopting the name later.

How to Set the Cell Name

The cell name is recorded in two files on the local disk of each file server and client machine. Among other functions, these files define the machine's cell membership and so affect how programs and processes run on the machine; see Why Choosing the Appropriate Cell Name is Important. The procedure for setting the cell name is different for the two types of machines.

For file server machines, the two files that record the cell name are the /usr/afs/etc/ThisCell and /usr/afs/etc/CellServDB files. As described more explicitly in the IBM AFS Quick Beginnings, you set the cell name in both by issuing the bos setcellname command on the first file server machine you install in your cell. It is not usually necessary to issue the command again. If you run the United States edition of AFS and use the Update Server, it distributes its copy of the ThisCell and CellServDB files to additional server machines that you install. If you use the international edition of AFS, the IBM AFS Quick Beginnings explains how to copy the files manually.

For client machines, the two files that record the cell name are the /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB files. You create these files on a per-client basis, either with a text editor or by copying them onto the machine from a central source in AFS. See Maintaining Knowledge of Database Server Machines for details.

Change the cell name in these files only when you want to transfer the machine to a different cell (it can only belong to one cell at a time). If the machine is a file server, follow the complete set of instructions in the IBM AFS Quick Beginnings for configuring a new cell. If the machine is a client, all you need to do is change the files appropriately and reboot the machine. The next section explains further the negative consequences of changing the name of an existing cell.

To set the default cell name used by most AFS commands without changing the local /usr/vice/etc/ThisCell file, set the AFSCELL environment variable in the command shell. It is worth setting this variable if you need to complete significant administrative work in a foreign cell.
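For example, the following C shell commands (with an illustrative cell and username) set the variable and then examine a Protection Database entry in the foreign cell without naming the cell on each command line:

   % setenv AFSCELL stateu.edu
   % pts examine pat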

Note: The fs checkservers and fs mkmount commands do not use the AFSCELL variable. The fs checkservers command always defaults to the cell named in the ThisCell file, unless the -cell argument is used. The fs mkmount command defaults to the cell in which the parent directory of the new mount point resides.

Why Choosing the Appropriate Cell Name is Important

Take care to select a cell name that is suitable for long-term use. Changing a cell name later is complicated. An appropriate cell name is important because it is the second element in the pathname of all files in a cell's file tree. Because each cell name is unique, its presence in an AFS pathname makes the pathname unique in the AFS global namespace, even if multiple cells use similar filespace organization at lower levels. For instance, it means that every cell can have a home directory called /afs/cellname/usr/pat without causing a conflict. The presence of the cell name in pathnames also means that users in every cell use the same pathname to access a file, whether the file resides in their local cell or in a foreign cell.

Another reason to choose the correct cell name early in the process of installing your cell is that the cell membership defined in each machine's ThisCell file affects the performance of many programs and processes running on the machine. For instance, AFS commands (fs, kas, pts and vos commands) by default execute in the cell of the machine on which they are issued. The command interpreters check the ThisCell file on the local disk and then contact the database server machines listed in the CellServDB file for the indicated cell (the bos commands work differently because the issuer always has to name the machine on which to run the command).

The ThisCell file also determines the cell for which a user receives an AFS token when he or she logs in to a machine. The cell name also plays a role in security. As it converts a user password into an encryption key for storage in the Authentication Database, the Authentication Server combines the password with the cell name found in the ThisCell file. AFS-modified login utilities use the same algorithm to convert the user's password into an encryption key before contacting the Authentication Server to obtain a token for the user. (For a description of how AFS's security system uses encryption keys, see A More Detailed Look at Mutual Authentication.)

This method of converting passwords into encryption keys means that the same password results in different keys in different cells. Even if a user uses the same password in multiple cells, obtaining a user's token from one cell does not enable unauthorized access to the user's account in another cell.

If you change the cell name, you must change the ThisCell and CellServDB files on every server and client machine. Failure to change them all can prevent login, because the encryption keys produced by the login utility do not match the keys stored in the Authentication Database. In addition, many commands from the AFS suites do not work as expected.


Participating in the AFS Global Namespace

Participating in the AFS global namespace makes your cell's local file tree visible to AFS users in foreign cells and makes other cells' file trees visible to your local users. It makes file sharing across cells just as easy as sharing within a cell. This section outlines the procedures necessary for participating in the global namespace.

What the Global Namespace Looks Like

The AFS global namespace appears the same to all AFS cells that participate in it, because they all agree to follow a small set of conventions in constructing pathnames.

The first convention is that all AFS pathnames begin with the string /afs to indicate that they belong to the AFS global namespace.

The second convention is that the cell name is the second element in an AFS pathname; it indicates where the file resides (that is, the cell in which a file server machine houses the file). As noted, the presence of a cell name in pathnames makes the global namespace possible, because it guarantees that all AFS pathnames are unique even if cells use the same directory names at lower levels in their AFS filespace.

What appears at the third and lower levels in an AFS pathname depends on how a cell has chosen to arrange its filespace. There are some suggested conventional directories at the third level; see The Third Level.

Making Your Cell Visible to Others

You make your cell visible to others by advertising your cell name and database server machines. Just as in your local cell, the Cache Managers on client machines in foreign cells use the information to reach your cell's Volume Location (VL) Servers when they need volume and file location information. Similarly, client-side authentication programs running in foreign cells use the information to contact your cell's authentication service.

There are two places you can make this information available:

Update the files whenever you change the identity of your cell's database server machines. Also update the copies of the CellServDB files on all of your server machines (in the /usr/afs/etc directory) and client machines (in the /usr/vice/etc directory). For instructions, see Maintaining the Server CellServDB File and Maintaining Knowledge of Database Server Machines.

Once you have advertised your database server machines, it can be difficult to make your cell invisible again. You can remove the CellServDB.local file and ask AFS Product Support to remove your entry from the global CellServDB file, but other cells probably have an entry for your cell in their local CellServDB files already. To make those entries invalid, you must change the names or IP addresses of your database server machines.

Your cell does not have to be invisible to be inaccessible, however. To make your cell completely inaccessible to foreign users, remove the system:anyuser group from all ACLs at the top three levels of your filespace; see Granting and Denying Foreign Users Access to Your Cell.

Making Other Cells Visible in Your Cell

To make a foreign cell's filespace visible on a client machine in your cell, perform the following three steps:

  1. Mount the cell's root.cell volume at the second level in your cell's filespace just below the /afs directory. Use the fs mkmount command with the -cell argument as instructed in To create a cellular mount point.

  2. Mount AFS at the /afs directory on the client machine. The afsd program, which initializes the Cache Manager, performs the mount automatically at the directory named in the first field of the local /usr/vice/etc/cacheinfo file or specified by the command's -mountdir argument. Mounting AFS at an alternate location makes it impossible to reach the filespace of any cell that mounts its root.afs and root.cell volumes at the conventional locations. See Displaying and Setting the Cache Size and Location.

  3. Create an entry for the cell in the list of database server machines which the Cache Manager maintains in kernel memory.

    The /usr/vice/etc/CellServDB file on every client machine's local disk lists the database server machines for the local and foreign cells. The afsd program reads the contents of the CellServDB file into kernel memory as it initializes the Cache Manager. You can also use the fs newcell command to add or alter entries in kernel memory directly between reboots of the machine. See Maintaining Knowledge of Database Server Machines.
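As an illustration of steps 1 and 3, the following commands (with illustrative cell and machine names) mount the State University cell's root.cell volume at the second level of the local filespace and then add the cell's database server machines to kernel memory:

   % fs mkmount /afs/stateu.edu root.cell -cell stateu.edu
   % fs newcell stateu.edu db1.stateu.edu db2.stateu.edu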

Note that making a foreign cell visible to client machines does not guarantee that your users can access its filespace. The ACLs in the foreign cell must also grant them the necessary permissions.

Granting and Denying Foreign Users Access to Your Cell

Making your cell visible in the AFS global namespace does not take away your control over the way in which users from foreign cells access your file tree.

By default, foreign users access your cell as the user anonymous, which means they have only the permissions granted to the system:anyuser group on each directory's ACL. Normally these permissions are limited to the l (lookup) and r (read) permissions.

There are two ways to grant wider access to foreign users:


Configuring Your AFS Filespace

This section summarizes the issues to consider when configuring your AFS filespace. For a discussion of creating volumes that correspond most efficiently to the filespace's directory structure, see Creating Volumes to Simplify Administration.

Note for Windows users: Windows uses a backslash ( \ ) rather than a forward slash ( / ) to separate the elements in a pathname. The hierarchical organization of the filespace is, however, the same as on a UNIX machine.

AFS pathnames must follow a few conventions so the AFS global namespace looks the same from any AFS client machine. There are corresponding conventions to follow in building your file tree, not just because pathnames reflect the structure of a file tree, but also because the AFS Cache Manager expects a certain configuration.

The Top /afs Level

The first convention is that the top level in your file tree be called the /afs directory. If you name it something else, then you must use the -mountdir argument with the afsd program to get Cache Managers to mount AFS properly. You cannot participate in the AFS global namespace in that case.

The Second (Cellname) Level

The second convention is that just below the /afs directory you place directories corresponding to each cell whose file tree is visible and accessible from the local cell. Minimally, there must be a directory for the local cell. Each such directory is a mount point to the indicated cell's root.cell volume. For example, in the ABC Corporation cell, /afs/abc.com is a mount point for the cell's own root.cell volume and /afs/stateu.edu is a mount point for the State University cell's root.cell volume. The fs lsmount command displays the mount points.

   % fs lsmount /afs/abc.com 
   '/afs/abc.com' is a mount point for volume '#root.cell'
   % fs lsmount /afs/stateu.edu
   '/afs/stateu.edu' is a mount point for volume '#stateu.edu:root.cell'

To reduce the amount of typing necessary in pathnames, you can create a symbolic link with an abbreviated name to the mount point of each cell your users frequently access (particularly the home cell). In the ABC Corporation cell, for instance, /afs/abc is a symbolic link to the /afs/abc.com mount point, as the fs lsmount command reveals.

   % fs lsmount /afs/abc
   '/afs/abc' is a symbolic link, leading to a mount point for volume '#root.cell'

The Third Level

You can organize the third level of your cell's file tree any way you wish. The following list describes directories that appear at this level in the conventional configuration:

common
This directory contains programs and files needed by users working on machines of all system types, such as text editors, online documentation files, and so on. Its /etc subdirectory is a logical place to keep the central update sources for files used on all of your cell's client machines, such as the ThisCell and CellServDB files.

public
A directory accessible to anyone who can access your filespace, because its ACL grants the l (lookup) and r (read) permissions to the system:anyuser group. It is useful if you want to enable your users to make selected information available to everyone, but do not want to grant foreign users access to the contents of the usr directory, which houses user home directories (and is also at this level). It is conventional to create a subdirectory of the public directory for each of your cell's users.

service
This directory contains files and subdirectories that help cells coordinate resource sharing. For a list of the proposed standard files and subdirectories to create, call or write to AFS Product Support.

As an example, files that other cells expect to find in this directory's etc subdirectory can include the following:

sys_type
A separate directory for storing the server and client binaries for each system type you use in the cell. Configuration is simplest if you use the system type names assigned in the AFS distribution, particularly if you wish to use the @sys variable in pathnames (see Using the @sys Variable in Pathnames). The IBM AFS Release Notes lists the conventional name for each supported system type.

Within each such directory, create directories named bin, etc, usr, and so on, to store the programs normally kept in the /bin, /etc and /usr directories on a local disk. Then create symbolic links from the local directories on client machines into AFS; see Configuring the Local Disk. Even if you do not choose to use symbolic links in this way, it can be convenient to have central copies of system binaries in AFS. If binaries are accidentally removed from a machine, you can recopy them onto the local disk from AFS rather than having to recover them from tape.
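For example, the following illustrative command on a client machine replaces a local copy of the AFS binaries directory with a symbolic link into AFS; the Cache Manager substitutes the machine's system type name for the @sys variable:

   % ln -s /afs/abc.com/@sys/usr/afsws /usr/afsws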

usr
This directory contains home directories for your local users. As discussed in the previous entry for the public directory, it is often practical to protect this directory so that only locally authenticated users can access it. This keeps the contents of your users' home directories as secure as possible.

If your cell is quite large, directory lookup can be slowed if you put all home directories in a single usr directory. For suggestions on distributing user home directories among multiple grouping directories, see Grouping Home Directories.

wsadmin
This directory contains prototype, configuration and library files for use with the package program. See Configuring Client Machines with the package Program.

Creating Volumes to Simplify Administration

This section discusses how to create volumes in ways that make administering your system easier.

At the top levels of your file tree (at least through the third level), each directory generally corresponds to a separate volume. Some cells also configure the subdirectories of some third level directories as separate volumes. Common examples are the /afs/cellname/common and /afs/cellname/usr directories.

You do not have to create a separate volume for every directory level in a tree, but the advantage of doing so is that each volume tends to be smaller and easier to move for load balancing. The overhead for a mount point is no greater than for a standard directory, nor does the volume structure itself require much disk space. Most cells find that below the fourth level in the tree, using a separate volume for each directory is no longer efficient. For instance, while each user's home directory (at the fourth level in the tree) corresponds to a separate volume, all of the subdirectories in the home directory normally reside in the same volume.

Keep in mind that only one volume can be mounted at a given directory location in the tree. In contrast, a volume can be mounted at several locations, though this is not recommended because it distorts the hierarchical nature of the file tree, potentially causing confusion.

Assigning Volume Names

You can name your volumes anything you choose, subject to a few restrictions:

It is best to assign volume names that indicate the type of data they contain, and to use similar names for volumes with similar contents. It is also helpful if the volume name is similar to (or at least has elements in common with) the name of the directory at which it is mounted. Understanding the pattern then enables you to guess accurately what a volume contains and where it is mounted.

Many cells find that the most effective volume naming scheme puts a common prefix on the names of all related volumes. Table 1 describes the recommended prefixing scheme.

Table 1. Suggested volume prefixes

Prefix      Contents                         Example Name    Example Mount Point
common.     popular programs and files       common.etc      /afs/cellname/common/etc
src.        source code                      src.afs         /afs/cellname/src/afs
proj.       project data                     proj.portafs    /afs/cellname/proj/portafs
test.       testing or other temporary data  test.smith      /afs/cellname/usr/smith/test
user.       user home directory data         user.terry      /afs/cellname/usr/terry
sys_type.   programs compiled for an         rs_aix42.bin    /afs/cellname/rs_aix42/bin
            operating system type

Table 2 is a more specific example for a cell's rs_aix42 system volumes and directories:

Table 2. Example volume-prefixing scheme

Example Name        Example Mount Point
rs_aix42.bin        /afs/cellname/rs_aix42/bin
rs_aix42.etc        /afs/cellname/rs_aix42/etc
rs_aix42.usr        /afs/cellname/rs_aix42/usr
rs_aix42.usr.afsws  /afs/cellname/rs_aix42/usr/afsws
rs_aix42.usr.lib    /afs/cellname/rs_aix42/usr/lib
rs_aix42.usr.bin    /afs/cellname/rs_aix42/usr/bin
rs_aix42.usr.etc    /afs/cellname/rs_aix42/usr/etc
rs_aix42.usr.inc    /afs/cellname/rs_aix42/usr/inc
rs_aix42.usr.man    /afs/cellname/rs_aix42/usr/man
rs_aix42.usr.sys    /afs/cellname/rs_aix42/usr/sys
rs_aix42.usr.local  /afs/cellname/rs_aix42/usr/local

There are several advantages to this scheme:

Grouping Related Volumes on a Partition

If your cell is large enough to make it practical, consider grouping related volumes together on a partition. In general, you need at least three file server machines for volume grouping to be effective. Grouping has several advantages, which are most obvious when the file server machine becomes inaccessible:

The advantages of grouping related volumes on a partition do not necessarily extend to the grouping of all related volumes on one file server machine. For instance, it is probably unwise in a cell with two file server machines to put all system volumes on one machine and all user volumes on the other. An outage of either machine probably affects everyone.

Admittedly, the need to move volumes for load balancing purposes can limit the practicality of grouping related volumes. You need to weigh the complementary advantages case by case.

When to Replicate Volumes

As discussed in Replication, replication refers to making a copy, or clone, of a read/write source volume and then placing the copy on one or more additional file server machines. Replicating a volume can increase the availability of the contents. If one file server machine housing the volume becomes inaccessible, users can still access the copy of the volume stored on a different machine. No one machine is likely to become overburdened with requests for a popular file, either, because the file is available from several machines.

However, replication is not appropriate for all cells. If a cell does not have much disk space, replication can be unduly expensive, because each clone not on the same partition as the read/write source takes up as much disk space as its source volume did at the time the clone was made. Also, if you have only one file server machine, replication uses up disk space without increasing availability.

Replication is also not appropriate for volumes that change frequently. You must issue the vos release command every time you need to update a read-only volume to reflect changes in its read/write source.

For both of these reasons, replication is appropriate only for popular volumes whose contents do not change very often, such as system binaries and other volumes mounted at the upper levels of your filespace. User volumes usually exist only in a read/write version since they change so often.

If you are replicating any volumes, you must replicate the root.afs and root.cell volumes, preferably at two or three sites each (even if your cell only has two or three file server machines). The Cache Manager needs to pass through the directories corresponding to the root.afs and root.cell volumes as it interprets any pathname. The unavailability of these volumes makes all other volumes unavailable too, even if the file server machines storing the other volumes are still functioning.
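The following sequence is a minimal sketch of defining and releasing read-only sites for the root volumes, assuming illustrative machine and partition names; the vos addsite command defines each site, and the vos release command distributes the current contents of the read/write source to them:

   % vos addsite fs1.abc.com /vicepa root.afs
   % vos addsite fs2.abc.com /vicepa root.afs
   % vos release root.afs
   % vos addsite fs1.abc.com /vicepa root.cell
   % vos addsite fs2.abc.com /vicepa root.cell
   % vos release root.cell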

Another reason to replicate the root.afs volume is that it can lessen the load on the File Server machine. The Cache Manager has a bias toward accessing a read-only version of the root.afs volume if it is replicated, which puts the Cache Manager onto the read-only path through the AFS filespace. While on the read-only path, the Cache Manager attempts to access a read-only copy of replicated volumes. The File Server needs to track only one callback per Cache Manager for all of the data in a read-only volume, rather than the one callback per file it must track for read/write volumes. Fewer callbacks translate into a smaller load on the File Server.

If the root.afs volume is not replicated, the Cache Manager follows a read/write path through the filespace, accessing the read/write version of each volume. The File Server distributes and tracks a separate callback for each file in a read/write volume, imposing a greater load on it.

For more on read/write and read-only paths, see The Rules of Mount Point Traversal.

It also makes sense to replicate system binary volumes in many cases, as well as the volume corresponding to the /afs/cellname/usr directory and the volumes corresponding to the /afs/cellname/common directory and its subdirectories.

It is a good idea to place a replica on the same partition as the read/write source. In this case, the read-only volume is a clone (like a backup volume): it is a copy of the source volume's vnode index, rather than a full copy of the volume contents. Only if the read/write volume moves to another partition or changes substantially does the read-only volume consume significant disk space. Read-only volumes kept on other partitions always consume the full amount of disk space that the read/write source consumed when the read-only volume was created.

The Default Quota and ACL on a New Volume

Every AFS volume has associated with it a quota that limits the amount of disk space the volume is allowed to use. To set and change quota, use the commands described in Setting and Displaying Volume Quota and Current Size.

By default, every new volume is assigned a space quota of 5000 KB blocks unless you include the -maxquota argument to the vos create command. Also by default, the ACL on the root directory of every new volume grants all permissions to the members of the system:administrators group. To learn how to change these values when creating an account with individual commands, see To create one user account with individual commands. When using uss commands to create accounts, you can specify alternate ACL and quota values in the template file's V instruction; see Creating a Volume with the V Instruction.
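For example, the following command (with illustrative machine, partition, and volume names) creates a new volume with a 10,000 KB quota rather than the default:

   % vos create fs1.abc.com /vicepb user.pat -maxquota 10000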


Configuring Server Machines

This section discusses some issues to consider when configuring server machines, which store AFS data, transfer it to client machines on request, and house the AFS administrative databases. To learn about client machines, see Configuring Client Machines.

If your cell has more than one AFS server machine, you can configure them to perform specialized functions. A machine can assume one or more of the roles described in the following list. For more details, see The Four Roles for File Server Machines.

The IBM AFS Quick Beginnings explains how to configure your cell's first file server machine to assume all four roles. The IBM AFS Quick Beginnings chapter on installing additional server machines also explains how to configure them to perform one or more roles.

Replicating the AFS Administrative Databases

The AFS administrative databases are housed on database server machines and store information that is crucial for correct cell functioning. Both server processes and Cache Managers access the information frequently:

Maintaining your cell is simplest if the first machine has the lowest IP address of any machine you plan to use as a database server machine. If you later decide to use a machine with a lower IP address as a database server machine, you must update the CellServDB file on all clients before introducing the new machine.

If your cell has more than one server machine, it is best to run more than one as a database server machine (but more than three are rarely necessary). Replicating the administrative databases in this way yields the same benefits as replicating volumes: increased availability and reliability. If one database server machine or process stops functioning, the information in the database is still available from others. The load of requests for database information is spread across multiple machines, preventing any one from becoming overloaded.

Unlike replicated volumes, however, replicated databases do change frequently. Consistent system performance demands that all copies of the database always be identical, so it is not acceptable to record changes in only some of them. To synchronize the copies of a database, the database server processes use AFS's distributed database technology, Ubik. See Replicating the AFS Administrative Databases.

If your cell has only one file server machine, it must also serve as a database server machine. If your cell has two file server machines, it is not always advantageous to run both as database server machines. If a server, process, or network failure interrupts communications between the database server processes on the two machines, it can become impossible to update the information in the database because neither of them can alone elect itself as the synchronization site.

AFS Files on the Local Disk

It is generally simplest to store the binaries for all AFS server processes in the /usr/afs/bin directory on every file server machine, even if some processes do not actively run on the machine. This makes it easier to reconfigure a machine to fill a new role.

For security reasons, the /usr/afs directory on a file server machine and all of its subdirectories and files must be owned by the local superuser root and have only the first w (write) mode bit turned on. Some files even have only the first r (read) mode bit turned on (for example, the /usr/afs/etc/KeyFile file, which lists the AFS server encryption keys). Each time the BOS Server starts, it checks that the mode bits on certain files and directories match the expected values. For a list, see the IBM AFS Quick Beginnings section about protecting sensitive AFS directories, or the discussion of the output from the bos status command in To display the status of server processes and their BosConfig entries.

For a description of the contents of all AFS directories on a file server machine's local disk, see Administering Server Machines.

Configuring Partitions to Store AFS Data

The partitions that house AFS volumes on a file server machine must be mounted at directories named

/vicepindex

where index is one or two lowercase letters. By convention, the first AFS partition created is mounted at the /vicepa directory, the second at the /vicepb directory, and so on through the /vicepz directory. The names then continue with /vicepaa through /vicepaz, /vicepba through /vicepbz, and so on, up to the maximum supported number of server partitions, which is specified in the IBM AFS Release Notes.

Each /vicepx directory must correspond to an entire partition or logical volume, and must be a subdirectory of the root directory ( / ). It is not acceptable to configure part of (for example) the /usr partition as an AFS server partition and mount it on a directory called /usr/vicepa.

Also, do not store non-AFS files on AFS server partitions. The File Server and Volume Server expect to have available all of the space on the partition. Sharing space also creates competition between AFS and the local UNIX file system for access to the partition, particularly if the UNIX files are frequently used.

Monitoring, Rebooting and Automatic Process Restarts

AFS provides several tools for monitoring the File Server, including the scout and afsmonitor programs. You can configure them to alert you when certain threshold values are exceeded, for example when a server partition is more than 95% full. See Monitoring and Auditing AFS Performance.

Rebooting a file server machine requires shutting down the AFS processes and so inevitably causes a service outage. Reboot file server machines as infrequently as possible. For instructions, see Rebooting a Server Machine.

By default, the BOS Server on each file server machine stops and immediately restarts all AFS server processes on the machine (including itself) once a week, at 4:00 a.m. on Sunday. This reduces the potential for the core leaks that can develop as any process runs for an extended time.

The BOS Server also checks each morning at 5:00 a.m. for any newly installed binary files in the /usr/afs/bin directory. It compares the timestamp on each binary file to the time at which the corresponding process last restarted. If the timestamp on the binary is later, the BOS Server restarts the corresponding process to start using it.

The default times are in the early morning hours, when the outage that results from restarting a process is likely to disturb the fewest people. You can display the restart times for each machine with the bos getrestart command, and set them with the bos setrestart command. The latter command enables you to disable automatic restarts entirely, by setting the time to never. See Setting the BOS Server's Restart Times.
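The following illustrative sequence displays a machine's restart times and then disables the weekly general restart entirely:

   % bos getrestart fs1.abc.com
   Server fs1.abc.com restarts at sun 4:00 am
   Server fs1.abc.com restarts for new binaries at 5:00 am
   % bos setrestart -server fs1.abc.com -time never -general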


Configuring Client Machines

This section summarizes issues to consider as you install and configure client machines in your cell.

Configuring the Local Disk

You can often free up significant amounts of local disk space on AFS client machines by storing standard UNIX files in AFS and creating symbolic links to them from the local disk. The @sys pathname variable can be useful in links to system-specific files; see Using the @sys Variable in Pathnames.

There are two types of files that must actually reside on the local disk: boot sequence files needed before the afsd program is invoked, and files that can be helpful during file server machine outages.

During a reboot, AFS is inaccessible until the afsd program executes and initializes the Cache Manager. (In the conventional configuration, the AFS initialization file is included in the machine's initialization sequence and invokes the afsd program.) Files needed during reboot prior to that point must reside on the local disk. They include the following, but this list is not necessarily exhaustive.

The other type of files and programs to retain on the local disk are those you need when diagnosing and fixing problems caused by a file server outage, because the outage can make inaccessible the copies stored in AFS. Examples include the binaries for a text editor (such as ed or vi) and for the fs and bos commands. Store copies of AFS command binaries in the /usr/vice/etc directory as well as including them in the /usr/afsws directory, which is normally a link into AFS. Then place the /usr/afsws directory before the /usr/vice/etc directory in users' PATH environment variable definition. When AFS is functioning normally, users access the copy in the /usr/afsws directory, which is more likely to be current than a local copy.

You can automate the configuration of client machine local disks by using the package program, which updates the contents of the local disk to match a configuration file. See Configuring Client Machines with the package Program.

Enabling Access to Foreign Cells

As detailed in Making Other Cells Visible in Your Cell, you enable the Cache Manager to access a cell's AFS filespace by storing a list of the cell's database server machines in the local /usr/vice/etc/CellServDB file. The Cache Manager reads the list into kernel memory at reboot for faster retrieval. You can change the list in kernel memory between reboots by using the fs newcell command. It is often practical to store a central version of the CellServDB file in AFS and use the package program periodically to update each client's version with the source copy. See Maintaining Knowledge of Database Server Machines.

Because each client machine maintains its own copy of the CellServDB file, you can in theory enable access to different foreign cells on different client machines. This is not usually practical, however, especially if users do not always work on the same machine.

Using the @sys Variable in Pathnames

When creating symbolic links into AFS on the local disk, it is often practical to use the @sys variable in pathnames. The Cache Manager automatically substitutes the local machine's AFS system name (CPU/operating system type) for the @sys variable. This means you can place the same links on machines of various system types and still have each machine access the binaries for its system type. For example, the Cache Manager on a machine running AIX 4.2 converts /afs/abc.com/@sys to /afs/abc.com/rs_aix42, whereas a machine running Solaris 7 converts it to /afs/abc.com/sun4x_57.

If you want to use the @sys variable, it is simplest to use the conventional AFS system type names as specified in the IBM AFS Release Notes. The Cache Manager records the local machine's system type name in kernel memory during initialization. If you do not use the conventional names, you must use the fs sysname command to change the value in kernel memory from its default just after Cache Manager initialization, on every client machine of the relevant system type. The fs sysname command also displays the current value; see Displaying and Setting the System Type Name.
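For example, the following commands (with illustrative output) display the current value and then set it explicitly:

   % fs sysname
   Current sysname is 'rs_aix42'
   % fs sysname -newsys rs_aix42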

In pathnames in the AFS filespace itself, use the @sys variable carefully and sparingly, because it can lead to unexpected results. It is generally best to restrict its use to only one level in the filespace. The third level is a common choice, because that is where many cells store the binaries for different machine types.

Multiple instances of the @sys variable in a pathname are especially dangerous to people who must explicitly change directories (with the cd command, for example) into directories that store binaries for system types other than the machine on which they are working, such as administrators or developers who maintain those directories. After changing directories, it is recommended that such people verify they are in the desired directory.

Setting Server Preferences

The Cache Manager stores a table of preferences for file server machines in kernel memory. A preference rank pairs a file server machine interface's IP address with an integer in the range from 1 to 65,534. When it needs to access a file, the Cache Manager compares the ranks for the interfaces of all machines that house the file, and first attempts to access the file via the interface with the best (numerically lowest) rank. As it initializes, the Cache Manager sets default ranks that bias it to access files via interfaces that are close to it in terms of network topology. You can adjust the preference ranks to improve performance if you wish.

The Cache Manager also uses similar preferences for Volume Location (VL) Server machines. Use the fs getserverprefs command to display preference ranks and the fs setserverprefs command to set them. See Maintaining Server Preference Ranks.
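The following illustrative commands display the current file server ranks and then assign a more preferred (numerically lower) rank to one machine:

   % fs getserverprefs
   fs2.abc.com            20007
   fs3.abc.com            30002
   fs1.abc.com            20011
   % fs setserverprefs -servers fs1.abc.com 20000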


Configuring AFS User Accounts

This section discusses some of the issues to consider when configuring AFS user accounts. Because AFS is separate from the UNIX file system, a user's AFS account is separate from her UNIX account.

The preferred method for creating a user account is with the uss suite of commands. With a single command, you can create all the components of one or many accounts, after you have prepared a template file that guides the account creation. See Creating and Deleting User Accounts with the uss Command Suite.

Alternatively, you can issue the individual commands that create each component of an account. For instructions, along with instructions for removing user accounts and changing user passwords, user volume quotas and usernames, see Administering User Accounts.

When users leave your system, it is often good policy to remove their accounts. Instructions appear in Deleting Individual Accounts with the uss delete Command and Removing a User Account.

An AFS user account consists of the following components, which are described in greater detail in The Components of an AFS User Account.

By creating some components but not others, you can create accounts at different levels of functionality, using either uss commands as described in Creating and Deleting User Accounts with the uss Command Suite or individual commands as described in Administering User Accounts. The levels of functionality include the following.

If your users have UNIX user accounts that predate the introduction of AFS in the cell, you probably want to convert them into AFS accounts. There are three main issues to consider:

For further discussion, see Converting Existing UNIX Accounts with uss or Converting Existing UNIX Accounts.

Choosing Usernames and Naming Other Account Components

This section suggests schemes for choosing usernames, AFS UIDs, user volume names and mount point names, and also outlines some restrictions on your choices.

Usernames

AFS imposes very few restrictions on the form of usernames. It is best to keep usernames short, both because many utilities and applications can handle usernames of no more than eight characters and because by convention many components of an AFS account incorporate the name. These include the entries in the Protection and Authentication Databases, the volume, and the mount point. Depending on your electronic mail delivery system, the username can become part of the user's mailing address. The username is also the string that the user types when logging in to a client machine.

Some common choices for usernames are last names, first names, initials, or a combination, with numbers sometimes added. It is also best to avoid using the following characters, many of which have special meanings to the command shell.

AFS UIDs and UNIX UIDs

AFS associates a unique identification number, the AFS UID, with every username, recording the mapping in the user's Protection Database entry. The AFS UID functions within AFS much as the UNIX UID does in the local file system: the AFS server processes and the Cache Manager use it internally to identify a user, rather than the username.

Every AFS user also must have a UNIX UID recorded in the local password file (/etc/passwd or equivalent) of each client machine they log onto. Both administration and a user's AFS access are simplest if the AFS UID and UNIX UID match. One important consequence of matching UIDs is that the owner reported by the ls -l command matches the AFS username.

It is usually best to allow the Protection Server to allocate the AFS UID as it creates the Protection Database entry. However, both the pts createuser command and the uss commands that create user accounts enable you to assign AFS UIDs explicitly. This is appropriate in two cases:

After the Protection Server initializes for the first time on a cell's first file server machine, it starts assigning AFS UIDs at a default value. To change the default before creating any user accounts, or at any time, use the pts setmax command to reset the max user id counter. To display the counter, use the pts listmax command. See Displaying and Setting the AFS UID and GID Counters.
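For example, the following commands (with illustrative values) display the counters and then raise the starting point for AFS UID allocation:

   % pts listmax
   Max user id is 1000 and max group id is -500.
   % pts setmax -user 5000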

AFS reserves one AFS UID, 32766, for the user anonymous. The AFS server processes assign this identity and AFS UID to any user who does not possess a token for the local cell. Do not assign this AFS UID to any other user or hardcode its current value into any programs or a file's owner field, because it is subject to change in future releases.

User Volume Names

Like any volume name, a user volume's base (read/write) name cannot exceed 22 characters in length or include the .readonly or .backup extension. See Creating Volumes to Simplify Administration. By convention, user volume names have the format user.username. Using the user. prefix not only makes it easy to identify the volume's contents, but also to create a backup version of all user volumes by issuing a single vos backupsys command.
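For example, the following command creates a backup version of every volume whose name begins with the user. prefix:

   % vos backupsys -prefix user.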

Mount Point Names

By convention, the mount point for a user's volume is named after the username. Many cells follow the convention of mounting user volumes in the /afs/cellname/usr directory, as discussed in The Third Level. Very large cells sometimes find that mounting all user volumes in the same directory slows directory lookup, however; for suggested alternatives, see the following section.

Grouping Home Directories

Mounting user volumes in the /afs/cellname/usr directory is an AFS-appropriate variation on the standard UNIX practice of putting user home directories under the /usr subdirectory. However, cells with more than a few hundred users sometimes find that mounting all user volumes in a single directory results in slow directory lookup. The solution is to distribute user volume mount points into several directories; there are a number of alternative methods to accomplish this.

For instructions on how to implement the various schemes when using the uss program to create user accounts, see Evenly Distributing User Home Directories with the G Instruction and Creating a Volume with the V Instruction.

Making a Backup Version of User Volumes Available

Mounting the backup version of a user's volume is a simple way to enable users themselves to restore data they have accidentally removed or deleted. It is conventional to mount the backup version at a subdirectory of the user's home directory (called perhaps the OldFiles subdirectory), but other schemes are possible. Once per day you create a new backup version to capture the changes made that day, overwriting the previous day's backup version with the new one. Users can always retrieve the previous day's copy of a file without your assistance, freeing you to deal with more pressing tasks.
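For example, the following illustrative command mounts the backup version of user pat's volume at the conventional OldFiles subdirectory of the home directory:

   % fs mkmount /afs/abc.com/usr/pat/OldFiles user.pat.backup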

Users sometimes want to delete the mount point to their backup volume, because they erroneously believe that the backup volume's contents count against their quota. Remind them that the backup volume is separate, so the only space it uses in the user volume is the amount needed for the mount point.

For further discussion of backup volumes, see Backing Up AFS Data and Creating Backup Volumes.

Creating Standard Files in New AFS Accounts

From your experience as a UNIX administrator, you are probably familiar with the use of login and shell initialization files (such as the .login and .cshrc files) to make an account easier to use.

It is often practical to add some AFS-specific directories to the definition of the user's PATH environment variable, including the following:

If you are not using an AFS-modified login utility, it can be helpful to users to invoke the klog command in their .login file so that they obtain AFS tokens as part of logging in. In the following example command sequence, the first line echoes the string klog to the standard output stream, so that the user understands the purpose of the Password: prompt that appears when the second line is executed. The -setpag flag associates the new tokens with a process authentication group (PAG), which is discussed further in Identifying AFS Tokens by PAG.

   echo -n "klog "
   klog -setpag

The following sequence of commands has a similar effect, except that the pagsh command forks a new shell with which the PAG and tokens are associated.

   pagsh
   echo -n "klog "
   klog

If you use an AFS-modified login utility, this sequence is not necessary, because such utilities both log a user in locally and obtain AFS tokens.


Using AFS Protection Groups

AFS enables users to define their own groups of other users or machines. The groups are placed on ACLs to grant the same permissions to many users without listing each user individually. For group creation instructions, see Administering the Protection Database.

Groups have AFS ID numbers, just as users do, but an AFS group ID (GID) is a negative integer whereas a user's AFS UID is a positive integer. By default, the Protection Server allocates a new group's AFS GID automatically, but members of the system:administrators group can assign a GID when issuing the pts creategroup command. Before explicitly assigning a GID, it is best to verify that it is not already in use.
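For example, an administrator can first confirm that a GID is unused (the pts examine command reports that no entry exists for it) and then assign it explicitly; the group name and GID here are illustrative:

   % pts examine -nameorid -286
   % pts creategroup -name terry:colleagues -owner terry -id -286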

A group cannot belong to another group, but it can own another group or even itself as long as it (the owning group) has at least one member. The current owner of a group can transfer ownership of the group to another user or group, even without the new owner's permission. At that point the former owner loses administrative control over the group.

By default, each user can create 20 groups. A system administrator can increase or decrease this group creation quota with the pts setfields command.
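For example, the following illustrative command raises user pat's group creation quota to 40:

   % pts setfields -nameorid pat -groupquota 40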

Each Protection Database entry (group or user) is protected by a set of five privacy flags, which limit who can administer the entry and what they can do. The default privacy flags are fairly restrictive, especially for user entries. See Setting the Privacy Flags on Database Entries.

The Three System Groups

As the Protection Server initializes for the first time on a cell's first database server machine, it automatically creates three group entries: the system:anyuser, system:authuser, and system:administrators groups.

The first two system groups are unlike any other groups in the Protection Database in that they do not have a stable membership: the system:anyuser group potentially includes anyone who can gain access to the cell's filespace, whether or not they are authenticated, and the system:authuser group includes everyone who is currently authenticated in the cell.

Because the groups do not have a stable membership, the pts membership command produces no output for them. Similarly, they do not appear in the list of groups to which a user belongs.

The system:administrators group does have a stable membership, consisting of the cell's privileged administrators. Members of this group can issue any pts command, and are the only ones who can issue several other restricted commands (such as the chown command on AFS files). By default, they also implicitly have the a (administer) and l (lookup) permissions on every ACL in the filespace. For information about changing this default, see Administering the system:administrators Group.

For a discussion of how to use system groups effectively on ACLs, see Using Groups on ACLs.

The Two Types of User-Defined Groups

All users can create regular groups. A regular group name has two fields separated by a colon, the first of which must indicate the group's ownership. The Protection Server refuses to create or change the name of a group if the result does not accurately indicate the ownership.

Members of the system:administrators group can create prefix-less groups whose names do not have the first field that indicates ownership. For suggestions on using the two types of groups effectively, see Using Groups Effectively.


Login and Authentication in AFS

As explained in Differences in Authentication, AFS authentication is separate from UNIX authentication because the two file systems are separate. The separation has two practical implications: users must both log into the local file system and authenticate with AFS (an AFS-modified login utility accomplishes both in one step), and the password an account uses in AFS can differ from the password recorded in the local password file.

When a user successfully authenticates, the AFS authentication service passes a token to the user's Cache Manager. The token is a small collection of data that certifies that the user has correctly provided the password associated with a particular AFS identity. The Cache Manager presents the token to AFS server processes along with service requests, as proof that the user is genuine. To learn about the mutual authentication procedure they use to establish identity, see A More Detailed Look at Mutual Authentication.

The Cache Manager stores tokens in the user's credential structure in kernel memory. To distinguish one user's credential structure from another's, the Cache Manager identifies each one either by the user's UNIX UID or by a process authentication group (PAG), which is an identification number guaranteed to be unique in the cell. For further discussion, see Identifying AFS Tokens by PAG.

A user can have only one token per cell in each separately identified credential structure. To obtain a second token for the same cell, the user must either log into a different machine or obtain another credential structure with an identifier different from any existing one, most easily by issuing the pagsh command (see Identifying AFS Tokens by PAG). Within a single credential structure, a user can hold one token for each of many cells at the same time. As this implies, authentication status on one machine or PAG is independent of authentication status on another, which can be very useful to a user or system administrator.

The AFS distribution includes library files that enable each system type's login utility to authenticate users with AFS and log them into the local file system in one step. If you do not configure an AFS-modified login utility on a client machine, its users must issue the klog command to authenticate with AFS after logging in.

Note: The AFS-modified libraries do not necessarily support all features available in an operating system's proprietary login utility. In some cases, it is not possible to support a utility at all. For more information about the supported utilities in each AFS version, see the IBM AFS Release Notes.

Identifying AFS Tokens by PAG

As noted, the Cache Manager identifies user credential structures either by UNIX UID or by PAG. Using a PAG is preferable because it is guaranteed to be unique: the Cache Manager allocates it based on a counter that increments with each use. In contrast, multiple users on a machine can share or assume the same UNIX UID, which creates potential security problems; two common examples are users who log in under a shared identity (such as the local superuser root) and users who assume another identity with the su command.

Yet another advantage of PAGs over UIDs is that processes spawned by the user inherit the PAG and so share the token; thus they gain access to AFS as the authenticated user. In many environments, for example, printer and other daemons run under identities (such as the local superuser root) that the AFS server processes recognize only as the anonymous user. Unless PAGs are used, such daemons cannot access files for which the system:anyuser group does not have the necessary ACL permissions.

Once a user has a PAG, any new tokens the user obtains are associated with the PAG. The PAG expires two hours after any associated tokens expire or are discarded. If the user issues the klog command before the PAG expires, the new token is associated with the existing PAG (the PAG is said to be recycled in this case).

AFS-modified login utilities automatically generate a PAG, as described in the following section. If you use a standard login utility, your users must issue the pagsh command before the klog command, or include the latter command's -setpag flag. For instructions, see Using Two-Step Login and Authentication.

Users can also use either command at any time to create a new PAG. The difference between the two commands is that the klog command replaces the PAG associated with the current command shell and its tokens, whereas the pagsh command initializes a new command shell before creating the new PAG. If the user already had a PAG, any running processes or jobs continue to use the tokens associated with the old PAG, whereas any new jobs or processes use the new PAG and its associated tokens. When the user exits the new shell (by pressing <Ctrl-d>, for example), the original PAG and shell become current again. By default, the pagsh command initializes a Bourne shell, but you can include the -c argument to initialize a C shell (the /bin/csh program on many system types) or Korn shell (the /bin/ksh program) instead.
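
For example, the following command starts a new C shell with its own PAG (assuming /bin/csh is the C shell's location on the local system type):

   % pagsh -c /bin/csh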

Using an AFS-modified login Utility

As previously mentioned, an AFS-modified login utility simultaneously obtains an AFS token and logs the user into the local file system. This section outlines the login and authentication process and its interaction with the value in the password field of the local password file.

An AFS-modified login utility performs a sequence of steps similar to the following; details can vary for different operating systems:

  1. It checks the user's entry in the local password file (the /etc/passwd file or equivalent).

  2. If no entry exists, or if an asterisk ( * ) appears in the entry's password field, the login attempt fails. If the entry exists, the attempt proceeds to the next step.

  3. The utility obtains a PAG.

  4. The utility converts the password provided by the user into an encryption key and encrypts a packet of data with the key. It sends the packet to the AFS authentication service (the AFS Authentication Server in the conventional configuration).

  5. The authentication service decrypts the packet and, depending on the success of the decryption, judges the password to be correct or incorrect. (For more details, see A More Detailed Look at Mutual Authentication.)

  6. If no AFS token was granted in Step 5, the login utility attempts to log the user into the local file system by comparing the provided password to the entry in the local password file.

As indicated, when you use an AFS-modified login utility, the password field in the local password file is no longer the primary gate for access to your system: if the user provides the correct AFS password, the program never consults the local password file. However, you can still use the password field to control access; for example, placing an asterisk ( * ) in the field blocks the login attempt entirely, as described in Step 2.

Systems that use a Pluggable Authentication Module (PAM) for login and AFS authentication do not necessarily consult the local password file at all, in which case they do not use the password field to control authentication and login attempts. Instead, instructions in the PAM configuration file (on many system types, /etc/pam.conf) serve the same function. See the instructions in the IBM AFS Quick Beginnings for installing AFS-modified login utilities.

Using Two-Step Login and Authentication

In cells that do not use an AFS-modified login utility, users must issue separate commands to log in and authenticate, as detailed in the IBM AFS User Guide:

  1. They use the standard login program to log in to the local file system, providing the password listed in the local password file (the /etc/passwd file or equivalent).

  2. They must issue the klog command to authenticate with the AFS authentication service, including its -setpag flag to associate the new tokens with a process authentication group (PAG).
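
A two-step session might look like the following sketch (the user name smith and the prompts are illustrative; exact output varies by system):

   login: smith
   Password:          (the local password)
   % klog -setpag
   Password:          (the AFS password)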

As mentioned in Creating Standard Files in New AFS Accounts, you can invoke the klog -setpag command in a user's .login file (or equivalent) so that the user does not have to remember to issue the command after logging in. The user still must type a password twice, once at the prompt generated by the login utility and once at the klog command's prompt. This implies that the two passwords can differ, but it is less confusing if they do not.

Another effect of not using an AFS-modified login utility is that the AFS servers recognize the standard login program as the anonymous user. If the login program needs to access any AFS files (such as the .login file in a user's home directory), then the ACL that protects the file must include an entry granting the l (lookup) and r (read) permissions to the system:anyuser group.

When you do not use an AFS-modified login utility, an actual (scrambled) password must appear in the local password file for each user. Use the local password-setting program (the /bin/passwd program on many system types) to insert or change these passwords. It is simpler if the password in the local password file matches the AFS password, but that is not required.

Obtaining, Displaying, and Discarding Tokens

Once logged in, a user can obtain a token at any time with the klog command. If a valid token already exists, the new one overwrites it. If a PAG already exists, the new token is associated with it.

By default, the klog command authenticates the issuer using the identity currently logged in to the local file system. To authenticate as a different identity, use the -principal argument. To obtain a token for a foreign cell, use the -cell argument (it can be combined with the -principal argument). See the IBM AFS User Guide and the entry for the klog command in the IBM AFS Administration Reference.
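
For example, the following command authenticates the issuer as the hypothetical user pat in the stateu.edu cell:

   % klog -principal pat -cell stateu.edu
   Password: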

To discard either all tokens or the token for a particular cell, issue the unlog command. The command affects only the tokens associated with the current command shell. See the IBM AFS User Guide and the entry for the unlog command in the IBM AFS Administration Reference.
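
For example, the following command discards only the token for the stateu.edu cell, leaving tokens for any other cells in place:

   % unlog -cell stateu.edu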

To display the tokens associated with the current command shell, issue the tokens command. The following examples illustrate its output in various situations.

If the issuer is not authenticated in any cell:

   % tokens
   Tokens held by the Cache Manager:
          --End of list--

The following shows the output for a user with AFS UID 1000 in the ABC Corporation cell:

   % tokens
   Tokens held by the Cache Manager: 
   
   User's (AFS ID 1000) tokens for afs@abc.com  [Expires Jun  2 10:00]
       --End of list--

The following shows the output for a user who is authenticated in ABC Corporation cell, the State University cell and the DEF Company cell. The user has different AFS UIDs in the three cells. Tokens for the last cell are expired:

   % tokens
   Tokens held by the Cache Manager:
    
   User's (AFS ID 1000) tokens for afs@abc.com  [Expires Jun  2 10:00]
   User's (AFS ID 4286) tokens for afs@stateu.edu  [Expires Jun  3 1:34]
   User's (AFS ID 22) tokens for afs@def.com  [>>Expired<<]
       --End of list--

The Kerberos version of the tokens command (the tokens.krb command) also reports information on the ticket-granting ticket, including the ticket's owner, the ticket-granting service, and the expiration date, as in the following example. Also see Support for Kerberos Authentication.

   % tokens.krb
   Tokens held by the Cache Manager:
   User's (AFS ID 1000) tokens for afs@abc.com [Expires Jun  2 10:00]
   User smith's tokens for krbtgt.ABC.COM@abc.com [Expires Jun  2 10:00]
     --End of list--

Setting Default Token Lifetimes for Users

The maximum lifetime of a user token is the smallest of the ticket lifetimes recorded in three Authentication Database entries: the afs entry, the krbtgt.CELLNAME entry (which corresponds to the ticket-granting service), and the entry for the user who is authenticating. The kas examine command reports the lifetime as Max ticket lifetime. Administrators who have the ADMIN flag on their Authentication Database entry can use the -lifetime argument to the kas setfields command to set an entry's ticket lifetime, as in the sketch below.
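
The following sketch sets a hypothetical user smith's maximum ticket lifetime to 100 hours (the hh:mm lifetime format shown is one of several accepted forms; see the kas setfields reference page):

   % kas setfields smith -lifetime 100:00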

Note: An AFS-modified login utility always grants a token with a lifetime calculated from the previously described three values. When issuing the klog command, a user can request a lifetime shorter than the default by using the -lifetime argument. For further information, see the IBM AFS User Guide and the klog reference page in the IBM AFS Administration Reference.

Changing Passwords

Regular AFS users can change their own passwords by using either the kpasswd or kas setpassword command. The commands prompt for the current password and then twice for the new password, to screen out typing errors.

Administrators who have the ADMIN flag on their Authentication Database entries can change any user's password, either by using the kpasswd command (which requires knowing the current password) or the kas setpassword command.
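
For example, an administrator with the ADMIN flag might reset a hypothetical user smith's password as follows (the command prompts twice for the new password, as described above):

   % kas setpassword -name smith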

If your cell does not use an AFS-modified login utility, remember also to change the local password, using the operating system's password-changing command. For more instructions on changing passwords, see Changing AFS Passwords.

Imposing Restrictions on Passwords and Authentication Attempts

You can help to make your cell more secure by imposing restrictions on user passwords and authentication attempts. To impose the restrictions as you create an account, use the A instruction in the uss template file as described in Increasing Account Security with the A Instruction. To set or change the values on an existing account, use the kas setfields command as described in Improving Password and Authentication Security.

By default, AFS passwords never expire. Limiting password lifetime can improve security by decreasing how long a password is exposed to cracking attempts. You can choose a lifetime from 1 to 254 days after the password was last changed; the limit applies automatically to each new password as it is set. When the user changes passwords, you can also insist that the new password is not similar to any of the user's 20 previous passwords.

Unscrupulous users can try to gain access to your AFS cell by guessing an authorized user's password. To protect against this type of attack, you can limit the number of times that a user can consecutively fail to provide the correct password. When the limit is exceeded, the authentication service refuses further authentication attempts for a specified period of time (the lockout time). To reenable authentication attempts before the lockout time expires, an administrator must issue the kas unlock command.
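
For example, the following sketch limits a hypothetical user smith to six consecutive failed attempts with a 30-minute lockout, and then shows how an administrator unlocks the account manually (argument formats are detailed on the kas setfields and kas unlock reference pages):

   % kas setfields smith -attempts 6 -locktime 0:30
   % kas unlock smith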

In addition to the settings on users' authentication accounts, you can improve security by automatically checking the quality of new user passwords. The kpasswd and kas setpassword commands pass the proposed password to a program or script called kpwvalid, if it exists. The kpwvalid program performs quality checks and returns a code to indicate whether the password is acceptable. You can create your own program or modify the sample program included in the AFS distribution. See the kpwvalid reference page in the IBM AFS Administration Reference.

There are several types of quality checks that can improve password quality, such as requiring a minimum length; the sketch below illustrates one.
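
The following Bourne shell script sketches a minimum-length check. It assumes, for illustration only, that kpwvalid receives the proposed password on the standard input stream and that a zero return code signals acceptance; consult the kpwvalid reference page for the exact interface:

   #!/bin/sh
   # Hypothetical kpwvalid-style check: reject passwords of fewer
   # than eight characters.  (wc -c counts the trailing newline,
   # so a value below 9 means fewer than 8 password characters.)
   read password
   if test `echo "$password" | wc -c` -lt 9; then
      echo "Password must contain at least 8 characters"
      exit 1
   fi
   exit 0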

Support for Kerberos Authentication

If your site is using standard Kerberos authentication rather than the AFS Authentication Server, use the modified versions of the klog, pagsh, and tokens commands that support Kerberos authentication. The binaries for the modified version of these commands have the same name as the standard binaries with the addition of a .krb extension.

Use either the Kerberos version or the standard command throughout the cell; do not mix the two versions. AFS Product Support can provide instructions on installing the Kerberos versions of these commands. For information on the differences between the two versions, see the IBM AFS Administration Reference.


Security and Authorization in AFS

AFS incorporates several features to ensure that only authorized users gain access to data. This section summarizes the most important of them and suggests methods for improving security in your cell.

Some Important Security Features

ACLs on Directories

Files in AFS are protected by the access control list (ACL) associated with their parent directory. The ACL defines which users or groups can access the data in the directory, and in what way. See Managing Access Control Lists.

Mutual Authentication Between Client and Server

When an AFS client and server process communicate, each requires the other to prove its identity during mutual authentication, which involves the exchange of encrypted information that only valid parties can decrypt and respond to. For a detailed description of the mutual authentication process, see A More Detailed Look at Mutual Authentication.

AFS server processes mutually authenticate both with one another and with processes that represent human users. After mutual authentication is complete, the server and client have established an authenticated connection, across which they can communicate repeatedly without having to authenticate again until the connection expires or one of the parties closes it. Authenticated connections have varying lifetimes.

Tokens

In order to access AFS files, users must prove their identities to the AFS authentication service by providing the correct AFS password. If the password is correct, the authentication service sends the user a token as evidence of authenticated status. See Login and Authentication in AFS.

Servers assign the user identity anonymous to users and processes that do not have a valid token. The anonymous identity has only the access granted to the system:anyuser group on ACLs.

Authorization Checking

Mutual authentication establishes that two parties communicating with one another are actually who they claim to be. For many functions, AFS server processes also check that the client whose identity they have verified is also authorized to make the request. Different requests require different kinds of privilege. See Three Types of Privilege.

Encrypted Network Communications

The AFS server processes encrypt particularly sensitive information before sending it back to clients. Even if an unauthorized party is able to eavesdrop on an authenticated connection, they cannot decipher encrypted data without the proper key.

The following AFS commands encrypt data because they involve server encryption keys and passwords: the bos addkey command, the bos listkeys command, the commands in the kas suite, and the klog command.

In addition, the United States edition of the Update Server encrypts sensitive information (such as the contents of KeyFile) when distributing it. Other commands in the bos suite and the commands in the fs, pts and vos suites do not encrypt data before transmitting it.

Three Types of Privilege

AFS uses three separate types of privilege, for the reasons discussed in The Reason for Separate Privileges: membership in the system:administrators group, the ADMIN flag on an Authentication Database entry, and inclusion in the /usr/afs/etc/UserList file on server machines.

Authorization Checking versus Authentication

AFS distinguishes between authentication and authorization checking. Authentication refers to the process of proving identity. Authorization checking refers to the process of verifying that an authenticated identity is allowed to perform a certain action.

AFS implements authentication at the level of connections. Each time two parties establish a new connection, they mutually authenticate. In general, each issue of an AFS command establishes a new connection between the AFS server process and the client.

AFS implements authorization checking at the level of server machines. If authorization checking is enabled on a server machine, then all of the server processes running on it provide services only to authorized users. If authorization checking is disabled on a server machine, then all of the server processes perform any action for anyone. Obviously, disabling authorization checking is an extreme security exposure. For more information, see Managing Authentication and Authorization Requirements.
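
For example, the following command reenables authorization checking on a hypothetical server machine fs1.abc.com (a sketch of the bos setauth syntax):

   % bos setauth fs1.abc.com on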

Improving Security in Your Cell

You can improve the level of security in your cell by configuring user accounts, server machines, and system administrator accounts in the ways described in the following sections.

User Accounts

Server Machines

System Administrators

A More Detailed Look at Mutual Authentication

As in any file system, security is a prime concern in AFS. A file system that makes file sharing easy is not useful if it makes file sharing mandatory, so AFS incorporates several features that prevent unauthorized users from accessing data. Security in a networked environment is difficult because almost all procedures require transmission of information across wires that almost anyone can tap into. Also, many machines on networks are powerful enough that unscrupulous users can monitor transactions or even intercept transmissions and fake the identity of one of the participants.

The most effective precaution against eavesdropping and information theft or fakery is for servers and clients to accept the claimed identity of the other party only with sufficient proof. In other words, the nature of the network forces all parties on the network to assume that the other party in a transaction is not genuine until proven so. Mutual authentication is the means through which parties prove their genuineness.

Because the measures needed to prevent fakery must be quite sophisticated, the implementation of mutual authentication procedures is complex. The underlying concept is simple, however: parties prove their identities by demonstrating knowledge of a shared secret. A shared secret is a piece of information known only to the parties who are mutually authenticating (they can sometimes learn it in the first place from a trusted third party or some other source). The party who originates the transaction presents the shared secret and refuses to accept the other party as valid until it shows that it knows the secret too.

The most common form of shared secret in AFS transactions is the encryption key, also referred to simply as a key. The two parties use their shared key to encrypt the packets of information they send and to decrypt the ones they receive. Encryption using keys actually serves two related purposes. First, it protects messages as they cross the network, preventing anyone who does not know the key from eavesdropping. Second, the ability to encrypt and decrypt messages successfully indicates that the parties are using the same key, which is their shared secret. If they are using different keys, messages remain scrambled and unintelligible after decryption.

The following sections describe AFS's mutual authentication procedures in more detail. Feel free to skip these sections if you are not interested in the mutual authentication process.

Simple Mutual Authentication

Simple mutual authentication involves only one encryption key and two parties, generally a client and server. The client contacts the server by sending a challenge message encrypted with a key known only to the two of them. The server decrypts the message using its key, which is the same as the client's if they really do share the same secret. The server responds to the challenge and uses its key to encrypt its response. The client uses its key to decrypt the server's response, and if it is correct, then the client can be sure that the server is genuine: only someone who knows the same key as the client can decrypt the challenge and answer it correctly. On its side, the server concludes that the client is genuine because the challenge message made sense when the server decrypted it.

AFS uses simple mutual authentication to verify user identities during the first part of the login procedure. In that case, the key is based on the user's password.

Complex Mutual Authentication

Complex mutual authentication involves three encryption keys and three parties. All secure AFS transactions (except the first part of the login process) employ complex mutual authentication.

When a client wishes to communicate with a server, it first contacts a third party called a ticket-granter. The ticket-granter and the client mutually authenticate using the simple procedure. When they finish, the ticket-granter gives the client a server ticket (or simply ticket) as proof that it (the ticket-granter) has preverified the identity of the client. The ticket-granter encrypts the ticket with the first of the three keys, called the server encryption key because it is known only to the ticket-granter and the server the client wants to contact. The client does not know this key.

The ticket-granter sends several other pieces of information along with the ticket, which enable the client to use the ticket effectively despite being unable to decrypt the ticket itself. Together with the ticket, these items constitute a token; the most important of them is the session key, the second of the three keys, which the ticket-granter creates for the client and server to use when encrypting the messages they exchange.

The ticket-granter seals the entire token with the third key involved in complex mutual authentication--the key known only to it (the ticket-granter) and the client. In some cases, this third key is derived from the password of the human user whom the client represents.

Now that the client has a valid server ticket, it is ready to contact the server. It sends the server two things: the ticket itself and a request message encrypted with the session key.

At this point, the server does not know the session key, because the ticket-granter just created it. However, the ticket-granter put a copy of the session key inside the ticket. The server uses the server encryption key to decrypt the ticket and learns the session key. It then uses the session key to decrypt the client's request message, generates a response, encrypts the response with the session key to protect it as it crosses the network, and sends it to the client.

This step is the heart of mutual authentication between client and server, because it proves to both parties that they know the same secret: the server knows the client is genuine because the client's request message was intelligible when decrypted with the session key, which the client could only have obtained from a properly sealed token; and the client knows the server is genuine because the server's response is intelligible when decrypted with the session key, which the server could only have learned by decrypting the ticket with the server encryption key.


Backing Up AFS Data

AFS provides two related facilities that help the administrator back up AFS data: backup volumes and the AFS Backup System.

Backup Volumes

The first facility is the backup volume, which you create by cloning a read/write volume. The backup volume is read-only and so preserves the state of the read/write volume at the time the clone is made.

Backup volumes can ease administration if you mount them in the file system and make their contents available to users. For example, it often makes sense to mount the backup version of each user volume as a subdirectory of the user's home directory. A conventional name for this mount point is OldFiles. Create a new version of the backup volume (that is, reclone the read/write) once a day to capture any changes that were made since the previous backup. If a user accidentally removes or changes data, the user can restore it from the backup volume, rather than having to ask you to restore it.
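
For example, the following sketch creates the backup version of a hypothetical volume user.smith and mounts it at the conventional OldFiles subdirectory (the pathname assumes the abc.com cell used in other examples):

   % vos backup user.smith
   % fs mkmount /afs/abc.com/usr/smith/OldFiles user.smith.backup

The vos backupsys command can reclone many volumes in a single operation, which is convenient for the daily update.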

The IBM AFS User Guide does not mention backup volumes, so regular users do not know about them unless you tell them. If you do make backup versions of user volumes, explain to your users how the backup works and where you have mounted it.

Users are often concerned that the data in a backup volume counts against their volume quota, and some even want to remove the OldFiles mount point. It does not: the backup volume is a separate volume, so the only space it uses in the user's volume is the amount needed for the mount point, which is about the same as the amount needed for a standard directory element.

Backup volumes are discussed in detail in Creating Backup Volumes.

The AFS Backup System

Backup volumes can reduce restoration requests, but they reside on disk and so do not protect data from loss due to hardware failure. Like any file system, AFS is vulnerable to this sort of data loss.

To protect your cell's users from permanent loss of data, you are strongly urged to back up your file system to tape on a regular and frequent schedule. The AFS Backup System is available to ease the administration and performance of backups. For detailed information about the AFS Backup System, see Configuring the AFS Backup System and Backing Up and Restoring AFS Data.


Using UNIX Remote Services in the AFS Environment

The AFS distribution includes modified versions of several standard UNIX commands, daemons, and programs that provide remote services, including the ftpd and inetd daemons and the rcp and rsh commands.

These modifications enable the commands to handle AFS authentication information (tokens), so that the issuer is recognized on the remote machine as an authenticated AFS user.

Replacing the standard versions of these programs in your file tree with the AFS-modified versions is optional. It is likely that AFS's transparent access reduces the need for some of the programs anyway, especially those involved in transferring files from machine to machine, like the ftpd and rcp programs.

If you decide to use the AFS versions of these commands, be aware that several of them are interdependent. For example, the passing of AFS authentication information works correctly with the rcp command only if you are using the AFS version of both the rcp and inetd commands.

The conventional installation locations for the modified remote commands are the /usr/afsws/bin and /usr/afsws/etc directories. To learn more about the commands' functionality, see their reference pages in the IBM AFS Administration Reference.


Accessing AFS through NFS

Users of NFS client machines can access the AFS filespace by mounting the /afs directory of an AFS client machine that is running the NFS/AFS Translator. This is a particular advantage for sites that already run NFS and want to access AFS from client machines for which AFS is not available. See Appendix A, Managing the NFS/AFS Translator.





© IBM Corporation 2000. All Rights Reserved