

AFS 3.6 Release Notes

This file documents new features, upgrade procedures, and remaining limitations associated with the initial General Availability (GA) release of AFS(R) 3.6 (build level afs3.6 2.0).

Note: This document includes all product information available at the time the document was produced. For additional information that became available later, see the README.txt file included on the AFS CD-ROM.

Summary of New Features

AFS 3.6 includes the following new features.


Supported System Types

AFS supports the following system types.

alpha_dux40 DEC AXP system with one or more processors running Digital UNIX 4.0d, 4.0e, or 4.0f
hp_ux110 Hewlett-Packard system with one or more processors running the 32-bit or 64-bit version of HP-UX 11.0
i386_linux22 IBM-compatible PC with one or more processors running Linux kernel version 2.2.5-15 (the version in Red Hat Linux 6.0), 2.2.10, 2.2.12, 2.2.12-20 (the version in Red Hat Linux 6.1), 2.2.13, or 2.2.14
rs_aix42 IBM RS/6000 with one or more 32-bit or 64-bit processors running AIX 4.2, 4.2.1, 4.3, 4.3.1, 4.3.2, or 4.3.3
sgi_65 Silicon Graphics system with one or more processors running IRIX 6.5 or 6.5.4. Support is provided for the following CPU board types, as reported by the IRIX uname -m command: IP19, IP20, IP21, IP22, IP25, IP26, IP27, IP28, IP30, IP32
sun4x_56 Sun SPARCstation with one or more processors running Solaris 2.6
sun4x_57 Sun SPARCstation with one or more processors running the 32-bit or 64-bit version of Solaris 7

Hardware and Software Requirements

For a list of requirements for both server and client machines, see the chapter titled Installation Overview in the IBM AFS Quick Beginnings document.


Accessing the AFS Binary Distribution and Documentation

The AFS Binary Distribution includes a separate CD-ROM for each supported operating system, containing all AFS binaries and files for both server and client machines, plus the documentation set in multiple formats. At the top level of the CD-ROM is a directory called Documentation plus a directory containing the system-specific AFS binaries, named using the values listed in Supported System Types. The CD-ROM for some operating systems has more than one system-specific directory; for example, the Solaris CD-ROM has sun4x_56 and sun4x_57.

The instructions in Upgrading Server and Client Machines to AFS 3.6 specify when to mount the CD-ROM and which files or directories to copy to the local disk or into an AFS volume.

The documents are also available online at http://www.transarc.com/Library/documentation/afs_doc.html. The documentation set includes the following documents:

Documents are provided in the following formats:

If you do not already have the Acrobat Reader program, you can download it for free at http://www.adobe.com/products/acrobat/readstep.html.

Adobe provides only an English-language version of Acrobat Reader for UNIX platforms. The program can display PDF files written in any language. It is the program interface (menus, messages, and so on) that is available in English only.

To make Reader's interface display properly in non-English language locales, use one of two methods to set the program's language environment to English:


Product Notes

The following sections summarize limitations and requirements that pertain to all system types and to individual system types, and describe revisions to the AFS documents:

Product Notes for All System Types

Product Notes for AIX Systems

Product Notes for Digital UNIX Systems

Product Notes for HP-UX Systems

Product Notes for IRIX Systems

Product Notes for Linux Systems

Product Notes for Solaris Systems

Documentation Notes


Changes to AFS Commands, Files, and Functionality

This section briefly describes commands, command options, and functionality that are new or changed in AFS 3.6. Unless otherwise noted, the IBM AFS Administration Guide and IBM AFS Administration Reference include complete documentation of these items.

A New Command

AFS 3.6 includes the new fs flushmount command. The command's intended use is to discard information about mount points that has become corrupted in the cache. The next time an application accesses the mount point, the Cache Manager must fetch the most current version of it from a File Server. Data cached from files or directories in the volume is not affected. The only other way to discard the information is to reinitialize the Cache Manager by rebooting the machine.

Symptoms of a corrupted mount point include garbled output from the fs lsmount command and failed attempts to change directory to, or list the contents of, the volume root directory represented by the mount point.
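
The following is a usage sketch; the pathname shown is hypothetical, and the complete syntax appears in the IBM AFS Administration Reference.

   % fs flushmount -path /afs/abc.com/usr/terry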

New File or Command Functionality

AFS 3.6 adds the following new options and functionality to existing commands and files.


Support for Backup to TSM

AFS 3.6 introduces support for backing up AFS data to media managed by the Tivoli Storage Manager (TSM), a third-party backup program which implements the Open Group's Backup Service application programming interface (API), also called XBSA. TSM was formerly called the ADSTAR Distributed Storage Manager, or ADSM. It is assumed that the administrator is familiar with TSM; explaining TSM or XBSA concepts or terminology is beyond the scope of this document.

See the following subsections:

New Command and File Features that Support TSM

The AFS 3.6 versions of the following commands and configuration files include new options or instructions to support backup to TSM.

Product Notes for Use of TSM

Configuring the Backup System and TSM

Perform the following steps to configure TSM and the AFS Backup System for interoperation.

Note: You might need to perform additional TSM configuration procedures unrelated to AFS. See the TSM documentation.

  1. Become the local superuser root, if you are not already.

       % su root
       Password: root_password   
    

  2. Install version 3.7.1 of the TSM client API on the local disk of the Tape Coordinator machine. If you do not already have the API, you can use the following instructions to download it using the UNIX File Transfer Protocol (ftp).

    1. Verify that there is enough free space on the local disk to accommodate the API package:

      • On AIX systems, 4 MB on the disk that houses the /usr/tivoli directory

      • On Solaris systems, 13 MB on the disk that houses the /opt/tivoli directory

    2. Connect to the ftp server called ftp.software.ibm.com, logging in as anonymous and providing your electronic mail address as the password.

    3. Switch to binary mode.
         ftp> bin   
      

    4. Change directory as indicated:
         ftp> cd storage/tivoli-storage-management-maintenance/client/v3r7   
      

    5. Change to the appropriate directory and retrieve the API file.

      • On an AIX 4.3 system:
           ftp> cd AIX/v371
           ftp> get tivoli.tsm.client.api.aix43.32bit   
        

      • On a Solaris 2.6 or 7 system:
           ftp> cd Solaris/v371
           ftp> get IP21804.tar.Z   
        

    6. Use the appropriate tool to install the TSM API package locally:

      • On AIX machines, use smit, which installs the files in the /usr/tivoli/tsm/client/api/bin directory

      • On Solaris machines, use the following command, which installs the files in the /opt/tivoli/tsm/client/api/bin directory:
           # uncompress -c IP21804.tar.Z | tar xvf -
        

  3. Set the following TSM environment variables as indicated. If you do not set them, you must use the default values specified in the TSM documentation.

    DSMI_DIR
    Specifies the pathname of the directory that contains the TSM client system options file, dsm.sys. The directory must have a subdirectory (which can be a symbolic link) called en_US that contains the dsmclientV3.cat catalog file.

    Do not put a final slash ( / ) on the directory name. Examples of appropriate values are /opt/tivoli/tsm/client/api/bin on Solaris machines and /usr/tivoli/tsm/client/api/bin on AIX machines.

    DSMI_CONFIG
    Specifies the pathname of the directory that contains the TSM client user options file, dsm.opt. The value can be the same as for the DSMI_DIR variable. Do not put a final slash ( / ) on the directory name.

    DSMI_LOG
    Specifies the full pathname (including the filename) of the log file for error messages from the API. An appropriate value is /usr/afs/backup/butc.TSMAPI.log.
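
    For example, if you start the Tape Coordinator from a Bourne-compatible shell, you can set and export the variables as follows before issuing the butc command. This is only a sketch using the example values above; substitute the values appropriate for your installation.

       # DSMI_DIR=/usr/tivoli/tsm/client/api/bin; export DSMI_DIR
       # DSMI_CONFIG=/usr/tivoli/tsm/client/api/bin; export DSMI_CONFIG
       # DSMI_LOG=/usr/afs/backup/butc.TSMAPI.log; export DSMI_LOG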

  4. Verify that the dsm.sys file includes the following instructions. For a description of the fields, see the TSM documentation.
       ServerName           machine_name
          CommMethod        tcpip
          TCPPort           TSM_port
          TCPServerAddress  full_machine_name
          PasswordAccess    prompt
          Compression       yes
    

    The following is an example of appropriate values:

       ServerName tsm3
          CommMethod tcpip
          TCPPort 1500
          TCPServerAddress  tsm3.abc.com
          PasswordAccess  prompt
          Compression  yes   
    

  5. Verify that the dsm.opt file includes the following instructions. For a description of the fields, see the TSM documentation.
       ServerName        machine_name
          tapeprompt     no
          compressalways yes   
    

  6. Create a Backup Database entry for each Tape Coordinator that is to communicate with the TSM server. Multiple Tape Coordinators can interact with the same TSM server if the server has sufficient capacity.
       # backup addhost <tape machine name> <TC port offset>
    

    where

    tape machine name
    Specifies the fully qualified hostname of the Tape Coordinator machine.

    TC port offset
    Specifies the Tape Coordinator's port offset number. Acceptable values are integers in the range from 0 (zero) through 58510.
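
    For example, for a Tape Coordinator machine with the hypothetical hostname backup1.abc.com and port offset 22:

       # backup addhost backup1.abc.com 22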

  7. Create a device configuration file for the Tape Coordinator called /usr/afs/backup/CFG_tcid, where tcid is the Tape Coordinator's port offset number as defined in Step 6. The file must include the following instructions:

    For more detailed descriptions of the instructions, and of other instructions you can include in the configuration file, see CFG_tcid.
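
    The instructions relevant to an XBSA server such as TSM (among them SERVER, TYPE, NODE, PASSFILE or PASSWORD, and MGMTCLASS) are summarized in the CFG_tcid reference page. The following is only a sketch of what such a file can look like; the values shown are assumptions, and the set of instructions your installation requires depends on your TSM configuration.

       TYPE tsm
       SERVER tsm3.abc.com
       NODE afsbackup
       PASSFILE /usr/afs/backup/tsm.password
       MGMTCLASS standard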


Upgrading Server and Client Machines to AFS 3.6

This section explains how to upgrade server and client machines from AFS 3.5 or AFS 3.6 Beta to AFS 3.6. Before performing an upgrade, please read all of the introductory material in this section.

If you are installing AFS for the first time, skip this chapter and refer to the IBM AFS Quick Beginnings document for AFS 3.6.

AFS provides backward compatibility to the previous release only: AFS 3.6 is certified to be compatible with AFS 3.5 but not necessarily with earlier versions.

Note:This document does not provide instructions for upgrading from AFS 3.4a or earlier directly to AFS 3.6. A file system conversion is required on some system types. See the AFS Release Notes for AFS 3.5 and contact your AFS product support representative for assistance.

Prerequisites for Upgrading

You must meet the following requirements to upgrade successfully to AFS 3.6:

Obtaining the Binary Distribution

Use one of the following methods to obtain the AFS distribution of each system type for which you are licensed.

Storing Binaries in AFS

It is conventional to store many of the programs and files from the AFS binary distribution in a separate volume for each system type, mounted in your AFS filespace at /afs/cellname/sysname/usr/afsws. These instructions rename the volume currently mounted at this location and create a new volume for AFS 3.6 binaries.

Repeat the instructions for each system type.

  1. Authenticate as an administrator listed in the /usr/afs/etc/UserList file.

  2. Issue the vos create command to create a new volume for AFS 3.6 binaries called sysname.3.6. Set an unlimited quota on the volume to avoid running out of space as you copy files from the distribution.
       % vos create <machine name> <partition name> sysname.3.6  -maxquota  0    
    

  3. Issue the fs mkmount command to mount the volume at a temporary location.
       % fs mkmount  /afs/.cellname/temp  sysname.3.6    
    

  4. Prepare to access the files using the method you have selected:

  5. Copy files from the distribution into the sysname.3.6 volume.
       % cp -rp  bin  /afs/.cellname/temp  
       
       % cp -rp  etc  /afs/.cellname/temp  
          
       % cp -rp  include  /afs/.cellname/temp  
       
       % cp -rp  lib  /afs/.cellname/temp   
    

  6. (Optional) By convention, the contents of the distribution's root.client directory are not stored in AFS. However, if you are upgrading client functionality on many machines, it can be simpler to copy the client files from your local AFS space than from the CD-ROM or from IBM's Electronic Software Distribution system. If you wish to store the contents of the root.client directory in AFS temporarily, copy them now.
       % cp -rp  root.client  /afs/.cellname/temp  
    

  7. Issue the vos rename command to change the name of the volume currently mounted at the /afs/cellname/sysname/usr/afsws directory. A possible value for the extension reflects the AFS version and build level (for example: 3.5-bld3.32).

    If you do not plan to retain the old volume, you can substitute the vos remove command in this step.

       %  vos rename sysname.usr.afsws  sysname.usr.afsws.extension    
    

  8. Issue the vos rename command to change the name of the sysname.3.6 volume to sysname.usr.afsws.
       %  vos rename sysname.3.6  sysname.usr.afsws    
    

  9. Issue the fs rmmount command to remove the temporary mount point for the sysname.3.6 volume.
        % fs rmmount  /afs/.cellname/temp   
    

Upgrading the Operating System

AFS 3.6 supports the 64-bit version of HP-UX 11.0 and Solaris 7. To upgrade from the 32-bit version, you might need to reinstall the operating system completely before installing AFS 3.6. When performing any operating system upgrade, you must take several actions to preserve AFS functionality, including the following:

Distributing Binaries to Server Machines

The instructions in this section explain how to use the Update Server to distribute server binaries from a binary distribution machine of each system type.

Repeat the steps on each binary distribution machine in your cell. If you do not use the Update Server, repeat the steps on every server machine in your cell. If you are copying files from the AFS product tree, the server machine must also be configured as an AFS client machine.

  1. Become the local superuser root, if you are not already.

       % su root
       Password: root_password   
    

  2. Create a temporary subdirectory of the /usr/afs/bin directory to store the AFS 3.6 server binaries.
       # mkdir /usr/afs/bin.36    
    

  3. Prepare to access server files using the method you have selected from those listed in Obtaining the Binary Distribution:

  4. Copy the server binaries from the distribution into the /usr/afs/bin.36 directory.
       # cp -p  *  /usr/afs/bin.36   
    

  5. Rename the current /usr/afs/bin directory to /usr/afs/bin.old and the /usr/afs/bin.36 directory to the standard location.
       # cd /usr/afs
       
       # mv  bin  bin.old
          
       # mv  bin.36  bin   
    

Upgrading Server Machines

Repeat the following instructions on each server machine. Perform them first on the database server machine with the lowest IP address, next on the other database server machines, and finally on other server machines.

The AFS data stored on a server machine is inaccessible to client machines during the upgrade process, so it is best to perform the upgrade at a time and in a manner that disturbs your users least.

  1. If you have just followed the steps in Distributing Binaries to Server Machines to install the server binaries on binary distribution machines, wait the required interval (by default, five minutes) for the local upclientbin process to retrieve the binaries.

    If you do not use binary distribution machines, perform the instructions in Distributing Binaries to Server Machines on this machine.

  2. Become the local superuser root, if you are not already, by issuing the su command.
       % su root
       Password: root_password   
    

  3. If the machine also functions as a client machine, prepare to access client files using the method you have selected from those listed in Obtaining the Binary Distribution:

  4. If the machine also functions as a client machine, copy the AFS 3.6 version of the afsd binary and other files to the /usr/vice/etc directory.
    Note: Some files in the /usr/vice/etc directory, such as the AFS initialization file (called afs.rc on many system types), do not necessarily need to change for a new release. It is a good policy to compare the contents of the distribution directory and the /usr/vice/etc directory before performing the copying operation. If there are files in the /usr/vice/etc directory that you created for AFS 3.5 or 3.6 Beta and that you want to retain, either move them to a safe location before performing the following instructions, or alter the following instructions to copy over only the appropriate files.
       # cp  -p  usr/vice/etc/*   /usr/vice/etc   
       
       # cp  -rp  usr/vice/etc/C  /usr/vice/etc   
    

    If you have not yet incorporated AFS into the machine's authentication system, perform the instructions in the section titled Enabling AFS Login for this system type in the IBM AFS Quick Beginnings chapter about configuring client machines. If this machine was running the same operating system revision with AFS 3.5 or AFS 3.6 Beta, you presumably already incorporated AFS into its authentication system.

  5. AFS performance is most dependable if the AFS release version of the kernel extensions and server processes is the same. Therefore, it is best to incorporate the AFS 3.6 kernel extensions into the kernel at this point.

    First issue the following command to shut down the server processes, preventing them from restarting accidentally before you incorporate the AFS 3.6 extensions into the kernel.

       # bos shutdown <machine name> -localauth -wait
    

    Then perform the instructions in Incorporating AFS into the Kernel and Enabling the AFS Initialization Script, which have you reboot the machine. Assuming that the machine's AFS initialization script is configured to invoke the bosserver command as specified in IBM AFS Quick Beginnings, the BOS Server starts itself and then the other AFS server processes listed in its local /usr/afs/local/BosConfig file.

    There are two circumstances in which you must incorporate the kernel extensions and reboot now rather than later:

    In any other circumstances, you can choose to upgrade the kernel extensions later. Choose one of the following options:

  6. Once you are satisfied that the machine is functioning correctly at AFS 3.6, there is no need to retain previous versions of the server binaries in the /usr/afs/bin directory. (You can always use the bos install command to reinstall them if it becomes necessary to downgrade). If you use the Update Server, the upclientbin process renamed them with a .old extension in Step 1. To reclaim the disk space occupied in the /usr/afs/bin directory by .bak and .old files, you can use the following command:
       # bos prune <machine name> -bak -old -localauth
    

    Step 5 of Distributing Binaries to Server Machines had you move the previous version of the binaries to the /usr/afs/bin.old directory. You can also remove that directory on any machine where you created it.

       # rm -rf  /usr/afs/bin.old   
    

Upgrading Client Machines

  1. Become the local superuser root, if you are not already, by issuing the su command.
       % su root
       Password: root_password   
    

  2. Prepare to access client files using the method you have selected from those listed in Obtaining the Binary Distribution:

  3. Copy the AFS 3.6 version of the afsd binary and other files to the /usr/vice/etc directory.
    Note: Some files in the /usr/vice/etc directory, such as the AFS initialization file (called afs.rc on many system types), do not necessarily need to change for a new release. It is a good policy to compare the contents of the distribution directory and the /usr/vice/etc directory before performing the copying operation. If there are files in the /usr/vice/etc directory that you created for AFS 3.5 or 3.6 Beta and that you want to retain, either move them to a safe location before performing the following instructions, or alter the following instructions to copy over only the appropriate files.
       # cp  -p  usr/vice/etc/*   /usr/vice/etc   
       
       # cp  -rp  usr/vice/etc/C  /usr/vice/etc   
    

    If you have not yet incorporated AFS into the machine's authentication system, perform the instructions in the section titled Enabling AFS Login for this system type in the IBM AFS Quick Beginnings chapter about configuring client machines. If this machine was running the same operating system revision with AFS 3.5 or AFS 3.6 Beta, you presumably already incorporated AFS into its authentication system.

  4. Perform the instructions in Incorporating AFS into the Kernel and Enabling the AFS Initialization Script to incorporate AFS extensions into the kernel. The instructions conclude with a reboot of the machine, which starts the new Cache Manager.

Incorporating AFS into the Kernel and Enabling the AFS Initialization Script

As part of upgrading a machine to AFS 3.6, you must incorporate AFS 3.6 extensions into its kernel and verify that the AFS initialization script is included in the machine's startup sequence. Proceed to the instructions for your system type:

Loading AFS into the AIX Kernel

The AIX kernel extension facility is the dynamic kernel loader provided by IBM Corporation. AIX does not support incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the kernel extension facility must run each time the machine reboots, so the AFS initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the script to the conventional location and edit it to select the appropriate options depending on whether NFS is also to run.

After editing the script, you verify that there is an entry in the AIX inittab file that invokes it, then reboot the machine to incorporate the new AFS extensions into the kernel and restart the Cache Manager.

  1. Access the AFS distribution by changing directory as indicated. Substitute rs_aix42 for the sysname variable.

  2. Copy the AFS kernel library files to the local /usr/vice/etc/dkload directory.
       # cd  usr/vice/etc
       
       # cp -rp  dkload  /usr/vice/etc   
    

  3. Because you ran AFS 3.5 on this machine, the appropriate AFS initialization file might already exist as /etc/rc.afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.6 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now.

       # cp -p  rc.afs  /etc/rc.afs    
    

  4. Edit the /etc/rc.afs script, setting the NFS variable if it is not already set.
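
    For example, on a machine that is not to function as an NFS/AFS Translator, the variable is typically set as follows. The value shown is an assumption based on the script's conventions; consult the comments in the rc.afs script itself for the values it defines.

       NFS=$NFS_NONE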

  5. Place the following line in the AIX initialization file, /etc/inittab, if it is not already present. It invokes the AFS initialization script and needs to appear just after the line that starts the NFS daemons.
       rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services   
    

  6. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       # cd  /usr/vice/etc
       
       # rm  rc.afs
      
       # ln -s  /etc/rc.afs   
    

  7. Reboot the machine.
          # shutdown -r now   
    

  8. If you are upgrading a server machine, log in again as the local superuser root, then return to Step 6 in Upgrading Server Machines.
       login: root
       Password: root_password     
    

Building AFS into the Digital UNIX Kernel

On Digital UNIX machines, you must build AFS modifications into a new static kernel; Digital UNIX does not support dynamic loading. If the machine's hardware and software configuration exactly matches another Digital UNIX machine on which AFS 3.6 is already built into the kernel, you can choose to copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

If the machine was running a version of Digital UNIX 4.0 with a previous version of AFS, the configuration changes specified in Step 1 through Step 4 are presumably already in place.

  1. Create a copy called AFS of the basic kernel configuration file included in the Digital UNIX distribution as /usr/sys/conf/machine_name, where machine_name is the machine's hostname in all uppercase letters.
       # cd /usr/sys/conf
       
       # cp machine_name AFS   
    

  2. Add AFS to the list of options in the configuration file you created in the previous step, so that the result looks like the following:
              .                   .
              .                   .
           options               UFS
           options               NFS
           options               AFS
              .                   .
              .                   .   
    

  3. Add an entry for AFS to two places in the /usr/sys/conf/files file.

  4. Add an entry for AFS to two places in the /usr/sys/vfs/vfs_conf.c file.

  5. Access the AFS distribution by changing directory as indicated. Substitute alpha_dux40 for the sysname variable.

  6. Because you ran AFS 3.5 on this machine, the appropriate AFS initialization file might already exist as /sbin/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.6 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. Note the removal of the .rc extension as you copy.

        # cp  -p  usr/vice/etc/afs.rc  /sbin/init.d/afs   
    

  7. Copy the AFS kernel module to the local /usr/sys/BINARY directory.

    The AFS 3.6 distribution includes only the libafs.nonfs.o version of the library, because Digital UNIX machines are not supported as NFS/AFS Translator machines.

       # cp  -p  bin/libafs.nonfs.o  /usr/sys/BINARY/afs.mod   
    

  8. Configure and build the kernel. Respond to any prompts by pressing <Return>. The resulting kernel is in the file /sys/AFS/vmunix.
       # doconfig -c AFS   
    

  9. Rename the existing kernel file and copy the new, AFS-modified file to the standard location.
       # mv  /vmunix  /vmunix_orig
       
       # cp  -p  /sys/AFS/vmunix  /vmunix   
    

  10. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the Digital UNIX startup and shutdown sequence. If necessary, issue the commands to create the links.
       # ln -s  ../init.d/afs  /sbin/rc3.d/S67afs
       
       # ln -s  ../init.d/afs  /sbin/rc0.d/K66afs   
    

  11. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       # cd  /usr/vice/etc
       
       # rm  afs.rc
      
       # ln -s  /sbin/init.d/afs  afs.rc   
    

  12. Reboot the machine.
       # shutdown -r now   
    

  13. If you are upgrading a server machine, log in again as the local superuser root, then return to Step 6 in Upgrading Server Machines.
       login: root
       Password: root_password     
    

Building AFS into the HP-UX Kernel

On HP-UX machines, you must build AFS modifications into a new kernel; HP-UX does not support dynamic loading. If the machine's hardware and software configuration exactly matches another HP-UX machine on which AFS 3.6 is already built into the kernel, you can choose to copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

  1. Copy the existing kernel-related files to a safe location.
       # cp -p  /stand/vmunix  /stand/vmunix.noafs
       
       # cp -p  /stand/system  /stand/system.noafs   
    

  2. Access the AFS distribution by changing directory as indicated. Substitute hp_ux110 for the sysname variable.

  3. Because you ran AFS 3.5 on this machine, the appropriate AFS initialization file might already exist as /sbin/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.6 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. Note the removal of the .rc extension as you copy.

       # cp  -p  usr/vice/etc/afs.rc  /sbin/init.d/afs   
    

  4. Copy the file afs.driver to the local /usr/conf/master.d directory, changing its name to afs as you do so.
       # cp  -p  usr/vice/etc/afs.driver  /usr/conf/master.d/afs   
    

  5. Copy the AFS kernel module to the local /usr/conf/lib directory.

    HP-UX machines are not supported as NFS/AFS Translator machines, so AFS 3.6 includes only libraries called libafs.nonfs.a (for the 32-bit version of HP-UX) and libafs64.nonfs.a (for the 64-bit version of HP-UX). Change the library's name to libafs.a as you copy it.

    For the 32-bit version of HP-UX:

       # cp  -p   bin/libafs.nonfs.a   /usr/conf/lib/libafs.a
    

    For the 64-bit version of HP-UX:

       # cp  -p  bin/libafs64.nonfs.a   /usr/conf/lib/libafs.a   
    

  6. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the HP-UX startup and shutdown sequence. If necessary, issue the commands to create the links.
       # ln -s ../init.d/afs /sbin/rc2.d/S460afs
      
       # ln -s ../init.d/afs /sbin/rc2.d/K800afs   
    

  7. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /sbin/init.d/afs  afs.rc   
    

  8. Incorporate the AFS driver into the kernel, either using the SAM program or a series of individual commands. Both methods reboot the machine, which loads the new kernel and starts the Cache Manager.

  9. If you are upgrading a server machine, log in again as the local superuser root, then return to Step 6 in Upgrading Server Machines.
       login: root
       Password: root_password     
    

Incorporating AFS into the IRIX Kernel

To incorporate AFS into the kernel on IRIX machines, choose one of two methods:

Loading AFS into the IRIX Kernel

The ml program is the dynamic kernel loader provided by SGI for IRIX systems. If you use it rather than building AFS modifications into a static kernel, then for AFS to function correctly the ml program must run each time the machine reboots. Therefore, the AFS initialization script (included on the AFS CD-ROM) invokes it automatically when the afsml configuration variable is activated. In this section you activate the variable and run the script.

  1. Issue the uname -m command to determine the machine's CPU type. The IPxx value in the output must match one of the supported CPU types listed in Supported System Types.
       # uname -m   
    

  2. Access the AFS distribution by changing directory as indicated. Substitute sgi_65 for the sysname variable.

  3. Copy the appropriate AFS kernel library file to the local /usr/vice/etc/sgiload directory; the IPxx portion of the library file name must match the value returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.

    You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of space.

       # cd  usr/vice/etc/sgiload    
    

    If the machine is not to act as an NFS/AFS translator:

       # cp -p  libafs.IPxx.nonfs.o  /usr/vice/etc/sgiload   
    

    If the machine is to act as an NFS/AFS translator, in which case its kernel must support NFS server functionality:

       # cp -p   libafs.IPxx.o   /usr/vice/etc/sgiload   
    

  4. Proceed to Enabling the AFS Initialization Script on IRIX Systems.

Building AFS into the IRIX Kernel

If you prefer to build a kernel, and the machine's hardware and software configuration exactly matches another IRIX machine on which AFS 3.6 is already built into the kernel, you can choose to copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

  1. Access the AFS distribution by changing directory as indicated. Substitute sgi_65 for the sysname variable.

  2. Issue the uname -m command to determine the machine's CPU type. The IPxx value in the output must match one of the supported CPU types listed in Supported System Types.
       # uname -m    
    

  3. Copy the appropriate AFS kernel library file to the local file /var/sysgen/boot/afs.a; the IPxx portion of the library file name must match the value returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.
       # cd  bin   
    

    If the machine is not to act as an NFS/AFS translator:

       # cp -p  libafs.IPxx.nonfs.a   /var/sysgen/boot/afs.a   
    

    If the machine is to act as an NFS/AFS translator, in which case its kernel must support NFS server functionality:

       # cp -p   libafs.IPxx.a   /var/sysgen/boot/afs.a   
    

  4. Copy the kernel initialization file afs.sm to the local /var/sysgen/system directory, and the kernel master file afs to the local /var/sysgen/master.d directory.
       # cp -p  afs.sm  /var/sysgen/system
       
       # cp -p  afs  /var/sysgen/master.d   
    

  5. Copy the existing kernel file, /unix, to a safe location and compile the new kernel. It is created as /unix.install, and overwrites the existing /unix file when the machine reboots.
       # cp -p  /unix  /unix_orig
       
       # autoconfig   
    

  6. Proceed to Enabling the AFS Initialization Script on IRIX Systems.

Enabling the AFS Initialization Script on IRIX Systems

  1. Because you ran AFS 3.5 on this machine, the appropriate AFS initialization file might already exist as /etc/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.6 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. If the machine is configured as a client machine, you already copied the script to the local /usr/vice/etc directory. Otherwise, change directory as indicated, substituting sgi_65 for the sysname variable.

    Now copy the script. Note the removal of the .rc extension as you copy.

       # cp -p  script_location/afs.rc  /etc/init.d/afs   
    

  2. If the afsml configuration variable is not already set appropriately, issue the chkconfig command.

    If you are using the ml program:

       # /etc/chkconfig -f afsml on   
    

    If you built AFS into a static kernel:

       # /etc/chkconfig -f afsml off   
    

    If the machine is to function as an NFS/AFS Translator, the kernel supports NFS server functionality, and the afsxnfs variable is not already set appropriately, set it now.

       # /etc/chkconfig -f afsxnfs on   
    

  3. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the IRIX startup and shutdown sequence. If necessary, issue the commands to create the links.
       # ln -s ../init.d/afs /etc/rc2.d/S35afs
      
       # ln -s ../init.d/afs /etc/rc0.d/K35afs   
    

  4. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /etc/init.d/afs  afs.rc   
    

  5. Reboot the machine.

       # shutdown -i6 -g0 -y   
    

  6. If you are upgrading a server machine, log in again as the local superuser root, then return to Step 6 in Upgrading Server Machines.
       login: root
       Password: root_password     
    

Loading AFS into the Linux Kernel

The insmod program is the dynamic kernel loader for Linux. Linux does not support incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the insmod program must run each time the machine reboots, so the AFS initialization script (included on the AFS CD-ROM) invokes it automatically. The script also includes commands that select the appropriate AFS library file automatically. In this section you run the script.

  1. Access the AFS distribution by changing directory as indicated. Substitute i386_linux22 for the sysname variable.

  2. Copy the AFS kernel library files to the local /usr/vice/etc/modload directory. The filenames for the libraries have the format libafs-version.o, where version indicates the kernel build level. The string .mp in the version indicates that the file is appropriate for use with symmetric multiprocessor (SMP) kernels.
       # cd  usr/vice/etc
       
       # cp -rp  modload  /usr/vice/etc   
    

  3. The AFS 3.6 distribution includes a new AFS initialization file that can select automatically from the kernel extensions included in AFS 3.6. Copy it to the /etc/rc.d/init.d directory, removing the .rc extension as you do.
       # cp -p   afs.rc  /etc/rc.d/init.d/afs    
    

    The afsd options file might already exist as /etc/sysconfig/afs from running a previous version of AFS on this machine. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.6 distribution to see if any changes are needed.

    If the options file is not already in place, copy it now. Note the removal of the .conf extension as you copy.

       # cp  -p  afs.conf  /etc/sysconfig/afs    
    

    If necessary, edit the options file to invoke the desired arguments on the afsd command in the initialization script. For further information, see the section titled Configuring the Cache Manager in the IBM AFS Quick Beginnings chapter about configuring client machines.
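
    For illustration only, the options file can pass afsd arguments such as the following. The variable name and values shown here are assumptions; the comments in the options file itself describe the variables it actually uses.

       OPTIONS="-stat 2000 -dcache 800 -daemons 3 -volumes 70"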

  4. Issue the chkconfig command to activate the afs configuration variable, if it is not already. Based on the instruction in the AFS initialization file that begins with the string #chkconfig, the command automatically creates the symbolic links that incorporate the script into the Linux startup and shutdown sequence.
       # /sbin/chkconfig  --add afs   
    

  5. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/rc.d/init.d directories, and copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You can always retrieve the original script or options file from the AFS distribution if necessary.
       # cd /usr/vice/etc
       
       # rm afs.rc afs.conf
        
       # ln -s  /etc/rc.d/init.d/afs  afs.rc
       
       # ln -s  /etc/sysconfig/afs  afs.conf   
    

  6. Reboot the machine.
       # shutdown -r now     
    

  7. If you are upgrading a server machine, log in again as the local superuser root, then return to Step 6 in Upgrading Server Machines.
       login: root
       Password: root_password     
    

Loading AFS into the Solaris Kernel

The modload program is the dynamic kernel loader provided by Sun Microsystems for Solaris systems. Solaris does not support incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the modload program must run each time the machine reboots, so the AFS initialization script (included on the AFS CD-ROM) invokes it automatically. In this section you copy the appropriate AFS library file to the location where the modload program accesses it and then run the script.

  1. Access the AFS distribution by changing directory as indicated. Substitute sun4x_56 or sun4x_57 for the sysname variable.

  2. If this machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and ran that operating system with AFS 3.5, the appropriate AFS initialization file might already exist as /etc/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.6 distribution to see if any changes are needed.

    If this machine is running the 64-bit version of Solaris 7, the AFS initialization file differs from the AFS 3.5 version. Copy it from the AFS 3.6 distribution.

    Note the removal of the .rc extension as you copy.

       # cd  usr/vice/etc
        
       # cp  -p  afs.rc  /etc/init.d/afs   
    

  3. Copy the appropriate AFS kernel library file to the appropriate location under the local /kernel/fs directory.

    If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7 and is not to act as an NFS/AFS translator:

       # cp  -p  modload/libafs.nonfs.o  /kernel/fs/afs   
    

    If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7 and is to act as an NFS/AFS translator, in which case its kernel must support NFS server functionality and the nfsd process must be running:

       # cp  -p  modload/libafs.o  /kernel/fs/afs   
    

    If the machine is running the 64-bit version of Solaris 7 and is not to act as an NFS/AFS translator:

       # cp  -p  modload/libafs64.nonfs.o  /kernel/fs/sparcv9/afs   
    

    If the machine is running the 64-bit version of Solaris 7 and is to act as an NFS/AFS translator, in which case its kernel must support NFS server functionality and the nfsd process must be running:

       # cp  -p  modload/libafs64.o  /kernel/fs/sparcv9/afs      
    

  4. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the Solaris startup and shutdown sequence. If necessary, issue the commands to create the links.
       # ln -s ../init.d/afs /etc/rc3.d/S99afs
      
       # ln -s ../init.d/afs /etc/rc0.d/K66afs   
    

  5. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /etc/init.d/afs  afs.rc   
    

  6. Reboot the machine.
       # shutdown -i6 -g0 -y      
    

  7. If you are upgrading a server machine, log in again as the local superuser root, then return to Step 6 in Upgrading Server Machines.
       login: root
       Password: root_password     
    

Storing AFS Documents in AFS

This section explains how to create and mount a volume to house AFS documents. The recommended mount point for the volume is /afs/cellname/afsdoc. If you ran AFS 3.5, the volume might already exist. You can choose to overwrite its contents with the AFS 3.6 version of the documents, or you can create a new volume for the AFS 3.6 documents and mount it at /afs/cellname/afsdoc in place of the volume of AFS 3.5 documents. Alter the following instructions as necessary.

If you wish, you can create a link to the mount point on each client machine's local disk, called /usr/afsdoc. Alternatively, you can create a link to the mount point in each user's home directory. You can also choose to permit users to access only certain documents (most probably, the IBM AFS User Guide) by creating different mount points or setting different ACLs on different document directories.

To create a new volume for storing AFS documents:

  1. Issue the vos create command to create a volume for storing the AFS documentation. Include the -maxquota argument to set an unlimited quota on the volume.

    If you wish, you can set the volume's quota to a finite value after you complete the copying operations. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota value that is slightly larger.

       % vos create <machine name> <partition name>  afsdoc  -maxquota 0     
    
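
    For example, after you finish copying the documents, you can check the space used and adjust the quota with commands like the following; the quota value shown is only an illustration.

       % vos examine afsdoc

       % fs setquota /afs/.cellname/afsdoc -max 15000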

  2. Issue the fs mkmount command to mount the new volume. If your root.cell volume is replicated, you must precede the cellname with a period to specify the read/write mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to access the new replica.
       % fs mkmount -dir /afs/.cellname/afsdoc -vol afsdoc
       
       % vos release root.cell
       
       % fs checkvolumes    
    

  3. Issue the fs setacl command to grant the rl permissions to the system:anyuser group on the new directory's ACL.
       % cd /afs/.cellname/afsdoc 
        
       % fs setacl  .  system:anyuser rl   
    

  4. Access the documents via one of the sources listed in Accessing the AFS Binary Distribution and Documentation. Copy the documents in one or more formats from a source_format directory into subdirectories of the /afs/cellname/afsdoc directory. Repeat the commands for each format. Suggested substitutions for the format_name variable are HTML and PDF.
       # mkdir format_name
       
       # cd  format_name
       
       # cp -rp /cdrom/Documentation/language_code/source_format  .      
    

    If you copy the HTML version of the documents, note that in addition to a subdirectory for each document there are several files with a .gif extension, which enable readers to move easily between sections of a document. The file called index.htm is an introductory HTML page that has a hyperlink to the documents. For HTML viewing to work properly, these files must remain in the top-level HTML directory (the one named, for example, /afs/cellname/afsdoc/Html).

  5. (Optional) If you believe it is helpful to your users to access AFS documents via a local disk directory, create /usr/afsdoc on the local disk as a symbolic link to the directory housing the desired format (probably HTML or PDF).
       # ln -s /afs/cellname/afsdoc/format_name  /usr/afsdoc
    

    An alternative is to create a link in each user's home directory to the documentation directory in AFS.


Reference Pages

Following are reference pages that include new information not found in the IBM AFS Administration Reference.

CFG_tcid

Purpose

Defines Tape Coordinator configuration instructions for automated tape devices, backup data files, or XBSA server programs

Description

A CFG_tcid file includes instructions that configure a Tape Coordinator for more automated operation and for transferring AFS data to and from a certain type of backup medium.

The configuration file is in ASCII format and must reside in the /usr/afs/backup directory on the Tape Coordinator machine. Each Tape Coordinator has its own configuration file (multiple Tape Coordinators cannot use the same file), and only a single Tape Coordinator in a cell can write to a given tape device or backup data file. Multiple Tape Coordinators can interact with the same XBSA server if the server has sufficient capacity, and in this case the configuration file for each Tape Coordinator refers to the same XBSA server.

The Tape Coordinator for a tape device or backup data file must also have an entry in the Backup Database and in the /usr/afs/backup/tapeconfig file on the Tape Coordinator machine. The Tape Coordinator for an XBSA server has only an entry in the Backup Database, not in the tapeconfig file.

Naming the Configuration File

For a Tape Coordinator that communicates with an XBSA server, the tcid portion of the configuration file's name is the Tape Coordinator's port offset number as defined in the Backup Database. An example filename is CFG_22.

For the Tape Coordinator for a tape device or backup data file, there are two possible types of values for the tcid portion of the filename. The Tape Coordinator first attempts to open a file with a tcid portion that is the Tape Coordinator's port offset number as defined in the Backup Database and tapeconfig file. If there is no such file, the Tape Coordinator attempts to access a file with a tcid portion that is based on the tape device's device name or the backup data file's filename. To enable the Tape Coordinator to locate the file, construct the tcid portion of the filename as follows:

Summary of Instructions

The following list briefly describes the instructions that can appear in a configuration file. Each instruction appears on its own line, in any order. Unless otherwise noted, the instructions apply to all backup media (automated tape device, backup data file, and XBSA server). A more detailed description of each instruction follows the list.

ASK
Controls whether the Tape Coordinator prompts for guidance when it encounters error conditions.

AUTOQUERY
Controls whether the Tape Coordinator prompts for the first tape. Does not apply to XBSA servers.

BUFFERSIZE
Sets the size of the memory buffer the Tape Coordinator uses when dumping data to or restoring data from a backup medium.

CENTRALLOG
Names a log file in which to record a status message as each dump or restore operation completes. The Tape Coordinator also writes to its standard log and error files.

FILE
Determines whether the Tape Coordinator uses a backup data file as the backup medium.

GROUPID
Sets an identification number recorded in the Backup Database for all dumps performed by the Tape Coordinator.

LASTLOG
Controls whether the Tape Coordinator creates and writes to a separate log file during its final pass through the set of volumes to be included in a dump.

MAXPASS
Specifies how many times the Tape Coordinator attempts to access a volume during a dump operation if the volume is inaccessible on the first attempt (which is included in the count).

MGMTCLASS
Specifies which of an XBSA server's management classes to use, which often indicates the type of backup medium the XBSA server uses. Applies only to XBSA servers.

MOUNT
Identifies the file that contains routines for inserting tapes into a tape device or controlling how the Tape Coordinator handles a backup data file. Does not apply to XBSA servers.

NAME_CHECK
Controls whether the Tape Coordinator verifies that a tape or backup data file has the expected name. Does not apply to XBSA servers.

NODE
Names which node associated with an XBSA server to use. Applies only to XBSA servers.

PASSFILE
Names the file that contains the password or security code for the Tape Coordinator to pass to an XBSA server. Applies only to XBSA servers.

PASSWORD
Specifies the password or security code for the Tape Coordinator to pass to an XBSA server. Applies only to XBSA servers.

SERVER
Names the XBSA server machine with which the Tape Coordinator communicates. Applies only to XBSA servers.

STATUS
Controls how often the Tape Coordinator writes a status message in its window during an operation.

TYPE
Defines which XBSA-compliant program (third-party backup utility) is running on the XBSA server. Applies only to XBSA servers.

UNMOUNT
Identifies the file that contains routines for removing tapes from a tape device or controlling how the Tape Coordinator handles a backup data file. Does not apply to XBSA servers.

The ASK Instruction

The ASK instruction takes a boolean value as its argument, in the following format:

   ASK {YES | NO}   

When the value is YES, the Tape Coordinator generates a prompt in its window, requesting a response to the error cases described in the following list. This is the default behavior if the ASK instruction does not appear in the CFG_tcid file.

When the value is NO, the Tape Coordinator does not prompt in error cases, but instead uses the automatic default responses described in the following list. The Tape Coordinator also logs the error in its /usr/afs/backup/TE_tcid file. Suppressing the prompts enables the Tape Coordinator to run unattended, though it still prompts for insertion of tapes unless the MOUNT instruction is used.

The error cases controlled by this instruction are the following:

The AUTOQUERY Instruction

The AUTOQUERY instruction takes a boolean value as its argument, in the following format:

   AUTOQUERY {YES | NO}   

When the value is YES, the Tape Coordinator checks for the MOUNT instruction in the configuration file when it needs to read the first tape involved in an operation. As described for that instruction, it then either prompts for the tape or invokes the specified routine to mount the tape. This is the default behavior if the AUTOQUERY instruction does not appear in the configuration file.

When the value is NO, the Tape Coordinator assumes that the first tape required for an operation is already in the drive. It does not prompt the operator or invoke the MOUNT routine unless there is an error in accessing the first tape. This setting is equivalent in effect to including the -noautoquery flag to the butc command.

Note that the setting of the AUTOQUERY instruction controls the Tape Coordinator's behavior only with respect to the first tape required for an operation. For subsequent tapes, the Tape Coordinator always checks for the MOUNT instruction. It also refers to the MOUNT instruction if it encounters an error while attempting to access the first tape. The instruction does not apply to XBSA servers.

The BUFFERSIZE Instruction

The BUFFERSIZE instruction takes an integer or decimal value, and optionally units, in the following format:

   BUFFERSIZE size[{k | K | m | M | g | G | t | T}]   

where size specifies the amount of memory the Tape Coordinator allocates to use as a buffer during both dump and restore operations. If size is a decimal number, the number of digits after the decimal point must not translate to fractions of bytes. The default unit is bytes, but use k or K to specify kilobytes, m or M for megabytes, g or G for gigabytes, and t or T for terabytes. There is no space between the size value and the units letter.

As the Tape Coordinator receives volume data from the Volume Server during a dump operation, it gathers the specified amount of data in the buffer before transferring the entire amount to the backup medium. Similarly, during a restore operation the Tape Coordinator by default buffers data from the backup medium before transferring the entire amount to the Volume Server for restoration into the file system.

The default buffer size is 16 KB, which is usually large enough to promote tape streaming in a normal network configuration. If the network connection between the Tape Coordinator machine and file server machines is slow, it can help to increase the buffer size.

For XBSA servers, the range of acceptable values is 1K through 64K. For tape devices and backup data files, the minimum acceptable value is 16K, and if the specified value is not a multiple of 16 KB, the Tape Coordinator automatically rounds it up to the next such multiple.
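
For example, to set a 32 KB buffer:

   BUFFERSIZE 32K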

The CENTRALLOG Instruction

The CENTRALLOG instruction takes a pathname as its argument, in the following format:

   CENTRALLOG  filename   

where filename is the full pathname of a local disk file in which to record a status message as each dump or restore operation completes. It is acceptable to have multiple Tape Coordinators write to the same log file. Each Tape Coordinator also writes to its own standard error and log files (the TE_tcid and TL_tcid files in the /usr/afs/backup directory). This instruction is always optional.

The line for each dump operation has the following format:

   task_ID   start_time   complete_time   duration  volume_set  \
         success of total volumes dumped (data_dumped KB)   

The line for each restore operation has the following format:

   task_ID   start_time   complete_time   duration  success of total volumes restored 

where

task_ID
Is the task identification number assigned to the operation by the Tape Coordinator. The first digits in the number are the Tape Coordinator's port offset number.

start_time
Is the time at which the operation started, in the format month/day/year hours:minutes:seconds.

complete_time
Is the time at which the operation completed, in the same format as the start_time field.

duration
Is the amount of time it took to complete the operation, in the format hours:minutes:seconds.

volume_set
Is the name of the volume set being dumped during this operation (for dump operations only).

success
Is the number of volumes successfully dumped or restored.

total
Is the total number of volumes the Tape Coordinator attempted to dump or restore.

data_dumped
Is the number of kilobytes of data transferred to the backup medium (for dump operations only).
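As an illustration, a central log entry for a completed dump operation might resemble the following line (all of the values are hypothetical):

   1001  04/03/2000 21:30:00  04/03/2000 21:39:12  0:09:12  user  5 of 5 volumes dumped (20480 KB)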

The FILE Instruction

The FILE instruction takes a boolean value as its argument, in the following format:

   FILE {NO | YES}   

When the value is NO and the SERVER instruction does not appear in the configuration file, the Tape Coordinator uses a tape device as the backup medium. If the SERVER instruction does appear, the Tape Coordinator communicates with the XBSA server that it names. This is the default behavior if the FILE instruction does not appear in the file.

When the value is YES, the Tape Coordinator uses a backup data file on the local disk as the backup medium. If the file does not exist when the Tape Coordinator attempts to write a dump, the Tape Coordinator creates it. For a restore operation to succeed, the file must exist and contain volume data previously written to it by a backup dump operation.

When the value is YES, the backup data file's complete pathname must appear (instead of a tape drive device name) in the third field of the corresponding port offset entry in the local /usr/afs/backup/tapeconfig file. If the field instead refers to a tape device, dump operations appear to succeed but are inoperative. It is not possible to restore data that is accidentally dumped to a tape device while the FILE instruction is set to YES. (In the same way, if the FILE instruction is set to NO and there is no SERVER instruction, the tapeconfig entry must refer to an actual tape device.)

Rather than put an actual file pathname in the third field of the tapeconfig file, however, the recommended configuration is to create a symbolic link in the /dev directory that points to the actual file pathname, and record the symbolic link's name in this field. This configuration has a couple of advantages:

If the third field in the tapeconfig file names the actual file, there is no way to recover from exhausting the space on the partition that houses the backup data file. It is not possible to change the tapeconfig file in the middle of an operation.

When writing to a backup data file, the Tape Coordinator writes data at 16 KB offsets. If a given block of data (such as the marker that signals the beginning or end of a volume) does not fill the entire 16 KB, the Tape Coordinator still skips to the next offset before writing the next block. In the output of a backup dumpinfo command issued with the -id option, the value in the Pos column is the ordinal of the 16 KB offset at which the volume data begins, and so does not generally increase by just one from line to line, as it does for dumps to tape.
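To illustrate the recommended configuration, suppose the backup data file is to live at /var/backup/afs_dumps and the port offset is 20 (both values are purely illustrative). The administrator creates a symbolic link in the /dev directory and records the link's name, rather than the actual file pathname, in the third field of the tapeconfig entry:

   % ln -s /var/backup/afs_dumps /dev/FILE_device

   1G   0K   /dev/FILE_device   20

If the partition housing the backup data file later fills up, the administrator can point the link at a file on another partition without editing the tapeconfig file.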

The GROUPID Instruction

The GROUPID instruction takes an integer as its argument, in the following format:

   GROUPID integer   

where integer is in the range from 1 through 2147483647 (one less than 2 GB). The value is recorded in the Backup Database record for each dump created by this Tape Coordinator. It appears in the Group id field in the output from the backup dumpinfo command when the command's -verbose and -id options are provided. It can be specified as the value of the -groupid argument to the backup deletedump command to delete only records marked with the group ID. This instruction is always optional.
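For example, the following instruction marks every dump record created through this Tape Coordinator with group ID 1000 (an illustrative value):

   GROUPID 1000

Records marked in this way can later be removed together, for instance with a command like the following:

   % backup deletedump -groupid 1000 -to NOW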

The LASTLOG Instruction

The LASTLOG instruction takes a boolean value as its argument, in the following format:

   LASTLOG  {YES | NO}   

When the value is YES, the Tape Coordinator creates and writes to a separate log file during the final pass through the volumes to be included in a dump operation. The log file name is /usr/afs/backup/TL_tcid.lp, where tcid is either the Tape Coordinator's port offset number or a value derived from the device name or backup data filename.

When the value is NO, the Tape Coordinator writes to its standard log files (the TE_tcid and TL_tcid files in the /usr/afs/backup directory) for all passes. This is the behavior if the instruction does not appear in the file.

The MAXPASS Instruction

The MAXPASS instruction takes an integer as its argument, in the following format:

   MAXPASS integer   

where integer specifies how many times the Tape Coordinator attempts to access a volume during a dump operation if the volume is inaccessible on the first attempt (which is included in the count). Acceptable values are in the range from 1 through 10. The default value is 2 if this instruction does not appear in the file.

The MGMTCLASS Instruction

The MGMTCLASS instruction takes a character string as its argument, in the following format:

   MGMTCLASS class_name   

where class_name is the XBSA server's management class, which often indicates the type of backup medium it is using. For a list of the possible management classes, see the XBSA server documentation. This instruction applies only to XBSA servers and is always optional; there is no default value if it is omitted.

The MOUNT Instruction

The MOUNT instruction takes a pathname as its argument, in the following format:

   MOUNT filename   

where filename is the full pathname of an executable file on the local disk that contains a shell script or program (for clarity, the following discussion refers to scripts only). If the configuration file is for an automated tape device, the script invokes the routine or command provided by the device's manufacturer for mounting a tape (inserting it into the tape reader). If the configuration file is for a backup data file, it can instruct the Tape Coordinator to switch automatically to another backup data file when the current one becomes full; for further discussion, see the preceding description of the FILE instruction. This instruction does not apply to XBSA servers.

The administrator must write the script, including the appropriate routines and logic. The AFS distribution does not include any scripts, although an example appears in the following Examples section. The command or routines invoked by the script inherit the local identity (UNIX UID) and AFS tokens of the butc command's issuer.

When the Tape Coordinator needs to mount a tape or access another backup data file, it checks the configuration file for a MOUNT instruction. If there is no instruction, the Tape Coordinator prompts the operator to insert a tape before it attempts to open the tape device. If there is a MOUNT instruction, the Tape Coordinator executes the routine in the referenced script.

There is an exception to this sequence: if the AUTOQUERY NO instruction appears in the configuration file, or the -noautoquery flag was included on the butc command, then the Tape Coordinator assumes that the operator has already inserted the first tape needed for a given operation. It attempts to read the tape immediately, and only checks for the MOUNT instruction or prompts the operator if the tape is missing or is not the required one.

The Tape Coordinator passes the following parameters to the script indicated by the MOUNT instruction, in the indicated order:

  1. The tape device or backup data file's pathname, as recorded in the /usr/afs/backup/tapeconfig file.

  2. The tape operation, which generally matches the backup command operation code used to initiate the operation (the following list notes the exceptional cases):

  3. The number of times the Tape Coordinator has attempted to open the tape device or backup data file. If the open attempt returns an error, the Tape Coordinator increments this value by one and again invokes the MOUNT instruction.

  4. The tape name. For some operations, the Tape Coordinator passes the string none, because it does not know the tape name (when running the backup scantape or backup readlabel, for example), or because the tape does not necessarily have a name (when running the backup labeltape command, for example).

  5. The tape ID recorded in the Backup Database. As with the tape name, the Backup System passes the string none for operations where it does not know the tape ID or the tape does not necessarily have an ID.

The routine invoked by the MOUNT instruction must return an exit code to the Tape Coordinator:

Code 0 (zero)
Indicates that the routine mounted the tape or accessed the backup data file successfully; the Tape Coordinator continues the operation.

Code 1 (one)
Indicates that the routine failed; the Tape Coordinator aborts the operation.

Any other code (such as 2)
Causes the Tape Coordinator to prompt the operator to insert the tape, as though no MOUNT instruction were present.

If the backup command was issued in interactive mode and the operator issues the (backup) kill command while the MOUNT routine is running, the Tape Coordinator passes the termination signal to the routine; the entire operation terminates.

The NAME_CHECK Instruction

The NAME_CHECK instruction takes a boolean value as its argument, in the following format:

   NAME_CHECK {YES | NO}   

When the value is YES and there is no permanent name on the label of the tape or backup data file, the Tape Coordinator checks the AFS tape name on the label when dumping a volume in response to the backup dump command. The AFS tape name must be <NULL> or match the name that the backup dump operation constructs based on the volume set and dump level names. This is the default behavior if the NAME_CHECK instruction does not appear in the configuration file.

When the value is NO, the Tape Coordinator does not check the AFS tape name before writing to the tape.

The Tape Coordinator always checks that all dumps on the tape are expired, and refuses to write to a tape that contains unexpired dumps. This instruction does not apply to XBSA servers.

The NODE Instruction

The NODE instruction takes a character string as its argument, in the following format:

   NODE node_name   

where node_name names the node associated with the XBSA server named by the SERVER instruction. To determine if the XBSA server uses nodes, see its documentation. This instruction applies only to XBSA servers, and there is no default if it is omitted. However, TSM requires that a NODENAME instruction appear in its dsm.sys configuration file in that case.

The PASSFILE Instruction

The PASSFILE instruction takes a pathname as its argument, in the following format:

   PASSFILE filename   

where filename is the full pathname of a file on the local disk that records the password for the Tape Coordinator to use when communicating with the XBSA server. The password string must appear on the first line in the file, and have a newline character only at the end. The mode bits on the file must enable the Tape Coordinator to read it.

This instruction applies only to XBSA servers, and either it or the PASSWORD instruction must be provided along with the SERVER instruction. (If both this instruction and the PASSWORD instruction are included, the Tape Coordinator uses only the one that appears first in the file.)
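A minimal sketch of creating such a password file follows (the pathname and password string are illustrative). The echo command writes the string with a single trailing newline, and the mode bits are set so that the file remains readable to the Tape Coordinator but not to other users:

   % echo "TESTPASS" > /usr/afs/backup/pass_22
   % chmod 600 /usr/afs/backup/pass_22

The corresponding instruction in the configuration file is then PASSFILE /usr/afs/backup/pass_22.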

The PASSWORD Instruction

The PASSWORD instruction takes a character string as its argument, in the following format:

   PASSWORD string   

where string is the password for the Tape Coordinator to use when communicating with the XBSA server. It must appear on the first line in the file, and have a newline character only at the end.

This instruction applies only to XBSA servers, and either it or the PASSFILE instruction must be provided along with the SERVER instruction. (If both this instruction and the PASSFILE instruction are included, the Tape Coordinator uses only the one that appears first in the file.)

The SERVER Instruction

The SERVER instruction takes a character string as its argument, in the following format:

   SERVER machine_name   

where machine_name is the fully qualified hostname of the machine where an XBSA server is running. This instruction is required for XBSA servers, and applies only to them.

The STATUS Instruction

The STATUS instruction takes an integer as its argument, in the following format:

   STATUS integer   

where integer expresses how often the Tape Coordinator writes a status message to its window during an operation, in terms of the number of buffers of data that have been dumped or restored. Acceptable values range from 1 through 8192. The size of the buffers is determined by the BUFFERSIZE instruction if it is included.

As an example, the value 512 means that the Tape Coordinator writes a status message after each 512 buffers of data. It also writes a status message as it completes the dump of each volume.
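For instance, assuming the default 16 KB buffer size, the instruction STATUS 512 produces a status message roughly every 8 MB of transferred data:

   512 buffers x 16 KB per buffer = 8192 KB = 8 MB between status messages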

The message has the following format:

   time_stamp: Task task_ID: total KB: volume: volume_total B

where

time_stamp
Records the time at which the message is printed, in the format hours:minutes:seconds.

task_ID
Is the task identification number assigned to the operation by the Tape Coordinator. The first digits in the number are the Tape Coordinator's port offset number.

total
Is the total number of kilobytes transferred to the backup medium during the current dump operation.

volume
Names the volume being dumped as the message is written.

volume_total
Is the total number of bytes dumped so far from the volume named in the volume field.

This instruction is intended for use with XBSA servers. For tape devices and backup data files, the value in the volume_total field is not necessarily as expected. It does not include certain kinds of Backup System metadata (markers at the beginning and end of each volume, for example), so summing together the final volume_total value for each volume does not necessarily equal the running total in the total field. Also, the Tape Coordinator does not write a message at all if it is dumping metadata rather than actual volume data as it reaches the end of the last buffer in each set of integer buffers.

The TYPE Instruction

The TYPE instruction takes a character string as its argument, in the following format:

   TYPE program_name   

where program_name names the XBSA server program that is running on the machine named by the SERVER instruction. This instruction is mandatory when the SERVER instruction appears in the file. The acceptable values depend on which XBSA servers are supported in the current AFS release. In the General Availability release of AFS 3.6, the only acceptable value is tsm.

The UNMOUNT Instruction

The UNMOUNT instruction takes a pathname as its argument, in the following format:

   UNMOUNT filename   

where filename is the full pathname of an executable file on the local disk that contains a shell script or program (for clarity, the following discussion refers to scripts only). If the configuration file is for an automated tape device, the script invokes the routine or command provided by the device's manufacturer for unmounting a tape (removing it from the tape reader). If the configuration file is for a backup data file, it can instruct the Tape Coordinator to perform additional actions after closing the backup data file. This instruction does not apply to XBSA servers.

The administrator must write the script, including the appropriate routines and logic. The AFS distribution does not include any scripts, although an example appears in the following Examples section. The command or routines invoked by the script inherit the local identity (UNIX UID) and AFS tokens of the butc command's issuer.

After closing a tape device or backup data file, the Tape Coordinator checks the configuration file for an UNMOUNT instruction, whether or not the close operation succeeds. If there is no UNMOUNT instruction, the Tape Coordinator takes no action, in which case the operator must take the action necessary to remove the current tape from the drive before another can be inserted. If there is an UNMOUNT instruction, the Tape Coordinator executes the referenced file. It invokes the routine only once, passing it the following parameters:

  1. The tape device or backup data file's pathname, as recorded in the /usr/afs/backup/tapeconfig file.

  2. The string unmount, to identify the operation.

Privilege Required

The file is protected by UNIX mode bits. Creating the file requires the w (write) and x (execute) permissions on the /usr/afs/backup directory. Editing the file requires the w (write) permission on the file.

Examples

The following example configuration files demonstrate one way to structure a configuration file for a tape stacker, a backup data file, or an XBSA server. The examples are not necessarily appropriate for a specific cell; if using them as models, be sure to adapt them to the cell's needs and equipment.

Example CFG_tcid File for Stackers

In this example, the administrator creates the following entry for a tape stacker called stacker0.1 in the /usr/afs/backup/tapeconfig file. It has port offset 0.

   2G   5K   /dev/stacker0.1   0   

The administrator includes the following five lines in the /usr/afs/backup/CFG_stacker0.1 file. To review the meaning of each instruction, see the preceding Description section.

   MOUNT /usr/afs/backup/stacker0.1
   UNMOUNT /usr/afs/backup/stacker0.1
   AUTOQUERY NO
   ASK NO
   NAME_CHECK NO

Finally, the administrator writes the following executable routine in the /usr/afs/backup/stacker0.1 file referenced by the MOUNT and UNMOUNT instructions in the CFG_stacker0.1 file.

   #! /bin/csh -f
     
   set devicefile = $1
   set operation = $2
   set tries = $3
   set tapename = $4
   set tapeid = $5
     
   set exit_continue = 0
   set exit_abort = 1
   set exit_interactive = 2
    
   #--------------------------------------------
     
   if (${tries} > 1) then
      echo "Too many tries"
      exit ${exit_interactive}
   endif
     
   if (${operation} == "unmount") then
      echo "UnMount: Will leave tape in drive"
      exit ${exit_continue}
   endif
     
   if ((${operation} == "dump")     |\
       (${operation} == "appenddump")     |\
       (${operation} == "savedb"))  then
     
       stackerCmd_NextTape ${devicefile}
    if (${status} != 0) exit ${exit_interactive}
       echo "Will continue"
       exit ${exit_continue}
   endif
     
   if ((${operation} == "labeltape")    |\
       (${operation} == "readlabel")) then
      echo "Will continue"
      exit ${exit_continue}
   endif
     
   echo "Prompt for tape"
   exit ${exit_interactive}   

This routine uses two of the parameters passed to it by the Backup System: tries and operation. It follows the recommended practice of prompting for a tape if the value of the tries parameter exceeds one, because that implies that the stacker is out of tapes.

For a backup dump or backup savedb operation, the routine calls the example stackerCmd_NextTape function provided by the stacker's manufacturer. Note that the final lines in the file return the exit code that prompts the operator to insert a tape; these lines are invoked when either the stacker cannot load a tape or the operation being performed is not one of those explicitly mentioned in the file (such as a restore operation).

Example CFG_tcid File for Dumping to a Backup Data File

In this example, the administrator creates the following entry for a backup data file called HSM_device in the /usr/afs/backup/tapeconfig file. It has port offset 20.

   1G   0K   /dev/HSM_device   20   

The administrator chooses to name the configuration file /usr/afs/backup/CFG_20, using the port offset number rather than deriving the tcid portion of the name from the backup data file's name. She includes the following lines in the file. To review the meaning of each instruction, see the preceding Description section.

   MOUNT /usr/afs/backup/file
   FILE YES
   ASK NO   

Finally, the administrator writes the following executable routine in the /usr/afs/backup/file file referenced by the MOUNT instruction in the CFG_20 file, to control how the Tape Coordinator handles the file.

   #! /bin/csh -f
   set devicefile = $1
   set operation = $2
   set tries = $3
   set tapename = $4
   set tapeid = $5
     
   set exit_continue = 0
   set exit_abort = 1
   set exit_interactive = 2
     
   #--------------------------------------------
     
   if (${tries} > 1) then
      echo "Too many tries"
      exit ${exit_interactive}
   endif
     
   if (${operation} == "labeltape") then
      echo "Won't label a tape/file"
      exit ${exit_abort}
   endif
     
   if ((${operation} == "dump")   |\
       (${operation} == "appenddump")   |\
       (${operation} == "restore")   |\
       (${operation} == "savedb")    |\
       (${operation} == "restoredb")) then
     
      /bin/rm -f ${devicefile}
      /bin/ln -s /hsm/${tapename}_${tapeid} ${devicefile}
      if (${status} != 0) exit ${exit_abort}
   endif
     
   exit ${exit_continue}   

Like the example routine for a tape stacker, this routine uses the tries and operation parameters passed to it by the Backup System. The tries parameter tracks how many times the Tape Coordinator has attempted to access the file. A value greater than one indicates that the Tape Coordinator cannot access it, and the routine returns exit code 2 (exit_interactive), which results in a prompt for the operator to load a tape. The operator can use this opportunity to change the name of the backup data file specified in the tapeconfig file.

The primary function of this routine is to establish a link between the device file and the file to be dumped or restored. When the Tape Coordinator is executing a backup dump, backup restore, backup savedb, or backup restoredb operation, the routine invokes the UNIX ln -s command to create a symbolic link from the backup data file named in the tapeconfig file to the actual file to use (this is the recommended method). It uses the value of the tapename and tapeid parameters to construct the file name.

Example CFG_tcid File for an XBSA Server

The following is an example of a configuration file called /usr/afs/backup/CFG_22, for a Tape Coordinator with port offset 22 that communicates with a Tivoli Storage Manager (TSM) server. The combination of the BUFFERSIZE and STATUS instructions results in a status message after each 16 MB of data is dumped. To review the meaning of the other instructions, see the preceding Description section.

   SERVER tsmserver1.abc.com
   TYPE tsm
   PASSWORD  TESTPASS
   NODE testnode
   MGMTCLASS standard
   MAXPASS 1
   GROUPID 1000
   CENTRALLOG /usr/afs/backup/centrallog
   BUFFERSIZE 16K
   STATUS 1024
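To put this configuration into effect, the administrator starts the Tape Coordinator that reads the file, naming the same port offset on the butc command, for example:

   % butc -port 22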

Related Information

tapeconfig

backup deletedump

backup diskrestore

backup dump

backup dumpinfo

backup restoredb

backup savedb

backup volrestore

backup volsetrestore

NetRestrict (client version)

Purpose

Defines client interfaces not to register with the File Server

Description

The NetRestrict file, if present in a client machine's /usr/vice/etc directory, defines the IP addresses of the interfaces that the local Cache Manager does not register with a File Server when first establishing a connection to it. For an explanation of how the File Server uses the registered interfaces, see the reference page for the client version of the NetInfo file.

As it initializes, the Cache Manager constructs a list of interfaces to register, from the /usr/vice/etc/NetInfo file if it exists, or from the list of interfaces configured with the operating system otherwise. The Cache Manager then removes from the list any addresses that appear in the NetRestrict file, if it exists. The Cache Manager records the resulting list in kernel memory.

The NetRestrict file is in ASCII format. One IP address appears on each line, in dotted decimal format. The order of the addresses is not significant.
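As an illustration, a /usr/vice/etc/NetRestrict file that prevents the Cache Manager from registering two interfaces (the addresses are hypothetical) contains nothing but the addresses, one per line:

   192.168.0.12
   10.1.4.12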

To display the addresses the Cache Manager is currently registering with File Servers, use the fs getclientaddrs command.

Related Information

NetInfo (client version)

fs getclientaddrs

NetRestrict (server version)

Purpose

Defines interfaces that the File Server does not register in the VLDB and that Ubik does not use for database server machines

Description

The NetRestrict file, if present in the /usr/afs/local directory, defines the following:

The IP addresses of the interfaces that the local File Server does not register in the Volume Location Database (VLDB).

The IP addresses of the interfaces that the local database server processes do not use when communicating via Ubik with the database server processes on other database server machines in the cell.

As it initializes, the File Server constructs a list of interfaces to register, from the /usr/afs/local/NetInfo file if it exists, or from the list of interfaces configured with the operating system otherwise. The File Server then removes from the list any addresses that appear in the NetRestrict file, if it exists. The File Server records the resulting list in the /usr/afs/local/sysid file and registers the interfaces in the VLDB. The database server processes use a similar procedure when initializing, to determine which interfaces to use for communication with the peer processes on other database machines in the cell.

The NetRestrict file is in ASCII format. One IP address appears on each line, in dotted decimal format. The order of the addresses is not significant.

To display the File Server interface addresses registered in the VLDB, use the vos listaddrs command.

Related Information

NetInfo (server version)

sysid

vldb.DB0 and vldb.DBSYS1

fileserver

vos listaddrs

backup deletedump

Purpose

Deletes one or more dump records from the Backup Database

Synopsis

backup deletedump [-dumpid <dump id>+]  [-from <date time>+]  [-to <date time>+]
                  [-port <TC port offset>]  [-groupid <group ID>]  
                  [-dbonly]  [-force]  [-noexecute]
                  [-localauth]  [-cell <cell name>]  [-help]
  
backup dele [-du <dump id>+]  [-fr <date time>+]  [-t <date time>+]
            [-p <TC port offset>]  [-g <group ID>]  [-db]  [-fo]  [-n]  
            [-l]  [-c <cell name>]  [-h]

Description

The backup deletedump command deletes one or more dump records from the Backup Database. Using this command is appropriate when dump records are incorrect (possibly because a dump operation was interrupted or failed), or when they represent dumps that are expired or otherwise no longer needed.

To specify the records to delete, use one of the following arguments or combinations of arguments:

The -dumpid argument, to delete the records of specific dumps by dump ID number.

The -to argument, optionally combined with the -from argument, to delete the records of dumps created during a range of dates.

The -groupid argument, optionally combined with the -to and -from arguments, to delete records marked with a particular group ID.

The command can also delete dump records maintained by an XBSA server at the same time as the corresponding Backup Database records. (An XBSA server is a third-party backup utility that implements the Open Group's Backup Service API [XBSA].) Include the -port argument to identify the Tape Coordinator that communicates with the XBSA server. To delete the Backup Database records without attempting to delete the records at the XBSA server, include the -dbonly flag. To delete the Backup Database records even if an attempt to delete the records at the XBSA server fails, include the -force flag.
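For example, a command like the following (the dump ID and port offset are illustrative) deletes a dump record from the Backup Database and asks the XBSA server contacted by the Tape Coordinator with port offset 22 to delete its corresponding record:

   % backup deletedump -dumpid 653777462 -port 22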

Cautions

The only way to remove the dump record for an appended dump is to remove the record for its initial dump, and doing so removes the records for all dumps appended to the initial dump.

The only way to remove the record for a Backup Database dump (created with the backup savedb command) is to specify its dump ID number with the -dumpid argument. Using the -from and -to arguments never removes database dump records.

Removing a dump's record makes it impossible to restore data from it or from any dump that refers to the deleted dump as its parent, directly or indirectly. That is, restore operations must begin with a full dump and continue with each incremental dump in order. If the records for a specific dump are removed, it is not possible to restore data from later incremental dumps. If necessary, use the -dbadd flag to the backup scantape command to regenerate a dump record so that the dump can act as a parent again.

If a dump set contains any dumps that were created outside the time range specified by the -from and -to arguments, the command does not delete any of the records associated with the dump set, even if some of them represent dumps created during the time range.

Options

-dumpid
Specifies the dump ID of each dump record to delete. The corresponding dumps must be initial dumps; it is not possible to delete appended dump records directly, but only by deleting the record of their associated initial dump. Using this argument is the only way to delete records of Backup Database dumps (created with the backup savedb command).

Provide either this argument, the -to (and optionally -from) argument, or the -groupid argument.

-from
Specifies the beginning of a range of dates; the record for any dump created during the indicated period of time is deleted.

Omit this argument to indicate the default of midnight (00:00 hours) on 1 January 1970 (UNIX time zero), or provide a date value in the format mm/dd/yyyy [hh:MM]. The month (mm), day (dd), and year (yyyy) are required. The hour and minutes (hh:MM) are optional, but if provided must be in 24-hour format (for example, the value 14:36 represents 2:36 p.m.). If omitted, the time defaults to midnight (00:00 hours).

The -to argument must be provided along with this one.

Note:A plus sign follows this argument in the command's syntax statement because it accepts a multiword value which does not need to be enclosed in double quotes or other delimiters, not because it accepts multiple dates. Provide only one date (and optionally, time) definition.

-to
Specifies the end of a range of dates; the record of any regular dump created during the range is deleted from the Backup Database.

Provide either the value NOW to indicate the current date and time, or a date value in the same format as for the -from argument. Valid values for the year (yyyy) range from 1970 to 2037; higher values are not valid because the latest possible date in the standard UNIX representation is in February 2038. The command interpreter automatically reduces any later date to the maximum value.

If the time portion (hh:MM) is omitted, it defaults to 59 seconds after midnight (00:00:59 hours). Similarly, the backup command interpreter automatically adds 59 seconds to any time value provided. In both cases, adding 59 seconds compensates for how the Backup Database and backup dumpinfo command represent dump creation times in hours and minutes only. For example, the Database records a creation timestamp of 20:55 for any dump operation that begins between 20:55:00 and 20:55:59. Automatically adding 59 seconds to a time thus includes the records for all dumps created during that minute.

Provide either this argument, the -dumpid argument, or the -groupid argument, or combine this argument and the -groupid argument. This argument is required if the -from argument is provided.

Caution: Specifying the value NOW for this argument when the -from argument is omitted deletes all dump records from the Backup Database (except for Backup Database dump records created with the backup savedb command).

Note:A plus sign follows this argument in the command's syntax statement because it accepts a multiword value which does not need to be enclosed in double quotes or other delimiters, not because it accepts multiple dates. Provide only one date (and optionally, time) definition.

-port
Specifies the port offset number of the Tape Coordinator that communicates with the XBSA server that maintains the records to delete. It must be the Tape Coordinator that transferred AFS data to the XBSA server when the dump was created. The corresponding records in the Backup Database are also deleted.

This argument is meaningful only when deleting records maintained by an XBSA server. Do not combine it with the -dbonly flag. If this argument is omitted when other options pertinent to an XBSA server are included, the Tape Coordinator with port offset 0 (zero) is used.

-groupid
Specifies the group ID number that is associated with the records to delete. The Tape Coordinator ignores group IDs if this argument is omitted.

Provide either this argument, the -dumpid argument, or the -to argument. This argument can also be combined with the -to (and optionally -from) arguments, but not with the -dumpid argument.

-dbonly
Deletes records from the Backup Database without attempting to delete the corresponding records maintained by an XBSA server. Do not combine this flag with the -port argument or the -force flag.

-force
Deletes the specified records from the Backup Database even when the attempt to delete the corresponding records maintained by an XBSA server fails. Do not combine this flag with the -dbonly flag. To identify the Tape Coordinator when this argument is used, either provide the -port argument or omit it to specify the Tape Coordinator with port offset 0 (zero).

-noexecute
Displays a list of the dump records to be deleted, without actually deleting them. Combine it with the options to be included on the actual command.

-localauth
Constructs a server ticket using a key from the local /usr/afs/etc/KeyFile file. The backup command interpreter presents it to the Backup Server, Volume Server and VL Server during mutual authentication. Do not combine this flag with the -cell argument. For more details, see the introductory backup reference page.

-cell
Names the cell in which to run the command. Do not combine this argument with the -localauth flag. For more details, see the introductory backup reference page.

-help
Prints the online help for this command. All other valid options are ignored.

Output

If the -noexecute flag is not included, the output generated at the conclusion of processing lists the dump IDs of all deleted dump records, in the following format:

   The following dumps were deleted:
        dump ID 1
        dump ID 2
        etc.   

If the -noexecute flag is included, the output instead lists the dump IDs of all dump records to be deleted, in the following format:

   The following dumps would have been deleted:
        dump ID 1
        dump ID 2
        etc.   

The notation Appended Dump after a dump ID indicates that the dump is to be deleted because it is appended to an initial dump that also appears in the list, even if the appended dump's dump ID or group ID number was not specified on the command line. For more about deleting appended dumps, see the preceding Cautions section of this reference page.

Examples

The following command deletes the dump record with dump ID 653777462, along with the records for any dumps appended to it:

   % backup deletedump -dumpid 653777462
   The following dumps were deleted:
        653777462   

The following command deletes the Backup Database record of all dumps created between midnight on 1 January 1999 and 23:59:59 hours on 31 December 1999:

   % backup deletedump -from 01/01/1999 -to 12/31/1999
   The following dumps were deleted:
        598324045
        598346873
           ...
           ...
        653777523
        653779648      

Privilege Required

The issuer must be listed in the /usr/afs/etc/UserList file on every machine where the Backup Server is running, or must be logged onto a server machine as the local superuser root if the -localauth flag is included.

Related Information

CFG_tcid

backup

backup dumpinfo

backup scantape

backup dumpinfo

Purpose

Displays a dump record from the Backup Database

Synopsis

backup dumpinfo [-ndumps <no. of dumps>]  [-id <dump id>]
                [-verbose]  [-localauth]  [-cell <cell name>]  [-help ]
   
backup dumpi [-n <no. of dumps>]  [-i <dump id>]
             [-v]  [-l]  [-c <cell name>]  [-h]

Description

The backup dumpinfo command formats and displays the Backup Database record for the specified dumps. To specify how many of the most recent dumps to display, starting with the newest one and going back in time, use the -ndumps argument. To display more detailed information about a single dump, use the -id argument. To display the records for the 10 most recent dumps, omit both the -ndumps and -id arguments.

The -verbose flag produces very detailed information that is useful mostly for debugging purposes. It can be combined only with the -id argument.

Options

-ndumps
Displays the Backup Database record for each of the specified number of dumps that were most recently performed. If the database contains fewer dumps than are requested, the output includes the records for all existing dumps. Do not combine this argument with the -id or -verbose options; omit all options to display the records for the last 10 dumps.

-id
Specifies the dump ID number of a single dump for which to display the Backup Database record. Precede the dump id value with the -id switch; otherwise, the command interpreter interprets it as the value of the -ndumps argument. Combine this argument with the -verbose flag if desired, but not with the -ndumps argument; omit all options to display the records for the last 10 dumps.

-verbose
Provides more detailed information about the dump specified with the -id argument, which must be provided along with it. Do not combine this flag with the -ndumps argument.

-localauth
Constructs a server ticket using a key from the local /usr/afs/etc/KeyFile file. The backup command interpreter presents it to the Backup Server, Volume Server and VL Server during mutual authentication. Do not combine this flag with the -cell argument. For more details, see the introductory backup reference page.

-cell
Names the cell in which to run the command. Do not combine this argument with the -localauth flag. For more details, see the introductory backup reference page.

-help
Prints the online help for this command. All other valid options are ignored.

Output

If the -ndumps argument is provided, the output presents the following information in table form, with a separate line for each dump:

dumpid
The dump ID number.

parentid
The dump ID number of the dump's parent dump. A value of 0 (zero) identifies a full dump.

lv
The depth in the dump hierarchy of the dump level used to create the dump. A value of 0 (zero) identifies a full dump, in which case the value in the parentid field is also 0. A value of 1 or greater indicates an incremental dump made at the corresponding level in the dump hierarchy.

created
The date and time at which the Backup System started the dump operation that created the dump.

nt
The number of tapes that contain the data in the dump. A value of 0 (zero) indicates that the dump operation was terminated or failed. Use the backup deletedump command to remove such entries.

nvols
The number of volumes from which the dump includes data. If a volume spans tapes, it is counted twice. A value of 0 (zero) indicates that the dump operation was terminated or failed; the value in the nt field is also 0 in this case.

dump name
The dump name in the form
   volume_set_name.dump_level_name (initial_dump_ID)
   

where volume_set_name is the name of the volume set, and dump_level_name is the last element in the dump level pathname at which the volume set was dumped.

The initial_dump_ID, if displayed, is the dump ID of the initial dump in the dump set to which this dump belongs. If there is no value in parentheses, the dump is the initial dump in a dump set that has no appended dumps.

If the -id argument is provided alone, the first line of output begins with the string Dump and reports information for the entire dump in the following fields:

id
The dump ID number.

level
The depth in the dump hierarchy of the dump level used to create the dump. A value of 0 (zero) identifies a full dump. A value of 1 (one) or greater indicates an incremental dump made at the specified level in the dump hierarchy.

volumes
The number of volumes for which the dump includes data.

created
The date and time at which the dump operation began.

If an XBSA server was the backup medium for the dump (rather than a tape device or backup data file), the following line appears next:

   Backup Service: XBSA_program: Server: hostname

where XBSA_program is the name of the XBSA-compliant program and hostname is the name of the machine on which the program runs.

Next the output includes an entry for each tape that houses volume data from the dump. Following the string Tape, the first two lines of each entry report information about that tape in the following fields:

name
The tape's permanent name if it has one, or its AFS tape name otherwise, and its tape ID number in parentheses.

nVolumes
The number of volumes for which this tape includes dump data.

created
The date and time at which the Tape Coordinator began writing data to this tape.

Following another blank line, the tape-specific information concludes with a table that includes a line for each volume dump on the tape. The information appears in columns with the following headings:

Pos
The relative position of each volume in this tape or file. On a tape, the counter begins at position 2 (the tape label occupies position 1), and increments by one for each volume. For volumes in a backup data file, the position numbers start with 1 but do not necessarily increment by one, because each is the ordinal of the 16 KB offset in the file at which the volume's data begins. The difference between the position numbers therefore indicates how many 16 KB blocks each volume's data occupies. For example, if the second volume is at position 5 and the third volume in the list is at position 9, the dump of the second volume occupies 64 KB (four 16 KB blocks) of space in the file.

Clone time
For a backup or read-only volume, the time at which it was cloned from its read/write source. For a read/write volume, it is the same as the dump creation date reported on the first line of the output.

Nbytes
The number of bytes of data in the dump of the volume.

Volume
The volume name, complete with .backup or .readonly extension if appropriate.

If both the -id and -verbose options are provided, the output is divided into several sections: a Dump section reporting information about the dump as a whole, a Tape section for each tape that contains the dump's data, and a Volume section for each volume included in the dump, as illustrated in the verbose example in the following Examples section.

Examples

The following example displays information about the last five dumps:
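   % backup dumpinfo -ndumps 5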

The following example displays a more detailed record for a single dump.

   % backup dumpinfo -id 922097346
   Dump: id 922097346, level 0, volumes 1, created Mon Mar 22 05:09:06 1999
   Tape: name monday.user.backup (922097346)
   nVolumes 1, created 03/22/1999 05:09
    Pos       Clone time   Nbytes Volume
      1 03/22/1999 04:43 27787914 user.pat.backup   

The following example displays even more detailed information about the dump displayed in the previous example (dump ID 922097346). This example includes only one exemplar of each type of section (Dump, Tape, and Volume):

   % backup dumpinfo -id 922097346 -verbose
   Dump
   ----
   id = 922097346
   Initial id = 0
   Appended id = 922099568
   parent = 0
   level = 0
   flags = 0x0
   volumeSet = user
   dump path = /monday1
   name = user.monday1
   created = Mon Mar 22 05:09:06 1999
   nVolumes = 1
   Group id  = 10
   tapeServer =
   format= user.monday1.%d
   maxTapes = 1
   Start Tape Seq = 1
   name = pat
   instance =
   cell =
   Tape
   ----
   tape name = monday.user.backup
   AFS tape name = user.monday1.1
   flags = 0x20
   written = Mon Mar 22 05:09:06 1999
   expires = NEVER
   kBytes Tape Used = 121
   nMBytes Data = 0
   nBytes  Data = 19092
   nFiles = 0
   nVolumes = 1
   seq = 1
   tapeid = 0
   useCount = 1
   dump = 922097346
   Volume
   ------
   name = user.pat.backup
   flags = 0x18
   id = 536871640
   server =
   partition = 0
   nFrags = 1
   position = 2
   clone = Mon Mar 22 04:43:06 1999
   startByte = 0
   nBytes = 19092
   seq = 0
   dump = 922097346
   tape = user.monday1.1   

Privilege Required

The issuer must be listed in the /usr/afs/etc/UserList file on every machine where the Backup Server is running, or must be logged onto a server machine as the local superuser root if the -localauth flag is included.

Related Information

backup

backup deletedump

backup status

Purpose

Reports a Tape Coordinator's status

Synopsis

backup status [-portoffset <TC port offset>]
              [-localauth]  [-cell <cell name>]  [-help]
  
backup st [-p <TC port offset>]  [-l]  [-c <cell name>]  [-h]

Description

The backup status command displays which operation, if any, the indicated Tape Coordinator is currently executing.

Options

-portoffset
Specifies the port offset number of the Tape Coordinator for which to report the status.

-localauth
Constructs a server ticket using a key from the local /usr/afs/etc/KeyFile file. The backup command interpreter presents it to the Backup Server, Volume Server and VL Server during mutual authentication. Do not combine this flag with the -cell argument. For more details, see the introductory backup reference page.

-cell
Names the cell in which to run the command. Do not combine this argument with the -localauth flag. For more details, see the introductory backup reference page.

-help
Prints the online help for this command. All other valid options are ignored.

Output

The following message indicates that the Tape Coordinator is not currently performing an operation:

   Tape coordinator is idle

Otherwise, the output includes a message of the following format for each running or pending operation:

   Task task_ID:  operation:   status

where

task_ID
Is a task identification number assigned by the Tape Coordinator. It begins with the Tape Coordinator's port offset number.

operation
Identifies the operation the Tape Coordinator is performing, which is initiated by the indicated command:

status
Indicates the job's current status in one of the following messages.

number Kbytes transferred, volume volume_name
For a running dump operation, indicates the number of kilobytes copied to tape or a backup data file so far, and the volume currently being dumped.

number Kbytes, restore.volume
For a running restore operation, indicates the number of kilobytes copied into AFS from a tape or a backup data file so far.

[abort requested]
The (backup) kill command was issued, but the termination signal has yet to reach the Tape Coordinator.

[abort sent]
The operation is canceled by the (backup) kill command. Once the Backup System removes an operation from the queue or stops it from running, it no longer appears at all in the output from the command.

[butc contact lost]
The backup command interpreter cannot reach the Tape Coordinator. The message can mean either that the Tape Coordinator handling the operation was terminated or failed while the operation was running, or that the connection to the Tape Coordinator timed out.

[done]
The Tape Coordinator has finished the operation.

[drive wait]
The operation is waiting for the specified tape drive to become free.

[operator wait]
The Tape Coordinator is waiting for the backup operator to insert a tape in the drive.

If the Tape Coordinator is communicating with an XBSA server (a third-party backup utility that implements the Open Group's Backup Service API [XBSA]), the following message appears last in the output:

   XBSA_program Tape coordinator

where XBSA_program is the name of the XBSA-compliant program.

Examples

The following example shows that the Tape Coordinator with port offset 4 has so far dumped about 1.5 MB of data for the current dump operation, and is currently dumping the volume named user.pat.backup:

   % backup status -portoffset 4
   Task 4001:  Dump:   1520 Kbytes transferred,  volume user.pat.backup   

Privilege Required

The issuer must be listed in the /usr/afs/etc/UserList file on every machine where the Backup Server is running, or must be logged onto a server machine as the local superuser root if the -localauth flag is included.

Related Information

backup

butc

vos delentry

Purpose

Removes a volume entry from the VLDB.

Synopsis

vos delentry [-id <volume name or ID>+]
             [-prefix <prefix of volume whose VLDB entry is to be deleted>] 
             [-server <machine name>]  [-partition <partition name>]  
             [-cell <cell name>]  [-noauth]  [-localauth]  [-verbose]  [-help]
     
vos de [-i <volume name or ID>+]
       [-pr <prefix of volume whose VLDB entry is to be deleted>]  
       [-s <machine name>]  [-pa <partition name>]  [-c <cell name>] 
       [-n]  [-l]  [-v]  [-h]

Description

The vos delentry command removes the Volume Location Database (VLDB) entry for each specified volume. Specify one or more read/write volumes; specifying a read-only or backup volume results in an error. The command has no effect on the actual volumes on file server machines, if they exist.

This command is useful if a volume removal operation did not update the VLDB (perhaps because the vos zap command was used), but the system administrator does not feel it is necessary to use the vos syncserv and vos syncvldb commands to synchronize an entire file server machine.

To remove the VLDB entry for a single volume, use the -id argument. To remove groups of volumes, combine the -prefix, -server, and -partition arguments. The following list describes how to remove the VLDB entry for the indicated group of volumes:

Cautions

A single VLDB entry represents all versions of a volume (read/write, read-only, and backup). The command removes the entire entry even though only the read/write volume is specified.

Do not use this command to remove a volume in normal circumstances; it does not remove a volume from the file server machine, and so is likely to make the VLDB inconsistent with the state of the volumes on server machines. Use the vos remove command to remove both the volume and its VLDB entry.

Options

-id
Specifies the complete name or volume ID number of each read/write volume for which to remove the VLDB entry. The entire entry is removed. Provide this argument or some combination of the -prefix, -server, and -partition arguments.

-prefix
Specifies a character string of any length; the VLDB entry for a volume whose name begins with the string is removed. Include field separators (such as periods) if appropriate. Combine this argument with the -server argument, -partition argument, or both.

-server
Identifies a file server machine; if a volume's VLDB entry lists a site on the machine, the entry is removed. Provide the machine's IP address or its host name (either fully qualified or using an unambiguous abbreviation). For details, see the introductory reference page for the vos command suite.

Combine this argument with the -prefix argument, the -partition argument, or both.

-partition
Identifies a partition; if a volume's VLDB entry lists a site on the partition, the entry is removed. Provide the partition's complete name with preceding slash (for example, /vicepa) or use one of the three acceptable abbreviated forms. For details, see the introductory reference page for the vos command suite.

Combine this argument with the -prefix argument, the -server argument, or both.

-cell
Names the cell in which to run the command. Do not combine this argument with the -localauth flag. For more details, see the introductory vos reference page.

-noauth
Assigns the unprivileged identity anonymous to the issuer. Do not combine this flag with the -localauth flag. For more details, see the introductory vos reference page.

-localauth
Constructs a server ticket using a key from the local /usr/afs/etc/KeyFile file. The vos command interpreter presents it to the Volume Server and Volume Location Server during mutual authentication. Do not combine this flag with the -cell argument or -noauth flag. For more details, see the introductory vos reference page.

-verbose
Produces on the standard output stream a detailed trace of the command's execution. If this argument is omitted, only warnings and error messages appear.

-help
Prints the online help for this command. All other valid options are ignored.

Output

The following message confirms the success of the command by indicating how many VLDB entries were removed.

   Deleted number VLDB entries   

Examples

The following command removes the VLDB entry for the volume user.temp.

   % vos delentry user.temp   

The following command removes the VLDB entry for every volume whose name begins with the string test and for which the VLDB lists a site on the file server machine fs3.abc.com.

   % vos delentry -prefix test -server fs3.abc.com   

Privilege Required

The issuer must be listed in the /usr/afs/etc/UserList file on the machine specified with the -server argument and on each database server machine. If the -localauth flag is included, the issuer must instead be logged on to a server machine as the local superuser root.

Related Information

vos

vos remove

vos syncserv

vos syncvldb

vos zap


[Return to Library] [Contents] [Previous Topic] [Top of Topic]



© IBM Corporation 2000. All Rights Reserved