<li><a href="#balance"> 3.10 Is there a way to automatically balance disk usage across fileservers?</a>
<li><a href="#shutdown"> 3.11 Can I shut down an AFS fileserver without affecting users?</a>
<li><a href="#mail"> 3.12 How can I set up mail delivery to users with <code>$HOME</code>s in AFS?</a>
- <li><a href="#shadowro"> 3.13 Should I replicate a [[ReadOnly]] volume on the same partition and server as the [[ReadWrite]] volume?</a>
+ <li><a href="#shadowro"> 3.13 Should I replicate a ReadOnly volume on the same partition and server as the ReadWrite volume?</a>
<li><a href="#multihomed"> 3.14 Will AFS run on a multi-homed fileserver?</a>
<li><a href="#replicatehome"> 3.15 Can I replicate my users' home directory AFS volumes?</a>
<li><a href="#listclients"> 3.16 How can I list which clients have cached files from a server?</a>
- <li><a href="#backupvol"> 3.17 Do Backup volumes require as much space as [[ReadWrite]] volumes?</a>
+ <li><a href="#backupvol"> 3.17 Do Backup volumes require as much space as ReadWrite volumes?</a>
<li><a href="#ntp"> 3.18 Should I run <code>ntpd</code> on my AFS client?</a>
<li><a href="#cellservdb"> 3.19 Why and how should I keep <code>/usr/vice/etc/CellServDB</code> current?</a>
<li><a href="#fileservers"> 3.20 How can I compile a list of AFS fileservers?</a>
<li><a href="#anonftp"> 3.21 How can I set up anonymous FTP login to access <code>/afs</code>?</a>
<li><a href="#encrypt"> 3.22 Is the data sent over the network encrypted in AFS?</a>
<li><a href="#filesystems"> 3.23 What underlying filesystems can I use for AFS?</a>
- <li><a href="#3.30 Compiling _OpenAFS"> 3.24 Compiling [[OpenAFS]] from source</a>
- <li><a href="#3.31 Upgrading _OpenAFS"> 3.25 Upgrading [[OpenAFS]]</a>
- <li><a href="#3.32 Debugging _OpenAFS"> 3.26 Notes on debugging [[OpenAFS]]</a>
+ <li><a href="#3.30 Compiling _OpenAFS"> 3.24 Compiling OpenAFS from source</a>
+ <li><a href="#3.31 Upgrading _OpenAFS"> 3.25 Upgrading OpenAFS</a>
+ <li><a href="#3.32 Debugging _OpenAFS"> 3.26 Notes on debugging OpenAFS</a>
<li><a href="#3.33 Tuning client cache for hug"> 3.27 Tuning client cache for huge data</a>
<li><a href="#3.34 Settting up PAM with AFS"> 3.28 Setting up PAM with AFS</a>
<li><a href="#afskrbconf"> 3.29 How can I have a Kerberos realm different from the AFS cell name? How can I use an AFS cell across multiple Kerberos realms?</a>
Second, you need to set up the ACLs so that "`postman`" has lookup rights down to the user's `$HOME` and "`lik`" on the destination directory (for this example, we'll use `$HOME/Mail`).
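As a sketch of those ACLs (the cell name `your.cell` and user `fred` are hypothetical; adjust the paths to your site), the `fs setacl` commands might look like:

```shell
# Hypothetical paths; "postman" is the mail-delivery identity from the text.
# Grant "postman" lookup (l) on each directory leading down to $HOME...
fs setacl /afs/your.cell/user postman l
fs setacl /afs/your.cell/user/fred postman l
# ...and lookup, insert, and lock (lik) on the delivery directory itself.
fs setacl /afs/your.cell/user/fred/Mail postman lik
```

These commands require a running AFS client with administrative rights on the directories in question.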
-### <a name="shadowro"></a><a name="3.14 Should I replicate a _Read"></a> 3.13 Should I replicate a [[ReadOnly]] volume on the same partition and server as the [[ReadWrite]] volume?
+### <a name="cheapclone"></a><a name="3.14 Should I replicate a _Read"></a> 3.13 Should I replicate a [[ReadOnly]] volume on the same partition and server as the [[ReadWrite]] volume?
Yes, absolutely! It improves the robustness of your served volumes.
However, you are **very** strongly encouraged to keep one RO copy of a volume on the _same server and partition_ as the RW. There are two reasons for this:
-1. The RO that is on the same server and partition as the RW is a clone (just a copy of the header, not a full copy of each file). It therefore is very small, but provides access to the same set of files that all other (full copy) [[ReadOnly]] volumes do. Transarc trainers referred to this as the "cheap replica", although the term "shadow" is finding some currency.
+1. The RO that is on the same server and partition as the RW is a clone (just a copy of the header, not a full copy of each file). It therefore is very small, but provides access to the same set of files that all other (full copy) [[ReadOnly]] volumes do. Transarc trainers referred to this as the "cheap replica"; some admins call it a "shadow", but this is not the same as a [[shadow volume|AdminFAQ#shadow volume]].
2. To prevent the frustration that occurs when all your ROs are unavailable while a perfectly healthy RW is accessible but goes unused.
If you keep a "cheap replica", then by definition, if the RW is available, one of the ROs is also available, and clients will utilize that site.
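Creating the cheap replica is just a matter of adding an RO site on the RW's own server and partition before releasing. A sketch (the server name `fs1` and volume `user.fred` are hypothetical):

```shell
# Suppose the RW of volume "user.fred" lives on server fs1, partition /vicepa.
vos addsite fs1 a user.fred   # RO site on the same server/partition becomes a clone
vos release user.fred         # create/update all RO sites
```

Because the same-site RO shares the RW's data, the `vos addsite`/`vos release` pair costs almost no extra disk space on that partition.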
Yes, it will. Older AFS assumed one address per host, but modern [[OpenAFS]] identifies servers and clients by UUIDs (universally unique identifiers), so a fileserver will be recognized by any of its registered addresses.
-See the documentation for the [`NetInfo`](http://docs.openafs.org/Reference/5/NetInfo.html), [`NetRestrict`](http://docs.openafs.org/Reference/5/NetRestrict.html) files. The UUID for a fileserver is generated when the [`sysid`](http://docs.openafs.org/Reference/5/sysid.html) file is created.
+See the documentation for the [`NetInfo`](http://docs.openafs.org/Reference/5/NetInfo.html) and [`NetRestrict`](http://docs.openafs.org/Reference/5/NetRestrict.html) files. The UUID for a fileserver is generated when the [`sysid`](http://docs.openafs.org/Reference/5/sysid.html) file is created.
+
+If you have multiple addresses and must use only one of them (say, multiple addresses on the same subnet), you may need to pass the `-rxbind` option to the network server processes (`bosserver`, `kaserver`, `ptserver`, `vlserver`, `volserver`, and `fileserver`) as appropriate. (Note that some of these do not currently document `-rxbind`, notably `kaserver`, because it is no longer maintained. Again, the preferred solution here is to migrate off of `kaserver`, but the `-rxbind` option _will_ work if needed.)
+
+Database servers can *not* safely operate multihomed; the Ubik replication protocol assumes a 1-to-1 mapping between addresses and servers. Use the [`NetInfo`](http://docs.openafs.org/Reference/5/NetInfo.html) and [`NetRestrict`](http://docs.openafs.org/Reference/5/NetRestrict.html) files to associate database servers with a single address.
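For example, on a multi-interface database server you could pin the advertised address with a one-line `NetInfo` file (the address and path are illustrative; `/usr/afs/local` is the conventional server configuration directory, but check where your installation keeps it):

```shell
# Hypothetical: advertise only 192.0.2.10 from this database server.
echo "192.0.2.10" > /usr/afs/local/NetInfo
# Restart the server processes afterwards so the change takes effect, e.g.:
#   bos restart <server> -all -localauth
```

`NetRestrict` works the other way around: it lists addresses the server should *not* advertise.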
### <a name="replicatehome"></a><a name="3.17 Can I replicate my user's"></a><a name="3.17 Can I replicate my user's "></a> 3.15 Can I replicate my user's home directory AFS volumes?
- efs (SGI) - Transarc AFS supported efs, but [[OpenAFS]] doesn't have a license to use the efs code
- zfs (Solaris, FreeBSD, other ports) - you can, however, use a zvol formatted with ufs or another supported filesystem
-The OpenAFS cache manager will detect an unsupported filesystem and refuse to start.
+On certain OSes, the OpenAFS cache manager has some checks for unsupported filesystem types and will refuse to start, but these checks are not 100% reliable.
The following file systems have been reported to work for the AFS client cache:
Attempting to create shadows of two different RW volumes on the same partition with the same name is prohibited by the `volserver`. Technically it is possible to create two shadow volumes with the same name on different partitions; however, this is not advisable and may lead to undefined behavior.
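As a sketch of creating a shadow with `vos shadow` (the servers `fs1`/`fs2`, partition `a`, and volume `user.fred` are hypothetical):

```shell
# Copy the RW of user.fred from fs1:/vicepa to a shadow on fs2:/vicepa.
vos shadow user.fred fs1 a fs2 a
# Later, bring the shadow up to date without a full copy:
vos shadow user.fred fs1 a fs2 a -incremental
```

Note that the shadow is deliberately left out of the VLDB, which is why the name-collision rules described above are enforced only per partition by the `volserver`.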
-(Some AFS administrators may refer to an RO clone of an RW volume on the same server/partition as a "shadow"; this terminology predates the existence of shadow volumes.)
+(Some AFS administrators may refer to an RO clone of an RW volume on the same server/partition as a "shadow"; this terminology predates the existence of shadow volumes and should be avoided.)
### <a name="multirealm"></a><a name="3.51 Can I authenticate to my af"></a> 3.39 Can I authenticate to my AFS cell using multiple Kerberos realms?