The changes implemented for demand-attach include:
- [[volume finite-state automata|DemandAttach#Volume_Finite_State_Automata]]
- volumes are attached on demand
- volume _garbage collector_ to detach unused volumes
- notion of volume state means read-only volumes aren't salvaged
- [[vnode finite-state automata|DemandAttach#Vnode_Finite_State_Automata]]
- global lock is only held when required and never held across high-latency operations
- automatic salvaging of volumes
- shutdown is done in parallel (maximum number of threads utilized)
A traditional file-server uses the `bnode` type `fs` and has a definition similar to:
bnode fs fs 1
 parm /usr/afs/bin/fileserver -p 123 -L -busyat 200 -rxpck 2000 -cb 4000000
 parm /usr/afs/bin/volserver -p 127 -log
parm /usr/afs/bin/salvager -parallel all32
end
Since the demand-attach file-server requires an additional component (the `salvageserver`), a new `bnode` type (`dafs`) is required. The definition should be similar to:
bnode dafs dafs 1
 parm /usr/afs/bin/dafileserver -p 123 -L -busyat 200 -rxpck 2000 -cb 4000000 -vattachpar 128 -vlruthresh 1440 -vlrumax 8 -vhashsize 11
 parm /usr/afs/bin/davolserver -p 64 -log
parm /usr/afs/bin/salvageserver
 parm /usr/afs/bin/dasalvager -parallel all32
end
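On a running cell, a `bnode` such as the one above is normally created with `bos create` rather than by editing `BosConfig` by hand. A sketch, assuming a standard install under `/usr/afs/bin`; the server and cell names are placeholders, and the arguments simply mirror the example definition above:

```
bos create fs1.example.com dafs dafs \
    -cmd "/usr/afs/bin/dafileserver -p 123 -L -busyat 200 -rxpck 2000 -cb 4000000" \
         "/usr/afs/bin/davolserver -p 64 -log" \
         "/usr/afs/bin/salvageserver" \
         "/usr/afs/bin/dasalvager -parallel all32" \
    -cell example.com
```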
The instance for a demand-attach file-server is therefore `dafs` instead of `fs`. For a complete list of configuration options, see the [dafileserver man page](http://docs.openafs.org/Reference/8/dafileserver.html).
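To make the VLRU-related numbers in the example concrete: `-vlruthresh` is expressed in minutes of inactivity and `-vhashsize` is the log2 of the volume hash table size (per the dafileserver man page; verify against your release). A quick sketch interpreting the sample values:

```shell
# Interpret the VLRU-related arguments from the sample dafs bnode above.
# -vlruthresh is in minutes of inactivity; -vhashsize is log2 of the
# number of volume hash buckets.
vlruthresh=1440   # minutes before an idle volume becomes a soft-detach candidate
vhashsize=11      # log2 of the volume hash table size

echo "soft-detach candidates after: $((vlruthresh / 60)) hours idle"
echo "volume hash buckets: $((1 << vhashsize))"
```

So the sample configuration considers volumes idle for 24 hours as soft-detach candidates and uses 2048 hash buckets.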
### <a name="File-server Start-up / Shutdown"></a> File-server Start-up / Shutdown Sequence
</tr>
<tr>
<td> </td>
<td> host / callback state restored </td>
</tr>
<tr>
<td> </td>
<td> host / callback state consistency verified </td>
</tr>
<tr>
<td> build vice partition list </td>
<td> build vice partition list </td>
</tr>
<tr>
<td> volumes are attached </td>
<td> volume headers read </td>
</tr>
<tr>
<td> </td>
<td> volumes placed into <em>pre-attached</em> state </td>
</tr>
</table>
The [[host / callback state|DemandAttach#FSStateDat]] is covered later. The _pre-attached_ state indicates that the file-server has read the volume headers and is aware that the volume exists, but that it has not been attached (and hence is not on-line).
The shutdown sequence for both file-server types is:
<th bgcolor="#99CCCC"><strong> Demand-Attach </strong></th>
</tr>
<tr>
<td> break callbacks </td>
<td> quiesce host / callback state </td>
</tr>
<tr>
<td> shutdown volumes </td>
<td> shutdown on-line volumes </td>
</tr>
<tr>
<td> </td>
<td> verify host / callback state consistency </td>
</tr>
<tr>
<td> </td>
<td> save host / callback state </td>
</tr>
</table>
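In practice, both sequences are driven through the bosserver. A sketch, assuming the `dafs` instance name from the configuration example and a placeholder server name:

```
# Stop the demand-attach file-server (runs the shutdown sequence above)
bos shutdown fs1.example.com dafs -wait

# Start it again (runs the start-up sequence above)
bos start fs1.example.com dafs
```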
### <a name="Volume Finite-State Automata"></a> Volume Finite-State Automata
The volume finite-state automata is available in the source tree under `doc/arch/dafs-fsa.dot`. See [[fssync-debug|DemandAttach#fssync_debug]] for information on debugging the volume package.
<a name="VolumeLeastRecentlyUsed"></a>
</tr>
<tr>
<td> intermediate (mid) </td>
<td> Volumes transitioning from new -> old (see [[state transitions|DemandAttach#VLRUStateTransitions]] for details). </td>
</tr>
<tr>
<td> new </td>
<td> Volumes which have been accessed. See [[state transitions|DemandAttach#VLRUStateTransitions]] for details. </td>
</tr>
<tr>
<td> old </td>
<td> Volumes which are continually accessed. See [[state transitions|DemandAttach#VLRUStateTransitions]] for details. </td>
</tr>
</table>
The vnode finite-state automata is available in the source tree under `doc/arch/dafs-vnode-fsa.dot`.
`/usr/afs/bin/fssync-debug` provides low-level inspection and control of the file-server volume package. **Indiscriminate use of `fssync-debug` can lead to extremely bad things occurring. Use with care.**
<a name="SalvageServer"></a>
- file-server automatically requests volumes be salvaged as required, i.e. they are marked as requiring salvaging when attached.
- manual initiation of salvaging may be required when access is through the `volserver` (may be addressed at some later date).
- `bos salvage` requires the `-forceDAFS` flag to initiate salvaging with DAFS. However, **salvaging should not be initiated using this method**.
- infinite salvage, attach, salvage, ... loops are possible. There is therefore a hard-limit on the number of times a volume will be salvaged which is reset when the volume is removed or the file-server is restarted.
- volumes are salvaged in parallel; the number of concurrent salvages is controlled by the `-Parallel` argument to the `salvageserver` and defaults to 4.
- the `salvageserver` and the `inode` file-server are incompatible:
- because volumes are inter-mingled on a partition (rather than being separated), a lock for the entire partition on which the volume is located is held throughout. Both the `fileserver` and `volserver` will block if they require this lock, e.g. to restore / dump a volume located on the partition.
- inodes for a particular volume can be located anywhere on a partition. Salvaging therefore results in **every** inode on a partition having to be read to determine whether it belongs to the volume. This is extremely I/O intensive and leads to horrendous salvaging performance.
- `/usr/afs/bin/salvsync-debug` provides low-level inspection and control over the `salvageserver`. **Indiscriminate use of `salvsync-debug` can lead to extremely bad things occurring. Use with care.**
- See [[salvsync-debug|DemandAttach#salvsync_debug]] for information on debugging problems with the salvageserver.
<a name="FSStateDat"></a>
</tr>
</table>
Arguments controlling the VLRU:
<table border="1" cellpadding="0" cellspacing="0">
<tr>
### <a name="==fssync-debug=="></a><a name="fssync_debug"></a> <code>fssync-debug</code>
**Indiscriminate use of `fssync-debug` can have extremely dire consequences. Use with care.**
`fssync-debug` provides low-level inspection and control over the volume package of the file-server. It can be used to display the file-server information associated with a volume, e.g.
- `VOL_IN_HASH` indicates that the volume has been added to the volume linked-list
- `VOL_ON_VBYP_LIST` indicates that the volume is linked off the partition list
- `VOL_ON_VLRU` means the volume is on a VLRU queue
-- the `salvage` structure (detailed [[here|AFSLore/DemandAttach#salvsync_debug]])
+- the `salvage` structure (detailed [[here|DemandAttach#salvsync_debug]])
- the `stats` structure, particularly the volume operation times ( `last_*`).
- the `vlru` structure, particularly the VLRU queue
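For example, to display that information for a single volume (a sketch: the volume ID and partition are placeholders taken from the `salvsync-debug` example later in this document, and the exact option names should be checked against the fssync-debug man page for your release):

```
fssync-debug query -volumeid 537119916 -partition /vicepb
```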
### <a name="==salvsync-debug=="></a><a name="salvsync_debug"></a> <code>salvsync-debug</code>
**Indiscriminate use of `salvsync-debug` can have extremely dire consequences. Use with care.**
`salvsync-debug` provides low-level inspection and control of the salvageserver process, including the scheduling order of volumes.
This is the method that should be used on demand-attach file-servers to initiate the manual salvage of volumes. It should be used with care.
Under normal circumstances, the priority (`prio`) of a salvage request is the number of times the volume has been requested by clients. Modifying the priority (and hence the order in which volumes are salvaged) under heavy demand-salvaging usually leads to extremely bad things happening. To modify the priority of a request, use
salvsync-debug priority -vol 537119916 -part /vicepb -priority 999999