X-Git-Url: http://git.openafs.org/?p=openafs-wiki.git;a=blobdiff_plain;f=DemandAttach.mdwn;h=65c20adb658880bea4aba3422a2762baef02152b;hp=312bf9f8fb688fddc88fcbc10b196ebf0d1c9cfb;hb=d29b783323f608186f7a613f92717406c203e539;hpb=32f90cee2d1c9c403813387ee5882c6c50550274
diff --git a/DemandAttach.mdwn b/DemandAttach.mdwn
index 312bf9f..65c20ad 100644
--- a/DemandAttach.mdwn
+++ b/DemandAttach.mdwn
@@ -99,23 +99,23 @@ The table below compares the start-up sequence for a traditional file-server and
 |
- %BULLET% host / callback state restored |
+ host / callback state restored |
 |
- %BULLET% host / callback state consistency verified |
+ host / callback state consistency verified |
- %BULLET% build vice partition list |
- %BULLET% build vice partition list |
+ build vice partition list |
+ build vice partition list |
- %BULLET% volumes are attached |
- %BULLET% volume headers read |
+ volumes are attached |
+ volume headers read |
 |
- %BULLET% volumes placed into pre-attached state |
+ volumes placed into pre-attached state |
@@ -129,20 +129,20 @@ The shutdown sequence for both file-server types is:
Demand-Attach |
- %BULLET% break callbacks |
- %BULLET% quiesce host / callback state |
+ break callbacks |
+ quiesce host / callback state |
- %BULLET% shutdown volumes |
- %BULLET% shutdown on-line volumes |
+ shutdown volumes |
+ shutdown on-line volumes |
 |
- %BULLET% verify host / callback state consistency |
+ verify host / callback state consistency |
 |
- %BULLET% save host / callback state |
+ save host / callback state |
@@ -242,7 +242,7 @@ The state of the various VLRU queues is dumped with the file-server state and at
The vnode finite-state automaton is available in the source tree under `doc/arch/dafs-vnode-fsa.dot`
-`/usr/afs/bin/fssync-debug` provides low-level inspection and control of the file-server volume package. \*Indiscriminate use of **fsync-debug**
can lead to extremely bad things occurring. Use with care. %ENDCOLOR%
+`/usr/afs/bin/fssync-debug` provides low-level inspection and control of the file-server volume package. **Indiscriminate use of `fssync-debug` can lead to extremely bad things occurring. Use with care.**
@@ -252,13 +252,13 @@ Demand salvaging is implemented by the `salvageserver`. The actual code for salv
- file-server automatically requests volumes be salvaged as required, i.e. they are marked as requiring salvaging when attached.
- manual initiation of salvaging may be required when access is through the `volserver` (may be addressed at some later date).
-- `bos salvage` requires the `-forceDAFS` flag to initiate salvaging wit DAFS. However, %RED% **salvaging should not be initiated using this method**.%ENDCOLOR%
+- `bos salvage` requires the `-forceDAFS` flag to initiate salvaging with DAFS. However, **salvaging should not be initiated using this method**.
- infinite salvage, attach, salvage, ... loops are possible. There is therefore a hard-limit on the number of times a volume will be salvaged which is reset when the volume is removed or the file-server is restarted.
- volumes are salvaged in parallel; the degree of parallelism is controlled by the `-Parallel` argument to the `salvageserver` and defaults to 4.
- the `salvageserver` and the `inode` file-server are incompatible:
- because volumes are inter-mingled on a partition (rather than being separated), a lock for the entire partition on which the volume is located is held throughout. Both the `fileserver` and `volserver` will block if they require this lock, e.g. to restore / dump a volume located on the partition.
- inodes for a particular volume can be located anywhere on a partition. Salvaging therefore results in **every** inode on a partition having to be read to determine whether it belongs to the volume. This is extremely I/O intensive and leads to horrendous salvaging performance.
-- `/usr/afs/bin/salvsync-debug` provides low-level inspection and control over the `salvageserver`. %RED% **Indiscriminate use of `salvsync-debug` can lead to extremely bad things occurring. Use with care.** %ENDCOLOR%
+- `/usr/afs/bin/salvsync-debug` provides low-level inspection and control over the `salvageserver`. **Indiscriminate use of `salvsync-debug` can lead to extremely bad things occurring. Use with care.**
- See [[=salvsync-debug=|DemandAttach#salvsync_debug]] for information on debugging problems with the salvageserver.
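For reference, the discouraged `bos salvage -forceDAFS` invocation would look roughly like the following sketch. The server name, partition, and volume ID are placeholders, and the flag names other than `-forceDAFS` are assumed to be the standard `bos salvage` arguments; verify against your OpenAFS version before use:

```shell
# Discouraged on DAFS: forces bos to schedule a salvage even though
# the demand-attach file-server normally requests salvages itself
bos salvage -server fs1.example.org -partition /vicepb -volume 537119916 -forceDAFS
```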
@@ -306,7 +306,7 @@ Arguments controlling the host / callback state:
-Arguments controlling the [[VLRU:|WebHome#VolumeLeastRecentlyUsed]]
+Arguments controlling the VLRU:
@@ -366,7 +366,7 @@ Several tools aid debugging problems with demand-attach file-servers. They opera
### **fssync-debug**
-%RED% **Indiscriminate use of `fssync-debug` can have extremely dire consequences. Use with care** %ENDCOLOR%
+**Indiscriminate use of `fssync-debug` can have extremely dire consequences. Use with care.**
`fssync-debug` provides low-level inspection and control over the volume package of the file-server. It can be used to display the file-server information associated with a volume, e.g.
@@ -485,7 +485,7 @@ An understanding of the [volume finite-state machine](http://www.dementia.org/tw
### **salvsync-debug**
-%RED% **Indiscriminate use of `salvsync-debug` can have extremely dire consequences. Use with care** %ENDCOLOR%
+**Indiscriminate use of `salvsync-debug` can have extremely dire consequences. Use with care.**
`salvsync-debug` provides low-level inspection and control of the salvageserver process, including the scheduling order of volumes.
@@ -529,7 +529,7 @@ To initiate the salvaging of a volume
This is the method that should be used on demand-attach file-servers to initiate the manual salvage of volumes. It should be used with care.
-Under normal circumstances, the priority ( `prio`) of a salvage request is the number of times the volume has been requested by clients. %RED% Modifying the priority (and hence the order volumes are salvaged) under heavy demand-salvaging usually leads to extremely bad things happening. %ENDCOLOR% To modify the priority of a request, use
+Under normal circumstances, the priority (`prio`) of a salvage request is the number of times the volume has been requested by clients. Modifying the priority (and hence the order in which volumes are salvaged) under heavy demand-salvaging usually leads to extremely bad things happening. To modify the priority of a request, use
salvsync-debug priority -vol 537119916 -part /vicepb -priority 999999
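Before touching a request's priority, it is safer to inspect its current state first. A minimal sketch of that workflow, assuming a `query` subcommand that accepts the same `-vol`/`-part` flags used above (verify the subcommand name against your OpenAFS version):

```shell
# Inspect the pending salvage request for the volume (state, priority, etc.)
salvsync-debug query -vol 537119916 -part /vicepb

# Only then, and only if genuinely necessary, change the request's priority.
# As noted above, doing this under heavy demand-salvaging is strongly discouraged.
salvsync-debug priority -vol 537119916 -part /vicepb -priority 999999
```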