X-Git-Url: http://git.openafs.org/?p=openafs-wiki.git;a=blobdiff_plain;f=DemandAttach.mdwn;h=4215521dff5a26c7e18736aba35c3386e3f21206;hp=58c23d2599076d0f3c07118beac6dbe778e4baab;hb=f8ba0a6e1f0f7219831960a0746aaa3b6906783d;hpb=d08ea06d29a4e8b902978b675eb1039a890e4481
diff --git a/DemandAttach.mdwn b/DemandAttach.mdwn
index 58c23d2..4215521 100644
--- a/DemandAttach.mdwn
+++ b/DemandAttach.mdwn
@@ -1,37 +1,10 @@
-## Demand-Attach File-Server (DAFS)
+[[!toc levels=3]]
+
+# Demand-Attach File-Server (DAFS)
+
OpenAFS 1.5 contains Demand-Attach File-Server (DAFS). DAFS is a significant departure from the more _traditional_ AFS file-server and this document details those changes.
-
-
-## Why Demand-Attach File-Server (DAFS) ?
+## Why Demand-Attach File-Server (DAFS)?
On a traditional file-server, volumes are attached at start-up and detached only at shutdown. Any attached volume can be modified, and changes are periodically flushed to disk or written on shutdown. When a file-server isn't shut down cleanly, the integrity of every attached volume has to be verified by the salvager, whether the volume had been modified or not. As file-servers grow larger (and the number of volumes increases), the time required to salvage and attach volumes increases; e.g., it takes around two hours for a file-server housing 512GB of data to salvage and attach its volumes!
@@ -41,7 +14,7 @@ The primary objective of the demand-attach file-server was to dramatically reduc
Large portions of this document were taken / influenced by the presentation entitled [Demand Attach / Fast-Restart Fileserver](http://workshop.openafs.org/afsbpw06/talks/tkeiser-dafs.pdf) given by Tom Keiser at the [AFS and Kerberos Best Practices Workshop](http://workshop.openafs.org/) in [2006](http://workshop.openafs.org/afsbpw06/).
-## An Overview of Demand-Attach File-Server
+## An Overview of Demand-Attach File-Server
Demand-attach necessitated a significant re-design of certain aspects of the AFS code, including:
@@ -66,29 +39,31 @@ The changes implemented for demand-attach include:
- callbacks are no longer broken on shutdown
- instead, host / callback state is preserved across restarts
-## The Gory Details of the Demand-Attach File-Server
+## The Gory Details of the Demand-Attach File-Server
-### Bos Configuration
+### Bos Configuration
A traditional file-server uses the `bnode` type `fs` and has a definition similar to
bnode fs fs 1
- parm /usr/afs/bin/fileserver -p 123 -pctspare 20 -L -busyat 200 -rxpck 2000 -rxbind
- parm /usr/afs/bin/volserver -p 127 -log -rxbind
+ parm /usr/afs/bin/fileserver -p 123 -L -busyat 200 -rxpck 2000 -cb 4000000
+ parm /usr/afs/bin/volserver -p 127 -log
parm /usr/afs/bin/salvager -parallel all32
end
Since the demand-attach file-server requires an additional component (the `salvageserver`), a new `bnode` type (`dafs`) was introduced. The definition should be similar to
bnode dafs dafs 1
- parm /usr/afs/bin/fileserver -p 123 -pctspare 20 -L -busyat 50 -rxpck 2000 -rxbind -cb 4000000 -vattachpar 128 -vlruthresh 1440 -vlrumax 8 -vhashsize 11
- parm /usr/afs/bin/volserver -p 64 -log -rxbind
+ parm /usr/afs/bin/dafileserver -p 123 -L -busyat 200 -rxpck 2000 -cb 4000000 -vattachpar 128 -vlruthresh 1440 -vlrumax 8 -vhashsize 11
+ parm /usr/afs/bin/davolserver -p 64 -log
parm /usr/afs/bin/salvageserver
- parm /usr/afs/bin/salvager -parallel all32
+ parm /usr/afs/bin/dasalvager -parallel all32
end
-The instance for a demand-attach file-server is therefore `dafs` instead of `fs`.
+The instance for a demand-attach file-server is therefore `dafs`
+instead of `fs`. For a complete list of configuration options see the
+[dafileserver man page](http://docs.openafs.org/Reference/8/dafileserver.html).
-### File-server Start-up / Shutdown Sequence
+### File-server Start-up / Shutdown Sequence
The table below compares the start-up sequence for a traditional file-server and a demand-attach file-server.
@@ -148,13 +123,13 @@ The shutdown sequence for both file-server types is:
On a traditional file-server, volumes are off-lined (detached) serially. In demand-attach, as many threads as possible are used to detach volumes, which is possible because each volume has an associated state.
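The role the per-volume state plays in parallel shutdown can be sketched as follows. This is an illustrative model only (class and state names are invented, not OpenAFS internals): because each volume tracks its own state, many worker threads can detach volumes concurrently without serializing on a single global lock.

```python
# Illustrative sketch (invented names, not OpenAFS code): parallel volume
# detachment, made safe by a per-volume state field.
from concurrent.futures import ThreadPoolExecutor
import threading

class Volume:
    def __init__(self, vol_id):
        self.vol_id = vol_id
        self.state = "ATTACHED"
        self.lock = threading.Lock()

    def detach(self):
        # Each volume transitions ATTACHED -> DETACHING -> DETACHED on its
        # own, so threads never contend for a global volume-package lock.
        with self.lock:
            if self.state != "ATTACHED":
                return
            self.state = "DETACHING"
        # ... flush volume header and data to disk here ...
        with self.lock:
            self.state = "DETACHED"

def shutdown(volumes, nthreads=8):
    # Detach as many volumes in parallel as the thread pool allows.
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        list(pool.map(Volume.detach, volumes))

vols = [Volume(i) for i in range(100)]
shutdown(vols)
print(all(v.state == "DETACHED" for v in vols))   # True
```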
-### Volume Finite-State Automata
+### Volume Finite-State Automata
The volume finite-state automaton is available in the source tree under `doc/arch/dafs-fsa.dot`. See [[=fssync-debug=|DemandAttach#fssync_debug]] for information on debugging the volume package.
-
-### Volume Least Recently Used (VLRU) Queues
+
+### Volume Least Recently Used (VLRU) Queues
The Volume Least Recently Used (VLRU) is a garbage collection facility which automatically off-lines volumes in the background. The purpose of this facility is to pro-actively off-line infrequently used volumes to improve shutdown and salvage times. The process of off-lining a volume from the "attached" state to the "pre-attached" state is called soft detachment.
@@ -189,7 +164,7 @@ VLRU works in a manner similar to a generational garbage collector. There are fi
The state of the various VLRU queues is dumped with the file-server state and at shutdown.
- The VLRU queues new, mid (intermediate) and old are generational queues for active volumes. State transitions are controlled by inactivity timers and are
+ The VLRU queues new, mid (intermediate) and old are generational queues for active volumes. State transitions are controlled by inactivity timers and are
@@ -238,15 +213,15 @@ The state of the various VLRU queues is dumped with the file-server state and at
`vlruthresh` has been optimized for RO file-servers, where volumes are typically accessed once a day and soft-detaching has little benefit (RO volumes are not salvaged, and reducing salvage time is one of the main reasons for soft detaching).
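The generational behavior described above can be modeled with a short sketch. All names and thresholds here are invented for illustration (the real VLRU uses configurable timers such as `-vlruthresh`): active volumes age from the "new" queue through "mid" to "old", an access resets the inactivity timer, and a sufficiently idle volume on the old queue is soft-detached back to the pre-attached state.

```python
# Toy model (assumed names, not OpenAFS internals) of the generational
# VLRU queues: new -> mid -> old, with soft detachment from "old".
PROMOTE_AFTER = {"new": 2, "mid": 4}   # inactivity ticks before promotion
VLRU_THRESH = 6                        # idle ticks before soft detachment

class Vol:
    def __init__(self, name):
        self.name, self.queue, self.idle = name, "new", 0
        self.state = "attached"

def vlru_scan(volumes):
    # Periodic background scan: age volumes and soft-detach idle ones.
    for v in volumes:
        if v.state != "attached":
            continue
        v.idle += 1
        if v.queue in PROMOTE_AFTER and v.idle >= PROMOTE_AFTER[v.queue]:
            v.queue = {"new": "mid", "mid": "old"}[v.queue]
        elif v.queue == "old" and v.idle >= VLRU_THRESH:
            v.state, v.queue = "pre-attached", None   # soft detachment

def touch(v):
    # An access resets the inactivity timer and returns the volume to "new".
    v.idle, v.queue, v.state = 0, "new", "attached"

busy, quiet = Vol("busy"), Vol("quiet")
for _ in range(10):
    vlru_scan([busy, quiet])
    touch(busy)
print(busy.state, quiet.state)   # attached pre-attached
```

The frequently accessed volume stays attached indefinitely, while the idle one is garbage-collected into the pre-attached state, shortening any later shutdown or salvage.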
-### Vnode Finite-State Automata
+### Vnode Finite-State Automata
The vnode finite-state automaton is available in the source tree under `doc/arch/dafs-vnode-fsa.dot`.
`/usr/afs/bin/fssync-debug` provides low-level inspection and control of the file-server volume package. **Indiscriminate use of `fssync-debug` can lead to extremely bad things occurring. Use with care.**
-
-### Demand Salvaging
+
+### Demand Salvaging
Demand salvaging is implemented by the `salvageserver`. The actual code for salvaging a volume remains largely unchanged. However, the method for invoking salvaging with demand-attach has changed:
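The changed flow can be sketched as follows. This is a rough illustrative model (function names invented, not the SALVSYNC protocol): rather than salvaging every volume at start-up, the file-server asks the `salvageserver` to salvage an individual volume only when attachment fails, then re-attaches it.

```python
# Illustrative sketch (invented names) of demand salvaging: only the
# volume that fails to attach is salvaged, not the whole partition.
def attach(vol, schedule_salvage):
    if vol["needs_salvage"]:
        vol["state"] = "salvaging"
        schedule_salvage(vol)          # request sent to the salvageserver
        return False
    vol["state"] = "attached"
    return True

salvaged = []
def salvageserver(vol):
    salvaged.append(vol["id"])         # only this one volume is salvaged
    vol["needs_salvage"] = False

vols = [{"id": 1, "needs_salvage": False},
        {"id": 2, "needs_salvage": True}]
for v in vols:
    if not attach(v, salvageserver):
        attach(v, salvageserver)       # re-attach after the demand salvage
print(salvaged, [v["state"] for v in vols])   # [2] ['attached', 'attached']
```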
@@ -261,15 +236,15 @@ Demand salvaging is implemented by the `salvageserver`. The actual code for salv
- `/usr/afs/bin/salvsync-debug` provides low-level inspection and control over the `salvageserver`. **Indiscriminate use of `salvsync-debug` can lead to extremely bad things occurring. Use with care.**
- See [[=salvsync-debug=|DemandAttach#salvsync_debug]] for information on debugging problems with the salvageserver.
-
-### File-Server Host / Callback State
+
+### File-Server Host / Callback State
Host / callback information is persistent across restarts with demand-attach. On shutdown, the file-server writes the data to `/usr/afs/local/fsstate.dat`. The contents of this file are read and verified at start-up and hence it is unnecessary to break callbacks on shutdown with demand-attach.
The contents of `fsstate.dat` can be inspected using `/usr/afs/bin/state_analyzer`.
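The save/verify/restore cycle can be sketched with a toy model. The file format below is invented (the real `fsstate.dat` layout is a binary structure, not JSON), but the shape of the mechanism matches the `fs-state-verify` behavior described later: verify before saving, verify again after restoring, and fall back to breaking callbacks if verification fails.

```python
# Hedged sketch (invented format, NOT the real fsstate.dat layout):
# persisting host/callback state across restarts with verification on
# both the save and restore paths.
import hashlib
import json
import os
import tempfile

def _digest(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def save_state(path, hosts, callbacks):
    # "save"-side verification: refuse to write inconsistent state.
    assert all(cb["host"] in hosts for cb in callbacks)
    payload = json.dumps({"hosts": hosts, "callbacks": callbacks},
                         sort_keys=True).encode()
    with open(path, "wb") as f:
        f.write(_digest(payload).encode() + b"\n" + payload)

def restore_state(path):
    with open(path, "rb") as f:
        digest, payload = f.read().split(b"\n", 1)
    # "restore"-side verification: detect a corrupt or truncated file.
    if digest.decode() != _digest(payload):
        return None, None              # fall back to breaking callbacks
    state = json.loads(payload)
    return state["hosts"], state["callbacks"]

path = os.path.join(tempfile.mkdtemp(), "fsstate.dat")
save_state(path, ["client-a"], [{"host": "client-a", "fid": "536870913.1.1"}])
hosts, cbs = restore_state(path)
print(hosts, len(cbs))   # ['client-a'] 1
```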
-## File-Server Arguments (relating to Demand-Attach)
+## File-Server Arguments (relating to Demand-Attach)
These are available in the man-pages (section 8) for the fileserver; some details are provided here for convenience:
@@ -299,14 +274,14 @@ Arguments controlling the host / callback state:
fs-state-verify |
- |
+ n/a |
both |
- |
Controls the behavior of the state verification mechanism. Before saving or restoring the fileserver state information, the internal host and callback data structures are verified. A value of 'none' turns off all verification. A value of 'save' only performs the verification steps prior to saving state to disk. A value of 'restore' only performs the verification steps after restoring state from disk. A value of 'both' performs all verification steps both prior to saving and after restoring state. |
-Arguments controlling the [[VLRU:|WebHome#VolumeLeastRecentlyUsed]]
+Arguments controlling the VLRU
@@ -360,11 +335,11 @@ Arguments controlling the [[VLRU:|WebHome#VolumeLeastRecentlyUsed]]
-## Tools for Debugging Demand-Attach File-Server
+## Tools for Debugging Demand-Attach File-Server
Several tools aid debugging problems with demand-attach file-servers. They operate at an extremely low level and hence require detailed knowledge of the architecture and code.
-### **fssync-debug**
+### **fssync-debug**
**Indiscriminate use of `fssync-debug` can have extremely dire consequences. Use with care.**
@@ -483,7 +458,7 @@ Note that the `volumeid` argument must be the numeric ID and the `partition` arg
An understanding of the [volume finite-state machine](http://www.dementia.org/twiki//view/dafs-fsa.png) is required before manipulating the state of a volume.
-### **salvsync-debug**
+### **salvsync-debug**
**Indiscriminate use of `salvsync-debug` can have extremely dire consequences. Use with care.**
@@ -535,11 +510,11 @@ Under normal circumstances, the priority ( `prio`) of a salvage request is the n
(where `priority` is a 32-bit integer).
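A toy model of that scheduling rule, with invented names (this is not `salvageserver` code): each duplicate request for a volume bumps its priority by one, the highest-priority request is scheduled first, and an operator can jump the queue by forcing the priority to a large value, clamped to the 32-bit range.

```python
# Toy model (assumed behavior, not salvageserver internals): a salvage
# request's priority is the number of pending requests for that volume.
INT32_MAX = 2**31 - 1

class SalvageQueue:
    def __init__(self):
        self.prio = {}                 # volume id -> current priority

    def request(self, vol_id):
        # Each repeated request raises the volume's priority by one.
        self.prio[vol_id] = min(self.prio.get(vol_id, 0) + 1, INT32_MAX)

    def raiseprio(self, vol_id, prio=INT32_MAX):
        # Operator override: force a request toward the front of the queue.
        self.prio[vol_id] = min(prio, INT32_MAX)

    def next_volume(self):
        # The highest-priority request is salvaged first.
        return max(self.prio, key=self.prio.get)

q = SalvageQueue()
for _ in range(3):
    q.request(536870913)
q.request(536870940)
print(q.prio[536870913], q.next_volume())   # 3 536870913
q.raiseprio(536870940)
print(q.next_volume())                      # 536870940
```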
-### **state\_analyzer**
+### **state\_analyzer**
`state_analyzer` allows the contents of the host / callback state file ( `/usr/afs/local/fsstate.dat`) to be inspected.
-#### Header Information
+#### Header Information
Header information is gleaned through the `hdr` command
@@ -568,7 +543,7 @@ Header information is gleaned through the `hdr` command
}
fs state analyzer>
-#### Host Information
+#### Host Information
Host information can be gleaned through the `h` command, e.g.
@@ -653,7 +628,7 @@ Host information can be gleaned through the `h` command, e.g.
}
fs state analyzer: h(1)>
-#### Callback Information
+#### Callback Information
Callback information is available through the `cb` command, e.g.