## <a name="Demand-Attach File-Server (DAFS)"></a> Demand-Attach File-Server (DAFS)

OpenAFS 1.5 contains Demand-Attach File-Server (DAFS). DAFS is a significant departure from the more _traditional_ AFS file-server, and this document details those changes.
<ul>
<li><a href="#Demand-Attach File-Server (DAFS)"> Demand-Attach File-Server (DAFS)</a></li>
<li><a href="#Why Demand-Attach File-Server (D"> Why Demand-Attach File-Server (DAFS) ?</a></li>
<li><a href="#An Overview of Demand-Attach Fil"> An Overview of Demand-Attach File-Server</a></li>
<li><a href="#The Gory Details of the Demand-A"> The Gory Details of the Demand-Attach File-Server</a><ul>
<li><a href="#Bos Configuration"> Bos Configuration</a></li>
<li><a href="#File-server Start-up / Shutdown"> File-server Start-up / Shutdown Sequence</a></li>
<li><a href="#Volume Finite-State Automata"> Volume Finite-State Automata</a></li>
<li><a href="#Volume Least Recently Used (VLRU"> Volume Least Recently Used (VLRU) Queues</a></li>
<li><a href="#Vnode Finite-State Automata"> Vnode Finite-State Automata</a></li>
<li><a href="#Demand Salvaging"> Demand Salvaging</a></li>
<li><a href="#File-Server Host / Callback Stat"> File-Server Host / Callback State</a></li>
</ul></li>
<li><a href="#File-Server Arguments (relating"> File-Server Arguments (relating to Demand-Attach)</a></li>
<li><a href="#Tools for Debugging Demand-Attac"> Tools for Debugging Demand-Attach File-Server</a><ul>
<li><a href="#==fssync-debug=="> fssync-debug</a></li>
<li><a href="#==salvsync-debug=="> salvsync-debug</a></li>
<li><a href="#==state_analyzer=="> state_analyzer</a><ul>
<li><a href="#Header Information"> Header Information</a></li>
<li><a href="#Host Information"> Host Information</a></li>
<li><a href="#Callback Information"> Callback Information</a></li>
</ul></li>
</ul></li>
</ul>
## <a name="Why Demand-Attach File-Server (D"></a> Why Demand-Attach File-Server (DAFS) ?

On a traditional file-server, volumes are attached at start-up and detached only at shutdown. Any attached volume can be modified, and changes are flushed to disk periodically or on shutdown. When a file-server isn't shut down cleanly, the integrity of every attached volume has to be verified by the salvager, whether the volume had been modified or not. As file-servers grow larger (and the number of volumes increases), the length of time required to salvage and attach volumes increases; it takes around two hours for a file-server housing 512 GB of data to salvage and attach its volumes!

On a Demand-Attach File-Server (DAFS), volumes are attached only when accessed by clients. On start-up, the file-server reads only the volume headers to determine what volumes reside on what partitions. Volumes are attached when clients access them, and after some period of inactivity they are automatically detached. This dramatically improves start-up and shutdown times: a demand-attach file-server can be restarted in seconds, compared to hours for the equivalent traditional file-server.

The primary objective of the demand-attach file-server was to dramatically reduce the amount of time required to restart an AFS file-server.

Large portions of this document were taken from / influenced by the presentation entitled [Demand Attach / Fast-Restart Fileserver](http://workshop.openafs.org/afsbpw06/talks/tkeiser-dafs.pdf) given by Tom Keiser at the [AFS and Kerberos Best Practices Workshop](http://workshop.openafs.org/) in [2006](http://workshop.openafs.org/afsbpw06/).
## <a name="An Overview of Demand-Attach Fil"></a> An Overview of Demand-Attach File-Server

Demand-attach necessitated a significant re-design of certain aspects of the AFS code; the traditional implementation suffers from a number of problems:

- the volume package has a number of severe limitations
  - a single global lock, leading to poor scaling
  - the lock is held across high-latency operations, e.g. disk I/O
  - no notion of state for concurrently accessed objects
- the vnode package suffers from the same limitations
- breaking callbacks is time consuming
- salvaging requires the file-server to be offline
The changes implemented for demand-attach include:

- [[volume finite-state automata|AFSLore/DemandAttach#Volume_Finite_State_Automata]]
- volumes are attached on demand
- a volume _garbage collector_ to detach unused volumes
- the notion of volume state means read-only volumes aren't salvaged
- [[vnode finite-state automata|AFSLore/DemandAttach#Vnode_Finite_State_Automata]]
- the global lock is only held when required and never held across high-latency operations
- automatic salvaging of volumes
- shutdown is done in parallel (maximum number of threads utilized)
- callbacks are no longer broken on shutdown
  - instead, host / callback state is preserved across restarts
## <a name="The Gory Details of the Demand-A"></a> The Gory Details of the Demand-Attach File-Server

### <a name="Bos Configuration"></a> Bos Configuration

A traditional file-server uses the `bnode` type `fs` and has a definition similar to:

    bnode fs fs 1
    parm /usr/afs/bin/fileserver -p 123 -pctspare 20 -L -busyat 200 -rxpck 2000 -rxbind
    parm /usr/afs/bin/volserver -p 127 -log -rxbind
    parm /usr/afs/bin/salvager -parallel all32
    end

Since an additional component (the `salvageserver`) is required for the demand-attach file-server, a new `bnode` type (`dafs`) is used. The definition should be similar to:

    bnode dafs dafs 1
    parm /usr/afs/bin/fileserver -p 123 -pctspare 20 -L -busyat 50 -rxpck 2000 -rxbind -cb 4000000 -vattachpar 128 -vlruthresh 1440 -vlrumax 8 -vhashsize 11
    parm /usr/afs/bin/volserver -p 64 -log -rxbind
    parm /usr/afs/bin/salvageserver
    parm /usr/afs/bin/salvager -parallel all32
    end

The instance for a demand-attach file-server is therefore `dafs` instead of `fs`.
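Rather than editing `BosConfig` by hand, the `dafs` instance can equivalently be created with `bos create`, which takes the four process command lines shown above. A minimal sketch (the server and cell names are placeholders):

    bos create fs1.example.com dafs dafs \
        -cmd "/usr/afs/bin/fileserver -p 123 -pctspare 20 -L -busyat 50 -rxpck 2000 -rxbind -cb 4000000 -vattachpar 128 -vlruthresh 1440 -vlrumax 8 -vhashsize 11" \
            "/usr/afs/bin/volserver -p 64 -log -rxbind" \
            "/usr/afs/bin/salvageserver" \
            "/usr/afs/bin/salvager -parallel all32" \
        -cell example.com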
### <a name="File-server Start-up / Shutdown"></a><a name="File-server Start-up / Shutdown "></a> File-server Start-up / Shutdown Sequence

The table below compares the start-up sequence for a traditional file-server and a demand-attach file-server.
<table border="1" cellpadding="0" cellspacing="0">
<tr>
<th bgcolor="#99CCCC"><strong> Traditional </strong></th>
<th bgcolor="#99CCCC"><strong> Demand-Attach </strong></th>
</tr>
<tr>
<td> </td>
<td> host / callback state restored </td>
</tr>
<tr>
<td> </td>
<td> host / callback state consistency verified </td>
</tr>
<tr>
<td> build vice partition list </td>
<td> build vice partition list </td>
</tr>
<tr>
<td> volumes are attached </td>
<td> volume headers read </td>
</tr>
<tr>
<td> </td>
<td> volumes placed into <em>pre-attached</em> state </td>
</tr>
</table>
The [[host / callback state|AFSLore/DemandAttach#FSStateDat]] is covered later. The _pre-attached_ state indicates that the file-server has read the volume headers and is aware that the volume exists, but that it has not been attached (and hence is not on-line).
The shutdown sequence for both file-server types is:

<table border="1" cellpadding="0" cellspacing="0">
<tr>
<th bgcolor="#99CCCC"><strong> Traditional </strong></th>
<th bgcolor="#99CCCC"><strong> Demand-Attach </strong></th>
</tr>
<tr>
<td> break callbacks </td>
<td> quiesce host / callback state </td>
</tr>
<tr>
<td> shutdown volumes </td>
<td> shutdown on-line volumes </td>
</tr>
<tr>
<td> </td>
<td> verify host / callback state consistency </td>
</tr>
<tr>
<td> </td>
<td> save host / callback state </td>
</tr>
</table>
On a traditional file-server, volumes are off-lined (detached) serially. With demand-attach, as many threads as possible are used to detach volumes, which is possible because each volume has an associated state.
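The effect on shutdown times is straightforward to measure; a minimal sketch, assuming a host named `fs1.example.com` running the `dafs` instance (`-wait` makes `bos shutdown` block until the processes have actually stopped):

    # time a clean shutdown of the demand-attach instance (hypothetical host name)
    time bos shutdown fs1.example.com -instance dafs -wait -localauth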
### <a name="Volume Finite-State Automata"></a> Volume Finite-State Automata

The volume finite-state automata is available in the source tree under `doc/arch/dafs-fsa.dot`. See [[fssync-debug|AFSLore/DemandAttach#fssync_debug]] for information on debugging the volume package.
<a name="VolumeLeastRecentlyUsed"></a>

### <a name="Volume Least Recently Used (VLRU"></a> Volume Least Recently Used (VLRU) Queues

The Volume Least Recently Used (VLRU) is a garbage collection facility which automatically off-lines volumes in the background. The purpose of this facility is to pro-actively off-line infrequently used volumes to improve shutdown and salvage times. The process of off-lining a volume from the "attached" state to the "pre-attached" state is called soft detachment.

VLRU works in a manner similar to a generational garbage collector. There are five queues on which volumes can reside:
<table border="1" cellpadding="0" cellspacing="0">
<tr>
<th bgcolor="#99CCCC"><strong> Queue </strong></th>
<th bgcolor="#99CCCC"><strong> Meaning </strong></th>
</tr>
<tr>
<td> candidate </td>
<td> Volumes which have not been accessed recently and hence are candidates for soft detachment. </td>
</tr>
<tr>
<td> held </td>
<td> Volumes which are administratively prevented from VLRU activity, i.e. will never be detached. </td>
</tr>
<tr>
<td> intermediate (mid) </td>
<td> Volumes transitioning from new -> old (see [[state transitions|AFSLore/DemandAttach#VLRUStateTransitions]] for details). </td>
</tr>
<tr>
<td> new </td>
<td> Volumes which have been accessed. See [[state transitions|AFSLore/DemandAttach#VLRUStateTransitions]] for details. </td>
</tr>
<tr>
<td> old </td>
<td> Volumes which are continually accessed. See [[state transitions|AFSLore/DemandAttach#VLRUStateTransitions]] for details. </td>
</tr>
</table>
The state of the various VLRU queues is dumped with the file-server state and at shutdown.

<a name="VLRUStateTransitions"></a> The VLRU queues new, mid (intermediate) and old are generational queues for active volumes. State transitions are controlled by inactivity timers:
<table border="1" cellpadding="0" cellspacing="0">
<tr>
<th bgcolor="#99CCCC"><strong> Transition </strong></th>
<th bgcolor="#99CCCC"><strong> Timeout (minutes) </strong></th>
<th bgcolor="#99CCCC"><strong> Reason (since last transition) </strong></th>
</tr>
<tr>
<td> candidate->new </td>
<td> - </td>
<td> new activity </td>
</tr>
<tr>
<td> new->candidate </td>
<td> 1 * vlruthresh </td>
<td> no activity </td>
</tr>
<tr>
<td> new->mid </td>
<td> 2 * vlruthresh </td>
<td> activity </td>
</tr>
<tr>
<td> mid->old </td>
<td> 4 * vlruthresh </td>
<td> activity </td>
</tr>
<tr>
<td> old->mid </td>
<td> 2 * vlruthresh </td>
<td> no activity </td>
</tr>
<tr>
<td> mid->new </td>
<td> 1 * vlruthresh </td>
<td> no activity </td>
</tr>
</table>
`vlruthresh` has been optimized for RO file-servers, where volumes are typically accessed once a day and soft-detaching more aggressively has little effect (RO volumes are not salvaged, which is one of the main reasons for soft detachment).
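As a worked example of the table above, with the suggested `-vlruthresh 1440` the generational timers become:

    # timers measured since the volume's last transition, -vlruthresh 1440 (24 hrs)
    #   candidate -> new : on new activity (no timer)
    #   new -> candidate : 1 * 1440 min = 1 day without activity
    #   new -> mid       : 2 * 1440 min = 2 days
    #   mid -> old       : 4 * 1440 min = 4 days
    #   old -> mid       : 2 * 1440 min = 2 days without activity
    #   mid -> new       : 1 * 1440 min = 1 day without activity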
### <a name="Vnode Finite-State Automata"></a> Vnode Finite-State Automata

The vnode finite-state automata is available in the source tree under `doc/arch/dafs-vnode-fsa.dot`.

`/usr/afs/bin/fssync-debug` provides low-level inspection and control of the file-server volume package. **Indiscriminate use of `fssync-debug` can lead to extremely bad things occurring. Use with care.**
<a name="SalvageServer"></a>

### <a name="Demand Salvaging"></a> Demand Salvaging
Demand salvaging is implemented by the `salvageserver`. The actual code for salvaging a volume remains largely unchanged. However, the method for invoking salvaging with demand-attach has changed:

- the file-server automatically requests that volumes be salvaged as required, i.e. they are marked as requiring salvaging when attached
- manual initiation of salvaging may be required when access is through the `volserver` (this may be addressed at some later date)
- `bos salvage` requires the `-forceDAFS` flag to initiate salvaging with DAFS. However, **salvaging should not be initiated using this method**
- infinite salvage, attach, salvage, ... loops are possible; there is therefore a hard limit on the number of times a volume will be salvaged, which is reset when the volume is removed or the file-server is restarted
- volumes are salvaged in parallel; the degree of parallelism is controlled by the `-Parallel` argument to the `salvageserver` and defaults to 4
- the `salvageserver` and the `inode` file-server are incompatible:
  - because volumes are inter-mingled on a partition (rather than being separated), a lock for the entire partition on which the volume is located is held throughout salvaging. Both the `fileserver` and `volserver` will block if they require this lock, e.g. to restore / dump a volume located on the partition
  - inodes for a particular volume can be located anywhere on a partition; salvaging therefore requires reading **every** inode on the partition to determine whether it belongs to the volume. This is extremely I/O intensive and leads to horrendous salvaging performance
- `/usr/afs/bin/salvsync-debug` provides low-level inspection and control over the `salvageserver`. **Indiscriminate use of `salvsync-debug` can lead to extremely bad things occurring. Use with care.**
- see [[salvsync-debug|AFSLore/DemandAttach#salvsync_debug]] for information on debugging problems with the salvageserver
<a name="FSStateDat"></a>

### <a name="File-Server Host / Callback Stat"></a> File-Server Host / Callback State

Host / callback information is persistent across restarts with demand-attach. On shutdown, the file-server writes the data to `/usr/afs/local/fsstate.dat`. The contents of this file are read and verified at start-up, and hence it is unnecessary to break callbacks on shutdown with demand-attach.

The contents of `fsstate.dat` can be inspected using `/usr/afs/bin/state_analyzer`.
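A minimal sketch of such an inspection, assuming the state file is passed as the sole argument (the interactive `hdr`, `h` and `cb` commands are described below):

    /usr/afs/bin/state_analyzer /usr/afs/local/fsstate.dat
    fs state analyzer> hdr
    ...header fields displayed...
    fs state analyzer> quit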
## <a name="File-Server Arguments (relating"></a><a name="File-Server Arguments (relating "></a> File-Server Arguments (relating to Demand-Attach)

These are available in the man-pages (section 8) for the `fileserver`; some details are provided here for convenience.

Arguments controlling the host / callback state:
<table border="1" cellpadding="0" cellspacing="0">
<tr>
<th bgcolor="#99CCCC"><strong> Parameter </strong></th>
<th bgcolor="#99CCCC"><strong> Options </strong></th>
<th bgcolor="#99CCCC"><strong> Default </strong></th>
<th bgcolor="#99CCCC"><strong> Suggested Value </strong></th>
<th bgcolor="#99CCCC"><strong> Meaning </strong></th>
</tr>
<tr>
<td><code>fs-state-dont-save</code></td>
<td> </td>
<td> state saved </td>
<td> </td>
<td><code>fileserver</code> state will not be saved during shutdown </td>
</tr>
<tr>
<td><code>fs-state-dont-restore</code></td>
<td> </td>
<td> state restored </td>
<td> </td>
<td><code>fileserver</code> state will not be restored during startup </td>
</tr>
<tr>
<td><code>fs-state-verify</code></td>
<td> none | save | restore | both </td>
<td> both </td>
<td> </td>
<td> Controls the behavior of the state verification mechanism. Before saving or restoring the <code>fileserver</code> state information, the internal host and callback data structures are verified. A value of 'none' turns off all verification. A value of 'save' only performs the verification steps prior to saving state to disk. A value of 'restore' only performs the verification steps after restoring state from disk. A value of 'both' performs all verification steps both prior to saving and after restoring state. </td>
</tr>
</table>
Arguments controlling the [[VLRU|AFSLore/DemandAttach#VolumeLeastRecentlyUsed]]:
<table border="1" cellpadding="0" cellspacing="0">
<tr>
<th bgcolor="#99CCCC"><strong> Parameter </strong></th>
<th bgcolor="#99CCCC"><strong> Options </strong></th>
<th bgcolor="#99CCCC"><strong> Default </strong></th>
<th bgcolor="#99CCCC"><strong> Suggested Value </strong></th>
<th bgcolor="#99CCCC"><strong> Meaning </strong></th>
</tr>
<tr>
<td><code>vattachpar</code></td>
<td> positive integer </td>
<td> 1 </td>
<td> </td>
<td> Controls the parallelism of the volume package start-up and shutdown routines. On start-up, vice partitions are scanned for volumes to pre-attach using a number of worker threads, the number of which is the minimum of <code>vattachpar</code> and the number of vice partitions. On shutdown, <code>vattachpar</code> worker threads are used to detach volumes. The shutdown code is MP-scalable well beyond the number of vice partitions. Tom Keiser (from SNA) found that 128 threads on a single vice partition gave a statistically significant performance improvement over 64 threads. </td>
</tr>
<tr>
<td><code>vhashsize</code></td>
<td> positive integer </td>
<td> 8 </td>
<td> </td>
<td> Controls the size of the volume hash table, which will contain 2^(<code>vhashsize</code>) entries. Hash bucket utilization statistics are given in the <code>fileserver</code> state information as well as on shutdown. </td>
</tr>
<tr>
<td><code>vlrudisable</code></td>
<td> </td>
<td> VLRU enabled </td>
<td> </td>
<td> Disables the Volume Least Recently Used (VLRU) cache. </td>
</tr>
<tr>
<td><code>vlruthresh</code></td>
<td> positive integer </td>
<td> 120 minutes </td>
<td> 1440 (24 hrs) </td>
<td> Minutes of inactivity before a volume is eligible for soft detachment. </td>
</tr>
<tr>
<td><code>vlruinterval</code></td>
<td> positive integer </td>
<td> 120 seconds </td>
<td> </td>
<td> Number of seconds between VLRU candidate queue scans. </td>
</tr>
<tr>
<td><code>vlrumax</code></td>
<td> positive integer </td>
<td> 8 </td>
<td> </td>
<td> Maximum number of volumes which will be soft detached in a single pass of the scanner. </td>
</tr>
</table>
## <a name="Tools for Debugging Demand-Attac"></a> Tools for Debugging Demand-Attach File-Server

Several tools aid debugging problems with demand-attach file-servers. They operate at an extremely low level and hence require a detailed knowledge of the architecture / code.
### <a name="==fssync-debug=="></a> <code>fssync-debug</code>

**Indiscriminate use of `fssync-debug` can have extremely dire consequences. Use with care.**

`fssync-debug` provides low-level inspection and control over the volume package of the file-server. It can be used to display the file-server information associated with a volume, e.g.:
    ozsaw7 2# vos exam user.af -cell w.ln
    user.af 537119916 RW 2123478 K On-line
    RWrite 537119916 ROnly 0 Backup 537119917
    Creation Wed Sep 17 17:48:17 2003
    Copy Thu Dec 11 18:01:37 2008
    Backup Thu Jun 25 01:49:20 2009
    Last Update Thu Jun 25 16:17:35 2009
    85271 accesses in the past day (i.e., vnode references)
    RWrite: 537119916 Backup: 537119917
    server ln1qaf01 partition /vicepb RW Site
    ozsaw7 3# /usr/afs/bin/fssync-debug query -vol 537119916 -part /vicepb
    calling FSYNC_VolOp with command code 65543 (FSYNC_VOL_QUERY)
    FSYNC_VolOp returned 0 (SYNC_OK)
    protocol response code was 0 (SYNC_OK)
    protocol reason code was 0 (0)
    partition = 0xf90dfb8
    linkHandle = 0x10478400
    nextVnodeUnique = 2259017
    diskDataHandle = 0x104783d0
    updateTime = 1245943107
    vnodeIndex[vSmall] = {
    vnodeIndex[vLarge] = {
    updateTime = 1245943107
    attach_state = VOL_STATE_ATTACHED
    attach_flags = VOL_HDR_ATTACHED | VOL_HDR_LOADED | VOL_HDR_IN_LRU | VOL_IN_HASH | VOL_ON_VBYP_LIST | VOL_ON_VLRU
    hash_short_circuits = {
    last_attach = 1245891030
    last_get = 1245943107
    last_promote = 1245891030
    last_hdr_get = 1245943107
    last_hdr_load = 1245891030
    last_salvage = 1242508846
    last_salvage_req = 1242508846
    last_vol_op = 1245890958
    idx = 0 (VLRU_QUEUE_NEW)
Note that the `volumeid` argument must be the numeric ID and the `partition` argument must be the **exact** partition name (and not an abbreviation). An explanation of all these values is beyond the scope of this document. The important fields are:

- `attach_state`, which is usually one of
  - `VOL_STATE_PREATTACHED`, which means the volume headers have been read, but the volume is not attached
  - `VOL_STATE_ATTACHED`, which means the volume is fully attached
  - `VOL_STATE_ERROR`, which indicates that the volume cannot be attached
- `attach_flags`, which may include
  - `VOL_HDR_ATTACHED`, meaning the volume headers have been read (and hence the file-server is aware of the volume's existence)
  - `VOL_HDR_LOADED`, meaning the volume headers are resident in memory
  - `VOL_HDR_IN_LRU`, meaning the volume headers are on the least-recently used queue
  - `VOL_IN_HASH`, indicating that the volume has been added to the volume linked-list
  - `VOL_ON_VBYP_LIST`, indicating that the volume is linked off the partition list
  - `VOL_ON_VLRU`, meaning the volume is on a VLRU queue
- the `salvage` structure (detailed [[here|AFSLore/DemandAttach#salvsync_debug]])
- the `stats` structure, particularly the volume operation times (`last_*`)
- the `vlru` structure, particularly the VLRU queue
An understanding of the [volume finite-state machine](http://www.dementia.org/twiki//view/dafs-fsa.png) is required before manipulating the state of a volume.
### <a name="==salvsync-debug=="></a> <code>salvsync-debug</code>

**Indiscriminate use of `salvsync-debug` can have extremely dire consequences. Use with care.**

`salvsync-debug` provides low-level inspection and control of the salvageserver process, including the scheduling order of volumes.
`salvsync-debug` can be used to query the current salvage status of a volume, e.g.:
    ozsaw7 4# /usr/afs/bin/salvsync-debug query -vol 537119916 -part /vicepb
    calling SALVSYNC_SalvageVolume with command code 65540 (SALVSYNC_QUERY)
    SALVSYNC_SalvageVolume returned 0 (SYNC_OK)
    protocol response code was 0 (SYNC_OK)
    protocol reason code was 0 (**UNKNOWN**)
    state = 4 (SALVSYNC_STATE_DONE)
To initiate the salvaging of a volume:

    ozsaw7 5# /usr/afs/bin/salvsync-debug salvage -vol 537119916 -part /vicepb
    calling SALVSYNC_SalvageVolume with command code 65537 (SALVSYNC_SALVAGE)
    SALVSYNC_SalvageVolume returned 0 (SYNC_OK)
    protocol response code was 0 (SYNC_OK)
    protocol reason code was 0 (**UNKNOWN**)
    state = 1 (SALVSYNC_STATE_QUEUED)
A subsequent query shows the volume being salvaged:

    ozsaw7 6# /usr/afs/bin/salvsync-debug query -vol 537119916 -part /vicepb
    calling SALVSYNC_SalvageVolume with command code 65540 (SALVSYNC_QUERY)
    SALVSYNC_SalvageVolume returned 0 (SYNC_OK)
    protocol response code was 0 (SYNC_OK)
    protocol reason code was 0 (**UNKNOWN**)
    state = 2 (SALVSYNC_STATE_SALVAGING)
This is the method that should be used on demand-attach file-servers to initiate the manual salvage of volumes. It should be used with care.

Under normal circumstances, the priority (`prio`) of a salvage request is the number of times the volume has been requested by clients. **Modifying the priority (and hence the order in which volumes are salvaged) under heavy demand-salvaging usually leads to extremely bad things happening.** To modify the priority of a request, use:

    salvsync-debug priority -vol 537119916 -part /vicepb -priority 999999

(where the priority is a 32-bit integer).
### <a name="==state_analyzer=="></a> <code>state_analyzer</code>

`state_analyzer` allows the contents of the host / callback state file (`/usr/afs/local/fsstate.dat`) to be inspected.

#### <a name="Header Information"></a> Header Information

Header information is gleaned through the `hdr` command:
    fs state analyzer> hdr
    loading structure from address 0xfed80000 (offset 0)
    timestamp = "Tue Jun 23 11:51:49 2009"
    server_uuid = "002e9712-ae67-1a2e-8a-42-900e866eaa77"
    server_version_string = "@(#) OpenAFS 1.4.6-22 built 2009-04-18 "
#### <a name="Host Information"></a> Host Information

Host information can be gleaned through the `h` command, e.g.:
    fs state analyzer: h(0)> this
    loading structure from address 0xfed80500 (offset 1280)
    host_state_entry_header = {
    host = "161.144.167.187"
    LastCall = "Tue Jun 23 11:51:45 2009"
    ActiveCall = "Tue Jun 23 11:51:45 2009"
    cpsCall = "Tue Jun 23 11:51:45 2009"
    numberOfInterfaces = 2
    uuid = "aae8a851-1d54-4b83-ad-17-db967bd89e1b"
    addr = "161.144.167.187"
    fs state analyzer: h(0)> next
    loading structure from address 0xfed80568 (offset 1384)
    host_state_entry_header = {
    host = "10.181.34.134"
    LastCall = "Tue Jun 23 11:51:08 2009"
    ActiveCall = "Tue Jun 23 11:51:08 2009"
    cpsCall = "Tue Jun 23 11:51:08 2009"
    numberOfInterfaces = 4
    uuid = "00107e94-794d-1a3d-ae-e2-0ab52421aa77"
    addr = "10.181.36.33"
    addr = "10.181.36.31"
    addr = "10.181.32.134"
    addr = "10.181.34.134"
    fs state analyzer: h(1)>
#### <a name="Callback Information"></a> Callback Information

Callback information is available through the `cb` command, e.g.:
    fs state analyzer> cb
    fs state analyzer: fe(0):cb(0)> dump
    loading structure from address 0xfed97b6c (offset 97132)
The `dump` command (as opposed to `this`) displays all call-backs for the current file-entry. Moving to the next file-entry is achieved with:
    fs state analyzer: fe(0):cb(0)> quit
    fs state analyzer: fe(0)> next
    loading structure from address 0xfed97b90 (offset 97168)
    callback_state_entry_header = {
    fs state analyzer: fe(1)> cb
    fs state analyzer: fe(1):cb(0)> dump
    loading structure from address 0xfed97bd4 (offset 97236)