## <a name="Demand-Attach File-Server (DAFS)"></a> Demand-Attach File-Server (DAFS)

OpenAFS 1.5 contains the Demand-Attach File-Server (DAFS). DAFS is a significant departure from the more _traditional_ AFS file-server, and this document details those changes.
## <a name="Why Demand-Attach File-Server (D"></a> Why Demand-Attach File-Server (DAFS)?

On a traditional file-server, volumes are attached at start-up and detached only at shutdown. Any attached volume can be modified, and changes are periodically flushed to disk, or flushed on shutdown. When a file-server isn't shut down cleanly, the integrity of every attached volume has to be verified by the salvager, whether or not the volume had been modified. As file-servers grow larger (and the number of volumes increases), the length of time required to salvage and attach volumes increases; e.g. it takes around two hours for a file-server housing 512GB of data to salvage and attach its volumes!

On a Demand-Attach File-Server (DAFS), volumes are attached only when accessed by clients. On start-up, the file-server reads only the volume headers to determine which volumes reside on which partitions. Volumes are attached when clients access them, and after some period of inactivity they are automatically detached. This dramatically improves start-up and shutdown times: a demand-attach file-server can be restarted in seconds, compared to hours for the equivalent traditional file-server.

The primary objective of the demand-attach file-server was to dramatically reduce the amount of time required to restart an AFS file-server.

Large portions of this document were taken from / influenced by the presentation entitled [Demand Attach / Fast-Restart Fileserver](http://workshop.openafs.org/afsbpw06/talks/tkeiser-dafs.pdf) given by Tom Keiser at the [AFS and Kerberos Best Practices Workshop](http://workshop.openafs.org/) in [2006](http://workshop.openafs.org/afsbpw06/).
## <a name="An Overview of Demand-Attach Fil"></a> An Overview of Demand-Attach File-Server

Demand-attach necessitated a significant re-design of certain aspects of the AFS code, driven by the following limitations:

- the volume package has a number of severe limitations
  - a single global lock, leading to poor scaling
  - the lock is held across high-latency operations, e.g. disk I/O
  - no notion of state for concurrently accessed objects
- the vnode package suffers from the same limitations
- breaking callbacks is time consuming
- salvaging requires the file-server to be taken offline
The changes implemented for demand-attach include:

- [[volume finite-state automata|AFSLore/DemandAttach#Volume_Finite_State_Automata]]
  - volumes are attached on demand
  - a volume _garbage collector_ detaches unused volumes
  - the notion of volume state means read-only volumes aren't salvaged
- [[vnode finite-state automata|AFSLore/DemandAttach#Vnode_Finite_State_Automata]]
- the global lock is only held when required and never held across high-latency operations
- automatic salvaging of volumes
- shutdown is done in parallel (maximum number of threads utilized)
- callbacks are no longer broken on shutdown
  - instead, host / callback state is preserved across restarts
## <a name="The Gory Details of the Demand-A"></a> The Gory Details of the Demand-Attach File-Server

### <a name="Bos Configuration"></a> Bos Configuration

A traditional file-server uses the `bnode` type `fs` and has a definition similar to:

```
bnode fs fs 1
parm /usr/afs/bin/fileserver -p 123 -pctspare 20 -L -busyat 200 -rxpck 2000 -rxbind
parm /usr/afs/bin/volserver -p 127 -log -rxbind
parm /usr/afs/bin/salvager -parallel all32
end
```
Since the demand-attach file-server requires an additional component (the `salvageserver`), a new `bnode` type, `dafs`, was introduced. The definition should be similar to:

```
bnode dafs dafs 1
parm /usr/afs/bin/fileserver -p 123 -pctspare 20 -L -busyat 50 -rxpck 2000 -rxbind -cb 4000000 -vattachpar 128 -vlruthresh 1440 -vlrumax 8 -vhashsize 11
parm /usr/afs/bin/volserver -p 64 -log -rxbind
parm /usr/afs/bin/salvageserver
parm /usr/afs/bin/salvager -parallel all32
end
```

The `bnode` instance for a demand-attach file-server is therefore `dafs` instead of `fs`.
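A new instance can then be created with `bos create`. The following is a hedged sketch (the hostname and the trimmed-down daemon arguments are illustrative only); the `dafs` bnode takes four command lines, in order: `fileserver`, `volserver`, `salvageserver` and `salvager`:

```sh
# Illustrative only: substitute your own server name and daemon arguments.
bos create -server fs1.example.com -instance dafs -type dafs \
    -cmd "/usr/afs/bin/fileserver -p 123 -L -busyat 50" \
         "/usr/afs/bin/volserver -p 64 -log" \
         "/usr/afs/bin/salvageserver" \
         "/usr/afs/bin/salvager -parallel all32" \
    -localauth
```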
### <a name="File-server Start-up / Shutdown"></a> File-server Start-up / Shutdown Sequence

The table below compares the start-up sequence for a traditional file-server and a demand-attach file-server.

| Traditional | Demand-Attach |
| ----------- | ------------- |
| | host / callback state restored |
| | host / callback state consistency verified |
| build vice partition list | build vice partition list |
| volumes are attached | volume headers read |
| | volumes placed into _pre-attached_ state |

The [[host / callback state|AFSLore/DemandAttach#FSStateDat]] is covered later. The _pre-attached_ state indicates that the file-server has read the volume headers and is aware that the volume exists, but that the volume has not been attached (and hence is not on-line).
The shutdown sequence for both file-server types is:

| Traditional | Demand-Attach |
| ----------- | ------------- |
| break callbacks | quiesce host / callback state |
| shutdown volumes | shutdown on-line volumes |
| | verify host / callback state consistency |
| | save host / callback state |

On a traditional file-server, volumes are off-lined (detached) serially. With demand-attach, as many threads as possible are used to detach volumes, which is possible because each volume has an associated state.
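The difference is easy to measure. A hedged sketch (server and instance names are illustrative) timing a clean shutdown:

```sh
# With demand-attach, volumes are detached in parallel, so this typically
# completes in seconds; a traditional fs instance detaches serially and
# can take far longer on large servers.
time bos shutdown -server fs1.example.com -instance dafs -wait -localauth
```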
### <a name="Volume Finite-State Automata"></a> Volume Finite-State Automata

The volume finite-state automaton is described in the source tree under `doc/arch/dafs-fsa.dot`. See [[fssync-debug|AFSLore/DemandAttach#fssync_debug]] for information on debugging the volume package.
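The `.dot` description can be rendered into a viewable diagram with Graphviz (assuming Graphviz is installed):

```sh
# Render the volume finite-state automaton shipped in the source tree.
dot -Tpng doc/arch/dafs-fsa.dot -o dafs-fsa.png
```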
<a name="VolumeLeastRecentlyUsed"></a>

### <a name="Volume Least Recently Used (VLRU"></a> Volume Least Recently Used (VLRU) Queues

The Volume Least Recently Used (VLRU) is a garbage-collection facility which automatically off-lines volumes in the background. The purpose of this facility is to pro-actively off-line infrequently used volumes to improve shutdown and salvage times. The process of off-lining a volume from the _attached_ state to the _pre-attached_ state is called soft detachment.

VLRU works in a manner similar to a generational garbage collector. There are five queues on which volumes can reside:
| Queue | Meaning |
| ----- | ------- |
| candidate | Volumes which have not been accessed recently and hence are candidates for soft detachment. |
| held | Volumes which are administratively prevented from VLRU activity, i.e. will never be detached. |
| intermediate (mid) | Volumes transitioning from new -> old (see [[state transitions|AFSLore/DemandAttach#VLRUStateTransitions]] for details). |
| new | Volumes which have been accessed. See [[state transitions|AFSLore/DemandAttach#VLRUStateTransitions]] for details. |
| old | Volumes which are continually accessed. See [[state transitions|AFSLore/DemandAttach#VLRUStateTransitions]] for details. |

The state of the various VLRU queues is dumped with the file-server state and at shutdown.
<a name="VLRUStateTransitions"></a> The VLRU queues new, mid (intermediate) and old are generational queues for active volumes. State transitions are controlled by inactivity timers, as follows (timeouts are multiples of the `vlruthresh` parameter, which is expressed in minutes):

| Transition | Timeout (minutes) | Reason (since last transition) |
| ---------- | ----------------- | ------------------------------ |
| candidate->new | - | new activity |
| new->candidate | 1 * vlruthresh | no activity |
| new->mid | 2 * vlruthresh | activity |
| mid->old | 4 * vlruthresh | activity |
| old->mid | 2 * vlruthresh | no activity |
| mid->new | 1 * vlruthresh | no activity |
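As a worked example, with the suggested `-vlruthresh 1440` (24 hours; the default is 120 minutes) the inactivity timeouts become:

```sh
# Timeouts are simple multiples of vlruthresh (expressed in minutes).
vlruthresh=1440
echo "new -> candidate: $((1 * vlruthresh)) min"   # 1440 min = 24 h
echo "new -> mid:       $((2 * vlruthresh)) min"   # 2880 min = 48 h
echo "mid -> old:       $((4 * vlruthresh)) min"   # 5760 min = 96 h
```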
The suggested `vlruthresh` of 1440 minutes has been optimized for RO file-servers, where volumes are typically accessed once a day and soft detaching has little benefit (RO volumes are not salvaged, which is one of the main reasons for soft detaching).
### <a name="Vnode Finite-State Automata"></a> Vnode Finite-State Automata

The vnode finite-state automaton is described in the source tree under `doc/arch/dafs-vnode-fsa.dot`.

`/usr/afs/bin/fssync-debug` provides low-level inspection and control of the file-server volume package. **Indiscriminate use of `fssync-debug` can lead to extremely bad things occurring. Use with care.**
<a name="SalvageServer"></a>

### <a name="Demand Salvaging"></a> Demand Salvaging

Demand salvaging is implemented by the `salvageserver`. The actual code for salvaging a volume remains largely unchanged; however, the method for invoking salvaging with demand-attach has changed:

- the file-server automatically requests that volumes be salvaged as required, i.e. they are marked as requiring salvaging when attached
- manual initiation of salvaging may be required when access is through the `volserver` (this may be addressed at some later date)
- `bos salvage` requires the `-forceDAFS` flag to initiate salvaging with DAFS. However, **salvaging should not be initiated using this method**
- infinite salvage / attach / salvage loops are possible. There is therefore a hard limit on the number of times a volume will be salvaged, which is reset when the volume is removed or the file-server is restarted
- volumes are salvaged in parallel; the degree of parallelism is controlled by the `-parallel` argument to the `salvageserver` and defaults to 4 (see the sketch after this list)
- the `salvageserver` and the `inode` file-server are incompatible:
  - because volumes are inter-mingled on a partition (rather than being separated), a lock for the entire partition on which the volume is located is held throughout. Both the `fileserver` and `volserver` will block if they require this lock, e.g. to restore / dump a volume located on the partition
  - inodes for a particular volume can be located anywhere on a partition. Salvaging therefore results in **every** inode on a partition having to be read to determine whether it belongs to the volume. This is extremely I/O intensive and leads to horrendous salvaging performance
- `/usr/afs/bin/salvsync-debug` provides low-level inspection and control over the `salvageserver`. **Indiscriminate use of `salvsync-debug` can lead to extremely bad things occurring. Use with care.**
- see [[salvsync-debug|AFSLore/DemandAttach#salvsync_debug]] for information on debugging problems with the salvageserver
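As a sketch, the degree of demand-salvaging parallelism could be raised by amending the `salvageserver` line of the `dafs` bnode; the flag name follows the `salvageserver` man page, so verify it against your build:

```
parm /usr/afs/bin/salvageserver -parallel 8
```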
<a name="FSStateDat"></a>

### <a name="File-Server Host / Callback Stat"></a> File-Server Host / Callback State

With demand-attach, host / callback information is persistent across restarts. On shutdown, the file-server writes the data to `/usr/afs/local/fsstate.dat`. The contents of this file are read and verified at start-up, and hence it is unnecessary to break callbacks on shutdown.

The contents of `fsstate.dat` can be inspected using `/usr/afs/bin/state_analyzer`.
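A minimal invocation sketch, assuming `state_analyzer` accepts the state file as its argument (sample sessions appear in the `state_analyzer` section below):

```sh
/usr/afs/bin/state_analyzer /usr/afs/local/fsstate.dat
```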
## <a name="File-Server Arguments (relating"></a> File-Server Arguments (relating to Demand-Attach)

These are documented in the man pages (section 8) for the `fileserver`; some details are provided here for convenience.

Arguments controlling the host / callback state:
| Parameter | Options | Default | Suggested Value | Meaning |
| --------- | ------- | ------- | --------------- | ------- |
| `fs-state-dont-save` | | state saved | | `fileserver` state will not be saved during shutdown |
| `fs-state-dont-restore` | | state restored | | `fileserver` state will not be restored during start-up |
| `fs-state-verify` | none, save, restore, both | both | | Controls the behavior of the state verification mechanism. Before saving or restoring the `fileserver` state information, the internal host and callback data structures are verified. A value of `none` turns off all verification. A value of `save` only performs the verification steps prior to saving state to disk. A value of `restore` only performs the verification steps after restoring state from disk. A value of `both` performs all verification steps both prior to saving and after restoring state. |
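For illustration, these options are simply appended to the `fileserver` line of the bnode definition; a sketch with other arguments elided:

```
parm /usr/afs/bin/fileserver -p 123 -L -busyat 50 -fs-state-verify both
```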
Arguments controlling the [[VLRU|AFSLore/DemandAttach#VolumeLeastRecentlyUsed]]:
| Parameter | Options | Default | Suggested Value | Meaning |
| --------- | ------- | ------- | --------------- | ------- |
| `vattachpar` | positive integer | 1 | 128 | Controls the parallelism of the volume package start-up and shutdown routines. On start-up, vice partitions are scanned for volumes to pre-attach using a number of worker threads (the minimum of `vattachpar` and the number of vice partitions). On shutdown, `vattachpar` worker threads are used to detach volumes; the shutdown code is mp-scaleable well beyond the number of vice partitions. Tom Keiser (from SNA) found that 128 threads for a single vice partition gave a statistically significant performance improvement over 64 threads. |
| `vhashsize` | positive integer | 8 | 11 | Controls the size of the volume hash table, which contains 2^`vhashsize` entries. Hash bucket utilization statistics are given in the `fileserver` state information as well as on shutdown. |
| `vlrudisable` | | VLRU enabled | | Disables the Volume Least Recently Used (VLRU) cache. |
| `vlruthresh` | positive integer | 120 minutes | 1440 (24 hrs) | Minutes of inactivity before a volume is eligible for soft detachment. |
| `vlruinterval` | positive integer | 120 seconds | | Number of seconds between VLRU candidate queue scans. |
| `vlrumax` | positive integer | 8 | | Maximum number of volumes which will be soft detached in a single pass of the scanner. |
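For example, the suggested `vhashsize` of 11 gives a volume hash table of 2^11 entries:

```sh
echo $((2 ** 11))   # => 2048 hash buckets
```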
## <a name="Tools for Debugging Demand-Attac"></a> Tools for Debugging Demand-Attach File-Server

Several tools aid debugging problems with demand-attach file-servers. They operate at an extremely low level and hence require a detailed knowledge of the architecture / code.

### <a name="==fssync-debug=="></a> `fssync-debug`

**Indiscriminate use of `fssync-debug` can have extremely dire consequences. Use with care.**

`fssync-debug` provides low-level inspection and control over the volume package of the file-server. It can be used to display the file-server information associated with a volume, e.g.:
```
ozsaw7 2# vos exam user.af -cell w.ln
user.af 537119916 RW 2123478 K On-line
RWrite 537119916 ROnly 0 Backup 537119917
Creation Wed Sep 17 17:48:17 2003
Copy Thu Dec 11 18:01:37 2008
Backup Thu Jun 25 01:49:20 2009
Last Update Thu Jun 25 16:17:35 2009
85271 accesses in the past day (i.e., vnode references)
RWrite: 537119916 Backup: 537119917
server ln1qaf01 partition /vicepb RW Site
ozsaw7 3# /usr/afs/bin/fssync-debug query -vol 537119916 -part /vicepb
calling FSYNC_VolOp with command code 65543 (FSYNC_VOL_QUERY)
FSYNC_VolOp returned 0 (SYNC_OK)
protocol response code was 0 (SYNC_OK)
protocol reason code was 0 (0)
partition = 0xf90dfb8
linkHandle = 0x10478400
nextVnodeUnique = 2259017
diskDataHandle = 0x104783d0
updateTime = 1245943107
vnodeIndex[vSmall] = {
vnodeIndex[vLarge] = {
updateTime = 1245943107
attach_state = VOL_STATE_ATTACHED
attach_flags = VOL_HDR_ATTACHED | VOL_HDR_LOADED | VOL_HDR_IN_LRU | VOL_IN_HASH | VOL_ON_VBYP_LIST | VOL_ON_VLRU
hash_short_circuits = {
last_attach = 1245891030
last_get = 1245943107
last_promote = 1245891030
last_hdr_get = 1245943107
last_hdr_load = 1245891030
last_salvage = 1242508846
last_salvage_req = 1242508846
last_vol_op = 1245890958
idx = 0 (VLRU_QUEUE_NEW)
```
Note that the `volumeid` argument must be the numeric ID and the `partition` argument must be the **exact** partition name (and not an abbreviation). An explanation of all these values is beyond the scope of this document. The important fields are:

- `attach_state`, which is usually one of
  - `VOL_STATE_PREATTACHED`, which means the volume headers have been read but the volume is not attached
  - `VOL_STATE_ATTACHED`, which means the volume is fully attached
  - `VOL_STATE_ERROR`, which indicates that the volume cannot be attached
- `attach_flags`, a combination of flags including
  - `VOL_HDR_ATTACHED`, which means the volume headers have been read (and hence the file-server is aware of the volume's existence)
  - `VOL_HDR_LOADED`, which means the volume headers are resident in memory
  - `VOL_HDR_IN_LRU`, which means the volume headers are on the least-recently used queue
  - `VOL_IN_HASH`, which indicates that the volume has been added to the volume linked-list
  - `VOL_ON_VBYP_LIST`, which indicates that the volume is linked off the partition list
  - `VOL_ON_VLRU`, which means the volume is on a VLRU queue
- the `salvage` structure (detailed [[here|AFSLore/DemandAttach#salvsync_debug]])
- the `stats` structure, particularly the volume operation times (`last_*`)
- the `vlru` structure, particularly the VLRU queue
An understanding of the [volume finite-state machine](http://www.dementia.org/twiki//view/dafs-fsa.png) is required before manipulating the state of a volume.
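For completeness, a heavily hedged sketch of such manipulation; the subcommand and argument names below are assumptions based on common `fssync-debug` builds, so check the command help on your version before relying on them:

```sh
# Force a volume offline, then bring it back online (dangerous on a
# production server; see the warning at the top of this section).
/usr/afs/bin/fssync-debug offline -volumeid 537119916 -partition /vicepb
/usr/afs/bin/fssync-debug online  -volumeid 537119916 -partition /vicepb
```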
### <a name="==salvsync-debug=="></a> `salvsync-debug`

**Indiscriminate use of `salvsync-debug` can have extremely dire consequences. Use with care.**

`salvsync-debug` provides low-level inspection and control of the salvageserver process, including the scheduling order of volumes.

`salvsync-debug` can be used to query the current salvage status of a volume, e.g.:
```
ozsaw7 4# /usr/afs/bin/salvsync-debug query -vol 537119916 -part /vicepb
calling SALVSYNC_SalvageVolume with command code 65540 (SALVSYNC_QUERY)
SALVSYNC_SalvageVolume returned 0 (SYNC_OK)
protocol response code was 0 (SYNC_OK)
protocol reason code was 0 (**UNKNOWN**)
state = 4 (SALVSYNC_STATE_DONE)
```
To initiate the salvaging of a volume:
```
ozsaw7 5# /usr/afs/bin/salvsync-debug salvage -vol 537119916 -part /vicepb
calling SALVSYNC_SalvageVolume with command code 65537 (SALVSYNC_SALVAGE)
SALVSYNC_SalvageVolume returned 0 (SYNC_OK)
protocol response code was 0 (SYNC_OK)
protocol reason code was 0 (**UNKNOWN**)
state = 1 (SALVSYNC_STATE_QUEUED)
```
Querying the volume again shows the salvage in progress:

```
ozsaw7 6# /usr/afs/bin/salvsync-debug query -vol 537119916 -part /vicepb
calling SALVSYNC_SalvageVolume with command code 65540 (SALVSYNC_QUERY)
SALVSYNC_SalvageVolume returned 0 (SYNC_OK)
protocol response code was 0 (SYNC_OK)
protocol reason code was 0 (**UNKNOWN**)
state = 2 (SALVSYNC_STATE_SALVAGING)
```
This is the method that should be used on demand-attach file-servers to initiate the manual salvage of volumes. It should be used with care.

Under normal circumstances, the priority (`prio`) of a salvage request is the number of times the volume has been requested by clients. **Modifying the priority (and hence the order in which volumes are salvaged) under heavy demand-salvaging usually leads to extremely bad things happening.** To modify the priority of a request, use:

```
salvsync-debug priority -vol 537119916 -part /vicepb -priority 999999
```

(where `-priority` is a 32-bit integer).
### <a name="==state_analyzer=="></a> `state_analyzer`

`state_analyzer` allows the contents of the host / callback state file (`/usr/afs/local/fsstate.dat`) to be inspected.

#### <a name="Header Information"></a> Header Information

Header information is gleaned through the `hdr` command:
```
fs state analyzer> hdr
loading structure from address 0xfed80000 (offset 0)
timestamp = "Tue Jun 23 11:51:49 2009"
server_uuid = "002e9712-ae67-1a2e-8a-42-900e866eaa77"
server_version_string = "@(#) OpenAFS 1.4.6-22 built 2009-04-18 "
```
#### <a name="Host Information"></a> Host Information

Host information can be gleaned through the `h` command, e.g.:
```
fs state analyzer: h(0)> this
loading structure from address 0xfed80500 (offset 1280)
host_state_entry_header = {
host = "161.144.167.187"
LastCall = "Tue Jun 23 11:51:45 2009"
ActiveCall = "Tue Jun 23 11:51:45 2009"
cpsCall = "Tue Jun 23 11:51:45 2009"
numberOfInterfaces = 2
uuid = "aae8a851-1d54-4b83-ad-17-db967bd89e1b"
addr = "161.144.167.187"
fs state analyzer: h(0)> next
loading structure from address 0xfed80568 (offset 1384)
host_state_entry_header = {
host = "10.181.34.134"
LastCall = "Tue Jun 23 11:51:08 2009"
ActiveCall = "Tue Jun 23 11:51:08 2009"
cpsCall = "Tue Jun 23 11:51:08 2009"
numberOfInterfaces = 4
uuid = "00107e94-794d-1a3d-ae-e2-0ab52421aa77"
addr = "10.181.36.33"
addr = "10.181.36.31"
addr = "10.181.32.134"
addr = "10.181.34.134"
fs state analyzer: h(1)>
```
#### <a name="Callback Information"></a> Callback Information

Callback information is available through the `cb` command, e.g.:
```
fs state analyzer> cb
fs state analyzer: fe(0):cb(0)> dump
loading structure from address 0xfed97b6c (offset 97132)
```
The `dump` command (as opposed to `this`) displays all callbacks for the current file-entry. Moving to the next file-entry is achieved by:
```
fs state analyzer: fe(0):cb(0)> quit
fs state analyzer: fe(0)> next
loading structure from address 0xfed97b90 (offset 97168)
callback_state_entry_header = {
fs state analyzer: fe(1)> cb
fs state analyzer: fe(1):cb(0)> dump
loading structure from address 0xfed97bd4 (offset 97236)
```