From: Andrew Deason
Date: Sat, 2 Mar 2019 21:58:00 +0000 (-0600)
Subject: afs: Cleanup state on rxfs_*Init errors
X-Git-Tag: openafs-devel-1_9_0~332
X-Git-Url: https://git.openafs.org/?p=openafs.git;a=commitdiff_plain;h=11cc0a3c4e0d76f1650596bd1568f01367ab5be2

afs: Cleanup state on rxfs_*Init errors

Currently, rxfs_storeInit and rxfs_fetchInit return early if they
encounter an error while starting the relevant fetch/store RPC (e.g.
StartRXAFS_FetchData64). In this scenario, they osi_FreeSmallSpace
their rock before returning, but they never go through their destructor
(rxfs_storeDestroy/rxfs_fetchDestroy) to free the contents of the rock,
leaking any resources inside that have already been initialized.

The only thing that could have been initialized by this point is
v->call, so hitting this condition means we leak an Rx call, and it
means we can report the wrong error code (since we never go through
rx_EndCall, we never look at the call's abort code). For
rxfs_fetchInit, most code paths call rx_EndCall explicitly, except for
the path where StartRXAFS_FetchData64 itself fails.

For both fetches and stores, this condition is difficult to hit,
because it requires that the StartRXAFS_* call fail before we have sent
or received any data on the wire. However, it can be hit if the call is
already aborted before we use it, which can happen if the underlying
connection has already received a connection-level abort.

Before commit 0835d7c2 ("afs: make sure to call afs_Analyze after
afs_Conn"), this was most easily hit by trying to fetch data with a bad
security object (for example, with expired credentials). After the
first fetch failed due to a connection abort (e.g. RXKADEXPIRED),
afs_GetDCache would retry the fetch with the same connection, and
StartRXAFS_FetchData64 would fail because the connection and call were
already aborted. In this case, we'd leak the Rx call, and we would
throw an RXGEN_CC_MARSHAL error (-450) instead of the correct
RXKADEXPIRED error. This causes libafs to report the target server as
unreachable, due to the negative error code.

With commit 0835d7c2, this doesn't happen, because we call afs_Analyze
before retrying the fetch, which detects the invalid credentials and
forces the creation of a new connection object. However, this situation
should still be possible if a different call on the same connection
triggered a connection-level abort before we called
StartRXAFS_FetchData64.

To fix this and ensure that we don't leak Rx calls, explicitly call
rxfs_storeDestroy/rxfs_fetchDestroy in this error case, before
returning from rxfs_storeInit/rxfs_fetchInit.

Thanks to yadayada@in.ibm.com for reporting a related issue and
providing analysis.
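Routing the error through the destructor matters because the destructor
is where the Rx call is ended. A minimal sketch of that cleanup path is
below; it is illustrative only, not a verbatim copy of the OpenAFS
source, and the struct name and any members beyond v->call and
v->tbuffer are assumptions:

    static afs_int32
    rxfs_fetchDestroy(void **r, afs_int32 error)
    {
        afs_int32 code = error;
        struct rxfs_fetchVariables *v = *r;  /* struct name is an assumption */

        *r = NULL;
        if (v->call) {
            /* rx_EndCall releases the call (so it is no longer leaked) and
             * returns the call's error/abort code (e.g. RXKADEXPIRED), which
             * we prefer over the generic error (e.g. RXGEN_CC_MARSHAL) that
             * the failed StartRXAFS_* call gave us. */
            RX_AFS_GUNLOCK();
            code = rx_EndCall(v->call, error);
            RX_AFS_GLOCK();
            if (code == 0)
                code = error;
        }
        if (v->tbuffer)
            osi_FreeLargeSpace(v->tbuffer);
        osi_FreeSmallSpace(v);
        return code;
    }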
Change-Id: I15e02f8c9e620c5861e3dcb03c42510528ce9a60
Reviewed-on: https://gerrit.openafs.org/13510
Tested-by: BuildBot
Reviewed-by: Benjamin Kaduk
---

diff --git a/src/afs/afs_fetchstore.c b/src/afs/afs_fetchstore.c
index ed25a4d..72b82ec 100644
--- a/src/afs/afs_fetchstore.c
+++ b/src/afs/afs_fetchstore.c
@@ -400,8 +400,8 @@ rxfs_storeInit(struct vcache *avc, struct afs_conn *tc,
 	code = -1;
     RX_AFS_GLOCK();
     if (code) {
-	osi_FreeSmallSpace(v);
-	return code;
+	*rock = v;
+	return rxfs_storeDestroy(rock, code);
     }
     if (cacheDiskType == AFS_FCACHE_TYPE_UFS) {
 	v->tbuffer = osi_AllocLargeSpace(AFS_LRALLOCSIZ);
@@ -1067,8 +1067,8 @@ rxfs_fetchInit(struct afs_conn *tc, struct rx_connection *rxconn,
     }
 
     if (code) {
-	osi_FreeSmallSpace(v);
-	return code;
+	*rock = v;
+	return rxfs_fetchDestroy(rock, code);
     }
     if (cacheDiskType == AFS_FCACHE_TYPE_UFS) {
 	v->tbuffer = osi_AllocLargeSpace(AFS_LRALLOCSIZ);
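For context, with this change the failure path at the start of
rxfs_storeInit reads roughly as follows. Only the two changed lines
come verbatim from the patch above; the surrounding lines are an
approximation of the code in afs_fetchstore.c, with the StartRXAFS_*
arguments elided:

    RX_AFS_GUNLOCK();
    if (v->call)
        code = StartRXAFS_StoreData64(...);   /* arguments elided */
    else
        code = -1;
    RX_AFS_GLOCK();
    if (code) {
        /* Hand the partially initialized rock to the destructor instead of
         * freeing it directly: rxfs_storeDestroy ends v->call (so it is not
         * leaked) and reports the call's abort code rather than the generic
         * start failure. */
        *rock = v;
        return rxfs_storeDestroy(rock, code);
    }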