path: root/net/sunrpc/svc_xprt.c
2024-09-23  SUNRPC: replace program list with program array  (NeilBrown)

A service created with svc_create_pooled() can be given a linked list of programs and all of these will be served. Using a linked list makes it cumbersome when there are several programs that can be optionally selected with CONFIG settings. After this patch is applied, API consumers must use only svc_create_pooled() when creating an RPC service that listens for more than one RPC program.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Acked-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
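As a rough illustration of the difference (the struct and field names below are simplified stand-ins, not the real struct svc_program or struct svc_serv layout), lookup against an array becomes a bounded loop over a counted table rather than a walk of a next-pointer chain:

    /* Simplified sketch only; not the actual SUNRPC data structures. */
    struct prog {
            unsigned int    prognum;
            const char      *name;
    };

    struct service {
            unsigned int    nprogs;   /* counted array replaces the old linked list */
            struct prog     **progs;
    };

    static struct prog *service_find_prog(struct service *sv, unsigned int prognum)
    {
            unsigned int i;

            for (i = 0; i < sv->nprogs; i++)
                    if (sv->progs[i]->prognum == prognum)
                            return sv->progs[i];
            return NULL;
    }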
2024-09-01  SUNRPC: make various functions static, or not exported.  (NeilBrown)

Various functions are only used within the sunrpc module, and several are only used in one file. So clean up.

These are marked static, and any EXPORT is removed:
  svc_rpcb_setup()
  svc_rqst_alloc()
  svc_rqst_free()   - also moved before first use
  svc_rpcbind_set_version()
  svc_drop()        - also moved to svc.c

These are now not EXPORTed, but are not static:
  svc_authenticate()
  svc_sock_update_bufs()

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-07-08  SUNRPC: Add a trace point in svc_xprt_deferred_close  (Chuck Lever)

The trace point in svc_xprt_close() reports only some local close requests. Try to capture more local close requests. Note that "trace-cmd record -T -e sunrpc:svc_xprt_close" will neatly capture the identity of the caller requesting the close.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-06-25Revert "nfsd: fix oops when reading pool_stats before server is started"NeilBrown
This reverts commit 8e948c365d9c10b685d1deb946bd833d6a9b43e0. The reverted commit moves a test on a field protected by a mutex outside of the protection of that mutex, and so is obviously racey. Depending on how the race goes, si->serv might be NULL when dereferenced in svc_pool_stats_start(), or svc_pool_stats_stop() might unlock a mutex that hadn't been locked. This bug that the commit tried to fix has been addressed by initialising ->mutex earlier. Fixes: 8e948c365d9c ("nfsd: fix oops when reading pool_stats before server is started") Signed-off-by: NeilBrown <neilb@suse.de> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-06-17  nfsd: fix oops when reading pool_stats before server is started  (Jeff Layton)

Sourabh reported an oops that is triggerable by trying to read the pool_stats procfile before nfsd had been started. Move the check for a NULL serv in svc_pool_stats_start above the mutex acquisition, and fix the stop routine not to unlock the mutex if there is no serv yet.

Fixes: 7b207ccd9833 ("svc: don't hold reference for poolstats, only mutex.")
Reported-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-05-06  SUNRPC: Remove comment for sp_lock  (Guoqing Jiang)

It is obsolete since sp_lock was discarded in commit 580a25756a9f ("SUNRPC: discard sp_lock").

Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Reviewed-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-05-06  SUNRPC: add a new svc_find_listener helper  (Jeff Layton)

svc_find_listener will return the transport instance pointer for the endpoint accepting connections/peer traffic from the specified transport class and matching sockaddr.

Signed-off-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-05-06  SUNRPC: introduce svc_xprt_create_from_sa utility routine  (Lorenzo Bianconi)

Add the svc_xprt_create_from_sa utility routine and refactor the svc_xprt_create() code in order to introduce the capability to create a svc port from a socket address.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Tested-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07  svc: don't hold reference for poolstats, only mutex.  (NeilBrown)

A future patch will remove refcounting on svc_serv as it is of little use. It is currently used to keep the svc around while the pool_stats file is open. Change this to get the pointer, protected by the mutex, only in seq_start, and then release the mutex in seq_stop.

This means that if the nfsd server is stopped and restarted while the pool_stats file is open, then some pool stats info could be from the first instance and some from the second. This might appear odd, but is unlikely to be a problem in practice.

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
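The ->start/->stop pairing this describes relies on the seq_file rule that ->stop() runs even when ->start() returned NULL, so taking the mutex in start and dropping it in stop is balanced. A minimal sketch of that pattern (the struct names and the pos_to_pool() helper are hypothetical placeholders, not the svc code):

    #include <linux/seq_file.h>
    #include <linux/mutex.h>

    struct stats_info {
            struct mutex    mutex;  /* serialises server start/stop against readers */
            struct serv     *serv;  /* NULL until the server is started */
    };

    static void *pool_stats_start(struct seq_file *m, loff_t *pos)
    {
            struct stats_info *si = m->private;

            mutex_lock(&si->mutex);                 /* held for the whole read pass */
            if (!si->serv)
                    return NULL;                    /* ->stop() still runs and unlocks */
            return pos_to_pool(si->serv, *pos);     /* hypothetical lookup helper */
    }

    static void pool_stats_stop(struct seq_file *m, void *p)
    {
            struct stats_info *si = m->private;

            mutex_unlock(&si->mutex);               /* unconditional: start always locked */
    }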
2023-12-18  SUNRPC: Revert 5f7fc5d69f6e92ec0b38774c387f5cf7812c5806  (Chuck Lever)

Guillaume says:
> I believe commit 5f7fc5d69f6e ("SUNRPC: Resupply rq_pages from
> node-local memory") in Linux 6.5+ is incorrect. It passes
> unconditionally rq_pool->sp_id as the NUMA node.
>
> While the comment in the svc_pool declaration in sunrpc/svc.h says
> that sp_id is also the NUMA node id, it might not be the case if
> the svc is created using svc_create_pooled(). svc_create_pooled()
> can use the per-cpu pool mode therefore in this case sp_id would
> be the cpu id.

Fix this by reverting now. At a later point this minor optimization, and the deceptive labeling of the sp_id field, can be revisited.

Reported-by: Guillaume Morin <guillaume@morinfr.org>
Closes: https://lore.kernel.org/linux-nfs/ZYC9rsno8qYggVt9@bender.morinfr.org/T/#u
Fixes: 5f7fc5d69f6e ("SUNRPC: Resupply rq_pages from node-local memory")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-10-16  SUNRPC: change the back-channel queue to lwq  (NeilBrown)

This removes the need to store and update back-links in the list. It also removes the need for the _bh version of spin_lock().

Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-10-16  SUNRPC: use lwq for sp_sockets - renamed to sp_xprts  (NeilBrown)

lwq avoids using back pointers in lists, and uses less locking. This introduces a new spinlock, but the other one will be removed in a future patch.

For svc_clean_up_xprts(), we now dequeue the entire queue, walk it to remove and process the xprts that need cleaning up, then re-enqueue the remaining queue.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
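Leaving the lwq interface itself aside, the clean-up pattern described for svc_clean_up_xprts() - take the whole queue, dispose of the entries that need it, put the rest back - can be sketched with plain llist primitives. The item type and dispose() helper below are hypothetical, and this sketch ignores the queue ordering that the real code preserves:

    #include <linux/llist.h>

    struct qitem {
            struct llist_node node;
            bool              needs_cleanup;
    };

    static void clean_up_queue(struct llist_head *q)
    {
            struct llist_node *batch = llist_del_all(q);    /* grab everything at once */
            struct qitem *it, *next;

            llist_for_each_entry_safe(it, next, batch, node) {
                    if (it->needs_cleanup)
                            dispose(it);                    /* hypothetical */
                    else
                            llist_add(&it->node, q);        /* re-enqueue the rest */
            }
    }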
2023-10-16  SUNRPC: only have one thread waking up at a time  (NeilBrown)

Currently if several items of work become available in quick succession, that number of threads (if available) will be woken. By the time some of them wake up another thread that was already cache-warm might have come along and completed the work. Anecdotal evidence suggests as many as 15% of wakes find nothing to do once they get to the point of looking.

This patch changes svc_pool_wake_idle_thread() to wake the first thread on the queue but NOT remove it. Subsequent calls will wake the same thread. Once that thread starts it will dequeue itself and after dequeueing some work to do, it will wake the next thread if there is more work ready. This results in a more orderly increase in the number of busy threads.

As a bonus, this allows us to reduce locking around the idle queue. svc_pool_wake_idle_thread() no longer needs to take a lock (beyond rcu_read_lock()) as it doesn't manipulate the queue, it just looks at the first item. The thread itself can avoid locking by using the new llist_del_first_this() interface. This will safely remove the thread itself if it is the head. If it isn't the head, it will do nothing. If multiple threads call this concurrently only one will succeed. The others will do nothing, so no corruption can result.

If a thread wakes up and finds that it cannot dequeue itself, that means either
 - that it wasn't woken because it was the head of the queue. Maybe the freezer woke it. In that case it can go back to sleep (after trying to freeze, of course).
 - some other thread found there was nothing to do very recently, and placed itself on the head of the queue in front of this thread. It must check again after placing itself there, so it can be deemed to be responsible for any pending work, and this thread can go back to sleep until woken.

No code ever tests for busy threads any more. Only each thread itself cares if it is busy. So svc_thread_busy() is no longer needed.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
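A minimal sketch of the wake/dequeue protocol described above, using simplified structures. It assumes llist_del_first_this(head, node) removes the given node only when it is the current list head and returns non-NULL on success; treat that as an assumption about the helper introduced with this series rather than a statement of its exact interface:

    struct svc_thread {
            struct llist_node   idle;
            struct task_struct  *task;
    };

    /* Waker: peek at the head of the idle list but do NOT remove it,
     * so repeated wake-ups all land on the same thread. */
    static void pool_wake_idle_thread(struct llist_head *idle_threads)
    {
            struct llist_node *ln;

            rcu_read_lock();
            ln = READ_ONCE(idle_threads->first);
            if (ln) {
                    struct svc_thread *t =
                            container_of(ln, struct svc_thread, idle);

                    wake_up_process(t->task);
            }
            rcu_read_unlock();
    }

    /* Woken thread: dequeue itself only if it really is the head;
     * otherwise it goes back to sleep, as the message describes. */
    static bool thread_dequeue_self(struct llist_head *idle_threads,
                                    struct svc_thread *me)
    {
            return llist_del_first_this(idle_threads, &me->idle) != NULL;
    }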
2023-10-16  SUNRPC: rename some functions from rqst_ to svc_thread_  (NeilBrown)

Functions which directly manipulate a 'struct svc_rqst', such as svc_rqst_alloc() or svc_rqst_release_pages(), can reasonably have "rqst" in their name. However functions that act on the running thread, such as XX_should_sleep() or XX_wait_for_work(), seem more natural with a "svc_thread_" prefix.

So make those changes.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-10-16  SUNRPC: change service idle list to be an llist  (NeilBrown)

With an llist we don't need to take a lock to add a thread to the list, though we still need a lock to remove it. That will go in the next patch.

Unlike doubly-linked lists, a thread cannot reliably remove itself from the list. Only the first thread can be removed, and that can change asynchronously. So some care is needed.

We already check if there is pending work to do, so we are unlikely to add ourselves to the idle list and then want to remove ourselves again. If we DO find something needs to be done after adding ourselves to the list, we simply wake up the first thread on the list. If that was us, we successfully removed ourselves and can continue. If it was some other thread, they will do the work that needs to be done. We can safely sleep until woken.

We also remove the test on freezing() from rqst_should_sleep(). Instead we set TASK_FREEZABLE before scheduling. This makes it safe to schedule() when a freeze is pending. As we now loop waiting to be removed from the idle queue, this is a cleaner way to handle freezing.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
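The "add self, re-check for work, then sleep freezably" step described above might look roughly like this, with a simplified pool structure, a hypothetical pool_has_work() check, and a waker like the one sketched earlier; it is not the actual svc_recv() code:

    struct pool {
            struct llist_head idle_threads;
    };

    static void thread_go_idle(struct pool *p, struct svc_thread *me)
    {
            llist_add(&me->idle, &p->idle_threads);         /* lock-free add */

            if (pool_has_work(p)) {                         /* hypothetical check */
                    /*
                     * Work arrived while we were queueing ourselves.  Wake
                     * the head of the idle list: if that is us, we have
                     * effectively dequeued ourselves; otherwise the woken
                     * thread owns the pending work and we can sleep safely.
                     */
                    pool_wake_idle_thread(&p->idle_threads);
            }

            set_current_state(TASK_IDLE | TASK_FREEZABLE);  /* safe if a freeze is pending */
            schedule();
            __set_current_state(TASK_RUNNING);
    }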
2023-10-16  SUNRPC: discard SP_CONGESTED  (NeilBrown)

We can tell if a pool is congested by checking if the idle list is empty. We don't need a separate flag.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-10-16  SUNRPC: add list of idle threads  (NeilBrown)

Rather than searching a list of threads to find an idle one, having a list of idle threads allows an idle thread to be found immediately.

This adds some spin_lock calls which is not ideal, but as the hold-time is tiny it is still faster than searching a list. A future patch will remove them using llist.h. This involves some subtlety and so is left to a separate patch.

This removes the need for the RQ_BUSY flag. The rqst is "busy" precisely when it is not on the "idle" list.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-10-16  SUNRPC: change how svc threads are asked to exit.  (NeilBrown)

svc threads are currently stopped using kthread_stop(). This requires identifying a specific thread. However we don't care which thread stops, just as long as one does.

So instead, set a flag in the svc_pool to say that a thread needs to die, and have each thread check this flag instead of calling kthread_should_stop(). The first thread to find and clear this flag then moves towards exiting.

This removes an explicit dependency on sp_all_threads which will make a future patch simpler.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
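A sketch of the "any one thread should exit" flag described above, using a simplified pool structure and a made-up bit name (POOL_NEED_VICTIM); the real flag and field names differ:

    #define POOL_NEED_VICTIM  0             /* bit number in pool->flags (made-up name) */

    struct pool {
            unsigned long flags;
    };

    /* Called by the code shrinking the thread count, instead of kthread_stop(). */
    static void pool_request_one_exit(struct pool *p)
    {
            set_bit(POOL_NEED_VICTIM, &p->flags);
            /* ...then wake an idle thread so someone notices the flag. */
    }

    /* Checked by every service thread in its main loop. */
    static bool thread_should_exit(struct pool *p)
    {
            /* test_and_clear is atomic, so exactly one thread wins and exits. */
            return test_and_clear_bit(POOL_NEED_VICTIM, &p->flags);
    }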
2023-10-16  SUNRPC: integrate back-channel processing with svc_recv()  (NeilBrown)

Using svc_recv() for (NFSv4.1) back-channel handling means we have just one mechanism for waking threads.

Also change kthread_freezable_should_stop() in nfs4_callback_svc() to kthread_should_stop() as used elsewhere. kthread_freezable_should_stop() effectively adds a try_to_freeze() call, and svc_recv() already contains that at an appropriate place.

Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-10-16  SUNRPC: rename and refactor svc_get_next_xprt()  (NeilBrown)

svc_get_next_xprt() does a lot more than just get an xprt. It also decides if it needs to sleep, depending not only on the availability of xprts but also on the need to exit or handle external work.

So rename it to svc_rqst_wait_for_work() and only do the testing and waiting. Move all the waiting-related code out of svc_recv() into the new svc_rqst_wait_for_work(). Move the dequeueing code out of svc_get_next_xprt() into svc_recv().

Previously svc_xprt_dequeue() would be called twice, once before waiting and possibly once after. Now instead rqst_should_sleep() is called twice: once to decide if waiting is needed, and once again after setting the task state, to see if we might have missed a wakeup.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-10-16  SUNRPC: move all of xprt handling into svc_xprt_handle()  (NeilBrown)

svc_xprt_handle() does lots of things itself, but leaves some to the caller - svc_recv(). This isn't elegant. Move that code out of svc_recv() into svc_xprt_handle().

Move the calls to svc_xprt_release() from svc_send() and svc_drop() (the two possible final steps in svc_process()) and from svc_recv() (in the case where svc_process() wasn't called) into svc_xprt_handle().

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-08-29  SUNRPC: make rqst_should_sleep() idempotent()  (NeilBrown)

Based on its name you would think that rqst_should_sleep() would be read-only, not changing anything. But in fact it will clear SP_TASK_PENDING if that was set. This is surprising, and it blurs the line between "check for work to do" and "dequeue work to do".

So change the "test_and_clear" to a simple "test", and clear the bit once the thread has decided to wake up and return to the caller.

With this, it makes sense to *always* set SP_TASK_PENDING when asked, rather than to set it only if no thread could be woken up.

[ cel: Previously TASK_PENDING indicated there is work waiting but no idle threads were found to pick up that work. After this patch, it acts as an XPT_BUSY flag for wake-ups that have no associated xprt. ]

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
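In bitops terms, the change described is from a consuming check to a read-only check plus an explicit clear once a thread commits to the work. A simplified before/after sketch (the pool struct and the bit value here are stand-ins for the real definitions):

    #define SP_TASK_PENDING  0              /* stand-in bit number */

    struct pool {
            unsigned long sp_flags;
    };

    /* Old behaviour: deciding not to sleep also consumed the pending marker. */
    static bool should_sleep_old(struct pool *p)
    {
            return !test_and_clear_bit(SP_TASK_PENDING, &p->sp_flags);
    }

    /* New behaviour: the check is idempotent... */
    static bool should_sleep_new(struct pool *p)
    {
            return !test_bit(SP_TASK_PENDING, &p->sp_flags);
    }

    /* ...and the bit is cleared only when a thread actually takes the work. */
    static void thread_takes_work(struct pool *p)
    {
            clear_bit(SP_TASK_PENDING, &p->sp_flags);
    }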
2023-08-29  SUNRPC: Count ingress RPC messages per svc_pool  (Chuck Lever)

svc_xprt_enqueue() can be costly, since it involves selecting and waking up a process. More than one enqueue is done per incoming RPC. For example, svc_data_ready() enqueues, and so does svc_xprt_receive(). Also, if an RPC message requires more than one call to ->recvfrom() to receive it fully, each one of those calls does an enqueue.

To get a sense of the average number of transport enqueue operations needed to process an incoming RPC message, re-use the "packets" pool stat. Track the number of complete RPC messages processed by each thread pool.

Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-08-29  SUNRPC: Deduplicate thread wake-up code  (Chuck Lever)

Refactor: Extract the loop that finds an idle service thread from svc_xprt_enqueue() and svc_wake_up(). Both functions do just about the same thing.

Note that svc_wake_up() currently does not hold the RCU read lock while waking the target thread. It indeed should hold the lock, just as svc_xprt_enqueue() does, to ensure the rqstp does not vanish during the wake-up. This patch adds the RCU lock for svc_wake_up().

Note that shrinking the pool thread count is rare, and calls to svc_wake_up() are also quite infrequent. In practice, this race is very unlikely to be hit, so we are not marking the lock fix for stable backport at this time.

Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
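The shared helper described here walks the pool's thread list under rcu_read_lock() so the rqstp cannot be freed out from under the waker. Roughly, with simplified structures and a made-up T_BUSY bit (standing in for the RQ_BUSY flag the code used at the time):

    #define T_BUSY  0       /* made-up bit name; stands in for RQ_BUSY */

    struct thread {
            struct list_head    list;
            unsigned long       flags;
            struct task_struct  *task;
    };

    struct pool {
            struct list_head all_threads;   /* RCU-protected list of threads */
    };

    static bool pool_wake_one_idle_thread(struct pool *pool)
    {
            struct thread *t;
            bool woken = false;

            rcu_read_lock();        /* keeps @t alive across wake_up_process() */
            list_for_each_entry_rcu(t, &pool->all_threads, list) {
                    if (!test_and_set_bit(T_BUSY, &t->flags)) {
                            wake_up_process(t->task);
                            woken = true;
                            break;
                    }
            }
            rcu_read_unlock();
            return woken;
    }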
2023-08-29  SUNRPC: Move trace_svc_xprt_enqueue  (Chuck Lever)

The xpt_flags field frequently changes between the time that svc_xprt_ready() grabs a copy and execution flow arrives at the tracepoint at the tail of svc_xprt_enqueue(). In fact, there's usually a sleep/wake-up in there, so those flags are almost guaranteed to be different. It would be more useful to record the exact flags that were used to decide whether the transport is ready, so move the tracepoint.

Moving it means the tracepoint can't pick up the waker's pid. That can be added to struct svc_rqst if it turns out that is important.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-08-29  SUNRPC: remove timeout arg from svc_recv()  (NeilBrown)

Most svc threads have no interest in a timeout. nfsd sets it to 1 hour, but this is a wart of no significance.

lockd uses the timeout so that it can call nlmsvc_retry_blocked(). It also sometimes calls svc_wake_up() to ensure this is called. So change lockd to be consistent and always use svc_wake_up() to trigger nlmsvc_retry_blocked() - using a timer instead of a timeout to svc_recv().

And change svc_recv() to not take a timeout arg. This makes the sp_threads_timedout counter always zero.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-08-29  SUNRPC: change svc_recv() to return void.  (NeilBrown)

svc_recv() currently returns 0 on success or one of two errors:
 - -EAGAIN means no message was successfully received
 - -EINTR means the thread has been told to stop

Previously nfsd would stop as the result of a signal as well as following kthread_stop(). In that case the difference was useful: EINTR means stop unconditionally; EAGAIN means stop if kthread_should_stop(), continue otherwise. Now threads only exit when kthread_should_stop() so we don't need the distinction.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-08-29  SUNRPC: call svc_process() from svc_recv().  (NeilBrown)

All callers of svc_recv() go on to call svc_process() on success. Simplify callers by having svc_recv() do that for them.

This loses one call to validate_process_creds() in nfsd. That was debugging code added 14 years ago. I don't think we need to keep it.

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-08-29  nfsd: don't allow nfsd threads to be signalled.  (NeilBrown)

The original implementation of nfsd used signals to stop threads during shutdown. In Linux 2.3.46pre5 nfsd gained the ability to shut down threads internally if it was asked to run "0" threads. After this, user-space transitioned to using "rpc.nfsd 0" to stop nfsd, and sending signals to threads was no longer an important part of the API.

In commit 3ebdbe5203a8 ("SUNRPC: discard svo_setup and rename svc_set_num_threads_sync()") (v5.17-rc1~75^2~41) we finally removed the use of signals for stopping threads, using kthread_stop() instead.

This patch makes the "obvious" next step and removes the ability to signal nfsd threads - or any svc threads. nfsd stops allowing signals and we don't check for their delivery any more. This will allow for some simplification in later patches.

A change worth noting is in nfsd4_ssc_setup_dul(). There was previously a signal_pending() check which would only succeed when the thread was being shut down. It should really have tested kthread_should_stop() as well. Now it just does the latter, not the former.

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-06-17  SUNRPC: Remove transport class dprintk call sites  (Chuck Lever)

Remove a couple of dprintk call sites that are of little value.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Acked-by: Tom Talpey <tom@talpey.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-06-17  SUNRPC: Fix comments for transport class registration  (Chuck Lever)

The block comment preceding svc_register_xprt_class() is not related to that function. While we're here, add proper documenting comments for these two publicly-visible functions.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Acked-by: Tom Talpey <tom@talpey.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-06-17  SUNRPC: Move initialization of rq_stime  (Chuck Lever)

Micro-optimization: Call ktime_get() only when ->xpo_recvfrom() has given us a full RPC message to process. rq_stime isn't used otherwise, so this avoids pointless work.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Acked-by: Tom Talpey <tom@talpey.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-06-05  SUNRPC: Resupply rq_pages from node-local memory  (Chuck Lever)

svc_init_buffer() is careful to allocate the initial set of server thread buffer pages from memory on the local NUMA node. svc_alloc_arg() should also be that careful.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-06-05  SUNRPC: Remove dprintk() in svc_handle_xprt()  (Chuck Lever)

When enabled, this dprintk() fires for every incoming RPC, which is an enormous amount of log traffic. These days, after the first few hundred log messages, the system journald is just going to mute it, along with all other NFSD debug output. Let's rely on trace points for this high-traffic information instead.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-05-14  SUNRPC: always free ctxt when freeing deferred request  (NeilBrown)

Since the ->xprt_ctxt pointer was added to svc_deferred_req, it has not been sufficient to use kfree() to free a deferred request. We may need to free the ctxt as well.

As freeing the ctxt is all that ->xpo_release_rqst() does, we repurpose it to explicitly do that even when the ctxt is not stored in an rqst. So we now have ->xpo_release_ctxt() which is given an xprt and a ctxt, which may have been taken either from an rqst or from a dreq. The caller is now responsible for clearing that pointer after the call to ->xpo_release_ctxt.

We also clear dr->xprt_ctxt when the ctxt is moved into a new rqst when revisiting a deferred request. This ensures there is only one pointer to the ctxt, so the risk of double freeing in future is reduced. The new code in svc_xprt_release which releases both the ctxt and any rq_deferred depends on this.

Fixes: 773f91b2cf3f ("SUNRPC: Fix NFSD's request deferral on RDMA transports")
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-05-14  SUNRPC: double free xprt_ctxt while still in use  (NeilBrown)

When an RPC request is deferred, the rq_xprt_ctxt pointer is moved out of the svc_rqst into the svc_deferred_req. When the deferred request is revisited, the pointer is copied into the new svc_rqst - and also remains in the svc_deferred_req.

In the (rare?) case that the request is deferred a second time, the old svc_deferred_req is reused - it still has all the correct content. However in that case the rq_xprt_ctxt pointer is NOT cleared, so when xpo_release_xprt is called, the ctxt is freed (UDP) or possibly added to a free list (RDMA). When the deferred request is revisited for a second time, it will reference this ctxt which may be invalid, and then free the object a second time, which is likely to oops.

So change svc_defer() to *always* clear rq_xprt_ctxt, and assert that the value is now stored in the svc_deferred_req.

Fixes: 773f91b2cf3f ("SUNRPC: Fix NFSD's request deferral on RDMA transports")
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
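The ownership rule the fix enforces - at most one live pointer to the ctxt, always cleared from the rqst when a request is deferred - can be sketched like this (simplified types and a hypothetical alloc_deferred() helper; not the real svc_defer()):

    struct deferred_req {
            void *xprt_ctxt;
    };

    struct rqst {
            struct deferred_req *deferred;  /* non-NULL when re-deferring */
            void                *xprt_ctxt;
    };

    static struct deferred_req *defer_request(struct rqst *rq)
    {
            struct deferred_req *dr = rq->deferred;

            if (dr) {
                    /* Re-deferral: the old dr must already own the ctxt. */
                    WARN_ON_ONCE(dr->xprt_ctxt != rq->xprt_ctxt);
            } else {
                    dr = alloc_deferred(rq);        /* hypothetical */
                    if (!dr)
                            return NULL;
                    dr->xprt_ctxt = rq->xprt_ctxt;
            }
            rq->xprt_ctxt = NULL;   /* never leave a second live pointer behind */
            return dr;
    }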
2023-04-27  SUNRPC: Support TLS handshake in the server-side TCP socket code  (Chuck Lever)

This patch adds opportunistic RPC-with-TLS to the Linux in-kernel NFS server. If the client requests RPC-with-TLS and the user space handshake agent is running, the server will set up a TLS session.

There are no policy settings yet. For example, the server cannot yet require the use of RPC-with-TLS to access its data.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-04-27  SUNRPC: Clear rq_xid when receiving a new RPC Call  (Chuck Lever)

This is an eye-catcher for tracepoints that record the XID: it means the svc_rqst has not received a full RPC Call with an XID yet.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-04-27  SUNRPC: Be even lazier about releasing pages  (Chuck Lever)

A single RPC transaction that touches only a couple of pages means rq_pvec will not be even close to full in svc_xprt_release(). This is a common case.

Instead, just leave the pages in rq_pvec until it is completely full. This improves the efficiency of the batch release mechanism on workloads that involve small RPC messages. The rq_pvec is also fully emptied just before thread exit.

Reviewed-by: Calum Mackay <calum.mackay@oracle.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-04-26  SUNRPC: Relocate svc_free_res_pages()  (Chuck Lever)

Clean-up: There doesn't seem to be a reason why this function is stuck in a header. One thing it prevents is the convenient addition of tracing. Moving it to a source file also makes the rq_respages clean-up logic easier to find.

Reviewed-by: Calum Mackay <calum.mackay@oracle.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-04-26  SUNRPC: Ignore return value of ->xpo_sendto  (Chuck Lever)

Clean up: All callers of svc_process() ignore its return value, so svc_process() can safely be converted to return void. Ditto for svc_send(). The return value of ->xpo_sendto() is now used only as part of a trace event.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-02-20  SUNRPC: Remove ->xpo_secure_port()  (Chuck Lever)

There's no need for the cost of this extra virtual function call during every RPC transaction: the RQ_SECURE bit can be set properly in ->xpo_recvfrom() instead.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-02-20  SUNRPC: Replace pool stats with per-CPU variables  (Chuck Lever)

Eliminate the use of bus-locked operations in svc_xprt_enqueue(), which is a hot path. Replace them with per-cpu variables to reduce cross-CPU memory bus traffic.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
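The general shape of such a change - bump a CPU-local counter on the hot path, sum across CPUs only when reporting - looks roughly like this (field and struct names are simplified stand-ins, not the actual svc_pool statistics):

    #include <linux/percpu.h>

    struct pool_stats {
            unsigned long sockets_queued;
    };

    struct pool {
            struct pool_stats __percpu *stats;
    };

    /* Hot path: no bus-locked read-modify-write, just a CPU-local increment. */
    static void pool_count_enqueue(struct pool *p)
    {
            this_cpu_inc(p->stats->sockets_queued);
    }

    /* Reporting path (e.g. the pool_stats file): sum over all possible CPUs. */
    static unsigned long pool_sockets_queued(struct pool *p)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu_ptr(p->stats, cpu)->sockets_queued;
            return sum;
    }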
2023-02-20  SUNRPC: Hoist svcxdr_init_decode() into svc_process()  (Chuck Lever)

Now the entire RPC Call header parsing path is handled via struct xdr_stream-based decoders.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-01-06Revert "SUNRPC: Use RMW bitops in single-threaded hot paths"Chuck Lever
The premise that "Once an svc thread is scheduled and executing an RPC, no other processes will touch svc_rqst::rq_flags" is false. svc_xprt_enqueue() examines the RQ_BUSY flag in scheduled nfsd threads when determining which thread to wake up next. Found via KCSAN. Fixes: 28df0988815f ("SUNRPC: Use RMW bitops in single-threaded hot paths") Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
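For context, the difference between the two families of bitops this revert is about: the double-underscore variants are plain read-modify-write and are only safe when no other CPU can touch the same word, which turned out not to hold for rq_flags.

    #include <linux/bitops.h>

    static void flag_update_example(unsigned long *flags)
    {
            /* Non-atomic RMW: only safe if this word is provably
             * single-threaded for the duration of the update. */
            __set_bit(0, flags);

            /* Bus-locked RMW: required when another CPU (here,
             * svc_xprt_enqueue() inspecting RQ_BUSY) may access the
             * same flags word concurrently. */
            set_bit(0, flags);
    }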
2022-07-29  SUNRPC: Expand the svc_alloc_arg_err tracepoint  (Chuck Lever)

Record not only the number of pages requested, but the number of pages that were actually allocated, to get a measure of progress (or lack thereof).

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2022-05-23  SUNRPC: Use RMW bitops in single-threaded hot paths  (Chuck Lever)

I noticed CPU pipeline stalls while using perf. Once an svc thread is scheduled and executing an RPC, no other processes will touch svc_rqst::rq_flags. Thus bus-locked atomics are not needed outside the svc thread scheduler.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2022-05-19  SUNRPC: Simplify synopsis of svc_pool_for_cpu()  (Chuck Lever)

Clean up: There is one caller. The @cpu argument can be made implicit now that a get_cpu/put_cpu pair is no longer needed.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2022-05-19  SUNRPC: Don't disable preemption while calling svc_pool_for_cpu().  (Sebastian Andrzej Siewior)

svc_xprt_enqueue() disables preemption via get_cpu() and then asks for a pool of a specific CPU (current) via svc_pool_for_cpu(). While preemption is disabled, svc_xprt_enqueue() acquires svc_pool::sp_lock with bottom halves disabled, which can sleep on PREEMPT_RT.

Disabling preemption is not required here. The pool is protected with a lock so the following list access is safe even cross-CPU. The following iteration through svc_pool::sp_all_threads is under the RCU read lock and the remaining operations within the loop are atomic and do not rely on disabled preemption.

Use raw_smp_processor_id() as the argument for the requested CPU in svc_pool_for_cpu().

Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
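In outline, the change swaps a get_cpu()/put_cpu() pair (which disables preemption) for a plain raw_smp_processor_id() hint; migrating to another CPU right after the lookup is harmless because the chosen pool is protected by its own locking. A simplified sketch (the types and the pool_for_cpu() helper are placeholders for the real svc code):

    struct service;                 /* placeholder types */
    struct pool;
    struct pool *pool_for_cpu(struct service *sv, int cpu);        /* hypothetical */

    static struct pool *pick_pool(struct service *sv)
    {
            /*
             * Before: get_cpu() / svc_pool_for_cpu() / put_cpu(), which
             * disabled preemption across the lookup.
             *
             * After: no preemption disabling.  The CPU number is only a
             * locality hint; correctness comes from the pool's own locks.
             */
            return pool_for_cpu(sv, raw_smp_processor_id());
    }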
2022-05-19  SUNRPC: Remove svc_rqst::rq_xprt_hlen  (Chuck Lever)

Clean up: This field is now always set to zero.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>