| author | Gerd Rausch <gerd.rausch@oracle.com> | 2019-07-16 15:28:57 -0700 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2019-07-17 12:06:51 -0700 |
| commit | c9467447fc50ec3715d8ec98f4da874fce539235 (patch) | |
| tree | 51d23641565aed0c1bda2b866a56212f6669ace6 /net/rds/ib_mr.h | |
| parent | 2c7da8e6b041a8df2661def81ac90c9c0c719909 (diff) | |
net/rds: Get rid of "wait_clean_list_grace" and add locking
Waiting for activity on the "clean_list" to quiesce is no substitute
for proper locking.
We can have multiple threads competing for "llist_del_first"
via "rds_ib_reuse_mr", and a single thread competing
for "llist_del_all" and "llist_del_first" via "rds_ib_flush_mr_pool".
Since "llist_del_first" depends on "list->first->next" not to change
in the midst of the operation, simply waiting for all current calls
to "rds_ib_reuse_mr" to quiesce across all CPUs is woefully inadequate:
By the time "wait_clean_list_grace" is done iterating over all CPUs to see
that there is no concurrent caller to "rds_ib_reuse_mr", a new caller may
have just shown up on the first CPU.
Furthermore, <linux/llist.h> explicitly calls out the need for locking:
* Cases where locking is needed:
* If we have multiple consumers with llist_del_first used in one consumer,
* and llist_del_first or llist_del_all used in other consumers,
* then a lock is needed.
Also, while at it, drop the unused "pool" parameter
from "list_to_llist_nodes".
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
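
For reference, a minimal, self-contained sketch of the pattern that the quoted <linux/llist.h> comment describes: one consumer popping single entries with llist_del_first while another drains the list with llist_del_all, with a spinlock serializing the two. The names here (struct item, reuse_one, flush_all, add_item) are illustrative only and are not the RDS code; they merely mirror the roles played by "rds_ib_reuse_mr" and "rds_ib_flush_mr_pool".

```c
#include <linux/llist.h>
#include <linux/spinlock.h>

/* Illustrative names only -- not the RDS structures. */
struct item {
	struct llist_node node;
};

static LLIST_HEAD(clean);		/* list of clean items */
static DEFINE_SPINLOCK(clean_lock);	/* serializes the two consumers below */

/* Consumer A: pop a single entry (the role rds_ib_reuse_mr plays). */
static struct item *reuse_one(void)
{
	struct llist_node *node;

	spin_lock(&clean_lock);
	node = llist_del_first(&clean);
	spin_unlock(&clean_lock);

	return node ? llist_entry(node, struct item, node) : NULL;
}

/* Consumer B: drain the whole list (the role rds_ib_flush_mr_pool plays). */
static struct llist_node *flush_all(void)
{
	struct llist_node *list;

	spin_lock(&clean_lock);
	list = llist_del_all(&clean);
	spin_unlock(&clean_lock);

	return list;
}

/* Producers may keep using llist_add() without taking the lock. */
static void add_item(struct item *it)
{
	llist_add(&it->node, &clean);
}
```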
Diffstat (limited to 'net/rds/ib_mr.h')
| -rw-r--r-- | net/rds/ib_mr.h | 1 |

1 file changed, 1 insertion, 0 deletions
diff --git a/net/rds/ib_mr.h b/net/rds/ib_mr.h
index 42daccb7b5eb..ab26c20ed66f 100644
--- a/net/rds/ib_mr.h
+++ b/net/rds/ib_mr.h
@@ -98,6 +98,7 @@ struct rds_ib_mr_pool {
 	struct llist_head	free_list;	/* unused MRs */
 	struct llist_head	clean_list;	/* unused & unmapped MRs */
 	wait_queue_head_t	flush_wait;
+	spinlock_t		clean_lock;	/* "clean_list" concurrency */
 
 	atomic_t		free_pinned;	/* memory pinned by free MRs */
 	unsigned long		max_items;
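
The hunk above only adds the field; the consumers that actually take the new lock live in net/rds/ib_rdma.c, which this diffstat-limited view does not show. The following is a hedged sketch of how the field would presumably be wired up, using a hypothetical, trimmed-down pool (demo_pool, demo_pool_init, demo_pool_pop_clean are made-up names, not the kernel's).

```c
#include <linux/llist.h>
#include <linux/spinlock.h>

/* Hypothetical pool mirroring only the fields visible in the hunk above;
 * the real struct rds_ib_mr_pool has many more members. */
struct demo_pool {
	struct llist_head clean_list;	/* unused & unmapped MRs */
	spinlock_t clean_lock;		/* "clean_list" concurrency */
};

static void demo_pool_init(struct demo_pool *pool)
{
	init_llist_head(&pool->clean_list);
	spin_lock_init(&pool->clean_lock);
}

/* Presumed shape of the locked single-entry pop on the clean_list
 * (the rds_ib_reuse_mr side); the flush side would take the same lock
 * around llist_del_all(&pool->clean_list). */
static struct llist_node *demo_pool_pop_clean(struct demo_pool *pool)
{
	struct llist_node *node;

	spin_lock(&pool->clean_lock);
	node = llist_del_first(&pool->clean_list);
	spin_unlock(&pool->clean_lock);

	return node;
}
```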