author | Matthias Schiffer <mschiffer@universe-factory.net> | 2012-05-08 22:31:57 +0200
---|---|---
committer | Antonio Quartulli <ordex@autistici.org> | 2012-06-18 18:01:06 +0200
commit | 75c5a2e788ab02f67931442e8dcbc854ae7252d1 (patch) |
tree | f1ae84324fe9990fa8b04edd17384e644d8cb73b /net |
parent | ef3a409391f55ad0bddbf017d4d4987b783a3059 (diff) |
batman-adv: fix locking in hash_add()
To ensure an entry isn't added twice, all comparisons have to be protected by the
hash line write spinlock. This doesn't really hurt: attempting to add an element
that is already present in the hash shouldn't occur very often, so in most cases
the lock would have to be taken anyway.
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
Acked-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Diffstat (limited to 'net')
-rw-r--r-- | net/batman-adv/hash.h | 15 |
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/net/batman-adv/hash.h b/net/batman-adv/hash.h
index 93b3c71aeaf8..3d67ce49fc31 100644
--- a/net/batman-adv/hash.h
+++ b/net/batman-adv/hash.h
@@ -110,26 +110,23 @@ static inline int hash_add(struct hashtable_t *hash,
 	head = &hash->table[index];
 	list_lock = &hash->list_locks[index];
 
-	rcu_read_lock();
-	__hlist_for_each_rcu(node, head) {
+	spin_lock_bh(list_lock);
+
+	hlist_for_each(node, head) {
 		if (!compare(node, data))
 			continue;
 
 		ret = 1;
-		goto err_unlock;
+		goto unlock;
 	}
-	rcu_read_unlock();
 
 	/* no duplicate found in list, add new element */
-	spin_lock_bh(list_lock);
 	hlist_add_head_rcu(data_node, head);
-	spin_unlock_bh(list_lock);
 
 	ret = 0;
-	goto out;
 
-err_unlock:
-	rcu_read_unlock();
+unlock:
+	spin_unlock_bh(list_lock);
 out:
 	return ret;
 }
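
For readers outside the kernel tree, the sketch below mirrors the pattern the patch enforces, written as plain userspace C with a pthread spinlock: the duplicate check and the insertion are both performed under the same per-bucket write lock, so the check-then-insert sequence can no longer race with a concurrent add. The types and names (struct bucket, struct entry, bucket_add) are invented for this illustration and are not part of batman-adv; only the return convention (-1 error, 1 duplicate found, 0 added) follows hash_add() above.

/* Illustrative userspace analogue, not batman-adv code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
	int key;
	struct entry *next;
};

struct bucket {
	pthread_spinlock_t lock;	/* protects the list below */
	struct entry *head;
};

/* Returns 0 if added, 1 if the key was already present, -1 on error. */
static int bucket_add(struct bucket *b, int key)
{
	struct entry *e, *new_entry;
	int ret = -1;

	/* allocate before taking the spinlock */
	new_entry = malloc(sizeof(*new_entry));
	if (!new_entry)
		return ret;
	new_entry->key = key;

	/* Hold the write lock across both the lookup and the insert,
	 * so no other thread can slip in a duplicate in between. */
	pthread_spin_lock(&b->lock);

	for (e = b->head; e; e = e->next) {
		if (e->key != key)
			continue;

		ret = 1;		/* duplicate found */
		goto unlock;
	}

	/* no duplicate found in list, add new element */
	new_entry->next = b->head;
	b->head = new_entry;
	new_entry = NULL;		/* ownership passed to the list */
	ret = 0;

unlock:
	pthread_spin_unlock(&b->lock);
	free(new_entry);		/* only non-NULL on the duplicate path */
	return ret;
}

int main(void)
{
	struct bucket b = { .head = NULL };

	pthread_spin_init(&b.lock, PTHREAD_PROCESS_PRIVATE);
	printf("first add:  %d\n", bucket_add(&b, 42));	/* 0: added */
	printf("second add: %d\n", bucket_add(&b, 42));	/* 1: duplicate */
	pthread_spin_destroy(&b.lock);
	return 0;
}

As in the patched hash_add(), the lock is taken once per call instead of only around the insertion, which is cheap because the duplicate case is rare and the lock would have been needed for the insert anyway.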