| author | Ondrej Mosnacek <omosnace@redhat.com> | 2020-04-08 11:08:08 +0200 |
| --- | --- | --- |
| committer | Paul Moore <paul@paul-moore.com> | 2020-04-15 18:27:35 -0400 |
| commit | 433e3aa37773e8a36858b9417c3e345eff79a079 | |
| tree | 6dcbfa93ad447e42029cce5dba887148a1994dd3 | |
| parent | 4b8503967ef5d1123d6e0a87d5723bdaeddf8b3f | |
selinux: drop unnecessary smp_load_acquire() call
In commit 66f8e2f03c02 ("selinux: sidtab reverse lookup hash table") the
corresponding load was moved under the spin lock, so no race is possible
and we can read the count directly. The smp_store_release() is still
needed to avoid racing with the lock-free readers.
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
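To illustrate why the acquire barrier can be dropped on the locked writer side while the release store must stay, here is a minimal user-space sketch of the same publication pattern. It uses C11 atomics and a pthread mutex in place of the kernel's spinlock, smp_load_acquire() and smp_store_release(); the names (`table`, `table_add`, `table_lookup`, `TABLE_MAX`) are illustrative and do not come from the SELinux code.

```c
/*
 * User-space analogue of the sidtab publication pattern (illustrative
 * sketch, not kernel code): writers are serialized by a lock, readers
 * are lock-free and rely on acquire/release ordering on the count.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define TABLE_MAX 128

struct table {
	int entries[TABLE_MAX];
	_Atomic unsigned int count;	/* number of published entries */
	pthread_mutex_t lock;		/* serializes writers only */
};

/* Writer: runs with the lock held, so a plain (relaxed) read of count
 * is enough -- no other writer can change it concurrently. */
static bool table_add(struct table *t, int value)
{
	pthread_mutex_lock(&t->lock);

	/* relaxed load: analogous to the plain "count = s->count" read */
	unsigned int count = atomic_load_explicit(&t->count,
						  memory_order_relaxed);
	if (count >= TABLE_MAX) {
		pthread_mutex_unlock(&t->lock);
		return false;
	}

	t->entries[count] = value;

	/* release store: makes the new entry visible to lock-free
	 * readers before they can observe the incremented count */
	atomic_store_explicit(&t->count, count + 1, memory_order_release);

	pthread_mutex_unlock(&t->lock);
	return true;
}

/* Reader: lock-free, so it still needs an acquire load of count to
 * order the subsequent read of entries[] after it. */
static bool table_lookup(struct table *t, unsigned int idx, int *out)
{
	unsigned int count = atomic_load_explicit(&t->count,
						  memory_order_acquire);
	if (idx >= count)
		return false;
	*out = t->entries[idx];
	return true;
}

int main(void)
{
	struct table t = { .count = 0, .lock = PTHREAD_MUTEX_INITIALIZER };
	int v;

	table_add(&t, 42);
	if (table_lookup(&t, 0, &v))
		printf("entry 0 = %d\n", v);
	return 0;
}
```

The design point mirrors the patch: once the writer's read of the count happens under the lock, only other writers could race with it, and the lock already excludes them; the release store is what lock-free readers pair their acquire load against, so it cannot be relaxed.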
 security/selinux/ss/sidtab.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
```diff
diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
index f511ffccb131..98d5ea3fcde4 100644
--- a/security/selinux/ss/sidtab.c
+++ b/security/selinux/ss/sidtab.c
@@ -276,8 +276,7 @@ int sidtab_context_to_sid(struct sidtab *s, struct context *context,
 	if (*sid)
 		goto out_unlock;
 
-	/* read entries only after reading count */
-	count = smp_load_acquire(&s->count);
+	count = s->count;
 	convert = s->convert;
 
 	/* bail out if we already reached max entries */
```