author     Linus Torvalds <torvalds@linux-foundation.org>  2022-12-13 10:08:36 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-12-13 10:08:36 -0800
commit     02bf43c7b7f7a19aa59a75f5244f0a3408bace1a (patch)
tree       64cdee7009702c5bea4e2d1359d362f362f23219 /fs
parent     c76ff350bd57682ae12bea6383dd8baf4824ac96 (diff)
parent     3b4c7bc01727e3a465759236eeac03d0dd686da3 (diff)
Merge tag 'fs.xattr.simple.rework.rbtree.rwlock.v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping
Pull simple-xattr updates from Christian Brauner:
"This ports the simple xattr infrastucture to rely on a simple rbtree
protected by a read-write lock instead of a linked list protected by a
spinlock.
A while ago we received reports about scaling issues for filesystems
using the simple xattr infrastructure that also support setting a
larger number of xattrs. Specifically, cgroups and tmpfs.
Both cgroupfs and tmpfs can be mounted by unprivileged users in
unprivileged containers, and root in an unprivileged container can set
an unrestricted number of security.* xattrs; privileged users can also
set an unlimited number of trusted.* xattrs. A few more words on that
further below. Other xattrs such as user.* are restricted for
kernfs-based instances to a fairly limited number.
As there are apparently users that have a fairly large number of
xattrs, we should scale a bit better. A simple linked list protected by
a spinlock for the set, get, and list operations doesn't scale well
once users set a lot of xattrs, even if it's not a crazy number.
Let's switch to a simple rbtree protected by an rwlock. It scales way
better and gets rid of the perf issues some people reported. We
originally had fancier solutions, even an rcu+seqlock protected rbtree,
but we had concerns about being too clever and also that deletion from
an rbtree with rcu+seqlock isn't entirely safe.
The rbtree plus rwlock is perfectly fine. By far the most common
operation is getting an xattr; setting an xattr is comparatively rare.
And listxattr() often only happens when copying xattrs between files or
copying them together with the contents to a new file.
Holding a lock across listxattr() is unproblematic because it doesn't
list the values of xattrs. It can only be used to list the names of
all xattrs set on a file. And the number of xattr names that can be
listed with listxattr() is limited to XATTR_LIST_MAX aka 65536 bytes.
If a larger buffer is passed then vfs_listxattr() caps it to
XATTR_LIST_MAX and if more xattr names are found it will return
-E2BIG. In short, the maximum amount of memory that can be retrieved
via listxattr() is limited, and thus listxattr() is bounded (see the
short userspace sketch after this message).
Of course, the API is broken, as already documented in xattr(7). While
I have no idea how the xattr API ended up in this state, we should
probably try to come up with something here at some point: an iterator
pattern similar to readdir() as an alternative to listxattr(), or
something else.
Right now it is extremely strange that users can set millions of xattrs
but then can't use listxattr() to know which xattrs are actually set.
And it's really trivial to do:
for i in {1..1000000}; do setfattr -n security.$i -v $i ./file1; done
And at around 5000 xattrs it becomes impossible to use listxattr() to
figure out which xattrs are actually set. So I have suggested that we
try to limit the number of xattrs for simple xattrs at least. But
that's a future patch and I don't consider it very urgent.
A bonus of this port to rbtree+rwlock is that we shrink the memory
consumption for users of the simple xattr infrastructure.
This also adds kernel documentation to all the functions"
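As a concrete illustration of the listxattr() bound described in the message above, here is a minimal userspace sketch. It is not part of this series; ./file1 is only the hypothetical test file from the setfattr loop quoted above.

/* List xattr names on one file; with enough xattrs set, this fails with
 * E2BIG because the names no longer fit in XATTR_LIST_MAX (65536 bytes). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

int main(int argc, char *argv[])
{
	static char buf[65536];	/* matches XATTR_LIST_MAX */
	const char *path = argc > 1 ? argv[1] : "./file1";
	ssize_t len;

	len = listxattr(path, buf, sizeof(buf));
	if (len < 0) {
		if (errno == E2BIG)
			fprintf(stderr, "xattr names exceed XATTR_LIST_MAX\n");
		else
			perror("listxattr");
		return 1;
	}

	/* The buffer holds a sequence of NUL-terminated names. */
	for (char *name = buf; name < buf + len; name += strlen(name) + 1)
		printf("%s\n", name);
	return 0;
}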
* tag 'fs.xattr.simple.rework.rbtree.rwlock.v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping:
xattr: use rbtree for simple_xattrs
Diffstat (limited to 'fs')
-rw-r--r--  fs/xattr.c  317
1 file changed, 250 insertions, 67 deletions
diff --git a/fs/xattr.c b/fs/xattr.c
index 28f61fcb8f69..adab9a70b536 100644
--- a/fs/xattr.c
+++ b/fs/xattr.c
@@ -1011,8 +1011,29 @@ const char *xattr_full_name(const struct xattr_handler *handler,
 }
 EXPORT_SYMBOL(xattr_full_name);
 
-/*
- * Allocate new xattr and copy in the value; but leave the name to callers.
+/**
+ * free_simple_xattr - free an xattr object
+ * @xattr: the xattr object
+ *
+ * Free the xattr object. Can handle @xattr being NULL.
+ */
+static inline void free_simple_xattr(struct simple_xattr *xattr)
+{
+	if (xattr)
+		kfree(xattr->name);
+	kvfree(xattr);
+}
+
+/**
+ * simple_xattr_alloc - allocate new xattr object
+ * @value: value of the xattr object
+ * @size: size of @value
+ *
+ * Allocate a new xattr object and initialize respective members. The caller is
+ * responsible for handling the name of the xattr.
+ *
+ * Return: On success a new xattr object is returned. On failure NULL is
+ * returned.
  */
 struct simple_xattr *simple_xattr_alloc(const void *value, size_t size)
 {
@@ -1033,20 +1054,69 @@ struct simple_xattr *simple_xattr_alloc(const void *value, size_t size)
 	return new_xattr;
 }
 
-/*
- * xattr GET operation for in-memory/pseudo filesystems
+/**
+ * rbtree_simple_xattr_cmp - compare xattr name with current rbtree xattr entry
+ * @key: xattr name
+ * @node: current node
+ *
+ * Compare the xattr name with the xattr name attached to @node in the rbtree.
+ *
+ * Return: Negative value if continuing left, positive if continuing right, 0
+ * if the xattr attached to @node matches @key.
+ */
+static int rbtree_simple_xattr_cmp(const void *key, const struct rb_node *node)
+{
+	const char *xattr_name = key;
+	const struct simple_xattr *xattr;
+
+	xattr = rb_entry(node, struct simple_xattr, rb_node);
+	return strcmp(xattr->name, xattr_name);
+}
+
+/**
+ * rbtree_simple_xattr_node_cmp - compare two xattr rbtree nodes
+ * @new_node: new node
+ * @node: current node
+ *
+ * Compare the xattr attached to @new_node with the xattr attached to @node.
+ *
+ * Return: Negative value if continuing left, positive if continuing right, 0
+ * if the xattr attached to @new_node matches the xattr attached to @node.
+ */
+static int rbtree_simple_xattr_node_cmp(struct rb_node *new_node,
+					const struct rb_node *node)
+{
+	struct simple_xattr *xattr;
+	xattr = rb_entry(new_node, struct simple_xattr, rb_node);
+	return rbtree_simple_xattr_cmp(xattr->name, node);
+}
+
+/**
+ * simple_xattr_get - get an xattr object
+ * @xattrs: the header of the xattr object
+ * @name: the name of the xattr to retrieve
+ * @buffer: the buffer to store the value into
+ * @size: the size of @buffer
+ *
+ * Try to find and retrieve the xattr object associated with @name.
+ * If @buffer is provided store the value of @xattr in @buffer
+ * otherwise just return the length. The size of @buffer is limited
+ * to XATTR_SIZE_MAX which currently is 65536.
+ *
+ * Return: On success the length of the xattr value is returned. On error a
+ * negative error code is returned.
  */
 int simple_xattr_get(struct simple_xattrs *xattrs, const char *name,
 		     void *buffer, size_t size)
 {
-	struct simple_xattr *xattr;
+	struct simple_xattr *xattr = NULL;
+	struct rb_node *rbp;
 	int ret = -ENODATA;
 
-	spin_lock(&xattrs->lock);
-	list_for_each_entry(xattr, &xattrs->head, list) {
-		if (strcmp(name, xattr->name))
-			continue;
-
+	read_lock(&xattrs->lock);
+	rbp = rb_find(name, &xattrs->rb_root, rbtree_simple_xattr_cmp);
+	if (rbp) {
+		xattr = rb_entry(rbp, struct simple_xattr, rb_node);
 		ret = xattr->size;
 		if (buffer) {
 			if (size < xattr->size)
@@ -1054,34 +1124,44 @@ int simple_xattr_get(struct simple_xattrs *xattrs, const char *name,
 			else
 				memcpy(buffer, xattr->value, xattr->size);
 		}
-		break;
 	}
-	spin_unlock(&xattrs->lock);
+	read_unlock(&xattrs->lock);
 	return ret;
 }
 
 /**
- * simple_xattr_set - xattr SET operation for in-memory/pseudo filesystems
- * @xattrs: target simple_xattr list
- * @name: name of the extended attribute
- * @value: value of the xattr. If %NULL, will remove the attribute.
- * @size: size of the new xattr
- * @flags: %XATTR_{CREATE|REPLACE}
- * @removed_size: returns size of the removed xattr, -1 if none removed
+ * simple_xattr_set - set an xattr object
+ * @xattrs: the header of the xattr object
+ * @name: the name of the xattr to retrieve
+ * @value: the value to store along the xattr
+ * @size: the size of @value
+ * @flags: the flags determining how to set the xattr
+ * @removed_size: the size of the removed xattr
+ *
+ * Set a new xattr object.
+ * If @value is passed a new xattr object will be allocated. If XATTR_REPLACE
+ * is specified in @flags a matching xattr object for @name must already exist.
+ * If it does it will be replaced with the new xattr object. If it doesn't we
+ * fail. If XATTR_CREATE is specified and a matching xattr does already exist
+ * we fail. If it doesn't we create a new xattr. If @flags is zero we simply
+ * insert the new xattr replacing any existing one.
+ *
+ * If @value is empty and a matching xattr object is found we delete it if
+ * XATTR_REPLACE is specified in @flags or @flags is zero.
  *
- * %XATTR_CREATE is set, the xattr shouldn't exist already; otherwise fails
- * with -EEXIST. If %XATTR_REPLACE is set, the xattr should exist;
- * otherwise, fails with -ENODATA.
+ * If @value is empty and no matching xattr object for @name is found we do
+ * nothing if XATTR_CREATE is specified in @flags or @flags is zero. For
+ * XATTR_REPLACE we fail as mentioned above.
  *
- * Returns 0 on success, -errno on failure.
+ * Return: On success zero and on error a negative error code is returned.
  */
 int simple_xattr_set(struct simple_xattrs *xattrs, const char *name,
 		     const void *value, size_t size, int flags,
 		     ssize_t *removed_size)
 {
-	struct simple_xattr *xattr;
-	struct simple_xattr *new_xattr = NULL;
-	int err = 0;
+	struct simple_xattr *xattr = NULL, *new_xattr = NULL;
+	struct rb_node *parent = NULL, **rbp;
+	int err = 0, ret;
 
 	if (removed_size)
 		*removed_size = -1;
@@ -1094,42 +1174,68 @@ int simple_xattr_set(struct simple_xattrs *xattrs, const char *name,
 
 		new_xattr->name = kstrdup(name, GFP_KERNEL);
 		if (!new_xattr->name) {
-			kvfree(new_xattr);
+			free_simple_xattr(new_xattr);
 			return -ENOMEM;
 		}
 	}
 
-	spin_lock(&xattrs->lock);
-	list_for_each_entry(xattr, &xattrs->head, list) {
-		if (!strcmp(name, xattr->name)) {
-			if (flags & XATTR_CREATE) {
-				xattr = new_xattr;
-				err = -EEXIST;
-			} else if (new_xattr) {
-				list_replace(&xattr->list, &new_xattr->list);
-				if (removed_size)
-					*removed_size = xattr->size;
-			} else {
-				list_del(&xattr->list);
-				if (removed_size)
-					*removed_size = xattr->size;
-			}
-			goto out;
-		}
-	}
-	if (flags & XATTR_REPLACE) {
-		xattr = new_xattr;
-		err = -ENODATA;
-	} else {
-		list_add(&new_xattr->list, &xattrs->head);
-		xattr = NULL;
+	write_lock(&xattrs->lock);
+	rbp = &xattrs->rb_root.rb_node;
+	while (*rbp) {
+		parent = *rbp;
+		ret = rbtree_simple_xattr_cmp(name, *rbp);
+		if (ret < 0)
+			rbp = &(*rbp)->rb_left;
+		else if (ret > 0)
+			rbp = &(*rbp)->rb_right;
+		else
+			xattr = rb_entry(*rbp, struct simple_xattr, rb_node);
+		if (xattr)
+			break;
 	}
-out:
-	spin_unlock(&xattrs->lock);
+
 	if (xattr) {
-		kfree(xattr->name);
-		kvfree(xattr);
+		/* Fail if XATTR_CREATE is requested and the xattr exists. */
+		if (flags & XATTR_CREATE) {
+			err = -EEXIST;
+			goto out_unlock;
+		}
+
+		if (new_xattr)
+			rb_replace_node(&xattr->rb_node, &new_xattr->rb_node,
+					&xattrs->rb_root);
+		else
+			rb_erase(&xattr->rb_node, &xattrs->rb_root);
+		if (!err && removed_size)
+			*removed_size = xattr->size;
+	} else {
+		/* Fail if XATTR_REPLACE is requested but no xattr is found. */
+		if (flags & XATTR_REPLACE) {
+			err = -ENODATA;
+			goto out_unlock;
+		}
+
+		/*
+		 * If XATTR_CREATE or no flags are specified together with a
+		 * new value simply insert it.
+		 */
+		if (new_xattr) {
+			rb_link_node(&new_xattr->rb_node, parent, rbp);
+			rb_insert_color(&new_xattr->rb_node, &xattrs->rb_root);
+		}
+
+		/*
+		 * If XATTR_CREATE or no flags are specified and neither an
+		 * old or new xattr exist then we don't need to do anything.
+		 */
 	}
+
+out_unlock:
+	write_unlock(&xattrs->lock);
+	if (err)
+		free_simple_xattr(new_xattr);
+	else
+		free_simple_xattr(xattr);
 	return err;
 }
 
@@ -1153,14 +1259,31 @@ static int xattr_list_one(char **buffer, ssize_t *remaining_size,
 	return 0;
 }
 
-/*
- * xattr LIST operation for in-memory/pseudo filesystems
+/**
+ * simple_xattr_list - list all xattr objects
+ * @inode: inode from which to get the xattrs
+ * @xattrs: the header of the xattr object
+ * @buffer: the buffer to store all xattrs into
+ * @size: the size of @buffer
+ *
+ * List all xattrs associated with @inode. If @buffer is NULL we returned
+ * the required size of the buffer. If @buffer is provided we store the
+ * xattrs value into it provided it is big enough.
+ *
+ * Note, the number of xattr names that can be listed with listxattr(2) is
+ * limited to XATTR_LIST_MAX aka 65536 bytes. If a larger buffer is passed
+ * then vfs_listxattr() caps it to XATTR_LIST_MAX and if more xattr names
+ * are found it will return -E2BIG.
+ *
+ * Return: On success the required size or the size of the copied xattrs is
+ * returned. On error a negative error code is returned.
  */
 ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
 			  char *buffer, size_t size)
 {
 	bool trusted = ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN);
 	struct simple_xattr *xattr;
+	struct rb_node *rbp;
 	ssize_t remaining_size = size;
 	int err = 0;
 
@@ -1181,8 +1304,10 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
 	}
 #endif
 
-	spin_lock(&xattrs->lock);
-	list_for_each_entry(xattr, &xattrs->head, list) {
+	read_lock(&xattrs->lock);
+	for (rbp = rb_first(&xattrs->rb_root); rbp; rbp = rb_next(rbp)) {
+		xattr = rb_entry(rbp, struct simple_xattr, rb_node);
+
 		/* skip "trusted." attributes for unprivileged callers */
 		if (!trusted && xattr_is_trusted(xattr->name))
 			continue;
@@ -1191,18 +1316,76 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
 		if (err)
 			break;
 	}
-	spin_unlock(&xattrs->lock);
+	read_unlock(&xattrs->lock);
 
 	return err ? err : size - remaining_size;
 }
 
-/*
- * Adds an extended attribute to the list
+/**
+ * rbtree_simple_xattr_less - compare two xattr rbtree nodes
+ * @new_node: new node
+ * @node: current node
+ *
+ * Compare the xattr attached to @new_node with the xattr attached to @node.
+ * Note that this function technically tolerates duplicate entries.
+ *
+ * Return: True if insertion point in the rbtree is found.
+ */
+static bool rbtree_simple_xattr_less(struct rb_node *new_node,
+				     const struct rb_node *node)
+{
+	return rbtree_simple_xattr_node_cmp(new_node, node) < 0;
+}
+
+/**
+ * simple_xattr_add - add xattr objects
+ * @xattrs: the header of the xattr object
+ * @new_xattr: the xattr object to add
+ *
+ * Add an xattr object to @xattrs. This assumes no replacement or removal
+ * of matching xattrs is wanted. Should only be called during inode
+ * initialization when a few distinct initial xattrs are supposed to be set.
+ */
+void simple_xattr_add(struct simple_xattrs *xattrs,
+		      struct simple_xattr *new_xattr)
+{
+	write_lock(&xattrs->lock);
+	rb_add(&new_xattr->rb_node, &xattrs->rb_root, rbtree_simple_xattr_less);
+	write_unlock(&xattrs->lock);
+}
+
+/**
+ * simple_xattrs_init - initialize new xattr header
+ * @xattrs: header to initialize
+ *
+ * Initialize relevant fields of a an xattr header.
  */
-void simple_xattr_list_add(struct simple_xattrs *xattrs,
-			   struct simple_xattr *new_xattr)
+void simple_xattrs_init(struct simple_xattrs *xattrs)
 {
-	spin_lock(&xattrs->lock);
-	list_add(&new_xattr->list, &xattrs->head);
-	spin_unlock(&xattrs->lock);
+	xattrs->rb_root = RB_ROOT;
+	rwlock_init(&xattrs->lock);
+}
+
+/**
+ * simple_xattrs_free - free xattrs
+ * @xattrs: xattr header whose xattrs to destroy
+ *
+ * Destroy all xattrs in @xattr. When this is called no one can hold a
+ * reference to any of the xattrs anymore.
+ */
+void simple_xattrs_free(struct simple_xattrs *xattrs)
+{
+	struct rb_node *rbp;
+
+	rbp = rb_first(&xattrs->rb_root);
+	while (rbp) {
+		struct simple_xattr *xattr;
+		struct rb_node *rbp_next;
+
+		rbp_next = rb_next(rbp);
+		xattr = rb_entry(rbp, struct simple_xattr, rb_node);
+		rb_erase(&xattr->rb_node, &xattrs->rb_root);
+		free_simple_xattr(xattr);
+		rbp = rbp_next;
+	}
}
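For orientation, a rough usage sketch of the reworked interface follows. It is not taken from this patch: the foo_inode container and the helpers around it are hypothetical, but the simple_xattr_*() and simple_xattrs_*() calls use only the signatures visible in the diff above. The locking is internal to the helpers, so a caller only has to keep the struct simple_xattrs alive between init and free.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/xattr.h>

/* Hypothetical in-memory inode embedding the rbtree+rwlock xattr header. */
struct foo_inode {
	struct simple_xattrs xattrs;
};

static int foo_inode_init_xattrs(struct foo_inode *fi)
{
	struct simple_xattr *xattr;

	/* Set up the empty rbtree and the rwlock protecting it. */
	simple_xattrs_init(&fi->xattrs);

	/* Pre-populate one xattr at inode creation time; simple_xattr_add()
	 * assumes no matching entry exists yet. */
	xattr = simple_xattr_alloc("bar", 3);
	if (!xattr)
		return -ENOMEM;
	xattr->name = kstrdup("user.foo", GFP_KERNEL);
	if (!xattr->name) {
		kvfree(xattr);
		return -ENOMEM;
	}
	simple_xattr_add(&fi->xattrs, xattr);
	return 0;
}

static int foo_inode_get_xattr(struct foo_inode *fi, void *buf, size_t len)
{
	/* Takes the rwlock for reading and walks the rbtree via rb_find(). */
	return simple_xattr_get(&fi->xattrs, "user.foo", buf, len);
}

static int foo_inode_set_xattr(struct foo_inode *fi, const void *val, size_t len)
{
	ssize_t removed_size;

	/* Zero flags: insert or replace; a replaced entry's size is reported. */
	return simple_xattr_set(&fi->xattrs, "user.foo", val, len, 0,
				&removed_size);
}

static void foo_inode_destroy_xattrs(struct foo_inode *fi)
{
	/* Tears down the whole tree; no concurrent users may remain. */
	simple_xattrs_free(&fi->xattrs);
}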