author     Jan Kara <jack@suse.cz>        2016-12-12 21:34:12 -0500
committer  Theodore Ts'o <tytso@mit.edu>  2016-12-12 21:34:12 -0500
commit     0cb80b4847553582830a59da2c022c37a1f4a119 (patch)
tree       e16efd7948740c035aef5fad3af596089e6e03da /fs
parent     73b92a2a5e97d17cc4d5c4fe9d724d3273fb6fd2 (diff)
dax: Fix sleep in atomic context in grab_mapping_entry()
Commit 642261ac995e ("dax: add struct iomap based DAX PMD support")
introduced unmapping of page tables when a huge page needs to be split
in grab_mapping_entry(). However, the unmapping happens after the
radix_tree_preload() call, which disables preemption, so
unmap_mapping_range() ends up trying to acquire i_mmap_lock in atomic
context, which is a bug. Fix the problem by moving the unmapping before
the radix_tree_preload() call.
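For illustration, here is a simplified sketch of the corrected ordering.
It is not the literal kernel code: the condition guarding the unmap and
the contents of the atomic section are condensed, and the surrounding
grab_mapping_entry() logic is elided.

	/* 1. Do the work that may sleep while sleeping is still allowed:
	 *    unmap_mapping_range() takes the i_mmap lock, which can block. */
	if (pmd_downgrade)
		unmap_mapping_range(mapping,
				(index << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);

	/* 2. Only then preload the radix tree; this disables preemption, so
	 *    everything until radix_tree_preload_end() runs in atomic context. */
	err = radix_tree_preload(mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
	if (err) {
		if (pmd_downgrade)
			put_locked_mapping_entry(mapping, index, entry);
		return ERR_PTR(err);
	}

	/* 3. Atomic section: take tree_lock and update the radix tree. */
	spin_lock_irq(&mapping->tree_lock);
	/* ... downgrade the PMD entry / insert the new entry ... */
	spin_unlock_irq(&mapping->tree_lock);
	radix_tree_preload_end();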
Fixes: 642261ac995e ("dax: add struct iomap based DAX PMD support")
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Diffstat (limited to 'fs')
-rw-r--r--  fs/dax.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
@@ -333,14 +333,6 @@ restart:
 		}
 		spin_unlock_irq(&mapping->tree_lock);
 
-		err = radix_tree_preload(
-				mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
-		if (err) {
-			if (pmd_downgrade)
-				put_locked_mapping_entry(mapping, index, entry);
-			return ERR_PTR(err);
-		}
-
 		/*
 		 * Besides huge zero pages the only other thing that gets
 		 * downgraded are empty entries which don't need to be
@@ -350,6 +342,13 @@ restart:
 			unmap_mapping_range(mapping,
 				(index << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
 
+		err = radix_tree_preload(
+				mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
+		if (err) {
+			if (pmd_downgrade)
+				put_locked_mapping_entry(mapping, index, entry);
+			return ERR_PTR(err);
+		}
 		spin_lock_irq(&mapping->tree_lock);
 
 		if (pmd_downgrade) {