author	Richard Kennedy <richard@rsk.demon.co.uk>	2009-06-30 11:41:35 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-06-30 18:56:01 -0700
commit	d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 (patch)
tree	99afa4a0e30b18724eece3a819867b290728dafe /security
parent	df279ca8966c3de83105428e3391ab17690802a9 (diff)
mm: prevent balance_dirty_pages() from doing too much work
balance_dirty_pages can overreact and move all of the dirty pages to writeback unnecessarily.

balance_dirty_pages makes its decision to throttle based on the number of dirty plus writeback pages that are over the calculated limit, so it will continue to move pages even when there are plenty of pages in writeback and less than the threshold still dirty. This allows it to overshoot its limits and move all the dirty pages to writeback while waiting for the drives to catch up and empty the writeback list.

A simple fio test easily demonstrates this problem:

fio --name=f1 --directory=/disk1 --size=2G --rw=write --name=f2 --directory=/disk2 --size=1G --rw=write --startdelay=10

This is the simplest fix I could find, but I'm not entirely sure that it alone will be enough for all cases. But it certainly is an improvement on my desktop machine writing to 2 disks. Do we need something more for machines with large arrays where bdi_threshold * number_of_drives is greater than the dirty_ratio?

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
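The overshoot is easy to picture with a small user-space sketch. The loop below models the old decision (keep writing out while dirty + writeback is over the limit) with invented numbers; it is only an illustration of the behaviour described above, not the kernel code or the patch itself, and every constant and variable name here is made up for demonstration.

	#include <stdio.h>

	/*
	 * Illustrative user-space model of the overshoot described above --
	 * not the kernel patch.  DIRTY_THRESH, WRITE_CHUNK and the starting
	 * page counts are invented for demonstration.
	 */
	#define DIRTY_THRESH	1000	/* pretend global dirty limit, in pages */
	#define WRITE_CHUNK	 100	/* pages queued for writeback per pass  */

	int main(void)
	{
		long nr_dirty = 1500;	/* pages still dirty                  */
		long nr_writeback = 0;	/* pages already queued for writeback */

		/*
		 * Throttle while dirty + writeback exceeds the limit.  Because
		 * the drives complete nothing during the loop, every pass just
		 * shifts more dirty pages into writeback, until none are left
		 * dirty at all.
		 */
		while (nr_dirty + nr_writeback > DIRTY_THRESH && nr_dirty > 0) {
			long moved = nr_dirty < WRITE_CHUNK ? nr_dirty : WRITE_CHUNK;

			nr_dirty -= moved;
			nr_writeback += moved;
		}

		printf("%ld pages left dirty, %ld pushed to writeback\n",
		       nr_dirty, nr_writeback);
		return 0;
	}

Compiled with any C compiler, this prints "0 pages left dirty, 1500 pushed to writeback", even though only about a third of the pages needed to move to get back under the limit; stopping once enough writeback is already in flight is exactly the kind of limit the fix aims for.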
Diffstat (limited to 'security')
0 files changed, 0 insertions, 0 deletions