| author | Ganesan Rajagopal <rganesan@arista.com> | 2022-05-13 16:48:57 -0700 |
| --- | --- | --- |
| committer | Andrew Morton <akpm@linux-foundation.org> | 2022-05-13 16:48:57 -0700 |
| commit | 8e20d4b332660a32e842e20c34cfc3b3456bc6dc (patch) | |
| tree | 65fefca6acbf3bc7c2356ec37120027bb3a4c463 /mm | |
| parent | 78f39084b41d287aedb2ea55f2c1895cfa11d61a (diff) | |
mm/memcontrol: export memcg->watermark via sysfs for v2 memcg
We run a lot of automated tests when building our software and run into
OOM scenarios when the tests run unbounded. v1 memcg exports
memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
metric to heuristically limit the number of tests that can run in parallel
based on per-test historical data.
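As an illustration only, here is a sketch of such a heuristic gate under cgroup v1; the mount point, the cgroup name ("test42"), and the 8 GiB budget are assumptions for the example, not part of this patch:

```c
/*
 * Illustrative gate: compare a finished test's peak usage (cgroup v1
 * memory.max_usage_in_bytes) against an overall budget to estimate how
 * many such tests can run in parallel. Paths and numbers are assumptions.
 */
#include <stdio.h>

/* Hypothetical helper: read the v1 peak-usage counter for one test cgroup. */
static long long v1_peak_bytes(const char *cgroup)
{
	char path[256];
	long long peak = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/fs/cgroup/memory/%s/memory.max_usage_in_bytes", cgroup);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%lld", &peak) != 1)
		peak = -1;
	fclose(f);
	return peak;
}

int main(void)
{
	const long long total_budget = 8LL << 30;	/* assumed 8 GiB for the whole run */
	long long peak = v1_peak_bytes("test42");

	if (peak > 0)
		printf("test42 peaked at %lld bytes; ~%lld such tests fit in parallel\n",
		       peak, total_budget / peak);
	return 0;
}
```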
This metric is currently not exported for v2 memcg and there is no other
easy way of getting this information. The getrusage() syscall returns
"ru_maxrss", which can be used as an approximation, but that is the max RSS
of a single child process across all children rather than the aggregated
max for all child processes. The only workaround is to periodically poll
"memory.current", but that is not practical for short-lived one-off cgroups.
Hence, expose memcg->watermark as "memory.peak" for v2 memcg.
Link: https://lkml.kernel.org/r/20220507050916.GA13577@us192.sjc.aristanetworks.com
Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>
Acked-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/memcontrol.c | 13 |
1 file changed, 13 insertions, 0 deletions
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e1b5823ac060..ef76df7c6d12 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6103,6 +6103,14 @@ static u64 memory_current_read(struct cgroup_subsys_state *css,
 	return (u64)page_counter_read(&memcg->memory) * PAGE_SIZE;
 }
 
+static u64 memory_peak_read(struct cgroup_subsys_state *css,
+			    struct cftype *cft)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+	return (u64)memcg->memory.watermark * PAGE_SIZE;
+}
+
 static int memory_min_show(struct seq_file *m, void *v)
 {
 	return seq_puts_memcg_tunable(m,
@@ -6407,6 +6415,11 @@ static struct cftype memory_files[] = {
 		.read_u64 = memory_current_read,
 	},
 	{
+		.name = "peak",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_u64 = memory_peak_read,
+	},
+	{
 		.name = "min",
 		.flags = CFTYPE_NOT_ON_ROOT,
 		.seq_show = memory_min_show,
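With this patch applied, the aggregated peak can be read once after a test cgroup's workload exits instead of polling "memory.current". A minimal userspace sketch, assuming a hypothetical v2 cgroup at /sys/fs/cgroup/tests/test42 (the path is illustrative, not part of the patch):

```c
/* Read the new memory.peak file of a cgroup v2 group; the group path
 * ("tests/test42") is an illustrative assumption. */
#include <stdio.h>

int main(void)
{
	unsigned long long peak;
	FILE *f = fopen("/sys/fs/cgroup/tests/test42/memory.peak", "r");

	if (!f) {
		perror("memory.peak");
		return 1;
	}
	if (fscanf(f, "%llu", &peak) == 1)
		printf("peak memory usage: %llu bytes\n", peak);
	fclose(f);
	return 0;
}
```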