From 9b5e5d0fdc91b73bba8cf5e0fbe3521a953e4e4d Mon Sep 17 00:00:00 2001
From: Lee Schermerhorn
Date: Mon, 14 Dec 2009 17:58:32 -0800
Subject: hugetlb: use only nodes with memory for huge pages

Register per node hstate sysfs attributes only for nodes with memory.
Global replacement of "all online nodes" with "all nodes with memory" in
mm/hugetlb.c.  Suggested by David Rientjes.

A subsequent patch will handle adding/removing of per node hstate sysfs
attributes when nodes transition to/from memoryless state via memory
hotplug.

NOTE: this patch has not been tested with memoryless nodes.

Signed-off-by: Lee Schermerhorn
Reviewed-by: Andi Kleen
Cc: KAMEZAWA Hiroyuki
Cc: Mel Gorman
Cc: Randy Dunlap
Cc: Nishanth Aravamudan
Acked-by: David Rientjes
Cc: Adam Litke
Cc: Andy Whitcroft
Cc: Eric Whitney
Cc: Christoph Lameter
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 Documentation/vm/hugetlbpage.txt | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/hugetlbpage.txt b/Documentation/vm/hugetlbpage.txt
index 01c3108d2e31..6a8e4667ab38 100644
--- a/Documentation/vm/hugetlbpage.txt
+++ b/Documentation/vm/hugetlbpage.txt
@@ -90,11 +90,11 @@ huge page pool to 20, allocating or freeing huge pages, as required.
 On a NUMA platform, the kernel will attempt to distribute the huge page pool
 over all the set of allowed nodes specified by the NUMA memory policy of the
 task that modifies nr_hugepages. The default for the allowed nodes--when the
-task has default memory policy--is all on-line nodes. Allowed nodes with
-insufficient available, contiguous memory for a huge page will be silently
-skipped when allocating persistent huge pages. See the discussion below of
-the interaction of task memory policy, cpusets and per node attributes with
-the allocation and freeing of persistent huge pages.
+task has default memory policy--is all on-line nodes with memory. Allowed
+nodes with insufficient available, contiguous memory for a huge page will be
+silently skipped when allocating persistent huge pages. See the discussion
+below of the interaction of task memory policy, cpusets and per node attributes
+with the allocation and freeing of persistent huge pages.
 
 The success or failure of huge page allocation depends on the amount of
 physically contiguous memory that is present in system at the time of the
@@ -226,7 +226,7 @@ resulting effect on persistent huge page allocation is as follows:
    without first moving to a cpuset that contains all of the desired nodes.
 
 5) Boot-time huge page allocation attempts to distribute the requested number
-   of huge pages over all on-lines nodes.
+   of huge pages over all on-lines nodes with memory.
 
 Per Node Hugepages Attributes
--
cgit v1.2.3-58-ga151
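
For illustration, the per node hstate sysfs attributes referred to above live
under /sys/devices/system/node/.  A minimal sketch of how the global and per
node interfaces relate, assuming a 2 MB default huge page size and that node0
is a node with memory (the node number and values are examples only):

    # Global request: the pool is distributed over all on-line nodes with
    # memory when the requesting task has default memory policy.
    echo 20 > /proc/sys/vm/nr_hugepages

    # Per node hstate attribute: present only for nodes with memory once the
    # attributes are registered; read or adjust that node's share directly.
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages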