author	| Linus Torvalds <torvalds@linux-foundation.org>	| 2013-09-04 08:36:35 -0700
committer	| Linus Torvalds <torvalds@linux-foundation.org>	| 2013-09-04 08:36:35 -0700
commit	| 5e0b3a4e88012d259e8b2c0f02f393c79686daf9 (patch)
tree	| 1c6d7be145a7cce77996049eb78877ed95e87a4f /include/linux/nodemask.h
parent	| 0d99b7087324978b09b59d8c7a0736214c4a42b1 (diff)
parent	| 10866e62e8a6907d9072f10f9a0561db0c0cf50b (diff)
Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler changes from Ingo Molnar:
"Various optimizations, cleanups and smaller fixes - no major changes
in scheduler behavior"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched/fair: Fix the sd_parent_degenerate() code
sched/fair: Rework and comment the group_imb code
sched/fair: Optimize find_busiest_queue()
sched/fair: Make group power more consistent
sched/fair: Remove duplicate load_per_task computations
sched/fair: Shrink sg_lb_stats and play memset games
sched: Clean-up struct sd_lb_stat
sched: Factor out code to should_we_balance()
sched: Remove one division operation in find_busiest_queue()
sched/cputime: Use this_cpu_add() in task_group_account_field()
cpumask: Fix cpumask leak in partition_sched_domains()
sched/x86: Optimize switch_mm() for multi-threaded workloads
generic-ipi: Kill unnecessary variable - csd_flags
numa: Mark __node_set() as __always_inline
sched/fair: Cleanup: remove duplicate variable declaration
sched/__wake_up_sync_key(): Fix nr_exclusive tasks which lead to WF_SYNC clearing
Diffstat (limited to 'include/linux/nodemask.h')
-rw-r--r--	| include/linux/nodemask.h	| 11
1 file changed, 10 insertions, 1 deletion
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 4e2cbfa640b7..58b9a02c38d2 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -98,8 +98,17 @@ typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
 
 extern nodemask_t _unused_nodemask_arg_;
 
+/*
+ * The inline keyword gives the compiler room to decide to inline, or
+ * not inline a function as it sees best. However, as these functions
+ * are called in both __init and non-__init functions, if they are not
+ * inlined we will end up with a section mis-match error (of the type of
+ * freeable items not being freed). So we must use __always_inline here
+ * to fix the problem. If other functions in the future also end up in
+ * this situation they will also need to be annotated as __always_inline
+ */
 #define node_set(node, dst) __node_set((node), &(dst))
-static inline void __node_set(int node, volatile nodemask_t *dstp)
+static __always_inline void __node_set(int node, volatile nodemask_t *dstp)
 {
 	set_bit(node, dstp->bits);
 }
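To make the rationale in the new comment concrete, here is a minimal, hypothetical sketch; the function names mark_boot_node()/mark_runtime_node(), the early_initcall() hook, and the nodemask variable are illustrative and not part of the patch. __node_set() can be reached both from __init code, whose .init.text section is freed after boot, and from ordinary kernel code. If the compiler chose not to inline it, the out-of-line copy could be referenced across the init/non-init boundary, which is the "freeable items not being freed" section mismatch the comment describes; __always_inline removes that possibility.

```c
/*
 * Hypothetical illustration (not part of the patch): __node_set() has
 * callers in both __init and regular code.  If the compiler decided not
 * to inline it, the single out-of-line copy could be referenced across
 * the init/non-init section boundary and modpost could warn about a
 * section mismatch; __always_inline forces an inlined copy per caller.
 */
#include <linux/init.h>
#include <linux/nodemask.h>

static nodemask_t tracked_nodes;	/* lives for the whole runtime */

/* Runs once during boot; the .init.text section is freed afterwards. */
static int __init mark_boot_node(void)
{
	node_set(0, tracked_nodes);	/* expands to __node_set(0, &tracked_nodes) */
	return 0;
}
early_initcall(mark_boot_node);

/* May run at any time after boot, from normal (non-__init) kernel code. */
void mark_runtime_node(int nid)
{
	node_set(nid, tracked_nodes);
}
```

As the added comment notes, the same annotation would be needed for any other small helper that ends up with callers on both sides of the __init boundary.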