| author | Petr Machata <petrm@mellanox.com> | 2020-06-27 01:45:25 +0300 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2020-06-29 17:08:28 -0700 |
| commit | aebe4426ccaa4838f36ea805cdf7d76503e65117 | |
| tree | 2086a44d7f134b47c8817e68fc67f1e1d1fbc9ba /net/sched/sch_red.c | |
| parent | 5e701e49b7b40166cc56f7b0db205355095cad6b | |
net: sched: Pass root lock to Qdisc_ops.enqueue
A following patch introduces qevents, points in the qdisc algorithm where a
packet can be processed by user-defined filters. Should this processing lead
to a situation where a new packet is to be enqueued on the same port, holding
the root lock would lead to deadlocks. To solve the issue, the qevent handler
needs to unlock and relock the root lock when necessary.

To that end, add the root lock argument to the qdisc op enqueue, and
propagate it throughout.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
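For context, a minimal sketch of what the changed operation and the classful-qdisc helper look like after this patch; the struct layout and the helper body are paraphrased from include/net/sch_generic.h and may differ in detail from the tree this commit applies to:

```c
/* Sketch: Qdisc_ops.enqueue now also receives the root lock of the qdisc tree. */
struct Qdisc_ops {
	/* ... other fields elided ... */
	int	(*enqueue)(struct sk_buff *skb, struct Qdisc *sch,
			   spinlock_t *root_lock,
			   struct sk_buff **to_free);
	/* ... */
};

/* Classful qdiscs hand packets to a child through this helper; after the
 * patch it simply forwards root_lock, which is what red_enqueue() below
 * does when it calls qdisc_enqueue(skb, child, root_lock, to_free).
 */
static inline int qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
				spinlock_t *root_lock,
				struct sk_buff **to_free)
{
	qdisc_calculate_pkt_len(skb, sch);
	return sch->enqueue(skb, sch, root_lock, to_free);
}
```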
Diffstat (limited to 'net/sched/sch_red.c')
-rw-r--r-- | net/sched/sch_red.c | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
index 555a1b9e467f..6ace7d757e8b 100644
--- a/net/sched/sch_red.c
+++ b/net/sched/sch_red.c
@@ -65,7 +65,7 @@ static int red_use_nodrop(struct red_sched_data *q)
 	return q->flags & TC_RED_NODROP;
 }
 
-static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch, spinlock_t *root_lock,
 		       struct sk_buff **to_free)
 {
 	struct red_sched_data *q = qdisc_priv(sch);
@@ -118,7 +118,7 @@ static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		break;
 	}
 
-	ret = qdisc_enqueue(skb, child, to_free);
+	ret = qdisc_enqueue(skb, child, root_lock, to_free);
 	if (likely(ret == NET_XMIT_SUCCESS)) {
 		qdisc_qstats_backlog_inc(sch, skb);
 		sch->q.qlen++;
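The hunk above is one leaf of that propagation. The payoff comes with the qevent work this series builds toward: a handler that runs user-configured filters can drop and re-take the root lock around the filter run, so an action that re-enters enqueue on the same port does not deadlock. The helper below is purely illustrative of that pattern; the name and exact arguments of the real qevent handler are introduced in a later patch and may differ:

```c
/* Illustrative only: the deadlock-avoidance pattern the commit message
 * describes.  Filters attached to a qevent may mirror or redirect the
 * packet, which can re-enter enqueue on the same port; that path must
 * not run under the root lock.
 */
static struct sk_buff *example_qevent_run(struct sk_buff *skb,
					  spinlock_t *root_lock,
					  struct sk_buff **to_free)
{
	if (root_lock)
		spin_unlock(root_lock);

	/* ... run the tc filters bound to this qevent; a redirect here may
	 * enqueue a new packet on the same device without self-deadlocking ...
	 */

	if (root_lock)
		spin_lock(root_lock);

	return skb;
}
```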