author     Jiri Pirko <jiri@nvidia.com>              2023-10-13 14:10:26 +0200
committer  David S. Miller <davem@davemloft.net>     2023-10-18 09:23:01 +0100
commit     b5f4e371336a62a48f6ae51abb8366e968a8f88f
tree       b45659ee7e31b03b10ff570eaa9a3b3300d11e35
parent     a380687200e0f7f0e00d745796fd8b8ea4bcb746
devlink: don't take instance lock for nested handle put
Lockdep reports the following issue:
WARNING: possible circular locking dependency detected
------------------------------------------------------
devlink/8191 is trying to acquire lock:
ffff88813f32c250 (&devlink->lock_key#14){+.+.}-{3:3}, at: devlink_rel_devlink_handle_put+0x11e/0x2d0
but task is already holding lock:
ffffffff8511eca8 (rtnl_mutex){+.+.}-{3:3}, at: unregister_netdev+0xe/0x20
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (rtnl_mutex){+.+.}-{3:3}:
lock_acquire+0x1c3/0x500
__mutex_lock+0x14c/0x1b20
register_netdevice_notifier_net+0x13/0x30
mlx5_lag_add_mdev+0x51c/0xa00 [mlx5_core]
mlx5_load+0x222/0xc70 [mlx5_core]
mlx5_init_one_devl_locked+0x4a0/0x1310 [mlx5_core]
mlx5_init_one+0x3b/0x60 [mlx5_core]
probe_one+0x786/0xd00 [mlx5_core]
local_pci_probe+0xd7/0x180
pci_device_probe+0x231/0x720
really_probe+0x1e4/0xb60
__driver_probe_device+0x261/0x470
driver_probe_device+0x49/0x130
__driver_attach+0x215/0x4c0
bus_for_each_dev+0xf0/0x170
bus_add_driver+0x21d/0x590
driver_register+0x133/0x460
vdpa_match_remove+0x89/0xc0 [vdpa]
do_one_initcall+0xc4/0x360
do_init_module+0x22d/0x760
load_module+0x51d7/0x6750
init_module_from_file+0xd2/0x130
idempotent_init_module+0x326/0x5a0
__x64_sys_finit_module+0xc1/0x130
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x46/0xb0
-> #2 (mlx5_intf_mutex){+.+.}-{3:3}:
lock_acquire+0x1c3/0x500
__mutex_lock+0x14c/0x1b20
mlx5_register_device+0x3e/0xd0 [mlx5_core]
mlx5_init_one_devl_locked+0x8fa/0x1310 [mlx5_core]
mlx5_devlink_reload_up+0x147/0x170 [mlx5_core]
devlink_reload+0x203/0x380
devlink_nl_cmd_reload+0xb84/0x10e0
genl_family_rcv_msg_doit+0x1cc/0x2a0
genl_rcv_msg+0x3c9/0x670
netlink_rcv_skb+0x12c/0x360
genl_rcv+0x24/0x40
netlink_unicast+0x435/0x6f0
netlink_sendmsg+0x7a0/0xc70
sock_sendmsg+0xc5/0x190
__sys_sendto+0x1c8/0x290
__x64_sys_sendto+0xdc/0x1b0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x46/0xb0
-> #1 (&dev->lock_key#8){+.+.}-{3:3}:
lock_acquire+0x1c3/0x500
__mutex_lock+0x14c/0x1b20
mlx5_init_one_devl_locked+0x45/0x1310 [mlx5_core]
mlx5_devlink_reload_up+0x147/0x170 [mlx5_core]
devlink_reload+0x203/0x380
devlink_nl_cmd_reload+0xb84/0x10e0
genl_family_rcv_msg_doit+0x1cc/0x2a0
genl_rcv_msg+0x3c9/0x670
netlink_rcv_skb+0x12c/0x360
genl_rcv+0x24/0x40
netlink_unicast+0x435/0x6f0
netlink_sendmsg+0x7a0/0xc70
sock_sendmsg+0xc5/0x190
__sys_sendto+0x1c8/0x290
__x64_sys_sendto+0xdc/0x1b0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x46/0xb0
-> #0 (&devlink->lock_key#14){+.+.}-{3:3}:
check_prev_add+0x1af/0x2300
__lock_acquire+0x31d7/0x4eb0
lock_acquire+0x1c3/0x500
__mutex_lock+0x14c/0x1b20
devlink_rel_devlink_handle_put+0x11e/0x2d0
devlink_nl_port_fill+0xddf/0x1b00
devlink_port_notify+0xb5/0x220
__devlink_port_type_set+0x151/0x510
devlink_port_netdevice_event+0x17c/0x220
notifier_call_chain+0x97/0x240
unregister_netdevice_many_notify+0x876/0x1790
unregister_netdevice_queue+0x274/0x350
unregister_netdev+0x18/0x20
mlx5e_vport_rep_unload+0xc5/0x1c0 [mlx5_core]
__esw_offloads_unload_rep+0xd8/0x130 [mlx5_core]
mlx5_esw_offloads_rep_unload+0x52/0x70 [mlx5_core]
mlx5_esw_offloads_unload_rep+0x85/0xc0 [mlx5_core]
mlx5_eswitch_unload_sf_vport+0x41/0x90 [mlx5_core]
mlx5_devlink_sf_port_del+0x120/0x280 [mlx5_core]
genl_family_rcv_msg_doit+0x1cc/0x2a0
genl_rcv_msg+0x3c9/0x670
netlink_rcv_skb+0x12c/0x360
genl_rcv+0x24/0x40
netlink_unicast+0x435/0x6f0
netlink_sendmsg+0x7a0/0xc70
sock_sendmsg+0xc5/0x190
__sys_sendto+0x1c8/0x290
__x64_sys_sendto+0xdc/0x1b0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x46/0xb0
other info that might help us debug this:
Chain exists of:
&devlink->lock_key#14 --> mlx5_intf_mutex --> rtnl_mutex
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(rtnl_mutex);
                               lock(mlx5_intf_mutex);
                               lock(rtnl_mutex);
  lock(&devlink->lock_key#14);
The problem is taking the devlink instance lock of a nested instance when
RTNL is already held.
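
For illustration only, the inversion reduces to the classic AB-BA pattern
sketched below as plain userspace C. pthread mutexes stand in for rtnl_mutex
and the nested devlink instance lock, the intermediate mlx5_intf_mutex link
in the chain is collapsed, and all names are hypothetical rather than kernel
code:

  /* Minimal sketch of the AB-BA inversion lockdep reports above.
   * mutex_a stands in for rtnl_mutex, mutex_b for the nested devlink
   * instance lock. Build with: gcc -pthread inversion.c
   */
  #include <pthread.h>

  static pthread_mutex_t mutex_a = PTHREAD_MUTEX_INITIALIZER; /* "rtnl_mutex" */
  static pthread_mutex_t mutex_b = PTHREAD_MUTEX_INITIALIZER; /* "devlink->lock" */

  static void *unregister_path(void *arg)
  {
          (void)arg;
          pthread_mutex_lock(&mutex_a);  /* holds RTNL ... */
          pthread_mutex_lock(&mutex_b);  /* ... then wants the instance lock */
          pthread_mutex_unlock(&mutex_b);
          pthread_mutex_unlock(&mutex_a);
          return NULL;
  }

  static void *reload_path(void *arg)
  {
          (void)arg;
          pthread_mutex_lock(&mutex_b);  /* holds the instance lock ... */
          pthread_mutex_lock(&mutex_a);  /* ... then wants RTNL: may deadlock */
          pthread_mutex_unlock(&mutex_a);
          pthread_mutex_unlock(&mutex_b);
          return NULL;
  }

  int main(void)
  {
          pthread_t t1, t2;

          pthread_create(&t1, NULL, unregister_path, NULL);
          pthread_create(&t2, NULL, reload_path, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
  }

Run under unlucky timing, the two threads block on each other exactly as in
the CPU0/CPU1 scenario above.
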
To fix this, don't take the devlink instance lock when putting the nested
handle. Instead, rely on the preparations done by the previous two patches,
which make it possible to access the device pointer and obtain the netns id
without the devlink instance lock held.
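
As a rough userspace analogue of that approach (a sketch only, not the
devlink implementation; the struct, field, and function names below are
hypothetical): the data needed to fill the nested handle is immutable for
the instance's lifetime, so holding a reference is enough to read it and
the per-instance lock never has to be taken on that path.

  /* Rough analogue of reading lock-free, lifetime-stable fields.
   * dev_name and netns_id stand in for the device pointer and netns id;
   * they never change after setup, so nested_handle_fill() deliberately
   * does not take inst->lock.
   */
  #include <stdatomic.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <pthread.h>

  struct instance {
          pthread_mutex_t lock;   /* protects mutable state; unused below */
          atomic_int refcount;    /* keeps the instance alive for readers */
          char dev_name[32];      /* immutable after setup */
          int netns_id;           /* immutable after setup */
  };

  static struct instance *instance_get(struct instance *inst)
  {
          atomic_fetch_add(&inst->refcount, 1);
          return inst;
  }

  static void instance_put(struct instance *inst)
  {
          if (atomic_fetch_sub(&inst->refcount, 1) == 1) {
                  pthread_mutex_destroy(&inst->lock);
                  free(inst);
          }
  }

  /* Fill the "nested handle" for a notification: only immutable fields
   * are read, so the instance lock is not needed here. */
  static void nested_handle_fill(struct instance *inst)
  {
          printf("nested handle: dev=%s netns=%d\n",
                 inst->dev_name, inst->netns_id);
  }

  int main(void)
  {
          struct instance *inst = calloc(1, sizeof(*inst));

          pthread_mutex_init(&inst->lock, NULL);
          atomic_init(&inst->refcount, 1);
          snprintf(inst->dev_name, sizeof(inst->dev_name), "example-dev");
          inst->netns_id = 0;

          nested_handle_fill(instance_get(inst)); /* lock-free read path */
          instance_put(inst);
          instance_put(inst);                     /* drop the setup reference */
          return 0;
  }
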
Fixes: c137743bce02 ("devlink: introduce object and nested devlink relationship infra")
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>