author		Linus Torvalds <torvalds@linux-foundation.org>	2024-07-16 19:28:34 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2024-07-16 19:28:34 -0700
commit		51835949dda3783d4639cfa74ce13a3c9829de00
tree		2b593de5eba6ecc73f7c58fc65fdaffae45c7323 /drivers/net/wireless/intel/iwlwifi
parent		0434dbe32053d07d658165be681505120c6b1abc
parent		77ae5e5b00720372af2860efdc4bc652ac682696
Merge tag 'net-next-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Not much excitement - a handful of large patchsets (devmem among them)
did not make it in time.
Core & protocols:
- Use local_lock in addition to local_bh_disable() to protect per-CPU
resources in networking, a step closer for local_bh_disable() not
to act as a big lock on PREEMPT_RT
- Use flex array for netdevice priv area, ensure its cache alignment
- Add a sysctl knob to allow users to specify a default rto_min at
socket init time. Bit of a big hammer but multiple companies were
independently carrying such a patch downstream so clearly it's useful
- Support scheduling transmission of packets based on CLOCK_TAI (see
the SO_TXTIME sketch after this list)
- Un-pin TCP TIMEWAIT timer to avoid it firing on CPUs later cordoned
off using cpusets
- Support multiple L2TPv3 UDP tunnels using the same 5-tuple address
- Allow configuration of the multipath hash seed, both to allow
synchronizing hashing of two routers and to prevent accidental
partial sync
- Improve TCP compliance with RFC 9293 for simultaneous connect()
- Support sending NAT keepalives in IPsec ESP in UDP states.
Userspace IKE daemon had to do this before, but the kernel can
better keep track of it
- Support sending supervision HSR frames with MAC addresses stored in
ProxyNodeTable when RedBox (i.e. HSR-SAN) is enabled
- Introduce IPPROTO_SMC for selecting SMC when a socket is created
(see the example after this list)
- Allow UDP GSO transmit from devices with no checksum offload
- openvswitch: add packet sampling via psample, separating the
sampled traffic from "upcall" packets sent to user space for
forwarding
- nf_tables: shrink memory consumption for transaction objects
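For the CLOCK_TAI item above, a minimal user-space sketch of the
SO_TXTIME socket option this builds on (the sock_txtime uAPI itself
predates this cycle; the example assumes a TAI-aware qdisc such as
ETF or FQ is installed on the egress device):

    #include <linux/net_tstamp.h>   /* struct sock_txtime, SOF_TXTIME_* */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            struct sock_txtime cfg = {
                    .clockid = CLOCK_TAI,                /* schedule against TAI */
                    .flags   = SOF_TXTIME_REPORT_ERRORS, /* report via error queue */
            };

            if (setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg)) < 0)
                    perror("SO_TXTIME");
            /* per-packet departure times then go out as SCM_TXTIME cmsgs */
            return 0;
    }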
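And for the IPPROTO_SMC item: an SMC socket can now be requested
directly at socket creation instead of via the AF_SMC family. A
minimal sketch (the fallback define mirrors the uapi value added this
cycle; treat it as an assumption if your headers are older):

    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef IPPROTO_SMC
    #define IPPROTO_SMC 256    /* value from this cycle's uapi update */
    #endif

    int main(void)
    {
            /* Fails on kernels built without CONFIG_SMC. */
            int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SMC);

            if (fd < 0)
                    perror("socket(IPPROTO_SMC)");
            return 0;
    }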
Things we sprinkled into general kernel code:
- Power Sequencing subsystem (used by Qualcomm Bluetooth driver for
QCA6390) [ Already merged separately - Linus ]
- Add IRQ information in sysfs for auxiliary bus
- Introduce guard definition for local_lock (see the sketch after
this list)
- Add aligned flavor of __cacheline_group_{begin, end}() markings for
grouping fields in structures
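The local_lock guard mentioned above plugs into the kernel's
scope-based cleanup machinery; a minimal sketch of what usage looks
like (the per-CPU structure here is hypothetical, not from the
merged code):

    #include <linux/cleanup.h>
    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    /* Hypothetical per-CPU scratch area protected by a local_lock. */
    struct scratch {
            local_lock_t lock;
            int counter;
    };

    static DEFINE_PER_CPU(struct scratch, scratch) = {
            .lock = INIT_LOCAL_LOCK(lock),
    };

    static void scratch_inc(void)
    {
            /* Taken on the local CPU, dropped automatically at scope exit. */
            guard(local_lock)(&scratch.lock);
            this_cpu_inc(scratch.counter);
    }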
BPF:
- Notify user space (via epoll) when a struct_ops object is getting
detached/unregistered
- Add new kfuncs for a generic, open-coded bits iterator (sketch
after this list)
- Enable BPF programs to declare arrays of kptr, bpf_rb_root, and
bpf_list_head
- Support resilient split BTF which cuts down on duplication and
makes BTF as compact as possible WRT BTF from modules
- Add support for dumping kfunc prototypes from BTF which enables
both detecting as well as dumping compilable prototypes for kfuncs
- riscv64 BPF JIT improvements in particular to add 12-argument
support for BPF trampolines and to utilize bpf_prog_pack for the
latter
- Add the capability to offload the netfilter flowtable in XDP layer
through kfuncs
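For the bits-iterator item, a rough sketch of open-coding the new
kfuncs from a BPF program (kfunc names bpf_iter_bits_new/_next/_destroy
as merged; the exact signatures and the attach point here are an
approximation, not taken from the merged selftests):

    // SPDX-License-Identifier: GPL-2.0
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    extern int bpf_iter_bits_new(struct bpf_iter_bits *it,
                                 const u64 *unsafe_ptr__ign,
                                 u32 nr_words) __weak __ksym;
    extern int *bpf_iter_bits_next(struct bpf_iter_bits *it) __weak __ksym;
    extern void bpf_iter_bits_destroy(struct bpf_iter_bits *it) __weak __ksym;

    SEC("tp_btf/task_newtask")
    int count_set_bits(void *ctx)
    {
            u64 word = 0xdeadbeefULL;
            struct bpf_iter_bits it;
            int *bit, n = 0;

            bpf_iter_bits_new(&it, &word, 1);  /* iterate one u64 word */
            while ((bit = bpf_iter_bits_next(&it)))
                    n++;
            bpf_iter_bits_destroy(&it);
            bpf_printk("%d bits set", n);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";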
Driver API:
- Allow users to configure IRQ thresholds between which automatic IRQ
moderation can choose
- Expand Power Sourcing (PoE) status with power, class and failure
reason. Support setting power limits
- Track additional RSS contexts in the core, make sure configuration
changes don't break them
- Support IPsec crypto offload for IPv6 ESP and IPv4 UDP-encapsulated
ESP data paths
- Support updating firmware on SFP modules
Tests and tooling:
- mptcp: use net/lib.sh to manage netns
- TCP-AO and TCP-MD5: replace debug prints used by tests with
tracepoints
- openvswitch: make test self-contained (don't depend on OvS CLI
tools)
Drivers:
- Ethernet high-speed NICs:
- Broadcom (bnxt):
- increase the max total outstanding PTP TX packets to 4
- add timestamping statistics support
- implement netdev_queue_mgmt_ops
- support new RSS context API
- Intel (100G, ice, idpf):
- implement FEC statistics and dumping signal quality indicators
- support E825C products (with 56Gbps PHYs)
- nVidia/Mellanox:
- support HW-GRO
- mlx4/mlx5: support per-queue statistics via netlink
- obey the max number of EQs setting in sub-functions
- AMD/Solarflare:
- support new RSS context API
- AMD/Pensando:
- ionic: rework fix for doorbell miss to lower overhead and
skip it on new HW
- Wangxun:
- txgbe: support Flow Director perfect filters
- Ethernet NICs consumer, embedded and virtual:
- Add driver for Tehuti Networks TN40xx chips
- Add driver for Meta's internal NIC chips
- Add driver for Ethernet MAC on Airoha EN7581 SoCs
- Add driver for Renesas Ethernet-TSN devices
- Google cloud vNIC:
- flow steering support
- Microsoft vNIC:
- support page sizes other than 4KB on ARM64
- vmware vNIC:
- support latency measurement (update to version 9)
- VirtIO net:
- support for Byte Queue Limits
- support configuring thresholds for automatic IRQ moderation
- support for AF_XDP Rx zero-copy
- Synopsys (stmmac):
- support for STM32MP13 SoC
- let platforms select the right PCS implementation
- TI:
- icssg-prueth: add multicast filtering support
- icssg-prueth: enable PTP timestamping and PPS
- Renesas:
- ravb: improve Rx performance 30-400% by using page pool,
threaded NAPI and timer-based IRQ coalescing
- ravb: add MII support for R-Car V4M
- Cadence (macb):
- macb: add ARP support to Wake-On-LAN
- Cortina:
- use phylib for RX and TX pause configuration
- Ethernet switches:
- nVidia/Mellanox:
- support configuration of multipath hash seed
- report more accurate max MTU
- use page_pool to improve Rx performance
- MediaTek:
- mt7530: add support for bridge port isolation
- Qualcomm:
- qca8k: add support for bridge port isolation
- Microchip:
- lan9371/2: add 100BaseTX PHY support
- NXP:
- vsc73xx: implement VLAN operations
- Ethernet PHYs:
- aquantia: enable support for aqr115c
- aquantia: add support for PHY LEDs
- realtek: add support for rtl8224 2.5Gbps PHY
- xpcs: add memory-mapped device support
- add BroadR-Reach link mode and support in Broadcom's PHY driver
- CAN:
- add document for ISO 15765-2 protocol support
- mcp251xfd: workaround for erratum DS80000789E, use timestamps to
catch when device returns incorrect FIFO status
- WiFi:
- mac80211/cfg80211:
- parse Transmit Power Envelope (TPE) data in mac80211 instead
of in drivers
- improvements for 6 GHz regulatory flexibility
- multi-link improvements
- support multiple radios per wiphy
- remove DEAUTH_NEED_MGD_TX_PREP flag
- Intel (iwlwifi):
- bump FW API to 91 for BZ/SC devices
- report 64-bit radiotap timestamp
- enable P2P low latency by default
- handle Transmit Power Envelope (TPE) advertised by AP
- remove support for older FW for new devices
- fast resume (keeping the device configured)
- mvm: re-enable Multi-Link Operation (MLO)
- aggregation (A-MSDU) optimizations
- MediaTek (mt76):
- mt7925 Multi-Link Operation (MLO) support
- Qualcomm (ath10k):
- LED support for various chipsets
- Qualcomm (ath12k):
- remove unsupported Tx monitor handling
- support channel 2 in 6 GHz band
- support Spatial Multiplexing Power Save (SMPS) in 6 GHz band
- support multiple BSSID (MBSSID) and Enhanced Multi-BSSID
Advertisements (EMA)
- support dynamic VLAN
- add panic handler for resetting the firmware state
- DebugFS support for datapath statistics
- WCN7850: support for Wake on WLAN
- Microchip (wilc1000):
- read MAC address during probe to make it visible to user space
- suspend/resume improvements
- TI (wl18xx):
- support newer firmware versions
- RealTek (rtw89):
- preparation for RTL8852BE-VT support
- Wake on WLAN support for WiFi 6 chips
- 36-bit PCI DMA support
- RealTek (rtlwifi):
- RTL8192DU support
- Broadcom (brcmfmac):
- Management Frame Protection support (to enable WPA3)
- Bluetooth:
- qualcomm: use the power sequencer for QCA6390
- btusb: mediatek: add ISO data transmission functions
- hci_bcm4377: add BCM4388 support
- btintel: add support for BlazarU core
- btintel: add support for Whale Peak2
- btnxpuart: add support for AW693 A1 chipset
- btnxpuart: add support for IW615 chipset
- btusb: add Realtek RTL8852BE support ID 0x13d3:0x3591"
* tag 'net-next-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1589 commits)
eth: fbnic: Fix spelling mistake "tiggerring" -> "triggering"
tcp: Replace strncpy() with strscpy()
wifi: ath12k: fix build vs old compiler
tcp: Don't access uninit tcp_rsk(req)->ao_keyid in tcp_create_openreq_child().
eth: fbnic: Write the TCAM tables used for RSS control and Rx to host
eth: fbnic: Add L2 address programming
eth: fbnic: Add basic Rx handling
eth: fbnic: Add basic Tx handling
eth: fbnic: Add link detection
eth: fbnic: Add initial messaging to notify FW of our presence
eth: fbnic: Implement Rx queue alloc/start/stop/free
eth: fbnic: Implement Tx queue alloc/start/stop/free
eth: fbnic: Allocate a netdevice and napi vectors with queues
eth: fbnic: Add FW communication mechanism
eth: fbnic: Add message parsing for FW messages
eth: fbnic: Add register init to set PCIe/Ethernet device config
eth: fbnic: Allocate core device specific structures and devlink interface
eth: fbnic: Add scaffolding for Meta's NIC driver
PCI: Add Meta Platforms vendor ID
net/sched: cls_flower: propagate tca[TCA_OPTIONS] to NL_REQ_ATTR_CHECK
...
Diffstat (limited to 'drivers/net/wireless/intel/iwlwifi')
104 files changed, 5374 insertions, 4829 deletions
diff --git a/drivers/net/wireless/intel/iwlwifi/Makefile b/drivers/net/wireless/intel/iwlwifi/Makefile index 8bb94a4c12cd..64c123314245 100644 --- a/drivers/net/wireless/intel/iwlwifi/Makefile +++ b/drivers/net/wireless/intel/iwlwifi/Makefile @@ -4,7 +4,7 @@ obj-$(CONFIG_IWLWIFI) += iwlwifi.o iwlwifi-objs += iwl-io.o iwlwifi-objs += iwl-drv.o iwlwifi-objs += iwl-debug.o -iwlwifi-objs += iwl-eeprom-read.o iwl-eeprom-parse.o +iwlwifi-objs += iwl-nvm-utils.o iwlwifi-objs += iwl-phy-db.o iwl-nvm-parse.o iwlwifi-objs += pcie/drv.o pcie/rx.o pcie/tx.o pcie/trans.o iwlwifi-objs += pcie/ctxt-info.o pcie/ctxt-info-gen3.o @@ -14,7 +14,6 @@ iwlwifi-$(CONFIG_IWLMVM) += cfg/7000.o cfg/8000.o cfg/9000.o cfg/22000.o iwlwifi-$(CONFIG_IWLMVM) += cfg/ax210.o cfg/bz.o cfg/sc.o iwlwifi-objs += iwl-dbg-tlv.o iwlwifi-objs += iwl-trans.o -iwlwifi-objs += queue/tx.o iwlwifi-objs += fw/img.o fw/notif-wait.o fw/rs.o iwlwifi-objs += fw/dbg.o fw/pnvm.o fw/dump.o diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c index d594694206b3..2e2fcb3807ef 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* * Copyright (C) 2015-2017 Intel Deutschland GmbH - * Copyright (C) 2018-2023 Intel Corporation + * Copyright (C) 2018-2024 Intel Corporation */ #include <linux/module.h> #include <linux/stringify.h> @@ -13,7 +13,7 @@ #define IWL_22000_UCODE_API_MAX 77 /* Lowest firmware API version supported */ -#define IWL_22000_UCODE_API_MIN 50 +#define IWL_22000_UCODE_API_MIN 77 /* NVM versions */ #define IWL_22000_NVM_VERSION 0x0a1d diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c b/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c index 25952d0bea99..975e8aed1526 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c @@ -13,7 +13,7 @@ #define IWL_AX210_UCODE_API_MAX 89 /* Lowest firmware API version supported */ -#define IWL_AX210_UCODE_API_MIN 59 +#define IWL_AX210_UCODE_API_MIN 77 /* NVM versions */ #define IWL_AX210_NVM_VERSION 0x0a1d diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c index bc98b87cf2a1..3b6b8b410be5 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c @@ -10,10 +10,10 @@ #include "fw/api/txq.h" /* Highest firmware API version supported */ -#define IWL_BZ_UCODE_API_MAX 90 +#define IWL_BZ_UCODE_API_MAX 92 /* Lowest firmware API version supported */ -#define IWL_BZ_UCODE_API_MIN 80 +#define IWL_BZ_UCODE_API_MIN 90 /* NVM versions */ #define IWL_BZ_NVM_VERSION 0x0a1d diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/sc.c b/drivers/net/wireless/intel/iwlwifi/cfg/sc.c index 9b79279fd76c..4ccb0b7bdc20 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/sc.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/sc.c @@ -10,10 +10,10 @@ #include "fw/api/txq.h" /* Highest firmware API version supported */ -#define IWL_SC_UCODE_API_MAX 90 +#define IWL_SC_UCODE_API_MAX 92 /* Lowest firmware API version supported */ -#define IWL_SC_UCODE_API_MIN 82 +#define IWL_SC_UCODE_API_MIN 90 /* NVM versions */ #define IWL_SC_NVM_VERSION 0x0a1d diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/Makefile b/drivers/net/wireless/intel/iwlwifi/dvm/Makefile index 6109d64006db..abcf8aeb010d 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/Makefile +++ 
b/drivers/net/wireless/intel/iwlwifi/dvm/Makefile @@ -2,7 +2,7 @@ # DVM obj-$(CONFIG_IWLDVM) += iwldvm.o iwldvm-objs += main.o rs.o mac80211.o ucode.o tx.o -iwldvm-objs += lib.o calib.o tt.o sta.o rx.o +iwldvm-objs += lib.o calib.o tt.o sta.o rx.o eeprom.o iwldvm-objs += power.o iwldvm-objs += scan.o diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/agn.h b/drivers/net/wireless/intel/iwlwifi/dvm/agn.h index fefaa414272b..a13add556a7b 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/agn.h +++ b/drivers/net/wireless/intel/iwlwifi/dvm/agn.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2005-2014, 2021 Intel Corporation + * Copyright (C) 2005-2014, 2021, 2024 Intel Corporation */ #ifndef __iwl_agn_h__ #define __iwl_agn_h__ @@ -385,6 +385,25 @@ static inline void iwl_dvm_set_pmi(struct iwl_priv *priv, bool state) iwl_trans_set_pmi(priv->trans, state); } +/** + * iwl_parse_eeprom_data - parse EEPROM data and return values + * + * @trans: ransport we're parsing for, for debug only + * @cfg: device configuration for parsing and overrides + * @eeprom: the EEPROM data + * @eeprom_size: length of the EEPROM data + * + * This function parses all EEPROM values we need and then + * returns a (newly allocated) struct containing all the + * relevant values for driver use. The struct must be freed + * later with iwl_free_nvm_data(). + */ +struct iwl_nvm_data * +iwl_parse_eeprom_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, + const u8 *eeprom, size_t eeprom_size); + +int iwl_read_eeprom(struct iwl_trans *trans, u8 **eeprom, size_t *eeprom_size); + #ifdef CONFIG_IWLWIFI_DEBUGFS void iwl_dbgfs_register(struct iwl_priv *priv, struct dentry *dbgfs_dir); #else diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/commands.h b/drivers/net/wireless/intel/iwlwifi/dvm/commands.h index 04864d3fda63..3f49c0bccb28 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/commands.h +++ b/drivers/net/wireless/intel/iwlwifi/dvm/commands.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2005-2014, 2023 Intel Corporation + * Copyright (C) 2005-2014, 2023-2024 Intel Corporation */ /* * Please use this file (commands.h) only for uCode API definitions. 
@@ -177,7 +177,7 @@ enum { * *****************************************************************************/ -/** +/* * iwlagn rate_n_flags bit fields * * rate_n_flags format is used in following iwlagn commands: @@ -251,7 +251,7 @@ enum { #define RATE_MCS_SGI_POS 13 #define RATE_MCS_SGI_MSK 0x2000 -/** +/* * rate_n_flags Tx antenna masks * bit14:16 */ @@ -2767,7 +2767,7 @@ struct iwl_missed_beacon_notif { * *****************************************************************************/ -/** +/* * SENSITIVITY_CMD = 0xa8 (command, has simple generic response) * * This command sets up the Rx signal detector for a sensitivity level that diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/dev.h b/drivers/net/wireless/intel/iwlwifi/dvm/dev.h index 25283e4b849f..4ac8b862ad41 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/dev.h +++ b/drivers/net/wireless/intel/iwlwifi/dvm/dev.h @@ -19,7 +19,7 @@ #include <linux/mutex.h> #include "fw/img.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "iwl-csr.h" #include "iwl-debug.h" #include "iwl-agn-hw.h" diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/devices.c b/drivers/net/wireless/intel/iwlwifi/dvm/devices.c index 39e40901fa46..48a8349680fc 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/devices.c +++ b/drivers/net/wireless/intel/iwlwifi/dvm/devices.c @@ -12,7 +12,7 @@ */ #include "iwl-io.h" #include "iwl-prph.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "agn.h" #include "dev.h" diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c b/drivers/net/wireless/intel/iwlwifi/dvm/eeprom.c index 2b290fab1ef2..931aa3f5798d 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c +++ b/drivers/net/wireless/intel/iwlwifi/dvm/eeprom.c @@ -1,16 +1,18 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2005-2014, 2018-2021, 2023 Intel Corporation - * Copyright (C) 2015 Intel Mobile Communications GmbH + * Copyright (C) 2005-2014, 2018-2019, 2021, 2024 Intel Corporation */ #include <linux/types.h> #include <linux/slab.h> #include <linux/export.h> + #include "iwl-drv.h" -#include "iwl-modparams.h" -#include "iwl-eeprom-parse.h" +#include "iwl-debug.h" +#include "iwl-io.h" +#include "iwl-prph.h" +#include "iwl-csr.h" +#include "agn.h" -#if IS_ENABLED(CONFIG_IWLDVM) /* EEPROM offset definitions */ /* indirect access definitions */ @@ -79,7 +81,6 @@ enum eeprom_sku_bits { #define EEPROM_RF_CFG_TX_ANT_MSK(x) ((x >> 8) & 0xF) /* bits 8-11 */ #define EEPROM_RF_CFG_RX_ANT_MSK(x) ((x >> 12) & 0xF) /* bits 12-15 */ - /* * EEPROM bands * These are the channel numbers from each band in the order @@ -257,7 +258,6 @@ struct iwl_eeprom_channel { s8 max_power_avg; } __packed; - enum iwl_eeprom_enhanced_txpwr_flags { IWL_EEPROM_ENH_TXP_FL_VALID = BIT(0), IWL_EEPROM_ENH_TXP_FL_BAND_52G = BIT(1), @@ -648,114 +648,385 @@ static int iwl_init_channel_map(struct device *dev, const struct iwl_cfg *cfg, return n_channels; } -#endif +/* + * EEPROM access time values: + * + * Driver initiates EEPROM read by writing byte address << 1 to CSR_EEPROM_REG. + * Driver then polls CSR_EEPROM_REG for CSR_EEPROM_REG_READ_VALID_MSK (0x1). + * When polling, wait 10 uSec between polling loops, up to a maximum 5000 uSec. + * Driver reads 16-bit value from bits 31-16 of CSR_EEPROM_REG. 
+ */ +#define IWL_EEPROM_ACCESS_TIMEOUT 5000 /* uSec */ -int iwl_init_sband_channels(struct iwl_nvm_data *data, - struct ieee80211_supported_band *sband, - int n_channels, enum nl80211_band band) +/* + * The device's EEPROM semaphore prevents conflicts between driver and uCode + * when accessing the EEPROM; each access is a series of pulses to/from the + * EEPROM chip, not a single event, so even reads could conflict if they + * weren't arbitrated by the semaphore. + */ +#define IWL_EEPROM_SEM_TIMEOUT 10 /* microseconds */ +#define IWL_EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */ + + +static int iwl_eeprom_acquire_semaphore(struct iwl_trans *trans) { - struct ieee80211_channel *chan = &data->channels[0]; - int n = 0, idx = 0; + u16 count; + int ret; + + for (count = 0; count < IWL_EEPROM_SEM_RETRY_LIMIT; count++) { + /* Request semaphore */ + iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); + + /* See if we got it */ + ret = iwl_poll_bit(trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, + IWL_EEPROM_SEM_TIMEOUT); + if (ret >= 0) { + IWL_DEBUG_EEPROM(trans->dev, + "Acquired semaphore after %d tries.\n", + count+1); + return ret; + } + } - while (idx < n_channels && chan->band != band) - chan = &data->channels[++idx]; + return ret; +} - sband->channels = &data->channels[idx]; +static void iwl_eeprom_release_semaphore(struct iwl_trans *trans) +{ + iwl_clear_bit(trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); +} - while (idx < n_channels && chan->band == band) { - chan = &data->channels[++idx]; - n++; - } +static int iwl_eeprom_verify_signature(struct iwl_trans *trans, bool nvm_is_otp) +{ + u32 gp = iwl_read32(trans, CSR_EEPROM_GP) & CSR_EEPROM_GP_VALID_MSK; - sband->n_channels = n; + IWL_DEBUG_EEPROM(trans->dev, "EEPROM signature=0x%08x\n", gp); - return n; + switch (gp) { + case CSR_EEPROM_GP_BAD_SIG_EEP_GOOD_SIG_OTP: + if (!nvm_is_otp) { + IWL_ERR(trans, "EEPROM with bad signature: 0x%08x\n", + gp); + return -ENOENT; + } + return 0; + case CSR_EEPROM_GP_GOOD_SIG_EEP_LESS_THAN_4K: + case CSR_EEPROM_GP_GOOD_SIG_EEP_MORE_THAN_4K: + if (nvm_is_otp) { + IWL_ERR(trans, "OTP with bad signature: 0x%08x\n", gp); + return -ENOENT; + } + return 0; + case CSR_EEPROM_GP_BAD_SIGNATURE_BOTH_EEP_AND_OTP: + default: + IWL_ERR(trans, + "bad EEPROM/OTP signature, type=%s, EEPROM_GP=0x%08x\n", + nvm_is_otp ? 
"OTP" : "EEPROM", gp); + return -ENOENT; + } } -#define MAX_BIT_RATE_40_MHZ 150 /* Mbps */ -#define MAX_BIT_RATE_20_MHZ 72 /* Mbps */ +/****************************************************************************** + * + * OTP related functions + * +******************************************************************************/ -void iwl_init_ht_hw_capab(struct iwl_trans *trans, - struct iwl_nvm_data *data, - struct ieee80211_sta_ht_cap *ht_info, - enum nl80211_band band, - u8 tx_chains, u8 rx_chains) +static void iwl_set_otp_access_absolute(struct iwl_trans *trans) { - const struct iwl_cfg *cfg = trans->cfg; - int max_bit_rate = 0; - - tx_chains = hweight8(tx_chains); - if (cfg->rx_with_siso_diversity) - rx_chains = 1; - else - rx_chains = hweight8(rx_chains); - - if (!(data->sku_cap_11n_enable) || - (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_ALL) || - !cfg->ht_params) { - ht_info->ht_supported = false; - return; + iwl_read32(trans, CSR_OTP_GP_REG); + + iwl_clear_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_OTP_ACCESS_MODE); +} + +static int iwl_nvm_is_otp(struct iwl_trans *trans) +{ + u32 otpgp; + + /* OTP only valid for CP/PP and after */ + switch (trans->hw_rev & CSR_HW_REV_TYPE_MSK) { + case CSR_HW_REV_TYPE_NONE: + IWL_ERR(trans, "Unknown hardware type\n"); + return -EIO; + case CSR_HW_REV_TYPE_5300: + case CSR_HW_REV_TYPE_5350: + case CSR_HW_REV_TYPE_5100: + case CSR_HW_REV_TYPE_5150: + return 0; + default: + otpgp = iwl_read32(trans, CSR_OTP_GP_REG); + if (otpgp & CSR_OTP_GP_REG_DEVICE_SELECT) + return 1; + return 0; } +} + +static int iwl_init_otp_access(struct iwl_trans *trans) +{ + int ret; + + ret = iwl_finish_nic_init(trans); + if (ret) + return ret; + + iwl_set_bits_prph(trans, APMG_PS_CTRL_REG, + APMG_PS_CTRL_VAL_RESET_REQ); + udelay(5); + iwl_clear_bits_prph(trans, APMG_PS_CTRL_REG, + APMG_PS_CTRL_VAL_RESET_REQ); - if (data->sku_cap_mimo_disabled) - rx_chains = 1; + /* + * CSR auto clock gate disable bit - + * this is only applicable for HW with OTP shadow RAM + */ + if (trans->trans_cfg->base_params->shadow_ram_support) + iwl_set_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG, + CSR_RESET_LINK_PWR_MGMT_DISABLED); - ht_info->ht_supported = true; - ht_info->cap = IEEE80211_HT_CAP_DSSSCCK40; + return 0; +} - if (cfg->ht_params->stbc) { - ht_info->cap |= (1 << IEEE80211_HT_CAP_RX_STBC_SHIFT); +static int iwl_read_otp_word(struct iwl_trans *trans, u16 addr, + __le16 *eeprom_data) +{ + int ret = 0; + u32 r; + u32 otpgp; + + iwl_write32(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); + ret = iwl_poll_bit(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_READ_VALID_MSK, + CSR_EEPROM_REG_READ_VALID_MSK, + IWL_EEPROM_ACCESS_TIMEOUT); + if (ret < 0) { + IWL_ERR(trans, "Time out reading OTP[%d]\n", addr); + return ret; + } + r = iwl_read32(trans, CSR_EEPROM_REG); + /* check for ECC errors: */ + otpgp = iwl_read32(trans, CSR_OTP_GP_REG); + if (otpgp & CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK) { + /* stop in this case */ + /* set the uncorrectable OTP ECC bit for acknowledgment */ + iwl_set_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); + IWL_ERR(trans, "Uncorrectable OTP ECC error, abort OTP read\n"); + return -EINVAL; + } + if (otpgp & CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK) { + /* continue in this case */ + /* set the correctable OTP ECC bit for acknowledgment */ + iwl_set_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK); + IWL_ERR(trans, "Correctable OTP ECC error, continue read\n"); + } + *eeprom_data = cpu_to_le16(r >> 16); + return 0; +} - if 
(tx_chains > 1) - ht_info->cap |= IEEE80211_HT_CAP_TX_STBC; +/* + * iwl_is_otp_empty: check for empty OTP + */ +static bool iwl_is_otp_empty(struct iwl_trans *trans) +{ + u16 next_link_addr = 0; + __le16 link_value; + bool is_empty = false; + + /* locate the beginning of OTP link list */ + if (!iwl_read_otp_word(trans, next_link_addr, &link_value)) { + if (!link_value) { + IWL_ERR(trans, "OTP is empty\n"); + is_empty = true; + } + } else { + IWL_ERR(trans, "Unable to read first block of OTP list.\n"); + is_empty = true; } - if (cfg->ht_params->ldpc) - ht_info->cap |= IEEE80211_HT_CAP_LDPC_CODING; + return is_empty; +} - if (trans->trans_cfg->mq_rx_supported || - iwlwifi_mod_params.amsdu_size >= IWL_AMSDU_8K) - ht_info->cap |= IEEE80211_HT_CAP_MAX_AMSDU; - ht_info->ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K; - ht_info->ampdu_density = IEEE80211_HT_MPDU_DENSITY_4; +/* + * iwl_find_otp_image: find EEPROM image in OTP + * finding the OTP block that contains the EEPROM image. + * the last valid block on the link list (the block _before_ the last block) + * is the block we should read and used to configure the device. + * If all the available OTP blocks are full, the last block will be the block + * we should read and used to configure the device. + * only perform this operation if shadow RAM is disabled + */ +static int iwl_find_otp_image(struct iwl_trans *trans, + u16 *validblockaddr) +{ + u16 next_link_addr = 0, valid_addr; + __le16 link_value = 0; + int usedblocks = 0; - ht_info->mcs.rx_mask[0] = 0xFF; - ht_info->mcs.rx_mask[1] = 0x00; - ht_info->mcs.rx_mask[2] = 0x00; + /* set addressing mode to absolute to traverse the link list */ + iwl_set_otp_access_absolute(trans); - if (rx_chains >= 2) - ht_info->mcs.rx_mask[1] = 0xFF; - if (rx_chains >= 3) - ht_info->mcs.rx_mask[2] = 0xFF; + /* checking for empty OTP or error */ + if (iwl_is_otp_empty(trans)) + return -EINVAL; - if (cfg->ht_params->ht_greenfield_support) - ht_info->cap |= IEEE80211_HT_CAP_GRN_FLD; - ht_info->cap |= IEEE80211_HT_CAP_SGI_20; + /* + * start traverse link list + * until reach the max number of OTP blocks + * different devices have different number of OTP blocks + */ + do { + /* save current valid block address + * check for more block on the link list + */ + valid_addr = next_link_addr; + next_link_addr = le16_to_cpu(link_value) * sizeof(u16); + IWL_DEBUG_EEPROM(trans->dev, "OTP blocks %d addr 0x%x\n", + usedblocks, next_link_addr); + if (iwl_read_otp_word(trans, next_link_addr, &link_value)) + return -EINVAL; + if (!link_value) { + /* + * reach the end of link list, return success and + * set address point to the starting address + * of the image + */ + *validblockaddr = valid_addr; + /* skip first 2 bytes (link list pointer) */ + *validblockaddr += 2; + return 0; + } + /* more in the link list, continue */ + usedblocks++; + } while (usedblocks <= trans->trans_cfg->base_params->max_ll_items); - max_bit_rate = MAX_BIT_RATE_20_MHZ; + /* OTP has no valid blocks */ + IWL_DEBUG_EEPROM(trans->dev, "OTP has no valid blocks\n"); + return -EINVAL; +} - if (cfg->ht_params->ht40_bands & BIT(band)) { - ht_info->cap |= IEEE80211_HT_CAP_SUP_WIDTH_20_40; - ht_info->cap |= IEEE80211_HT_CAP_SGI_40; - max_bit_rate = MAX_BIT_RATE_40_MHZ; +/* + * iwl_read_eeprom - read EEPROM contents + * + * Load the EEPROM contents from adapter and return it + * and its size. + * + * NOTE: This routine uses the non-debug IO access functions. 
+ */ +int iwl_read_eeprom(struct iwl_trans *trans, u8 **eeprom, size_t *eeprom_size) +{ + __le16 *e; + u32 gp = iwl_read32(trans, CSR_EEPROM_GP); + int sz; + int ret; + u16 addr; + u16 validblockaddr = 0; + u16 cache_addr = 0; + int nvm_is_otp; + + if (!eeprom || !eeprom_size) + return -EINVAL; + + nvm_is_otp = iwl_nvm_is_otp(trans); + if (nvm_is_otp < 0) + return nvm_is_otp; + + sz = trans->trans_cfg->base_params->eeprom_size; + IWL_DEBUG_EEPROM(trans->dev, "NVM size = %d\n", sz); + + e = kmalloc(sz, GFP_KERNEL); + if (!e) + return -ENOMEM; + + ret = iwl_eeprom_verify_signature(trans, nvm_is_otp); + if (ret < 0) { + IWL_ERR(trans, "EEPROM not found, EEPROM_GP=0x%08x\n", gp); + goto err_free; } - /* Highest supported Rx data rate */ - max_bit_rate *= rx_chains; - WARN_ON(max_bit_rate & ~IEEE80211_HT_MCS_RX_HIGHEST_MASK); - ht_info->mcs.rx_highest = cpu_to_le16(max_bit_rate); - - /* Tx MCS capabilities */ - ht_info->mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED; - if (tx_chains != rx_chains) { - ht_info->mcs.tx_params |= IEEE80211_HT_MCS_TX_RX_DIFF; - ht_info->mcs.tx_params |= ((tx_chains - 1) << - IEEE80211_HT_MCS_TX_MAX_STREAMS_SHIFT); + /* Make sure driver (instead of uCode) is allowed to read EEPROM */ + ret = iwl_eeprom_acquire_semaphore(trans); + if (ret < 0) { + IWL_ERR(trans, "Failed to acquire EEPROM semaphore.\n"); + goto err_free; } + + if (nvm_is_otp) { + ret = iwl_init_otp_access(trans); + if (ret) { + IWL_ERR(trans, "Failed to initialize OTP access.\n"); + goto err_unlock; + } + + iwl_write32(trans, CSR_EEPROM_GP, + iwl_read32(trans, CSR_EEPROM_GP) & + ~CSR_EEPROM_GP_IF_OWNER_MSK); + + iwl_set_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK | + CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); + /* traversing the linked list if no shadow ram supported */ + if (!trans->trans_cfg->base_params->shadow_ram_support) { + ret = iwl_find_otp_image(trans, &validblockaddr); + if (ret) + goto err_unlock; + } + for (addr = validblockaddr; addr < validblockaddr + sz; + addr += sizeof(u16)) { + __le16 eeprom_data; + + ret = iwl_read_otp_word(trans, addr, &eeprom_data); + if (ret) + goto err_unlock; + e[cache_addr / 2] = eeprom_data; + cache_addr += sizeof(u16); + } + } else { + /* eeprom is an array of 16bit values */ + for (addr = 0; addr < sz; addr += sizeof(u16)) { + u32 r; + + iwl_write32(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); + + ret = iwl_poll_bit(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_READ_VALID_MSK, + CSR_EEPROM_REG_READ_VALID_MSK, + IWL_EEPROM_ACCESS_TIMEOUT); + if (ret < 0) { + IWL_ERR(trans, + "Time out reading EEPROM[%d]\n", addr); + goto err_unlock; + } + r = iwl_read32(trans, CSR_EEPROM_REG); + e[addr / 2] = cpu_to_le16(r >> 16); + } + } + + IWL_DEBUG_EEPROM(trans->dev, "NVM Type: %s\n", + nvm_is_otp ? 
"OTP" : "EEPROM"); + + iwl_eeprom_release_semaphore(trans); + + *eeprom_size = sz; + *eeprom = (u8 *)e; + return 0; + + err_unlock: + iwl_eeprom_release_semaphore(trans); + err_free: + kfree(e); + + return ret; } -#if IS_ENABLED(CONFIG_IWLDVM) static void iwl_init_sbands(struct iwl_trans *trans, const struct iwl_cfg *cfg, struct iwl_nvm_data *data, const u8 *eeprom, size_t eeprom_size) @@ -790,7 +1061,6 @@ static void iwl_init_sbands(struct iwl_trans *trans, const struct iwl_cfg *cfg, } /* EEPROM data functions */ - struct iwl_nvm_data * iwl_parse_eeprom_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, const u8 *eeprom, size_t eeprom_size) @@ -837,8 +1107,8 @@ iwl_parse_eeprom_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, data->kelvin_temperature = *(__le16 *)tmp; data->kelvin_voltage = *((__le16 *)tmp + 1); - radio_cfg = iwl_eeprom_query16(eeprom, eeprom_size, - EEPROM_RADIO_CONFIG); + radio_cfg = + iwl_eeprom_query16(eeprom, eeprom_size, EEPROM_RADIO_CONFIG); data->radio_cfg_dash = EEPROM_RF_CFG_DASH_MSK(radio_cfg); data->radio_cfg_pnum = EEPROM_RF_CFG_PNUM_MSK(radio_cfg); data->radio_cfg_step = EEPROM_RF_CFG_STEP_MSK(radio_cfg); @@ -878,5 +1148,3 @@ iwl_parse_eeprom_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, kfree(data); return NULL; } -IWL_EXPORT_SYMBOL(iwl_parse_eeprom_data); -#endif diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c index 52b008ce53bd..74d163e56511 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c +++ b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c @@ -2,7 +2,7 @@ /****************************************************************************** * * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved. - * Copyright(C) 2018 - 2019, 2022 - 2023 Intel Corporation + * Copyright(C) 2018 - 2019, 2022 - 2024 Intel Corporation * * Portions of this file are derived from the ipw3945 project, as well * as portions of the ieee80211 subsystem header files. @@ -145,8 +145,6 @@ int iwlagn_mac_setup_register(struct iwl_priv *priv, #ifdef CONFIG_PM_SLEEP if (priv->fw->img[IWL_UCODE_WOWLAN].num_sec && - priv->trans->ops->d3_suspend && - priv->trans->ops->d3_resume && device_can_wakeup(priv->trans->dev)) { priv->wowlan_support.flags = WIPHY_WOWLAN_MAGIC_PKT | WIPHY_WOWLAN_DISCONNECT | @@ -302,7 +300,7 @@ static int iwlagn_mac_start(struct ieee80211_hw *hw) return ret; } -static void iwlagn_mac_stop(struct ieee80211_hw *hw) +static void iwlagn_mac_stop(struct ieee80211_hw *hw, bool suspend) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -730,8 +728,6 @@ static int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, ret = iwl_sta_rx_agg_stop(priv, sta, tid); break; case IEEE80211_AMPDU_TX_START: - if (!priv->trans->ops->txq_enable) - break; if (!iwl_enable_tx_ampdu()) break; IWL_DEBUG_HT(priv, "start Tx\n"); diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c index 8774dd7b921e..65b7c68e5ca7 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c +++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c @@ -1,7 +1,8 @@ // SPDX-License-Identifier: GPL-2.0-only /****************************************************************************** * - * Copyright(c) 2003 - 2014, 2018 - 2022 Intel Corporation. All rights reserved. + * Copyright(c) 2003 - 2014, 2018 - 2022 Intel Corporation. All rights reserved. + * Copyright(c) 2024 Intel Corporation. All rights reserved. 
* Copyright(c) 2015 Intel Deutschland GmbH * * Portions of this file are derived from the ipw3945 project, as well @@ -25,8 +26,7 @@ #include <asm/div64.h> -#include "iwl-eeprom-read.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "iwl-io.h" #include "iwl-trans.h" #include "iwl-op-mode.h" diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c index f4a6f76cf193..8879e668ef0d 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c +++ b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c @@ -2673,20 +2673,16 @@ static void rs_get_rate(void *priv_r, struct ieee80211_sta *sta, void *priv_sta, IWL_DEBUG_RATE_LIMIT(priv, "rate scale calculate new rate for skb\n"); /* Get max rate if user set max rate */ - if (lq_sta) { - lq_sta->max_rate_idx = fls(txrc->rate_idx_mask) - 1; - if ((sband->band == NL80211_BAND_5GHZ) && - (lq_sta->max_rate_idx != -1)) - lq_sta->max_rate_idx += IWL_FIRST_OFDM_RATE; - if ((lq_sta->max_rate_idx < 0) || - (lq_sta->max_rate_idx >= IWL_RATE_COUNT)) - lq_sta->max_rate_idx = -1; - } + lq_sta->max_rate_idx = fls(txrc->rate_idx_mask) - 1; + if (sband->band == NL80211_BAND_5GHZ && lq_sta->max_rate_idx != -1) + lq_sta->max_rate_idx += IWL_FIRST_OFDM_RATE; + if (lq_sta->max_rate_idx < 0 || lq_sta->max_rate_idx >= IWL_RATE_COUNT) + lq_sta->max_rate_idx = -1; - /* Treat uninitialized rate scaling data same as non-existing. */ - if (lq_sta && !lq_sta->drv) { + if (!lq_sta->drv) { IWL_DEBUG_RATE(priv, "Rate scaling not initialized yet.\n"); - priv_sta = NULL; + /* mac80211 already set up the data for using low rates */ + return; } rate_idx = lq_sta->last_txrate_idx; @@ -2756,7 +2752,6 @@ void iwl_rs_rate_init(struct iwl_priv *priv, struct ieee80211_sta *sta, u8 sta_i lq_sta = &sta_priv->lq_sta; sband = hw->wiphy->bands[conf->chandef.chan->band]; - lq_sta->lq.sta_id = sta_id; for (j = 0; j < LQ_SIZE; j++) diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/tt.h b/drivers/net/wireless/intel/iwlwifi/dvm/tt.h index 23dfcda0dd86..4af792d41dce 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/tt.h +++ b/drivers/net/wireless/intel/iwlwifi/dvm/tt.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /****************************************************************************** * - * Copyright(c) 2007 - 2014, 2023 Intel Corporation. All rights reserved. + * Copyright(c) 2007 - 2014, 2023-2024 Intel Corporation. All rights reserved. * * Portions of this file are derived from the ipw3945 project, as well * as portions of the ieee80211 subsystem header files. 
@@ -64,7 +64,7 @@ struct iwl_tt_trans { }; /** - * struct iwl_tt_mgnt - Thermal Throttling Management structure + * struct iwl_tt_mgmt - Thermal Throttling Management structure * @advanced_tt: advanced thermal throttle required * @state: current Thermal Throttling state * @tt_power_mode: Thermal Throttling power mode index diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c index fa339791223b..79774c8c7ff4 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c @@ -27,6 +27,7 @@ static const size_t acpi_dsm_size[DSM_FUNC_NUM_FUNCS] = { [DSM_FUNC_FORCE_DISABLE_CHANNELS] = sizeof(u32), [DSM_FUNC_ENERGY_DETECTION_THRESHOLD] = sizeof(u32), [DSM_FUNC_RFI_CONFIG] = sizeof(u32), + [DSM_FUNC_ENABLE_11BE] = sizeof(u32), }; static int iwl_acpi_get_handle(struct device *dev, acpi_string method, diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/alive.h b/drivers/net/wireless/intel/iwlwifi/fw/api/alive.h index e00ab21e7358..ebe85fdf08d3 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/alive.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/alive.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2012-2014, 2018, 2020-2021 Intel Corporation + * Copyright (C) 2012-2014, 2018, 2020-2021, 2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -113,7 +113,7 @@ struct iwl_alive_ntf_v6 { } __packed; /* UCODE_ALIVE_NTFY_API_S_VER_6 */ /** - * enum iwl_extended_cfg_flag - commands driver may send before + * enum iwl_extended_cfg_flags - commands driver may send before * finishing init flow * @IWL_INIT_DEBUG_CFG: driver is going to send debug config command * @IWL_INIT_NVM: driver is going to send NVM_ACCESS commands @@ -126,7 +126,7 @@ enum iwl_extended_cfg_flags { }; /** - * struct iwl_extended_cfg_cmd - mark what commands ucode should wait for + * struct iwl_init_extended_cfg_cmd - mark what commands ucode should wait for * before finishing init flows * @init_flags: values from iwl_extended_cfg_flags */ diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/binding.h b/drivers/net/wireless/intel/iwlwifi/fw/api/binding.h index d9044ada6a43..2397fdc37fc5 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/binding.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/binding.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2012-2014, 2020, 2022 Intel Corporation + * Copyright (C) 2012-2014, 2020, 2022, 2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -77,7 +77,7 @@ struct iwl_time_quota_data_v1 { } __packed; /* TIME_QUOTA_DATA_API_S_VER_1 */ /** - * struct iwl_time_quota_cmd - configuration of time quota between bindings + * struct iwl_time_quota_cmd_v1 - configuration of time quota between bindings * ( TIME_QUOTA_CMD = 0x2c ) * @quotas: allocations per binding * Note: on non-CDB the fourth one is the auxilary mac and is diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/coex.h b/drivers/net/wireless/intel/iwlwifi/fw/api/coex.h index bc27e15488f5..b97a43353779 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/coex.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/coex.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2023 Intel Corporation + * Copyright (C) 2023-2024 Intel Corporation * 
Copyright (C) 2013-2014, 2018-2019 Intel Corporation * Copyright (C) 2013-2014 Intel Mobile Communications GmbH * Copyright (C) 2017 Intel Deutschland GmbH @@ -77,73 +77,6 @@ struct iwl_bt_coex_ci_cmd { __le32 secondary_ch_phy_id; } __packed; /* BT_CI_MSG_API_S_VER_2 */ -#define BT_MBOX(n_dw, _msg, _pos, _nbits) \ - BT_MBOX##n_dw##_##_msg##_POS = (_pos), \ - BT_MBOX##n_dw##_##_msg = BITS(_nbits) << BT_MBOX##n_dw##_##_msg##_POS - -enum iwl_bt_mxbox_dw0 { - BT_MBOX(0, LE_SLAVE_LAT, 0, 3), - BT_MBOX(0, LE_PROF1, 3, 1), - BT_MBOX(0, LE_PROF2, 4, 1), - BT_MBOX(0, LE_PROF_OTHER, 5, 1), - BT_MBOX(0, CHL_SEQ_N, 8, 4), - BT_MBOX(0, INBAND_S, 13, 1), - BT_MBOX(0, LE_MIN_RSSI, 16, 4), - BT_MBOX(0, LE_SCAN, 20, 1), - BT_MBOX(0, LE_ADV, 21, 1), - BT_MBOX(0, LE_MAX_TX_POWER, 24, 4), - BT_MBOX(0, OPEN_CON_1, 28, 2), -}; - -enum iwl_bt_mxbox_dw1 { - BT_MBOX(1, BR_MAX_TX_POWER, 0, 4), - BT_MBOX(1, IP_SR, 4, 1), - BT_MBOX(1, LE_MSTR, 5, 1), - BT_MBOX(1, AGGR_TRFC_LD, 8, 6), - BT_MBOX(1, MSG_TYPE, 16, 3), - BT_MBOX(1, SSN, 19, 2), -}; - -enum iwl_bt_mxbox_dw2 { - BT_MBOX(2, SNIFF_ACT, 0, 3), - BT_MBOX(2, PAG, 3, 1), - BT_MBOX(2, INQUIRY, 4, 1), - BT_MBOX(2, CONN, 5, 1), - BT_MBOX(2, SNIFF_INTERVAL, 8, 5), - BT_MBOX(2, DISC, 13, 1), - BT_MBOX(2, SCO_TX_ACT, 16, 2), - BT_MBOX(2, SCO_RX_ACT, 18, 2), - BT_MBOX(2, ESCO_RE_TX, 20, 2), - BT_MBOX(2, SCO_DURATION, 24, 6), -}; - -enum iwl_bt_mxbox_dw3 { - BT_MBOX(3, SCO_STATE, 0, 1), - BT_MBOX(3, SNIFF_STATE, 1, 1), - BT_MBOX(3, A2DP_STATE, 2, 1), - BT_MBOX(3, ACL_STATE, 3, 1), - BT_MBOX(3, MSTR_STATE, 4, 1), - BT_MBOX(3, OBX_STATE, 5, 1), - BT_MBOX(3, A2DP_SRC, 6, 1), - BT_MBOX(3, OPEN_CON_2, 8, 2), - BT_MBOX(3, TRAFFIC_LOAD, 10, 2), - BT_MBOX(3, CHL_SEQN_LSB, 12, 1), - BT_MBOX(3, INBAND_P, 13, 1), - BT_MBOX(3, MSG_TYPE_2, 16, 3), - BT_MBOX(3, SSN_2, 19, 2), - BT_MBOX(3, UPDATE_REQUEST, 21, 1), -}; - -#define BT_MBOX_MSG(_notif, _num, _field) \ - ((le32_to_cpu((_notif)->mbox_msg[(_num)]) & BT_MBOX##_num##_##_field)\ - >> BT_MBOX##_num##_##_field##_POS) - -#define BT_MBOX_PRINT(_num, _field, _end) \ - pos += scnprintf(buf + pos, bufsz - pos, \ - "\t%s: %d%s", \ - #_field, \ - BT_MBOX_MSG(notif, _num, _field), \ - true ? 
"\n" : ", ") enum iwl_bt_activity_grading { BT_OFF = 0, BT_ON_NO_CONNECTION = 1, diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/config.h b/drivers/net/wireless/intel/iwlwifi/fw/api/config.h index 4419631604b4..1fc65469990e 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/config.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/config.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2012-2014, 2018-2019, 2023 Intel Corporation + * Copyright (C) 2012-2014, 2018-2019, 2023-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -76,7 +76,7 @@ struct iwl_phy_specific_cfg { } __packed; /* PHY_SPECIFIC_CONFIGURATION_API_VER_1*/ /** - * struct iwl_phy_cfg_cmd - Phy configuration command + * struct iwl_phy_cfg_cmd_v1 - Phy configuration command * * @phy_cfg: PHY configuration value, uses &enum iwl_fw_phy_cfg * @calib_control: calibration control data diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h b/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h index bbaaf3c73115..ffee7927cf26 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h @@ -42,7 +42,7 @@ struct iwl_d3_manager_config { /* TODO: OFFLOADS_QUERY_API_S_VER_1 */ /** - * enum iwl_d3_proto_offloads - enabled protocol offloads + * enum iwl_proto_offloads - enabled protocol offloads * @IWL_D3_PROTO_OFFLOAD_ARP: ARP data is enabled * @IWL_D3_PROTO_OFFLOAD_NS: NS (Neighbor Solicitation) is enabled * @IWL_D3_PROTO_IPV4_VALID: IPv4 data is valid @@ -195,7 +195,7 @@ struct iwl_wowlan_pattern_v1 { #define IWL_WOWLAN_MAX_PATTERNS 20 /** - * struct iwl_wowlan_patterns_cmd - WoWLAN wakeup patterns + * struct iwl_wowlan_patterns_cmd_v1 - WoWLAN wakeup patterns */ struct iwl_wowlan_patterns_cmd_v1 { /** diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h b/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h index f272b6a4e72e..2ab38eaeb290 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h @@ -231,28 +231,33 @@ struct iwl_synced_time_rsp { #define PTP_CTX_MAX_DATA_SIZE 128 /** - * struct iwl_time_msmt_ptp_ctx - Vendor specific information element + * struct iwl_time_msmt_ptp_ctx - Vendor specific element * to allow a space for flexibility for the userspace App * - * @element_id: element id of vendor specific ie - * @length: length of vendor specific ie - * @reserved: for alignment - * @data: vendor specific data blob + * @ftm: FTM specific vendor element + * @ftm.element_id: element id of vendor specific ie + * @ftm.length: length of vendor specific ie + * @ftm.reserved: for alignment + * @ftm.data: vendor specific data blob + * @tm: TM specific vendor element + * @tm.element_id: element id of vendor specific ie + * @tm.length: length of vendor specific ie + * @tm.data: vendor specific data blob */ struct iwl_time_msmt_ptp_ctx { - /* Differentiate between FTM and TM specific Vendor IEs */ + /* Differentiate between FTM and TM specific Vendor elements */ union { struct { u8 element_id; u8 length; __le16 reserved; u8 data[PTP_CTX_MAX_DATA_SIZE]; - } ftm; /* FTM specific vendor IE */ + } ftm; struct { u8 element_id; u8 length; u8 data[PTP_CTX_MAX_DATA_SIZE]; - } tm; /* TM specific vendor IE */ + } tm; }; } __packed /* PTP_CTX_VER_1 */; @@ -531,6 +536,10 @@ struct iwl_rx_baid_cfg_cmd_remove { /** * struct iwl_rx_baid_cfg_cmd - BAID allocation/config command * 
@action: the action, from &enum iwl_rx_baid_action + * @alloc: allocation data + * @modify: modify data + * @remove_v1: remove data (version 1) + * @remove: remove data */ struct iwl_rx_baid_cfg_cmd { __le32 action; @@ -565,6 +574,7 @@ enum iwl_scd_queue_cfg_operation { /** * struct iwl_scd_queue_cfg_cmd - scheduler queue allocation command * @operation: the operation, see &enum iwl_scd_queue_cfg_operation + * @u: union depending on command usage * @u.add.sta_mask: station mask * @u.add.tid: TID * @u.add.reserved: reserved @@ -634,6 +644,7 @@ enum iwl_sec_key_flags { /** * struct iwl_sec_key_cmd - security key command * @action: action from &enum iwl_ctxt_action + * @u: union depending on command type * @u.add.sta_mask: station mask for the new key * @u.add.key_id: key ID (0-7) for the new key * @u.add.key_flags: key flags per &enum iwl_sec_key_flags diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h index 47c914de2992..855cd13a181e 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h @@ -147,32 +147,34 @@ struct iwl_fw_ini_region_internal_buffer { * Configures parameters for region data collection * * @hdr: debug header - * @id: region id. Max id is &IWL_FW_INI_MAX_REGION_ID + * @id: region id. Max id is %IWL_FW_INI_MAX_REGION_ID * @type: region type. One of &enum iwl_fw_ini_region_type * @sub_type: region sub type * @sub_type_ver: region sub type version * @reserved: not in use * @name: region name * @dev_addr: device address configuration. Used by - * &IWL_FW_INI_REGION_DEVICE_MEMORY, &IWL_FW_INI_REGION_PERIPHERY_MAC, - * &IWL_FW_INI_REGION_PERIPHERY_PHY, &IWL_FW_INI_REGION_PERIPHERY_AUX, - * &IWL_FW_INI_REGION_PAGING, &IWL_FW_INI_REGION_CSR, - * &IWL_FW_INI_REGION_DRAM_IMR and &IWL_FW_INI_REGION_PCI_IOSF_CONFIG - * &IWL_FW_INI_REGION_DBGI_SRAM, &FW_TLV_DEBUG_REGION_TYPE_DBGI_SRAM, - * &IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP, + * %IWL_FW_INI_REGION_DEVICE_MEMORY, %IWL_FW_INI_REGION_PERIPHERY_MAC, + * %IWL_FW_INI_REGION_PERIPHERY_PHY, %IWL_FW_INI_REGION_PERIPHERY_AUX, + * %IWL_FW_INI_REGION_PAGING, %IWL_FW_INI_REGION_CSR, + * %IWL_FW_INI_REGION_DRAM_IMR and %IWL_FW_INI_REGION_PCI_IOSF_CONFIG + * %IWL_FW_INI_REGION_DBGI_SRAM, %FW_TLV_DEBUG_REGION_TYPE_DBGI_SRAM, + * %IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP, * @dev_addr_range: device address range configuration. Used by - * &IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE and - * &IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE - * @fifos: fifos configuration. Used by &IWL_FW_INI_REGION_TXF and - * &IWL_FW_INI_REGION_RXF + * %IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE and + * %IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE + * @fifos: fifos configuration. Used by %IWL_FW_INI_REGION_TXF and + * %IWL_FW_INI_REGION_RXF * @err_table: error table configuration. Used by - * IWL_FW_INI_REGION_LMAC_ERROR_TABLE and - * IWL_FW_INI_REGION_UMAC_ERROR_TABLE + * %IWL_FW_INI_REGION_LMAC_ERROR_TABLE and + * %IWL_FW_INI_REGION_UMAC_ERROR_TABLE * @internal_buffer: internal monitor buffer configuration. Used by - * &IWL_FW_INI_REGION_INTERNAL_BUFFER + * %IWL_FW_INI_REGION_INTERNAL_BUFFER + * @special_mem: special device memory region, used by + * %IWL_FW_INI_REGION_SPECIAL_DEVICE_MEMORY * @dram_alloc_id: dram allocation id. One of &enum iwl_fw_ini_allocation_id. - * Used by &IWL_FW_INI_REGION_DRAM_BUFFER - * @tlv_mask: tlv collection mask. Used by &IWL_FW_INI_REGION_TLV + * Used by %IWL_FW_INI_REGION_DRAM_BUFFER + * @tlv_mask: tlv collection mask. 
Used by %IWL_FW_INI_REGION_TLV * @addrs: array of addresses attached to the end of the region tlv */ struct iwl_fw_ini_region_tlv { @@ -291,7 +293,7 @@ struct iwl_fw_ini_addr_val { } __packed; /* FW_TLV_DEBUG_ADDR_VALUE_VER_1 */ /** - * struct iwl_fw_ini_conf_tlv - configuration TLV to set register/memory. + * struct iwl_fw_ini_conf_set_tlv - configuration TLV to set register/memory. * * @hdr: debug header * @time_point: time point to apply config. One of &enum iwl_fw_ini_time_point @@ -470,6 +472,10 @@ enum iwl_fw_ini_region_device_memory_subtype { * @IWL_FW_INI_TIME_POINT_EAPOL_FAILED: EAPOL failed * @IWL_FW_INI_TIME_POINT_FAKE_TX: fake Tx * @IWL_FW_INI_TIME_POINT_DEASSOC: de association + * @IWL_FW_INI_TIME_POINT_PRESET_OVERRIDE_EXT_REQ: request to override preset + * @IWL_FW_INI_TIME_POINT_PRESET_OVERRIDE_START: start handling override preset + * request + * @IWL_FW_INI_TIME_SCAN_FAILURE: failed scan channel list * @IWL_FW_INI_TIME_POINT_NUM: number of time points */ enum iwl_fw_ini_time_point { @@ -500,6 +506,9 @@ enum iwl_fw_ini_time_point { IWL_FW_INI_TIME_POINT_EAPOL_FAILED, IWL_FW_INI_TIME_POINT_FAKE_TX, IWL_FW_INI_TIME_POINT_DEASSOC, + IWL_FW_INI_TIME_POINT_PRESET_OVERRIDE_EXT_REQ, + IWL_FW_INI_TIME_POINT_PRESET_OVERRIDE_START, + IWL_FW_INI_TIME_SCAN_FAILURE, IWL_FW_INI_TIME_POINT_NUM, }; /* FW_TLV_DEBUG_TIME_POINT_API_E */ diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h b/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h index b31ae6889bd0..bea0f4668cc8 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h @@ -1,11 +1,12 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2005-2014, 2018-2023 Intel Corporation + * Copyright (C) 2005-2014, 2018-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ #ifndef __iwl_fw_api_debug_h__ #define __iwl_fw_api_debug_h__ +#include "dbg-tlv.h" /** * enum iwl_debug_cmds - debug commands diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/location.h b/drivers/net/wireless/intel/iwlwifi/fw/api/location.h index 25530a29317e..30a54c7fa001 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/location.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/location.h @@ -2,6 +2,7 @@ /* * Copyright (C) 2015-2017 Intel Deutschland GmbH * Copyright (C) 2018-2022 Intel Corporation + * Copyright (C) 2024 Intel Corporation */ #ifndef __iwl_fw_api_location_h__ #define __iwl_fw_api_location_h__ @@ -390,10 +391,62 @@ struct iwl_tof_responder_config_cmd_v9 { __le16 max_time_between_msr; } __packed; /* TOF_RESPONDER_CONFIG_CMD_API_S_VER_8 */ +/** + * struct iwl_tof_responder_config_cmd - ToF AP mode + * @cmd_valid_fields: &iwl_tof_responder_cmd_valid_field + * @responder_cfg_flags: &iwl_tof_responder_cfg_flags + * @format_bw: bits 0 - 3: &enum iwl_location_frame_format. + * bits 4 - 7: &enum iwl_location_bw. 
+ * @bss_color: current AP bss_color + * @channel_num: current AP Channel + * @ctrl_ch_position: coding of the control channel position relative to + * the center frequency, see iwl_mvm_get_ctrl_pos() + * @sta_id: index of the AP STA when in AP mode + * @band: current AP band + * @toa_offset: Artificial addition [pSec] for the ToA - to be used for debug + * purposes, simulating station movement by adding various values + * to this field + * @common_calib: XVT: common calibration value + * @specific_calib: XVT: specific calibration value + * @bssid: Current AP BSSID + * @r2i_ndp_params: parameters for R2I NDP. + * bits 0 - 2: max number of LTF repetitions + * bits 3 - 5: max number of spatial streams (supported values are < 2) + * bits 6 - 7: max number of total LTFs see + * &enum ieee80211_range_params_max_total_ltf + * @i2r_ndp_params: parameters for I2R NDP. + * bits 0 - 2: max number of LTF repetitions + * bits 3 - 5: max number of spatial streams + * bits 6 - 7: max number of total LTFs see + * &enum ieee80211_range_params_max_total_ltf + * @min_time_between_msr: for non trigger based NDP ranging, minimum time + * between measurements in milliseconds. + * @max_time_between_msr: for non trigger based NDP ranging, maximum time + * between measurements in milliseconds. + */ +struct iwl_tof_responder_config_cmd { + __le32 cmd_valid_fields; + __le32 responder_cfg_flags; + u8 format_bw; + u8 bss_color; + u8 channel_num; + u8 ctrl_ch_position; + u8 sta_id; + u8 band; + __le16 toa_offset; + __le16 common_calib; + __le16 specific_calib; + u8 bssid[ETH_ALEN]; + u8 r2i_ndp_params; + u8 i2r_ndp_params; + __le16 min_time_between_msr; + __le16 max_time_between_msr; +} __packed; /* TOF_RESPONDER_CONFIG_CMD_API_S_VER_10 */ + #define IWL_LCI_CIVIC_IE_MAX_SIZE 400 /** - * struct iwl_tof_responder_dyn_config_cmd - Dynamic responder settings + * struct iwl_tof_responder_dyn_config_cmd_v2 - Dynamic responder settings * @lci_len: The length of the 1st (LCI) part in the @lci_civic buffer * @civic_len: The length of the 2nd (CIVIC) part in the @lci_civic buffer * @lci_civic: The LCI/CIVIC buffer. LCI data (if exists) comes first, then, if @@ -561,6 +614,8 @@ struct iwl_tof_range_req_ap_entry_v2 { * the responder asked for LMR feedback although the initiator did not set * the LMR feedback bit in the FTM request. If not set, the initiator will * continue with the session and will provide the LMR feedback. + * @IWL_INITIATOR_AP_FLAGS_TEST_INCORRECT_SAC: send an incorrect SAC in the + * first NDP exchange. This is used for testing. */ enum iwl_initiator_ap_flags { IWL_INITIATOR_AP_FLAGS_ASAP = BIT(1), @@ -577,6 +632,7 @@ enum iwl_initiator_ap_flags { IWL_INITIATOR_AP_FLAGS_USE_CALIB = BIT(13), IWL_INITIATOR_AP_FLAGS_PMF = BIT(14), IWL_INITIATOR_AP_FLAGS_TERMINATE_ON_LMR_FEEDBACK = BIT(15), + IWL_INITIATOR_AP_FLAGS_TEST_INCORRECT_SAC = BIT(16), }; /** @@ -797,6 +853,7 @@ struct iwl_tof_range_req_ap_entry_v7 { } __packed; /* LOCATION_RANGE_REQ_AP_ENTRY_CMD_API_S_VER_7 */ #define IWL_LOCATION_MAX_STS_POS 3 +#define IWL_LOCATION_TOTAL_LTF_POS 6 /** * struct iwl_tof_range_req_ap_entry_v8 - AP configuration parameters @@ -954,6 +1011,78 @@ struct iwl_tof_range_req_ap_entry_v9 { } __packed; /* LOCATION_RANGE_REQ_AP_ENTRY_CMD_API_S_VER_9 */ /** + * struct iwl_tof_range_req_ap_entry_v10 - AP configuration parameters + * @initiator_ap_flags: see &enum iwl_initiator_ap_flags. + * @band: 0 for 5.2 GHz, 1 for 2.4 GHz, 2 for 6GHz + * @channel_num: AP Channel number + * @format_bw: bits 0 - 3: &enum iwl_location_frame_format. 
+ * bits 4 - 7: &enum iwl_location_bw. + * @ctrl_ch_position: Coding of the control channel position relative to the + * center frequency, see iwl_mvm_get_ctrl_pos(). + * @bssid: AP's BSSID + * @burst_period: For EDCA based ranging: Recommended value to be sent to the + * AP. Measurement periodicity In units of 100ms. ignored if + * num_of_bursts_exp = 0. + * For non trigger based NDP ranging, the maximum time between + * measurements in units of milliseconds. + * @samples_per_burst: the number of FTMs pairs in single Burst (1-31); + * @num_of_bursts: Recommended value to be sent to the AP. 2s Exponent of + * the number of measurement iterations (min 2^0 = 1, max 2^14) + * @sta_id: the station id of the AP. Only relevant when associated to the AP, + * otherwise should be set to &IWL_MVM_INVALID_STA. + * @cipher: pairwise cipher suite for secured measurement. + * &enum iwl_location_cipher. + * @hltk: HLTK to be used for secured 11az measurement + * @tk: TK to be used for secured 11az measurement + * @calib: An array of calibration values per FTM rx bandwidth. + * If &IWL_INITIATOR_AP_FLAGS_USE_CALIB is set, the fw will use the + * calibration value that corresponds to the rx bandwidth of the FTM + * frame. + * @beacon_interval: beacon interval of the AP in TUs. Only required if + * &IWL_INITIATOR_AP_FLAGS_TB is set. + * @rx_pn: the next expected PN for protected management frames Rx. LE byte + * order. Only valid if &IWL_INITIATOR_AP_FLAGS_SECURED is set and sta_id + * is set to &IWL_MVM_INVALID_STA. + * @tx_pn: the next PN to use for protected management frames Tx. LE byte + * order. Only valid if &IWL_INITIATOR_AP_FLAGS_SECURED is set and sta_id + * is set to &IWL_MVM_INVALID_STA. + * @r2i_ndp_params: parameters for R2I NDP ranging negotiation. + * bits 0 - 2: max LTF repetitions + * bits 3 - 5: max number of spatial streams + * bits 6 - 7: max total LTFs. One of + * &enum ieee80211_range_params_max_total_ltf. + * @i2r_ndp_params: parameters for I2R NDP ranging negotiation. + * bits 0 - 2: max LTF repetitions + * bits 3 - 5: max number of spatial streams (supported values are < 2) + * bits 6 - 7: max total LTFs. One of + * &enum ieee80211_range_params_max_total_ltf. + * @min_time_between_msr: For non trigger based NDP ranging, the minimum time + * between measurements in units of milliseconds + */ +struct iwl_tof_range_req_ap_entry_v10 { + __le32 initiator_ap_flags; + u8 band; + u8 channel_num; + u8 format_bw; + u8 ctrl_ch_position; + u8 bssid[ETH_ALEN]; + __le16 burst_period; + u8 samples_per_burst; + u8 num_of_bursts; + u8 sta_id; + u8 cipher; + u8 hltk[HLTK_11AZ_LEN]; + u8 tk[TK_11AZ_LEN]; + __le16 calib[IWL_TOF_BW_NUM]; + __le16 beacon_interval; + u8 rx_pn[IEEE80211_CCMP_PN_LEN]; + u8 tx_pn[IEEE80211_CCMP_PN_LEN]; + u8 r2i_ndp_params; + u8 i2r_ndp_params; + __le16 min_time_between_msr; +} __packed; /* LOCATION_RANGE_REQ_AP_ENTRY_CMD_API_S_VER_9 */ + +/** * enum iwl_tof_response_mode * @IWL_MVM_TOF_RESPONSE_ASAP: report each AP measurement separately as soon as * possible (not supported for this release) @@ -1230,6 +1359,34 @@ struct iwl_tof_range_req_cmd_v13 { struct iwl_tof_range_req_ap_entry_v9 ap[IWL_MVM_TOF_MAX_APS]; } __packed; /* LOCATION_RANGE_REQ_CMD_API_S_VER_13 */ +/** + * struct iwl_tof_range_req_cmd_v14 - start measurement cmd + * @initiator_flags: see flags @ iwl_tof_initiator_flags + * @request_id: A Token incremented per request. 
+ *	sent back in the range response
+ * @num_of_ap: Number of APs to measure (error if > IWL_MVM_TOF_MAX_APS)
+ * @range_req_bssid: ranging request BSSID
+ * @macaddr_mask: Bits set to 0 shall be copied from the MAC address template.
+ *	Bits set to 1 shall be randomized by the UMAC
+ * @macaddr_template: MAC address template to use for non-randomized bits
+ * @req_timeout_ms: Requested timeout of the response in units of milliseconds.
+ *	This is the session time for completing the measurement.
+ * @tsf_mac_id: report the measurement start time for each ap in terms of the
+ *	TSF of this mac id. 0xff to disable TSF reporting.
+ * @ap: per-AP request data, see &struct iwl_tof_range_req_ap_entry_v10.
+ */
+struct iwl_tof_range_req_cmd_v14 {
+	__le32 initiator_flags;
+	u8 request_id;
+	u8 num_of_ap;
+	u8 range_req_bssid[ETH_ALEN];
+	u8 macaddr_mask[ETH_ALEN];
+	u8 macaddr_template[ETH_ALEN];
+	__le32 req_timeout_ms;
+	__le32 tsf_mac_id;
+	struct iwl_tof_range_req_ap_entry_v10 ap[IWL_MVM_TOF_MAX_APS];
+} __packed; /* LOCATION_RANGE_REQ_CMD_API_S_VER_14 */
+
 /*
  * enum iwl_tof_range_request_status - status of the sent request
  * @IWL_TOF_RANGE_REQUEST_STATUS_SUCCESSFUL - FW successfully received the
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h b/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h
index 754c5d655ad0..ca6fa66d1917 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h
@@ -144,7 +144,7 @@ struct iwl_missed_vap_notif {
 } __packed; /* MISSED_VAP_NTFY_API_S_VER_1 */
 
 /**
- * struct iwl_channel_switch_start_notif - Channel switch start notification
+ * struct iwl_channel_switch_start_notif_v1 - Channel switch start notification
  *
  * @id_and_color: ID and color of the MAC
  */
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h b/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h
index 545826973a80..bcbbf8c4a297 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h
@@ -310,6 +310,13 @@ struct iwl_ac_qos {
  * @filter_flags: combination of &enum iwl_mac_filter_flags
  * @qos_flags: from &enum iwl_mac_qos_flags
  * @ac: one iwl_mac_qos configuration for each AC
+ * @ap: AP specific config data, see &struct iwl_mac_data_ap
+ * @go: GO specific config data, see &struct iwl_mac_data_go
+ * @sta: BSS client specific config data, see &struct iwl_mac_data_sta
+ * @p2p_sta: P2P client specific config data, see &struct iwl_mac_data_p2p_sta
+ * @p2p_dev: P2P-device specific config data, see &struct iwl_mac_data_p2p_dev
+ * @pibss: Pseudo-IBSS specific data, unused; see struct iwl_mac_data_pibss
+ * @ibss: IBSS specific config data, see &struct iwl_mac_data_ibss
  */
 struct iwl_mac_ctx_cmd {
 	/* COMMON_INDEX_HDR_API_S_VER_1 */
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
index a08497a04733..d424d0126367 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
@@ -23,7 +23,8 @@ enum iwl_regulatory_and_nvm_subcmd_ids {
 	 * &struct iwl_lari_config_change_cmd_v4,
 	 * &struct iwl_lari_config_change_cmd_v5,
 	 * &struct iwl_lari_config_change_cmd_v6,
-	 * &struct iwl_lari_config_change_cmd_v7 or
+	 * &struct iwl_lari_config_change_cmd_v7,
+	 * &struct iwl_lari_config_change_cmd_v10 or
 	 * &struct iwl_lari_config_change_cmd
 	 */
 	LARI_CONFIG_CHANGE = 0x1,
@@ -119,7 +120,7 @@ struct iwl_nvm_access_cmd {
 } __packed; /* 
NVM_ACCESS_CMD_API_S_VER_2 */ /** - * struct iwl_nvm_access_resp_ver2 - response to NVM_ACCESS_CMD + * struct iwl_nvm_access_resp - response to NVM_ACCESS_CMD * @offset: offset in bytes into the section * @length: in bytes, either how much was written or read * @type: NVM_SECTION_TYPE_* @@ -211,7 +212,7 @@ struct iwl_nvm_get_info_phy { #define IWL_NUM_CHANNELS 110 /** - * struct iwl_nvm_get_info_regulatory - regulatory information + * struct iwl_nvm_get_info_regulatory_v1 - regulatory information * @lar_enabled: is LAR enabled * @channel_profile: regulatory data of this channel * @reserved: reserved @@ -648,7 +649,7 @@ struct iwl_lari_config_change_cmd_v7 { /* LARI_CHANGE_CONF_CMD_S_VER_9 */ /** - * struct iwl_lari_config_change_cmd - change LARI configuration + * struct iwl_lari_config_change_cmd_v10 - change LARI configuration * @config_bitmap: Bitmap of the config commands. Each bit will trigger a * different predefined FW config operation. * @oem_uhb_allow_bitmap: Bitmap of UHB enabled MCC sets. @@ -674,7 +675,7 @@ struct iwl_lari_config_change_cmd_v7 { * bit1: enable 320Mhz in South Korea. * bit 2 - 31: reserved. */ -struct iwl_lari_config_change_cmd { +struct iwl_lari_config_change_cmd_v10 { __le32 config_bitmap; __le32 oem_uhb_allow_bitmap; __le32 oem_11ax_allow_bitmap; @@ -686,8 +687,57 @@ struct iwl_lari_config_change_cmd { } __packed; /* LARI_CHANGE_CONF_CMD_S_VER_10 */ +/** + * struct iwl_lari_config_change_cmd - change LARI configuration + * @config_bitmap: Bitmap of the config commands. Each bit will trigger a + * different predefined FW config operation. + * @oem_uhb_allow_bitmap: Bitmap of UHB enabled MCC sets. + * @oem_11ax_allow_bitmap: Bitmap of 11ax allowed MCCs. There are two bits + * per country, one to indicate whether to override and the other to + * indicate the value to use. + * @oem_unii4_allow_bitmap: Bitmap of unii4 allowed MCCs.There are two bits + * per country, one to indicate whether to override and the other to + * indicate allow/disallow unii4 channels. + * For LARI cmd version 11 - bits 0:5 are supported. + * @chan_state_active_bitmap: Bitmap to enable different bands per country + * or region. + * Each bit represents a country or region, and a band to activate + * according to the BIOS definitions. + * For LARI cmd version 11 - bits 0:4 are supported. + * For LARI cmd version 12 - bits 0:6 are supported and bits 7:31 are + * reserved. No need to mask out the reserved bits. + * @force_disable_channels_bitmap: Bitmap of disabled bands/channels. + * Each bit represents a set of channels in a specific band that should be + * disabled + * @edt_bitmap: Bitmap of energy detection threshold table. + * Disable/enable the EDT optimization method for different band. + * @oem_320mhz_allow_bitmap: 320Mhz bandwidth enablement bitmap per MCC. + * bit0: enable 320Mhz in Japan. + * bit1: enable 320Mhz in South Korea. + * bit 2 - 31: reserved. + * @oem_11be_allow_bitmap: Bitmap of 11be allowed MCCs. No need to mask out the + * unsupported bits + * bit0: enable 11be in China(CB/CN). + * bit1: enable 11be in South Korea. + * bit 2 - 31: reserved. 
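[Editor's aside: the "two bits per country" convention used by @oem_11ax_allow_bitmap and @oem_unii4_allow_bitmap is easy to misread. A minimal sketch of packing one such pair, assuming the even bit of each pair is the override flag and the odd bit carries the value; the bit ordering is an illustration, not taken from the API documentation, and the helper name is hypothetical:

	#include <linux/bits.h>

	/* Hypothetical helper: pack an (override, allow) pair for country
	 * slot n of a LARI allow bitmap. Bit ordering is assumed.
	 */
	static inline u32 lari_country_pair(unsigned int n, bool override,
					    bool allow)
	{
		u32 v = 0;

		if (override)
			v |= BIT(2 * n);	/* assumed: "override default" */
		if (allow)
			v |= BIT(2 * n + 1);	/* assumed: value to apply */
		return v;
	}

End of aside; the kernel-doc and command structure continue below.]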
+ */ +struct iwl_lari_config_change_cmd { + __le32 config_bitmap; + __le32 oem_uhb_allow_bitmap; + __le32 oem_11ax_allow_bitmap; + __le32 oem_unii4_allow_bitmap; + __le32 chan_state_active_bitmap; + __le32 force_disable_channels_bitmap; + __le32 edt_bitmap; + __le32 oem_320mhz_allow_bitmap; + __le32 oem_11be_allow_bitmap; +} __packed; +/* LARI_CHANGE_CONF_CMD_S_VER_11 */ +/* LARI_CHANGE_CONF_CMD_S_VER_12 */ + /* Activate UNII-1 (5.2GHz) for World Wide */ -#define ACTIVATE_5G2_IN_WW_MASK BIT(4) +#define ACTIVATE_5G2_IN_WW_MASK BIT(4) +#define CHAN_STATE_ACTIVE_BITMAP_CMD_V11 0x1F /** * struct iwl_pnvm_init_complete_ntfy - PNVM initialization complete diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h b/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h index 2ed7acc09e5a..6a7bbfd6b2b7 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h @@ -60,7 +60,7 @@ struct iwl_stored_beacon_notif_common { } __packed; /** - * struct iwl_stored_beacon_notif - Stored beacon notification + * struct iwl_stored_beacon_notif_v2 - Stored beacon notification * * @common: fields common for all versions * @data: beacon data, length in @byte_count diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h b/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h index 08a2c416ce60..4d8a12799c4d 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2012-2014, 2018, 2020-2023 Intel Corporation + * Copyright (C) 2012-2014, 2018, 2020-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -113,7 +113,7 @@ struct iwl_phy_context_cmd_tail { } __packed; /** - * struct iwl_phy_context_cmd - config of the PHY context + * struct iwl_phy_context_cmd_v1 - config of the PHY context * ( PHY_CONTEXT_CMD = 0x8 ) * @id_and_color: ID and color of the relevant Binding * @action: action to perform, see &enum iwl_ctxt_action @@ -144,6 +144,7 @@ struct iwl_phy_context_cmd_v1 { * @rxchain_info: ??? * @sbb_bandwidth: 0 disabled, 1 - 40Mhz ... 
4 - 320MHz * @sbb_ctrl_channel_loc: location of the control channel + * @puncture_mask: bitmap of punctured subchannels * @dsp_cfg_flags: set to 0 * @reserved: reserved to align to 64 bit */ diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h b/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h index 92e4b62c119f..c73d4d597857 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h @@ -195,7 +195,7 @@ struct ct_kill_notif { } __packed; /* CT_KILL_NOTIFICATION_API_S_VER_1, CT_KILL_NOTIFICATION_API_S_VER_2 */ /** -* enum ctdp_cmd_operation - CTDP command operations +* enum iwl_mvm_ctdp_cmd_operation - CTDP command operations * @CTDP_CMD_OPERATION_START: update the current budget * @CTDP_CMD_OPERATION_STOP: stop ctdp * @CTDP_CMD_OPERATION_REPORT: get the average budget diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h index 532d5cfa9162..6e6a92d173cc 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h @@ -462,7 +462,7 @@ struct iwl_per_chain_offset { } __packed; /* PER_CHAIN_LIMIT_OFFSET_PER_CHAIN_S_VER_1 */ /** - * struct iwl_geo_tx_power_profile_cmd_v1 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. + * struct iwl_geo_tx_power_profiles_cmd_v1 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation * @table: offset profile per band. */ @@ -472,7 +472,7 @@ struct iwl_geo_tx_power_profiles_cmd_v1 { } __packed; /* PER_CHAIN_LIMIT_OFFSET_CMD_VER_1 */ /** - * struct iwl_geo_tx_power_profile_cmd_v2 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. + * struct iwl_geo_tx_power_profiles_cmd_v2 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation * @table: offset profile per band. * @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading) @@ -484,7 +484,7 @@ struct iwl_geo_tx_power_profiles_cmd_v2 { } __packed; /* PER_CHAIN_LIMIT_OFFSET_CMD_VER_2 */ /** - * struct iwl_geo_tx_power_profile_cmd_v3 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. + * struct iwl_geo_tx_power_profiles_cmd_v3 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation * @table: offset profile per band. * @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading) @@ -496,7 +496,7 @@ struct iwl_geo_tx_power_profiles_cmd_v3 { } __packed; /* PER_CHAIN_LIMIT_OFFSET_CMD_VER_3 */ /** - * struct iwl_geo_tx_power_profile_cmd_v4 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. + * struct iwl_geo_tx_power_profiles_cmd_v4 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation * @table: offset profile per band. * @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading) @@ -508,7 +508,7 @@ struct iwl_geo_tx_power_profiles_cmd_v4 { } __packed; /* PER_CHAIN_LIMIT_OFFSET_CMD_VER_4 */ /** - * struct iwl_geo_tx_power_profile_cmd_v5 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. + * struct iwl_geo_tx_power_profiles_cmd_v5 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd. * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation * @table: offset profile per band. 
* @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading) @@ -569,9 +569,12 @@ enum iwl_ppag_flags { * @v2: version 2 * version 3, 4, 5 and 6 are the same structure as v2, * but has a different format of the flags bitmap - * @flags: values from &enum iwl_ppag_flags - * @gain: table of antenna gain values per chain and sub-band - * @reserved: reserved + * @v1.flags: values from &enum iwl_ppag_flags + * @v1.gain: table of antenna gain values per chain and sub-band + * @v1.reserved: reserved + * @v2.flags: values from &enum iwl_ppag_flags + * @v2.gain: table of antenna gain values per chain and sub-band + * @v2.reserved: reserved */ union iwl_ppag_table_cmd { struct { diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h index a1a272433b09..1a60f0cdf972 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2012-2014, 2018-2022 Intel Corporation + * Copyright (C) 2012-2014, 2018-2022, 2024 Intel Corporation * Copyright (C) 2017 Intel Deutschland GmbH */ #ifndef __iwl_fw_api_rs_h__ @@ -9,7 +9,7 @@ #include "mac.h" /** - * enum iwl_tlc_mng_cfg_flags_enum - options for TLC config flags + * enum iwl_tlc_mng_cfg_flags - options for TLC config flags * @IWL_TLC_MNG_CFG_FLAGS_STBC_MSK: enable STBC. For HE this enables STBC for * bandwidths <= 80MHz * @IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK: enable LDPC diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h index e71f29d0c694..691c879cb90d 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2012-2014, 2018-2023 Intel Corporation + * Copyright (C) 2012-2014, 2018-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2015-2017 Intel Deutschland GmbH */ @@ -710,7 +710,15 @@ struct iwl_rx_mpdu_desc { __le32 reorder_data; union { + /** + * @v1: version 1 of the remaining RX descriptor, + * see &struct iwl_rx_mpdu_desc_v1 + */ struct iwl_rx_mpdu_desc_v1 v1; + /** + * @v3: version 3 of the remaining RX descriptor, + * see &struct iwl_rx_mpdu_desc_v3 + */ struct iwl_rx_mpdu_desc_v3 v3; }; } __packed; /* RX_MPDU_RES_START_API_S_VER_3, @@ -976,7 +984,7 @@ struct iwl_ba_window_status_notif { } __packed; /* BA_WINDOW_STATUS_NTFY_API_S_VER_1 */ /** - * struct iwl_rfh_queue_config - RX queue configuration + * struct iwl_rfh_queue_data - RX queue configuration * @q_num: Q num * @enable: enable queue * @reserved: alignment diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h index 6684506f4fc4..8598031567bb 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h @@ -149,7 +149,7 @@ struct iwl_scan_offload_profile_cfg_data { } __packed; /** - * struct iwl_scan_offload_profile_cfg + * struct iwl_scan_offload_profile_cfg_v1 - scan offload profile config * @profiles: profiles to search for match * @data: the rest of the data for profile_cfg */ @@ -423,7 +423,7 @@ struct iwl_lmac_scan_complete_notif { } __packed; /** - * struct iwl_scan_offload_complete - PERIODIC_SCAN_COMPLETE_NTF_API_S_VER_2 + * struct iwl_periodic_scan_complete - PERIODIC_SCAN_COMPLETE_NTF_API_S_VER_2 * 
@last_schedule_line: last schedule line executed (fast or regular) * @last_schedule_iteration: last scan iteration executed before scan abort * @status: &enum iwl_scan_offload_complete_status @@ -443,10 +443,10 @@ struct iwl_periodic_scan_complete { /* UMAC Scan API */ /* The maximum of either of these cannot exceed 8, because we use an - * 8-bit mask (see IWL_MVM_SCAN_MASK in mvm.h). + * 8-bit mask (see enum iwl_scan_status). */ -#define IWL_MVM_MAX_UMAC_SCANS 4 -#define IWL_MVM_MAX_LMAC_SCANS 1 +#define IWL_MAX_UMAC_SCANS 4 +#define IWL_MAX_LMAC_SCANS 1 enum scan_config_flags { SCAN_CONFIG_FLAG_ACTIVATE = BIT(0), @@ -789,7 +789,7 @@ struct iwl_scan_req_umac_tail_v1 { } __packed; /** - * struct iwl_scan_req_umac_tail - the rest of the UMAC scan request command + * struct iwl_scan_req_umac_tail_v2 - the rest of the UMAC scan request command * parameters following channels configuration array. * @schedule: two scheduling plans. * @delay: delay in TUs before starting the first scan iteration @@ -1085,7 +1085,7 @@ struct iwl_scan_req_params_v12 { } __packed; /* SCAN_REQUEST_PARAMS_API_S_VER_12 */ /** - * struct iwl_scan_req_params_v16 + * struct iwl_scan_req_params_v17 - scan request parameters (v17) * @general_params: &struct iwl_scan_general_params_v11 * @channel_params: &struct iwl_scan_channel_params_v7 * @periodic_params: &struct iwl_scan_periodic_parms_v1 @@ -1111,7 +1111,7 @@ struct iwl_scan_req_umac_v12 { } __packed; /* SCAN_REQUEST_CMD_UMAC_API_S_VER_12 */ /** - * struct iwl_scan_req_umac_v16 + * struct iwl_scan_req_umac_v17 - scan request command (v17) * @uid: scan id, &enum iwl_umac_scan_uid_offsets * @ooc_priority: out of channel priority - &enum iwl_scan_priority * @scan_params: scan parameters diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h b/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h index 2e15be71c957..f4b827b58bd3 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2012-2014, 2018-2020, 2022-2023 Intel Corporation + * Copyright (C) 2012-2014, 2018-2020, 2022-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -340,11 +340,13 @@ struct iwl_hs20_roc_res { * @ROC_ACTIVITY_HOTSPOT: ROC for hs20 activity * @ROC_ACTIVITY_P2P_DISC: ROC for p2p discoverability activity * @ROC_ACTIVITY_P2P_TXRX: ROC for p2p action frames activity + * @ROC_ACTIVITY_P2P_NEG: ROC for p2p negotiation (used also for TX) */ enum iwl_roc_activity { ROC_ACTIVITY_HOTSPOT, ROC_ACTIVITY_P2P_DISC, ROC_ACTIVITY_P2P_TXRX, + ROC_ACTIVITY_P2P_NEG, ROC_NUM_ACTIVITIES }; /* ROC_ACTIVITY_API_E_VER_1 */ diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/tx.h b/drivers/net/wireless/intel/iwlwifi/fw/api/tx.h index bbd176d88820..c5277e2f8cd4 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/tx.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/tx.h @@ -698,6 +698,7 @@ enum iwl_mvm_ba_resp_flags { * @query_frame_cnt: SCD query frame count * @txed: number of frames sent in the aggregation (all-TIDs) * @done: number of frames that were Acked by the BA (all-TIDs) + * @rts_retry_cnt: RTS retry count * @reserved: reserved (for alignment) * @wireless_time: Wireless-media time * @tx_rate: the rate the aggregation was sent at @@ -718,7 +719,8 @@ struct iwl_mvm_compressed_ba_notif { __le16 query_frame_cnt; __le16 txed; __le16 done; - __le16 
reserved; + u8 rts_retry_cnt; + u8 reserved; __le32 wireless_time; __le32 tx_rate; __le16 tfd_cnt; @@ -864,7 +866,7 @@ enum iwl_dump_control { }; /** - * struct iwl_tx_path_flush_cmd -- queue/FIFO flush command + * struct iwl_tx_path_flush_cmd_v1 -- queue/FIFO flush command * @queues_ctl: bitmap of queues to flush * @flush_ctl: control flags * @reserved: reserved diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c index 945ffc083d25..fa57df336785 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c @@ -1168,17 +1168,13 @@ static int iwl_dump_ini_config_iter(struct iwl_fw_runtime *fwrt, le32_to_cpu(reg->dev_addr.offset); int i; - /* we shouldn't get here if the trans doesn't have read_config32 */ - if (WARN_ON_ONCE(!trans->ops->read_config32)) - return -EOPNOTSUPP; - range->internal_base_addr = cpu_to_le32(addr); range->range_data_size = reg->dev_addr.size; for (i = 0; i < le32_to_cpu(reg->dev_addr.size); i += 4) { int ret; u32 tmp; - ret = trans->ops->read_config32(trans, addr + i, &tmp); + ret = iwl_trans_read_config32(trans, addr + i, &tmp); if (ret < 0) return ret; diff --git a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c index 751a125a1566..893b21fcaf87 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c @@ -230,8 +230,7 @@ static ssize_t iwl_dbgfs_send_hcmd_write(struct iwl_fw_runtime *fwrt, char *buf, .data = { NULL, }, }; - if (fwrt->ops && fwrt->ops->fw_running && - !fwrt->ops->fw_running(fwrt->ops_ctx)) + if (!iwl_trans_fw_running(fwrt->trans)) return -EIO; if (count < header_size + 1 || count > 1024 * 4) diff --git a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h index 5c76e3b94968..e63b08b7d336 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h @@ -248,7 +248,7 @@ struct iwl_fw_error_dump_mem { #define IWL_INI_DUMP_NAME_TYPE (BIT(31) | BIT(24)) /** - * struct iwl_fw_error_dump_data - data for one type + * struct iwl_fw_ini_error_dump_data - data for one type * @type: &enum iwl_fw_ini_region_type * @sub_type: sub type id * @sub_type_ver: sub type version @@ -278,7 +278,7 @@ struct iwl_fw_ini_dump_entry { } __packed; /** - * struct iwl_fw_error_dump_file - header of dump file + * struct iwl_fw_ini_dump_file_hdr - header of dump file * @barker: must be %IWL_FW_INI_ERROR_DUMP_BARKER * @file_len: the length of all the file including the header */ diff --git a/drivers/net/wireless/intel/iwlwifi/fw/init.c b/drivers/net/wireless/intel/iwlwifi/fw/init.c index 135bd48bfe9f..d8b083be5b6b 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/init.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/init.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* * Copyright (C) 2017 Intel Deutschland GmbH - * Copyright (C) 2019-2021 Intel Corporation + * Copyright (C) 2019-2021, 2024 Intel Corporation */ #include "iwl-drv.h" #include "runtime.h" @@ -135,7 +135,9 @@ int iwl_configure_rxq(struct iwl_fw_runtime *fwrt) struct iwl_trans_rxq_dma_data data; cmd->data[i].q_num = i + 1; - iwl_trans_get_rxq_dma_data(fwrt->trans, i + 1, &data); + ret = iwl_trans_get_rxq_dma_data(fwrt->trans, i + 1, &data); + if (ret) + goto out; cmd->data[i].fr_bd_cb = cpu_to_le64(data.fr_bd_cb); cmd->data[i].urbd_stts_wrptr = @@ -149,6 +151,7 @@ int 
iwl_configure_rxq(struct iwl_fw_runtime *fwrt) ret = iwl_trans_send_cmd(fwrt->trans, &hcmd); +out: kfree(cmd); if (ret) diff --git a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c index b9bb3636e88f..560a91998cc4 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.c @@ -497,9 +497,13 @@ static size_t iwl_get_lari_config_cmd_size(u8 cmd_ver) size_t cmd_size; switch (cmd_ver) { - case 10: + case 12: + case 11: cmd_size = sizeof(struct iwl_lari_config_change_cmd); break; + case 10: + cmd_size = sizeof(struct iwl_lari_config_change_cmd_v10); + break; case 9: case 8: case 7: @@ -560,6 +564,9 @@ int iwl_fill_lari_config(struct iwl_fw_runtime *fwrt, if (!ret) { if (cmd_ver < 8) value &= ~ACTIVATE_5G2_IN_WW_MASK; + if (cmd_ver < 12) + value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V11; + cmd->chan_state_active_bitmap = cpu_to_le32(value); } @@ -580,6 +587,10 @@ int iwl_fill_lari_config(struct iwl_fw_runtime *fwrt, if (!ret) cmd->oem_320mhz_allow_bitmap = cpu_to_le32(value); + ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ENABLE_11BE, &value); + if (!ret) + cmd->oem_11be_allow_bitmap = cpu_to_le32(value); + if (cmd->config_bitmap || cmd->oem_uhb_allow_bitmap || cmd->oem_11ax_allow_bitmap || @@ -587,7 +598,8 @@ int iwl_fill_lari_config(struct iwl_fw_runtime *fwrt, cmd->chan_state_active_bitmap || cmd->force_disable_channels_bitmap || cmd->edt_bitmap || - cmd->oem_320mhz_allow_bitmap) { + cmd->oem_320mhz_allow_bitmap || + cmd->oem_11be_allow_bitmap) { IWL_DEBUG_RADIO(fwrt, "sending LARI_CONFIG_CHANGE, config_bitmap=0x%x, oem_11ax_allow_bitmap=0x%x\n", le32_to_cpu(cmd->config_bitmap), @@ -605,6 +617,9 @@ int iwl_fill_lari_config(struct iwl_fw_runtime *fwrt, "sending LARI_CONFIG_CHANGE, edt_bitmap=0x%x, oem_320mhz_allow_bitmap=0x%x\n", le32_to_cpu(cmd->edt_bitmap), le32_to_cpu(cmd->oem_320mhz_allow_bitmap)); + IWL_DEBUG_RADIO(fwrt, + "sending LARI_CONFIG_CHANGE, oem_11be_allow_bitmap=0x%x\n", + le32_to_cpu(cmd->oem_11be_allow_bitmap)); } else { return 1; } diff --git a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h index 633c9ad9af84..e2c056f483c1 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h @@ -115,7 +115,8 @@ enum iwl_dsm_funcs { DSM_FUNC_FORCE_DISABLE_CHANNELS = 9, DSM_FUNC_ENERGY_DETECTION_THRESHOLD = 10, DSM_FUNC_RFI_CONFIG = 11, - DSM_FUNC_NUM_FUNCS = 12, + DSM_FUNC_ENABLE_11BE = 12, + DSM_FUNC_NUM_FUNCS = 13, }; enum iwl_dsm_values_srd { diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h index 9122f9a1260a..048877fa7c71 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h @@ -12,14 +12,13 @@ #include "fw/api/debug.h" #include "fw/api/paging.h" #include "fw/api/power.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "fw/acpi.h" #include "fw/regulatory.h" struct iwl_fw_runtime_ops { void (*dump_start)(void *ctx); void (*dump_end)(void *ctx); - bool (*fw_running)(void *ctx); int (*send_hcmd)(void *ctx, struct iwl_host_cmd *host_cmd); bool (*d3_debug_enable)(void *ctx); }; @@ -104,7 +103,6 @@ struct iwl_txf_iter_data { * @cur_fw_img: current firmware image, must be maintained by * the driver by calling &iwl_fw_set_current_image() * @dump: debug dump data - * @uats_enabled: VLP or AFC AP is enabled * @uats_table: AP type 
table * @uefi_tables_lock_status: The status of the WIFI GUID UEFI variables lock: * 0: Unlocked, 1 and 2: Locked. @@ -184,7 +182,6 @@ struct iwl_fw_runtime { bool sgom_enabled; struct iwl_mcc_allowed_ap_type_cmd uats_table; u8 uefi_tables_lock_status; - bool uats_enabled; }; void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans, diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h index 732889f96ca2..b2abd4fd1944 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h @@ -241,7 +241,7 @@ enum iwl_cfg_trans_ltr_delay { }; /** - * struct iwl_cfg_trans - information needed to start the trans + * struct iwl_cfg_trans_params - information needed to start the trans * * These values are specific to the device ID and do not change when * multiple configs are used for a single device ID. They values are @@ -258,6 +258,7 @@ enum iwl_cfg_trans_ltr_delay { * @mq_rx_supported: multi-queue rx support * @integrated: discrete or integrated * @low_latency_xtal: use the low latency xtal if supported + * @bisr_workaround: BISR hardware workaround (for 22260 series devices) * @ltr_delay: LTR delay parameter, &enum iwl_cfg_trans_ltr_delay. * @imr_enabled: use the IMR if supported. */ diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h index 4511d7fb2279..98563757ce2c 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2005-2014, 2018-2023 Intel Corporation + * Copyright (C) 2005-2014, 2018-2024 Intel Corporation * Copyright (C) 2013-2014 Intel Mobile Communications GmbH * Copyright (C) 2016 Intel Deutschland GmbH */ @@ -304,9 +304,7 @@ #define CSR_HW_RFID_IS_CDB(_val) (((_val) & 0x10000000) >> 28) #define CSR_HW_RFID_IS_JACKET(_val) (((_val) & 0x20000000) >> 29) -/** - * hw_rev values - */ +/* hw_rev values */ enum { SILICON_A_STEP = 0, SILICON_B_STEP, diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c index 561d0c261123..08d990ba8a79 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c @@ -223,12 +223,6 @@ static int iwl_dbg_tlv_alloc_region(struct iwl_trans *trans, return -EINVAL; } - if (type == IWL_FW_INI_REGION_PCI_IOSF_CONFIG && - !trans->ops->read_config32) { - IWL_ERR(trans, "WRT: Unsupported region type %u\n", type); - return -EOPNOTSUPP; - } - if (type == IWL_FW_INI_REGION_INTERNAL_BUFFER) { trans->dbg.imr_data.sram_addr = le32_to_cpu(reg->internal_buffer.base_addr); @@ -1246,12 +1240,6 @@ iwl_dbg_tlv_tp_trigger(struct iwl_fw_runtime *fwrt, bool sync, } fwrt->trans->dbg.restart_required = false; - IWL_DEBUG_FW(fwrt, "WRT: tp %d, reset_fw %d\n", - tp, dump_data.trig->reset_fw); - IWL_DEBUG_FW(fwrt, - "WRT: restart_required %d, last_tp_resetfw %d\n", - fwrt->trans->dbg.restart_required, - fwrt->trans->dbg.last_tp_resetfw); if (fwrt->trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_9000) { @@ -1261,22 +1249,17 @@ iwl_dbg_tlv_tp_trigger(struct iwl_fw_runtime *fwrt, bool sync, IWL_FW_INI_RESET_FW_MODE_STOP_FW_ONLY) { fwrt->trans->dbg.restart_required = false; fwrt->trans->dbg.last_tp_resetfw = 0xFF; - IWL_DEBUG_FW(fwrt, "WRT: FW_ASSERT due to reset_fw_mode-no restart\n"); } else if (le32_to_cpu(dump_data.trig->reset_fw) == 
IWL_FW_INI_RESET_FW_MODE_STOP_AND_RELOAD_FW) { - IWL_DEBUG_FW(fwrt, "WRT: stop and reload firmware\n"); fwrt->trans->dbg.restart_required = true; } else if (le32_to_cpu(dump_data.trig->reset_fw) == IWL_FW_INI_RESET_FW_MODE_STOP_FW_ONLY) { - IWL_DEBUG_FW(fwrt, - "WRT: stop only and no reload firmware\n"); fwrt->trans->dbg.restart_required = false; fwrt->trans->dbg.last_tp_resetfw = le32_to_cpu(dump_data.trig->reset_fw); } else if (le32_to_cpu(dump_data.trig->reset_fw) == IWL_FW_INI_RESET_FW_MODE_NOTHING) { - IWL_DEBUG_FW(fwrt, - "WRT: nothing need to be done after debug collection\n"); + /* nothing */ } else { IWL_ERR(fwrt, "WRT: wrong resetfw %d\n", le32_to_cpu(dump_data.trig->reset_fw)); diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h index 2c280a2fe3df..0d4a0896a2c5 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-data.h @@ -3,7 +3,7 @@ * * Copyright(c) 2009 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2015 Intel Deutschland GmbH - * Copyright(c) 2018 - 2019, 2023 Intel Corporation + * Copyright(c) 2018 - 2019, 2023-2024 Intel Corporation *****************************************************************************/ #if !defined(__IWLWIFI_DEVICE_TRACE_DATA) || defined(TRACE_HEADER_MULTI_READ) @@ -28,7 +28,7 @@ TRACE_EVENT(iwlwifi_dev_tx_tb, TP_fast_assign( DEV_ASSIGN; __entry->phys = phys; - if (iwl_trace_data(skb)) + if (__get_dynamic_array_len(data)) memcpy(__get_dynamic_array(data), data_src, data_len); ), TP_printk("[%s] TX frame data", __get_str(dev)) diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h index e656bf6bc003..ead72c3d33bd 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace-iwlwifi.h @@ -4,7 +4,7 @@ * Copyright(c) 2009 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2015 Intel Mobile Communications GmbH * Copyright(c) 2016 - 2017 Intel Deutschland GmbH - * Copyright(c) 2018, 2023 Intel Corporation + * Copyright(c) 2018, 2023-2024 Intel Corporation *****************************************************************************/ #if !defined(__IWLWIFI_DEVICE_TRACE_IWLWIFI) || defined(TRACE_HEADER_MULTI_READ) @@ -88,8 +88,8 @@ TRACE_EVENT(iwlwifi_dev_tx, * for the possible padding). */ __dynamic_array(u8, buf0, buf0_len) - __dynamic_array(u8, buf1, hdr_len > 0 && iwl_trace_data(skb) ? - 0 : skb->len - hdr_len) + __dynamic_array(u8, buf1, hdr_len > 0 && !iwl_trace_data(skb) ? 
+ skb->len - hdr_len : 0) ), TP_fast_assign( DEV_ASSIGN; @@ -99,7 +99,7 @@ TRACE_EVENT(iwlwifi_dev_tx, __entry->framelen += skb->len - hdr_len; memcpy(__get_dynamic_array(tfd), tfd, tfdlen); memcpy(__get_dynamic_array(buf0), buf0, buf0_len); - if (hdr_len > 0 && !iwl_trace_data(skb)) + if (__get_dynamic_array_len(buf1)) skb_copy_bits(skb, hdr_len, __get_dynamic_array(buf1), skb->len - hdr_len); diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c index d156a9c64194..aaaabd67f959 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c @@ -982,16 +982,10 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv, minor = le32_to_cpup(ptr++); local_comp = le32_to_cpup(ptr); - if (major >= 35) - snprintf(drv->fw.fw_version, - sizeof(drv->fw.fw_version), - "%u.%08x.%u %s", major, minor, - local_comp, iwl_reduced_fw_name(drv)); - else - snprintf(drv->fw.fw_version, - sizeof(drv->fw.fw_version), - "%u.%u.%u %s", major, minor, - local_comp, iwl_reduced_fw_name(drv)); + snprintf(drv->fw.fw_version, + sizeof(drv->fw.fw_version), + "%u.%08x.%u %s", major, minor, + local_comp, iwl_reduced_fw_name(drv)); break; } case IWL_UCODE_TLV_FW_DBG_DEST: { @@ -1842,7 +1836,7 @@ void iwl_drv_stop(struct iwl_drv *drv) mutex_unlock(&iwlwifi_opmode_table_mtx); #ifdef CONFIG_IWLWIFI_DEBUGFS - drv->trans->ops->debugfs_cleanup(drv->trans); + iwl_trans_debugfs_cleanup(drv->trans); debugfs_remove_recursive(drv->dbgfs_drv); #endif diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-read.c b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-read.c deleted file mode 100644 index 5f386bb1a353..000000000000 --- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-read.c +++ /dev/null @@ -1,394 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause -/* - * Copyright (C) 2005-2014, 2018-2019, 2021 Intel Corporation - */ -#include <linux/types.h> -#include <linux/slab.h> -#include <linux/export.h> - -#include "iwl-drv.h" -#include "iwl-debug.h" -#include "iwl-eeprom-read.h" -#include "iwl-io.h" -#include "iwl-prph.h" -#include "iwl-csr.h" - -/* - * EEPROM access time values: - * - * Driver initiates EEPROM read by writing byte address << 1 to CSR_EEPROM_REG. - * Driver then polls CSR_EEPROM_REG for CSR_EEPROM_REG_READ_VALID_MSK (0x1). - * When polling, wait 10 uSec between polling loops, up to a maximum 5000 uSec. - * Driver reads 16-bit value from bits 31-16 of CSR_EEPROM_REG. - */ -#define IWL_EEPROM_ACCESS_TIMEOUT 5000 /* uSec */ - -/* - * The device's EEPROM semaphore prevents conflicts between driver and uCode - * when accessing the EEPROM; each access is a series of pulses to/from the - * EEPROM chip, not a single event, so even reads could conflict if they - * weren't arbitrated by the semaphore. 
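[Editor's aside: the access protocol described in the two comments above is compact enough to sketch. This mirrors the removed helpers that follow; it is not new driver code, and it is simplified in that the real code also polls for the semaphore grant and releases it afterwards:

	/* Sketch of one 16-bit EEPROM read, per the comments above. */
	static int eeprom_read_word_sketch(struct iwl_trans *trans, u16 addr,
					   __le16 *word)
	{
		int ret;

		/* Request the semaphore so uCode cannot pulse the EEPROM
		 * concurrently (real code polls until the bit is granted).
		 */
		iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG,
			    CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM);
		/* Write byte address << 1, then poll for READ_VALID. */
		iwl_write32(trans, CSR_EEPROM_REG,
			    CSR_EEPROM_REG_MSK_ADDR & (addr << 1));
		ret = iwl_poll_bit(trans, CSR_EEPROM_REG,
				   CSR_EEPROM_REG_READ_VALID_MSK,
				   CSR_EEPROM_REG_READ_VALID_MSK,
				   IWL_EEPROM_ACCESS_TIMEOUT); /* <= 5000 usec */
		if (ret < 0)
			return ret;
		/* Data word lives in bits 31-16 of the register. */
		*word = cpu_to_le16(iwl_read32(trans, CSR_EEPROM_REG) >> 16);
		return 0;
	}

End of aside.]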
- */ -#define IWL_EEPROM_SEM_TIMEOUT 10 /* microseconds */ -#define IWL_EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */ - - -static int iwl_eeprom_acquire_semaphore(struct iwl_trans *trans) -{ - u16 count; - int ret; - - for (count = 0; count < IWL_EEPROM_SEM_RETRY_LIMIT; count++) { - /* Request semaphore */ - iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); - - /* See if we got it */ - ret = iwl_poll_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, - IWL_EEPROM_SEM_TIMEOUT); - if (ret >= 0) { - IWL_DEBUG_EEPROM(trans->dev, - "Acquired semaphore after %d tries.\n", - count+1); - return ret; - } - } - - return ret; -} - -static void iwl_eeprom_release_semaphore(struct iwl_trans *trans) -{ - iwl_clear_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); -} - -static int iwl_eeprom_verify_signature(struct iwl_trans *trans, bool nvm_is_otp) -{ - u32 gp = iwl_read32(trans, CSR_EEPROM_GP) & CSR_EEPROM_GP_VALID_MSK; - - IWL_DEBUG_EEPROM(trans->dev, "EEPROM signature=0x%08x\n", gp); - - switch (gp) { - case CSR_EEPROM_GP_BAD_SIG_EEP_GOOD_SIG_OTP: - if (!nvm_is_otp) { - IWL_ERR(trans, "EEPROM with bad signature: 0x%08x\n", - gp); - return -ENOENT; - } - return 0; - case CSR_EEPROM_GP_GOOD_SIG_EEP_LESS_THAN_4K: - case CSR_EEPROM_GP_GOOD_SIG_EEP_MORE_THAN_4K: - if (nvm_is_otp) { - IWL_ERR(trans, "OTP with bad signature: 0x%08x\n", gp); - return -ENOENT; - } - return 0; - case CSR_EEPROM_GP_BAD_SIGNATURE_BOTH_EEP_AND_OTP: - default: - IWL_ERR(trans, - "bad EEPROM/OTP signature, type=%s, EEPROM_GP=0x%08x\n", - nvm_is_otp ? "OTP" : "EEPROM", gp); - return -ENOENT; - } -} - -/****************************************************************************** - * - * OTP related functions - * -******************************************************************************/ - -static void iwl_set_otp_access_absolute(struct iwl_trans *trans) -{ - iwl_read32(trans, CSR_OTP_GP_REG); - - iwl_clear_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_OTP_ACCESS_MODE); -} - -static int iwl_nvm_is_otp(struct iwl_trans *trans) -{ - u32 otpgp; - - /* OTP only valid for CP/PP and after */ - switch (trans->hw_rev & CSR_HW_REV_TYPE_MSK) { - case CSR_HW_REV_TYPE_NONE: - IWL_ERR(trans, "Unknown hardware type\n"); - return -EIO; - case CSR_HW_REV_TYPE_5300: - case CSR_HW_REV_TYPE_5350: - case CSR_HW_REV_TYPE_5100: - case CSR_HW_REV_TYPE_5150: - return 0; - default: - otpgp = iwl_read32(trans, CSR_OTP_GP_REG); - if (otpgp & CSR_OTP_GP_REG_DEVICE_SELECT) - return 1; - return 0; - } -} - -static int iwl_init_otp_access(struct iwl_trans *trans) -{ - int ret; - - ret = iwl_finish_nic_init(trans); - if (ret) - return ret; - - iwl_set_bits_prph(trans, APMG_PS_CTRL_REG, - APMG_PS_CTRL_VAL_RESET_REQ); - udelay(5); - iwl_clear_bits_prph(trans, APMG_PS_CTRL_REG, - APMG_PS_CTRL_VAL_RESET_REQ); - - /* - * CSR auto clock gate disable bit - - * this is only applicable for HW with OTP shadow RAM - */ - if (trans->trans_cfg->base_params->shadow_ram_support) - iwl_set_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG, - CSR_RESET_LINK_PWR_MGMT_DISABLED); - - return 0; -} - -static int iwl_read_otp_word(struct iwl_trans *trans, u16 addr, - __le16 *eeprom_data) -{ - int ret = 0; - u32 r; - u32 otpgp; - - iwl_write32(trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); - ret = iwl_poll_bit(trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_READ_VALID_MSK, - CSR_EEPROM_REG_READ_VALID_MSK, - IWL_EEPROM_ACCESS_TIMEOUT); - if 
(ret < 0) { - IWL_ERR(trans, "Time out reading OTP[%d]\n", addr); - return ret; - } - r = iwl_read32(trans, CSR_EEPROM_REG); - /* check for ECC errors: */ - otpgp = iwl_read32(trans, CSR_OTP_GP_REG); - if (otpgp & CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK) { - /* stop in this case */ - /* set the uncorrectable OTP ECC bit for acknowledgment */ - iwl_set_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); - IWL_ERR(trans, "Uncorrectable OTP ECC error, abort OTP read\n"); - return -EINVAL; - } - if (otpgp & CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK) { - /* continue in this case */ - /* set the correctable OTP ECC bit for acknowledgment */ - iwl_set_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK); - IWL_ERR(trans, "Correctable OTP ECC error, continue read\n"); - } - *eeprom_data = cpu_to_le16(r >> 16); - return 0; -} - -/* - * iwl_is_otp_empty: check for empty OTP - */ -static bool iwl_is_otp_empty(struct iwl_trans *trans) -{ - u16 next_link_addr = 0; - __le16 link_value; - bool is_empty = false; - - /* locate the beginning of OTP link list */ - if (!iwl_read_otp_word(trans, next_link_addr, &link_value)) { - if (!link_value) { - IWL_ERR(trans, "OTP is empty\n"); - is_empty = true; - } - } else { - IWL_ERR(trans, "Unable to read first block of OTP list.\n"); - is_empty = true; - } - - return is_empty; -} - - -/* - * iwl_find_otp_image: find EEPROM image in OTP - * finding the OTP block that contains the EEPROM image. - * the last valid block on the link list (the block _before_ the last block) - * is the block we should read and used to configure the device. - * If all the available OTP blocks are full, the last block will be the block - * we should read and used to configure the device. - * only perform this operation if shadow RAM is disabled - */ -static int iwl_find_otp_image(struct iwl_trans *trans, - u16 *validblockaddr) -{ - u16 next_link_addr = 0, valid_addr; - __le16 link_value = 0; - int usedblocks = 0; - - /* set addressing mode to absolute to traverse the link list */ - iwl_set_otp_access_absolute(trans); - - /* checking for empty OTP or error */ - if (iwl_is_otp_empty(trans)) - return -EINVAL; - - /* - * start traverse link list - * until reach the max number of OTP blocks - * different devices have different number of OTP blocks - */ - do { - /* save current valid block address - * check for more block on the link list - */ - valid_addr = next_link_addr; - next_link_addr = le16_to_cpu(link_value) * sizeof(u16); - IWL_DEBUG_EEPROM(trans->dev, "OTP blocks %d addr 0x%x\n", - usedblocks, next_link_addr); - if (iwl_read_otp_word(trans, next_link_addr, &link_value)) - return -EINVAL; - if (!link_value) { - /* - * reach the end of link list, return success and - * set address point to the starting address - * of the image - */ - *validblockaddr = valid_addr; - /* skip first 2 bytes (link list pointer) */ - *validblockaddr += 2; - return 0; - } - /* more in the link list, continue */ - usedblocks++; - } while (usedblocks <= trans->trans_cfg->base_params->max_ll_items); - - /* OTP has no valid blocks */ - IWL_DEBUG_EEPROM(trans->dev, "OTP has no valid blocks\n"); - return -EINVAL; -} - -/* - * iwl_read_eeprom - read EEPROM contents - * - * Load the EEPROM contents from adapter and return it - * and its size. - * - * NOTE: This routine uses the non-debug IO access functions. 
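[Editor's aside: for readers tracking this removal, the caller contract of the deleted helper was simply "get a kmalloc'ed image plus its size, free it when done". A sketch with error paths abbreviated; the wrapper function name is illustrative:

	static int read_and_parse_eeprom_sketch(struct iwl_trans *trans)
	{
		u8 *eeprom;
		size_t eeprom_size;
		int ret;

		ret = iwl_read_eeprom(trans, &eeprom, &eeprom_size);
		if (ret)
			return ret;
		/* ... parse the image ... */
		kfree(eeprom);	/* caller owns and frees the buffer */
		return 0;
	}

End of aside.]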
- */ -int iwl_read_eeprom(struct iwl_trans *trans, u8 **eeprom, size_t *eeprom_size) -{ - __le16 *e; - u32 gp = iwl_read32(trans, CSR_EEPROM_GP); - int sz; - int ret; - u16 addr; - u16 validblockaddr = 0; - u16 cache_addr = 0; - int nvm_is_otp; - - if (!eeprom || !eeprom_size) - return -EINVAL; - - nvm_is_otp = iwl_nvm_is_otp(trans); - if (nvm_is_otp < 0) - return nvm_is_otp; - - sz = trans->trans_cfg->base_params->eeprom_size; - IWL_DEBUG_EEPROM(trans->dev, "NVM size = %d\n", sz); - - e = kmalloc(sz, GFP_KERNEL); - if (!e) - return -ENOMEM; - - ret = iwl_eeprom_verify_signature(trans, nvm_is_otp); - if (ret < 0) { - IWL_ERR(trans, "EEPROM not found, EEPROM_GP=0x%08x\n", gp); - goto err_free; - } - - /* Make sure driver (instead of uCode) is allowed to read EEPROM */ - ret = iwl_eeprom_acquire_semaphore(trans); - if (ret < 0) { - IWL_ERR(trans, "Failed to acquire EEPROM semaphore.\n"); - goto err_free; - } - - if (nvm_is_otp) { - ret = iwl_init_otp_access(trans); - if (ret) { - IWL_ERR(trans, "Failed to initialize OTP access.\n"); - goto err_unlock; - } - - iwl_write32(trans, CSR_EEPROM_GP, - iwl_read32(trans, CSR_EEPROM_GP) & - ~CSR_EEPROM_GP_IF_OWNER_MSK); - - iwl_set_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK | - CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); - /* traversing the linked list if no shadow ram supported */ - if (!trans->trans_cfg->base_params->shadow_ram_support) { - ret = iwl_find_otp_image(trans, &validblockaddr); - if (ret) - goto err_unlock; - } - for (addr = validblockaddr; addr < validblockaddr + sz; - addr += sizeof(u16)) { - __le16 eeprom_data; - - ret = iwl_read_otp_word(trans, addr, &eeprom_data); - if (ret) - goto err_unlock; - e[cache_addr / 2] = eeprom_data; - cache_addr += sizeof(u16); - } - } else { - /* eeprom is an array of 16bit values */ - for (addr = 0; addr < sz; addr += sizeof(u16)) { - u32 r; - - iwl_write32(trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); - - ret = iwl_poll_bit(trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_READ_VALID_MSK, - CSR_EEPROM_REG_READ_VALID_MSK, - IWL_EEPROM_ACCESS_TIMEOUT); - if (ret < 0) { - IWL_ERR(trans, - "Time out reading EEPROM[%d]\n", addr); - goto err_unlock; - } - r = iwl_read32(trans, CSR_EEPROM_REG); - e[addr / 2] = cpu_to_le16(r >> 16); - } - } - - IWL_DEBUG_EEPROM(trans->dev, "NVM Type: %s\n", - nvm_is_otp ? 
"OTP" : "EEPROM"); - - iwl_eeprom_release_semaphore(trans); - - *eeprom_size = sz; - *eeprom = (u8 *)e; - return 0; - - err_unlock: - iwl_eeprom_release_semaphore(trans); - err_free: - kfree(e); - - return ret; -} -IWL_EXPORT_SYMBOL(iwl_read_eeprom); diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-read.h b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-read.h deleted file mode 100644 index 63b8e6c6659b..000000000000 --- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-read.h +++ /dev/null @@ -1,12 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ -/* - * Copyright (C) 2005-2014 Intel Corporation - */ -#ifndef __iwl_eeprom_h__ -#define __iwl_eeprom_h__ - -#include "iwl-trans.h" - -int iwl_read_eeprom(struct iwl_trans *trans, u8 **eeprom, size_t *eeprom_size); - -#endif /* __iwl_eeprom_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-fh.h b/drivers/net/wireless/intel/iwlwifi/iwl-fh.h index 6ba374efaacb..5c8f1868db64 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-fh.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-fh.h @@ -15,7 +15,7 @@ /* Flow Handler Definitions */ /****************************/ -/** +/* * This I/O area is directly read/writable by driver (e.g. Linux uses writel()) * Addresses are offsets from device's PCI hardware base address. */ @@ -24,7 +24,7 @@ #define FH_MEM_LOWER_BOUND_GEN2 (0xa06000) #define FH_MEM_UPPER_BOUND_GEN2 (0xa08000) -/** +/* * Keep-Warm (KW) buffer base address. * * Driver must allocate a 4KByte buffer that is for keeping the @@ -44,7 +44,7 @@ #define FH_KW_MEM_ADDR_REG (FH_MEM_LOWER_BOUND + 0x97C) -/** +/* * TFD Circular Buffers Base (CBBC) addresses * * Device has 16 base pointer registers, one for each of 16 host-DRAM-resident @@ -143,7 +143,7 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, */ #define TFH_SRV_DMA_CHNL0_BC (0x1F70) -/** +/* * Rx SRAM Control and Status Registers (RSCSR) * * These registers provide handshake between driver and device for the Rx queue @@ -216,21 +216,21 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, #define FH_MEM_RSCSR_UPPER_BOUND (FH_MEM_LOWER_BOUND + 0xC00) #define FH_MEM_RSCSR_CHNL0 (FH_MEM_RSCSR_LOWER_BOUND) -/** +/* * Physical base address of 8-byte Rx Status buffer. * Bit fields: * 31-0: Rx status buffer physical base address [35:4], must 16-byte aligned. */ #define FH_RSCSR_CHNL0_STTS_WPTR_REG (FH_MEM_RSCSR_CHNL0) -/** +/* * Physical base address of Rx Buffer Descriptor Circular Buffer. * Bit fields: * 27-0: RBD CD physical base address [35:8], must be 256-byte aligned. */ #define FH_RSCSR_CHNL0_RBDCB_BASE_REG (FH_MEM_RSCSR_CHNL0 + 0x004) -/** +/* * Rx write pointer (index, really!). * Bit fields: * 11-0: Index of driver's most recent prepared-to-be-filled RBD, + 1. 
@@ -242,7 +242,7 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, #define FW_RSCSR_CHNL0_RXDCB_RDPTR_REG (FH_MEM_RSCSR_CHNL0 + 0x00c) #define FH_RSCSR_CHNL0_RDPTR FW_RSCSR_CHNL0_RXDCB_RDPTR_REG -/** +/* * Rx Config/Status Registers (RCSR) * Rx Config Reg for channel 0 (only channel used) * @@ -300,7 +300,7 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, #define FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_NO_INT_VAL (0x00000000) #define FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_INT_HOST_VAL (0x00001000) -/** +/* * Rx Shared Status Registers (RSSR) * * After stopping Rx DMA channel (writing 0 to @@ -356,7 +356,7 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, #define RFH_RBDBUF_RBD0_LSB 0xA08300 #define RFH_RBDBUF_RBD_LSB(q) (RFH_RBDBUF_RBD0_LSB + (q) * 8) -/** +/* * RFH Status Register * * Bit fields: @@ -440,7 +440,7 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, #define FH_TFDIB_CTRL0_REG(_chnl) (FH_TFDIB_LOWER_BOUND + 0x8 * (_chnl)) #define FH_TFDIB_CTRL1_REG(_chnl) (FH_TFDIB_LOWER_BOUND + 0x8 * (_chnl) + 0x4) -/** +/* * Transmit DMA Channel Control/Status Registers (TCSR) * * Device has one configuration register for each of 8 Tx DMA/FIFO channels @@ -501,7 +501,7 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, #define FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_NUM (20) #define FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_IDX (12) -/** +/* * Tx Shared Status Registers (TSSR) * * After stopping Tx DMA channel (writing 0 to @@ -518,7 +518,7 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, #define FH_TSSR_TX_STATUS_REG (FH_TSSR_LOWER_BOUND + 0x010) -/** +/* * Bit fields for TSSR(Tx Shared Status & Control) error status register: * 31: Indicates an address error when accessed to internal memory * uCode/driver must write "1" in order to clear this flag @@ -634,7 +634,7 @@ enum iwl_tfd_tb_hi_n_len { }; /** - * struct iwl_tfd_tb transmit buffer descriptor within transmit frame descriptor + * struct iwl_tfd_tb - transmit buffer descriptor within transmit frame descriptor * * This structure contains dma address and length of transmission address * @@ -648,7 +648,7 @@ struct iwl_tfd_tb { } __packed; /** - * struct iwl_tfh_tb transmit buffer descriptor within transmit frame descriptor + * struct iwl_tfh_tb - transmit buffer descriptor within transmit frame descriptor * * This structure contains dma address and length of transmission address * @@ -717,7 +717,7 @@ struct iwl_tfh_tfd { /* Fixed (non-configurable) rx data from phy */ /** - * struct iwlagn_schedq_bc_tbl scheduler byte count table + * struct iwlagn_scd_bc_tbl - scheduler byte count table * base physical address provided by SCD_DRAM_BASE_ADDR * For devices up to 22000: * @tfd_offset: @@ -734,7 +734,7 @@ struct iwlagn_scd_bc_tbl { } __packed; /** - * struct iwl_gen3_bc_tbl_entry scheduler byte count table entry gen3 + * struct iwl_gen3_bc_tbl_entry - scheduler byte count table entry gen3 * For AX210 and on: * @tfd_offset: 0-12 - tx command byte count * 12-13 - number of 64 byte chunks diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-io.c b/drivers/net/wireless/intel/iwlwifi/iwl-io.c index c60f9466c5fd..060becfd64f3 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-io.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-io.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2003-2014, 2018-2022 Intel Corporation + * Copyright (C) 2003-2014, 2018-2022, 2024 Intel Corporation * Copyright (C) 
2015-2016 Intel Deutschland GmbH */ #include <linux/delay.h> @@ -460,7 +460,7 @@ int iwl_finish_nic_init(struct iwl_trans *trans) */ if (cfg_trans->device_family >= IWL_DEVICE_FAMILY_BZ) { iwl_set_bit(trans, CSR_GP_CNTRL, - CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY | + CSR_GP_CNTRL_REG_FLAG_BZ_MAC_ACCESS_REQ | CSR_GP_CNTRL_REG_FLAG_MAC_INIT); poll_ready = CSR_GP_CNTRL_REG_FLAG_MAC_STATUS; } else { diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h b/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h index 1cf26ab4f488..21eabfc3ffc8 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2005-2014, 2018-2022 Intel Corporation + * Copyright (C) 2005-2014, 2018-2022, 2024 Intel Corporation */ #ifndef __iwl_modparams_h__ #define __iwl_modparams_h__ @@ -106,4 +106,23 @@ static inline bool iwl_enable_tx_ampdu(void) return true; } +/* Verify amsdu_size module parameter and convert it to a rxb size */ +static inline enum iwl_amsdu_size +iwl_amsdu_size_to_rxb_size(void) +{ + switch (iwlwifi_mod_params.amsdu_size) { + case IWL_AMSDU_8K: + return IWL_AMSDU_8K; + case IWL_AMSDU_12K: + return IWL_AMSDU_12K; + default: + pr_err("%s: Unsupported amsdu_size: %d\n", KBUILD_MODNAME, + iwlwifi_mod_params.amsdu_size); + fallthrough; + case IWL_AMSDU_DEF: + case IWL_AMSDU_4K: + return IWL_AMSDU_4K; + } +} + #endif /* #__iwl_modparams_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c index 149903f52567..d902121da009 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c @@ -38,16 +38,13 @@ enum nvm_offsets { N_HW_ADDRS = 3, NVM_CHANNELS = 0x1E0 - NVM_SW_SECTION, - /* NVM calibration section offset (in words) definitions */ - NVM_CALIB_SECTION = 0x2B8, - XTAL_CALIB = 0x316 - NVM_CALIB_SECTION, - /* NVM REGULATORY -Section offset (in words) definitions */ NVM_CHANNELS_SDP = 0, }; enum ext_nvm_offsets { /* NVM HW-Section offset (in words) definitions */ + MAC_ADDRESS_OVERRIDE_EXT_NVM = 1, /* NVM SW-Section offset (in words) definitions */ @@ -373,7 +370,9 @@ static u32 iwl_get_channel_flags(u8 ch_num, int ch_idx, enum nl80211_band band, flags |= IEEE80211_CHAN_IR_CONCURRENT; /* Set the AP type for the UHB case. */ - if (!(nvm_flags & NVM_CHANNEL_VLP)) + if (nvm_flags & NVM_CHANNEL_VLP) + flags |= IEEE80211_CHAN_ALLOW_6GHZ_VLP_AP; + else flags |= IEEE80211_CHAN_NO_6GHZ_VLP_CLIENT; if (!(nvm_flags & NVM_CHANNEL_AFC)) flags |= IEEE80211_CHAN_NO_6GHZ_AFC_CLIENT; @@ -1574,9 +1573,6 @@ iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, ®ulatory[NVM_CHANNELS_SDP] : &nvm_sw[NVM_CHANNELS]; - /* in family 8000 Xtal calibration values moved to OTP */ - data->xtal_calib[0] = *(nvm_calib + XTAL_CALIB); - data->xtal_calib[1] = *(nvm_calib + XTAL_CALIB + 1); lar_enabled = true; } else { u16 lar_offset = data->nvm_version < 0xE39 ? 
@@ -1614,8 +1610,7 @@ IWL_EXPORT_SYMBOL(iwl_parse_nvm_data);
 static u32 iwl_nvm_get_regdom_bw_flags(const u16 *nvm_chan,
				       int ch_idx, u16 nvm_flags,
				       struct iwl_reg_capa reg_capa,
-				       const struct iwl_cfg *cfg,
-				       bool uats_enabled)
+				       const struct iwl_cfg *cfg)
 {
	u32 flags = NL80211_RRF_NO_HT40;
@@ -1625,11 +1620,15 @@ static u32 iwl_nvm_get_regdom_bw_flags(const u16 *nvm_chan,
			flags &= ~NL80211_RRF_NO_HT40PLUS;
		if (nvm_chan[ch_idx] >= FIRST_2GHZ_HT_MINUS)
			flags &= ~NL80211_RRF_NO_HT40MINUS;
-	} else if (nvm_flags & NVM_CHANNEL_40MHZ) {
+	} else if (ch_idx < NUM_2GHZ_CHANNELS + NUM_5GHZ_CHANNELS &&
+		   nvm_flags & NVM_CHANNEL_40MHZ) {
		if ((ch_idx - NUM_2GHZ_CHANNELS) % 2 == 0)
			flags &= ~NL80211_RRF_NO_HT40PLUS;
		else
			flags &= ~NL80211_RRF_NO_HT40MINUS;
+	} else if (nvm_flags & NVM_CHANNEL_40MHZ) {
+		flags &= ~NL80211_RRF_NO_HT40PLUS;
+		flags &= ~NL80211_RRF_NO_HT40MINUS;
	}

	if (!(nvm_flags & NVM_CHANNEL_80MHZ))
@@ -1662,13 +1661,13 @@ static u32 iwl_nvm_get_regdom_bw_flags(const u16 *nvm_chan,
	}

	/* Set the AP type for the UHB case. */
-	if (uats_enabled) {
-		if (!(nvm_flags & NVM_CHANNEL_VLP))
-			flags |= NL80211_RRF_NO_6GHZ_VLP_CLIENT;
+	if (nvm_flags & NVM_CHANNEL_VLP)
+		flags |= NL80211_RRF_ALLOW_6GHZ_VLP_AP;
+	else
+		flags |= NL80211_RRF_NO_6GHZ_VLP_CLIENT;

-		if (!(nvm_flags & NVM_CHANNEL_AFC))
-			flags |= NL80211_RRF_NO_6GHZ_AFC_CLIENT;
-	}
+	if (!(nvm_flags & NVM_CHANNEL_AFC))
+		flags |= NL80211_RRF_NO_6GHZ_AFC_CLIENT;

	/*
	 * reg_capa is per regulatory domain so apply it for every channel
@@ -1724,7 +1723,7 @@ static struct iwl_reg_capa iwl_get_reg_capa(u32 flags, u8 resp_ver)
 struct ieee80211_regdomain *
 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
		       int num_of_ch, __le32 *channels, u16 fw_mcc,
-		       u16 geo_info, u32 cap, u8 resp_ver, bool uats_enabled)
+		       u16 geo_info, u32 cap, u8 resp_ver)
 {
	int ch_idx;
	u16 ch_flags;
@@ -1732,7 +1731,6 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
	const u16 *nvm_chan;
	struct ieee80211_regdomain *regd, *copy_rd;
	struct ieee80211_reg_rule *rule;
-	enum nl80211_band band;
	int center_freq, prev_center_freq = 0;
	int valid_rules = 0;
	bool new_rule;
@@ -1776,8 +1774,10 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
	reg_capa = iwl_get_reg_capa(cap, resp_ver);

	for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) {
+		enum nl80211_band band =
+			iwl_nl80211_band_from_channel_idx(ch_idx);
+
		ch_flags = (u16)__le32_to_cpup(channels + ch_idx);
-		band = iwl_nl80211_band_from_channel_idx(ch_idx);
		center_freq = ieee80211_channel_to_frequency(nvm_chan[ch_idx],
							     band);
		new_rule = false;
@@ -1790,7 +1790,7 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
		reg_rule_flags = iwl_nvm_get_regdom_bw_flags(nvm_chan, ch_idx,
							     ch_flags, reg_capa,
-							     cfg, uats_enabled);
+							     cfg);

		/* we can't continue the same rule */
		if (ch_idx == 0 || prev_reg_rule_flags != reg_rule_flags ||
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
index fd9c3bed9407..0c6c3fb8c6dd 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
@@ -1,13 +1,13 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2005-2015, 2018-2023 Intel Corporation
+ * Copyright (C) 2005-2015, 2018-2024 Intel Corporation
 * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */
 #ifndef __iwl_nvm_parse_h__
 #define __iwl_nvm_parse_h__

 #include <net/cfg80211.h>
-#include "iwl-eeprom-parse.h"
+#include "iwl-nvm-utils.h"
 #include "mei/iwl-mei.h"

 /**
@@ -38,7 +38,7 @@ iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg,
		   u8 tx_chains, u8 rx_chains);

 /**
- * iwl_parse_mcc_info - parse MCC (mobile country code) info coming from FW
+ * iwl_parse_nvm_mcc_info - parse MCC (mobile country code) info coming from FW
 *
 * This function parses the regulatory channel data received as a
 * MCC_UPDATE_CMD command. It returns a newly allocation regulatory domain,
@@ -50,7 +50,7 @@ iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg,
 struct ieee80211_regdomain *
 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
		       int num_of_ch, __le32 *channels, u16 fw_mcc,
-		       u16 geo_info, u32 cap, u8 resp_ver, bool uats_enabled);
+		       u16 geo_info, u32 cap, u8 resp_ver);

 /**
 * struct iwl_nvm_section - describes an NVM section in memory.
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-utils.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-utils.c
new file mode 100644
index 000000000000..b3c25acd3691
--- /dev/null
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-utils.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/*
+ * Copyright (C) 2005-2014, 2018-2021, 2023 Intel Corporation
+ * Copyright (C) 2015 Intel Mobile Communications GmbH
+ */
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/export.h>
+#include "iwl-drv.h"
+#include "iwl-modparams.h"
+#include "iwl-nvm-utils.h"
+
+int iwl_init_sband_channels(struct iwl_nvm_data *data,
+			    struct ieee80211_supported_band *sband,
+			    int n_channels, enum nl80211_band band)
+{
+	struct ieee80211_channel *chan = &data->channels[0];
+	int n = 0, idx = 0;
+
+	while (idx < n_channels && chan->band != band)
+		chan = &data->channels[++idx];
+
+	sband->channels = &data->channels[idx];
+
+	while (idx < n_channels && chan->band == band) {
+		chan = &data->channels[++idx];
+		n++;
+	}
+
+	sband->n_channels = n;
+
+	return n;
+}
+IWL_EXPORT_SYMBOL(iwl_init_sband_channels);
+
+#define MAX_BIT_RATE_40_MHZ	150 /* Mbps */
+#define MAX_BIT_RATE_20_MHZ	72 /* Mbps */
+
+void iwl_init_ht_hw_capab(struct iwl_trans *trans,
+			  struct iwl_nvm_data *data,
+			  struct ieee80211_sta_ht_cap *ht_info,
+			  enum nl80211_band band,
+			  u8 tx_chains, u8 rx_chains)
+{
+	const struct iwl_cfg *cfg = trans->cfg;
+	int max_bit_rate = 0;
+
+	tx_chains = hweight8(tx_chains);
+	if (cfg->rx_with_siso_diversity)
+		rx_chains = 1;
+	else
+		rx_chains = hweight8(rx_chains);
+
+	if (!(data->sku_cap_11n_enable) ||
+	    (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_ALL) ||
+	    !cfg->ht_params) {
+		ht_info->ht_supported = false;
+		return;
+	}
+
+	if (data->sku_cap_mimo_disabled)
+		rx_chains = 1;
+
+	ht_info->ht_supported = true;
+	ht_info->cap = IEEE80211_HT_CAP_DSSSCCK40;
+
+	if (cfg->ht_params->stbc) {
+		ht_info->cap |= (1 << IEEE80211_HT_CAP_RX_STBC_SHIFT);
+
+		if (tx_chains > 1)
+			ht_info->cap |= IEEE80211_HT_CAP_TX_STBC;
+	}
+
+	if (cfg->ht_params->ldpc)
+		ht_info->cap |= IEEE80211_HT_CAP_LDPC_CODING;
+
+	if (trans->trans_cfg->mq_rx_supported ||
+	    iwlwifi_mod_params.amsdu_size >= IWL_AMSDU_8K)
+		ht_info->cap |= IEEE80211_HT_CAP_MAX_AMSDU;
+
+	ht_info->ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K;
+	ht_info->ampdu_density = IEEE80211_HT_MPDU_DENSITY_4;
+
+	ht_info->mcs.rx_mask[0] = 0xFF;
+	ht_info->mcs.rx_mask[1] = 0x00;
+	ht_info->mcs.rx_mask[2] = 0x00;
+
+	if (rx_chains >= 2)
+		ht_info->mcs.rx_mask[1] = 0xFF;
+	if (rx_chains >= 3)
+		ht_info->mcs.rx_mask[2] = 0xFF;
+
+	if (cfg->ht_params->ht_greenfield_support)
+		ht_info->cap |= IEEE80211_HT_CAP_GRN_FLD;
+	ht_info->cap |= IEEE80211_HT_CAP_SGI_20;
+
+	max_bit_rate = MAX_BIT_RATE_20_MHZ;
+
+	if (cfg->ht_params->ht40_bands & BIT(band)) {
+		ht_info->cap |= IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+		ht_info->cap |= IEEE80211_HT_CAP_SGI_40;
+		max_bit_rate = MAX_BIT_RATE_40_MHZ;
+	}
+
+	/* Highest supported Rx data rate */
+	max_bit_rate *= rx_chains;
+	WARN_ON(max_bit_rate & ~IEEE80211_HT_MCS_RX_HIGHEST_MASK);
+	ht_info->mcs.rx_highest = cpu_to_le16(max_bit_rate);
+
+	/* Tx MCS capabilities */
+	ht_info->mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED;
+	if (tx_chains != rx_chains) {
+		ht_info->mcs.tx_params |= IEEE80211_HT_MCS_TX_RX_DIFF;
+		ht_info->mcs.tx_params |= ((tx_chains - 1) <<
+				IEEE80211_HT_MCS_TX_MAX_STREAMS_SHIFT);
+	}
+}
+IWL_EXPORT_SYMBOL(iwl_init_ht_hw_capab);
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-utils.h
index 34a178a2eb5d..ac0a29a1c31f 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-utils.h
@@ -58,23 +58,6 @@ struct iwl_nvm_data {
	struct ieee80211_channel channels[];
 };

-/**
- * iwl_parse_eeprom_data - parse EEPROM data and return values
- *
- * @trans: ransport we're parsing for, for debug only
- * @cfg: device configuration for parsing and overrides
- * @eeprom: the EEPROM data
- * @eeprom_size: length of the EEPROM data
- *
- * This function parses all EEPROM values we need and then
- * returns a (newly allocated) struct containing all the
- * relevant values for driver use. The struct must be freed
- * later with iwl_free_nvm_data().
- */
-struct iwl_nvm_data *
-iwl_parse_eeprom_data(struct iwl_trans *trans, const struct iwl_cfg *cfg,
-		      const u8 *eeprom, size_t eeprom_size);
-
 int iwl_init_sband_channels(struct iwl_nvm_data *data,
			    struct ieee80211_supported_band *sband,
			    int n_channels, enum nl80211_band band);
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-op-mode.h b/drivers/net/wireless/intel/iwlwifi/iwl-op-mode.h
index 1ca82f3e4ebf..595fa6ddf084 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-op-mode.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-op-mode.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021, 2024 Intel Corporation
 * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
 * Copyright (C) 2015 Intel Deutschland GmbH
 */
@@ -185,7 +185,8 @@ static inline void iwl_op_mode_cmd_queue_full(struct iwl_op_mode *op_mode)
 static inline void iwl_op_mode_nic_config(struct iwl_op_mode *op_mode)
 {
	might_sleep();
-	op_mode->ops->nic_config(op_mode);
+	if (op_mode->ops->nic_config)
+		op_mode->ops->nic_config(op_mode);
 }

 static inline void iwl_op_mode_wimax_active(struct iwl_op_mode *op_mode)
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
index 898e22e0d1ab..dc171c29eb7b 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
@@ -96,7 +96,7 @@
 #define DTSC_PTAT_AVG		(0x00a10650)

-/**
+/*
 * Tx Scheduler
 *
 * The Tx Scheduler selects the next frame to be transmitted, choosing TFDs
@@ -169,7 +169,7 @@
 */
 #define SCD_MEM_LOWER_BOUND		(0x0000)

-/**
+/*
 * Max Tx window size is the max number of contiguous TFDs that the scheduler
 * can keep track of at one time when creating block-ack chains of frames.
 * Note that "64" matches the number of ack bits in a block-ack packet.
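The iwl-op-mode.h change above turns nic_config into an optional callback: the inline wrapper now owns the NULL check, so an op mode that needs no NIC configuration can leave the pointer unset instead of installing an empty stub. A self-contained sketch of that wrapper-guard pattern, with invented names:

#include <stdio.h>

/* Illustrative sketch only, not driver code. */
struct example_ops {
	void (*nic_config)(void *ctx);	/* optional hook */
};

static void example_nic_config(const struct example_ops *ops, void *ctx)
{
	/* The wrapper owns the NULL check, so no op mode needs a stub. */
	if (ops->nic_config)
		ops->nic_config(ctx);
}

static const struct example_ops quiet_op_mode = {
	.nic_config = NULL,	/* this op mode needs no NIC config */
};

int main(void)
{
	example_nic_config(&quiet_op_mode, NULL);	/* safe no-op */
	printf("survived a NULL nic_config\n");
	return 0;
}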
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c index f95098c21c7d..3c9d91496c82 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c @@ -2,7 +2,7 @@ /* * Copyright (C) 2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH - * Copyright (C) 2019-2021, 2023 Intel Corporation + * Copyright (C) 2019-2021, 2023-2024 Intel Corporation */ #include <linux/kernel.h> #include <linux/bsearch.h> @@ -11,13 +11,13 @@ #include "iwl-trans.h" #include "iwl-drv.h" #include "iwl-fh.h" -#include "queue/tx.h" #include <linux/dmapool.h> #include "fw/api/commands.h" +#include "pcie/internal.h" +#include "iwl-context-info-gen3.h" struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, struct device *dev, - const struct iwl_trans_ops *ops, const struct iwl_cfg_trans_params *cfg_trans) { struct iwl_trans *trans; @@ -37,22 +37,8 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, #endif trans->dev = dev; - trans->ops = ops; trans->num_rx_queues = 1; - WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty); - - if (trans->trans_cfg->gen2) { - trans->txqs.tfd.addr_size = 64; - trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS; - trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd); - } else { - trans->txqs.tfd.addr_size = 36; - trans->txqs.tfd.max_tbs = IWL_NUM_OF_TBS; - trans->txqs.tfd.size = sizeof(struct iwl_tfd); - } - trans->max_skb_frags = IWL_TRANS_MAX_FRAGS(trans); - return trans; } @@ -78,31 +64,6 @@ int iwl_trans_init(struct iwl_trans *trans) if (WARN_ON(trans->trans_cfg->gen2 && txcmd_size >= txcmd_align)) return -EINVAL; - if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) - trans->txqs.bc_tbl_size = - sizeof(struct iwl_gen3_bc_tbl_entry) * TFD_QUEUE_BC_SIZE_GEN3_BZ; - else if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) - trans->txqs.bc_tbl_size = - sizeof(struct iwl_gen3_bc_tbl_entry) * TFD_QUEUE_BC_SIZE_GEN3_AX210; - else - trans->txqs.bc_tbl_size = sizeof(struct iwlagn_scd_bc_tbl); - /* - * For gen2 devices, we use a single allocation for each byte-count - * table, but they're pretty small (1k) so use a DMA pool that we - * allocate here. - */ - if (trans->trans_cfg->gen2) { - trans->txqs.bc_pool = dmam_pool_create("iwlwifi:bc", trans->dev, - trans->txqs.bc_tbl_size, - 256, 0); - if (!trans->txqs.bc_pool) - return -ENOMEM; - } - - /* Some things must not change even if the config does */ - WARN_ON(trans->txqs.tfd.addr_size != - (trans->trans_cfg->gen2 ? 
64 : 36)); - snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name), "iwl_cmd_pool:%s", dev_name(trans->dev)); trans->dev_cmd_pool = @@ -112,12 +73,6 @@ int iwl_trans_init(struct iwl_trans *trans) if (!trans->dev_cmd_pool) return -ENOMEM; - trans->txqs.tso_hdr_page = alloc_percpu(struct iwl_tso_hdr_page); - if (!trans->txqs.tso_hdr_page) { - kmem_cache_destroy(trans->dev_cmd_pool); - return -ENOMEM; - } - /* Initialize the wait queue for commands */ init_waitqueue_head(&trans->wait_command_queue); @@ -126,20 +81,6 @@ int iwl_trans_init(struct iwl_trans *trans) void iwl_trans_free(struct iwl_trans *trans) { - int i; - - if (trans->txqs.tso_hdr_page) { - for_each_possible_cpu(i) { - struct iwl_tso_hdr_page *p = - per_cpu_ptr(trans->txqs.tso_hdr_page, i); - - if (p && p->page) - __free_page(p->page); - } - - free_percpu(trans->txqs.tso_hdr_page); - } - kmem_cache_destroy(trans->dev_cmd_pool); } @@ -167,10 +108,9 @@ int iwl_trans_send_cmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd) if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) return -EIO; - if (unlikely(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) return -EIO; - } if (!(cmd->flags & CMD_ASYNC)) lock_map_acquire_read(&trans->sync_cmd_lockdep_map); @@ -180,7 +120,7 @@ int iwl_trans_send_cmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd) cmd->id = DEF_ID(cmd->id); } - ret = iwl_trans_txq_send_hcmd(trans, cmd); + ret = iwl_trans_pcie_send_hcmd(trans, cmd); if (!(cmd->flags & CMD_ASYNC)) lock_map_release(&trans->sync_cmd_lockdep_map); @@ -247,3 +187,379 @@ int iwl_cmd_groups_verify_sorted(const struct iwl_trans_config *trans) return 0; } IWL_EXPORT_SYMBOL(iwl_cmd_groups_verify_sorted); + +void iwl_trans_configure(struct iwl_trans *trans, + const struct iwl_trans_config *trans_cfg) +{ + trans->op_mode = trans_cfg->op_mode; + + iwl_trans_pcie_configure(trans, trans_cfg); + WARN_ON(iwl_cmd_groups_verify_sorted(trans_cfg)); +} +IWL_EXPORT_SYMBOL(iwl_trans_configure); + +int iwl_trans_start_hw(struct iwl_trans *trans) +{ + might_sleep(); + + return iwl_trans_pcie_start_hw(trans); +} +IWL_EXPORT_SYMBOL(iwl_trans_start_hw); + +void iwl_trans_op_mode_leave(struct iwl_trans *trans) +{ + might_sleep(); + + iwl_trans_pcie_op_mode_leave(trans); + + trans->op_mode = NULL; + + trans->state = IWL_TRANS_NO_FW; +} +IWL_EXPORT_SYMBOL(iwl_trans_op_mode_leave); + +void iwl_trans_write8(struct iwl_trans *trans, u32 ofs, u8 val) +{ + iwl_trans_pcie_write8(trans, ofs, val); +} +IWL_EXPORT_SYMBOL(iwl_trans_write8); + +void iwl_trans_write32(struct iwl_trans *trans, u32 ofs, u32 val) +{ + iwl_trans_pcie_write32(trans, ofs, val); +} +IWL_EXPORT_SYMBOL(iwl_trans_write32); + +u32 iwl_trans_read32(struct iwl_trans *trans, u32 ofs) +{ + return iwl_trans_pcie_read32(trans, ofs); +} +IWL_EXPORT_SYMBOL(iwl_trans_read32); + +u32 iwl_trans_read_prph(struct iwl_trans *trans, u32 ofs) +{ + return iwl_trans_pcie_read_prph(trans, ofs); +} +IWL_EXPORT_SYMBOL(iwl_trans_read_prph); + +void iwl_trans_write_prph(struct iwl_trans *trans, u32 ofs, u32 val) +{ + return iwl_trans_pcie_write_prph(trans, ofs, val); +} +IWL_EXPORT_SYMBOL(iwl_trans_write_prph); + +int iwl_trans_read_mem(struct iwl_trans *trans, u32 addr, + void *buf, int dwords) +{ + return iwl_trans_pcie_read_mem(trans, addr, buf, dwords); +} +IWL_EXPORT_SYMBOL(iwl_trans_read_mem); + +int iwl_trans_write_mem(struct iwl_trans *trans, u32 addr, + const 
void *buf, int dwords) +{ + return iwl_trans_pcie_write_mem(trans, addr, buf, dwords); +} +IWL_EXPORT_SYMBOL(iwl_trans_write_mem); + +void iwl_trans_set_pmi(struct iwl_trans *trans, bool state) +{ + if (state) + set_bit(STATUS_TPOWER_PMI, &trans->status); + else + clear_bit(STATUS_TPOWER_PMI, &trans->status); +} +IWL_EXPORT_SYMBOL(iwl_trans_set_pmi); + +int iwl_trans_sw_reset(struct iwl_trans *trans, bool retake_ownership) +{ + return iwl_trans_pcie_sw_reset(trans, retake_ownership); +} +IWL_EXPORT_SYMBOL(iwl_trans_sw_reset); + +struct iwl_trans_dump_data * +iwl_trans_dump_data(struct iwl_trans *trans, u32 dump_mask, + const struct iwl_dump_sanitize_ops *sanitize_ops, + void *sanitize_ctx) +{ + return iwl_trans_pcie_dump_data(trans, dump_mask, + sanitize_ops, sanitize_ctx); +} +IWL_EXPORT_SYMBOL(iwl_trans_dump_data); + +int iwl_trans_d3_suspend(struct iwl_trans *trans, bool test, bool reset) +{ + might_sleep(); + + return iwl_trans_pcie_d3_suspend(trans, test, reset); +} +IWL_EXPORT_SYMBOL(iwl_trans_d3_suspend); + +int iwl_trans_d3_resume(struct iwl_trans *trans, enum iwl_d3_status *status, + bool test, bool reset) +{ + might_sleep(); + + return iwl_trans_pcie_d3_resume(trans, status, test, reset); +} +IWL_EXPORT_SYMBOL(iwl_trans_d3_resume); + +void iwl_trans_interrupts(struct iwl_trans *trans, bool enable) +{ + iwl_trans_pci_interrupts(trans, enable); +} +IWL_EXPORT_SYMBOL(iwl_trans_interrupts); + +void iwl_trans_sync_nmi(struct iwl_trans *trans) +{ + iwl_trans_pcie_sync_nmi(trans); +} +IWL_EXPORT_SYMBOL(iwl_trans_sync_nmi); + +int iwl_trans_write_imr_mem(struct iwl_trans *trans, u32 dst_addr, + u64 src_addr, u32 byte_cnt) +{ + return iwl_trans_pcie_copy_imr(trans, dst_addr, src_addr, byte_cnt); +} +IWL_EXPORT_SYMBOL(iwl_trans_write_imr_mem); + +void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg, + u32 mask, u32 value) +{ + iwl_trans_pcie_set_bits_mask(trans, reg, mask, value); +} +IWL_EXPORT_SYMBOL(iwl_trans_set_bits_mask); + +int iwl_trans_read_config32(struct iwl_trans *trans, u32 ofs, + u32 *val) +{ + return iwl_trans_pcie_read_config32(trans, ofs, val); +} +IWL_EXPORT_SYMBOL(iwl_trans_read_config32); + +bool _iwl_trans_grab_nic_access(struct iwl_trans *trans) +{ + return iwl_trans_pcie_grab_nic_access(trans); +} +IWL_EXPORT_SYMBOL(_iwl_trans_grab_nic_access); + +void __releases(nic_access) +iwl_trans_release_nic_access(struct iwl_trans *trans) +{ + iwl_trans_pcie_release_nic_access(trans); + __release(nic_access); +} +IWL_EXPORT_SYMBOL(iwl_trans_release_nic_access); + +void iwl_trans_fw_alive(struct iwl_trans *trans, u32 scd_addr) +{ + might_sleep(); + + trans->state = IWL_TRANS_FW_ALIVE; + + if (trans->trans_cfg->gen2) + iwl_trans_pcie_gen2_fw_alive(trans); + else + iwl_trans_pcie_fw_alive(trans, scd_addr); +} +IWL_EXPORT_SYMBOL(iwl_trans_fw_alive); + +int iwl_trans_start_fw(struct iwl_trans *trans, const struct fw_img *fw, + bool run_in_rfkill) +{ + int ret; + + might_sleep(); + + WARN_ON_ONCE(!trans->rx_mpdu_cmd); + + clear_bit(STATUS_FW_ERROR, &trans->status); + + if (trans->trans_cfg->gen2) + ret = iwl_trans_pcie_gen2_start_fw(trans, fw, run_in_rfkill); + else + ret = iwl_trans_pcie_start_fw(trans, fw, run_in_rfkill); + + if (ret == 0) + trans->state = IWL_TRANS_FW_STARTED; + + return ret; +} +IWL_EXPORT_SYMBOL(iwl_trans_start_fw); + +void iwl_trans_stop_device(struct iwl_trans *trans) +{ + might_sleep(); + + if (trans->trans_cfg->gen2) + iwl_trans_pcie_gen2_stop_device(trans); + else + iwl_trans_pcie_stop_device(trans); + + trans->state = IWL_TRANS_NO_FW; +} 
+IWL_EXPORT_SYMBOL(iwl_trans_stop_device); + +int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_device_tx_cmd *dev_cmd, int queue) +{ + if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) + return -EIO; + + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return -EIO; + + if (trans->trans_cfg->gen2) + return iwl_txq_gen2_tx(trans, skb, dev_cmd, queue); + + return iwl_trans_pcie_tx(trans, skb, dev_cmd, queue); +} +IWL_EXPORT_SYMBOL(iwl_trans_tx); + +void iwl_trans_reclaim(struct iwl_trans *trans, int queue, int ssn, + struct sk_buff_head *skbs, bool is_flush) +{ + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return; + + iwl_pcie_reclaim(trans, queue, ssn, skbs, is_flush); +} +IWL_EXPORT_SYMBOL(iwl_trans_reclaim); + +void iwl_trans_txq_disable(struct iwl_trans *trans, int queue, + bool configure_scd) +{ + iwl_trans_pcie_txq_disable(trans, queue, configure_scd); +} +IWL_EXPORT_SYMBOL(iwl_trans_txq_disable); + +bool iwl_trans_txq_enable_cfg(struct iwl_trans *trans, int queue, u16 ssn, + const struct iwl_trans_txq_scd_cfg *cfg, + unsigned int queue_wdg_timeout) +{ + might_sleep(); + + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return false; + + return iwl_trans_pcie_txq_enable(trans, queue, ssn, + cfg, queue_wdg_timeout); +} +IWL_EXPORT_SYMBOL(iwl_trans_txq_enable_cfg); + +int iwl_trans_wait_txq_empty(struct iwl_trans *trans, int queue) +{ + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return -EIO; + + return iwl_trans_pcie_wait_txq_empty(trans, queue); +} +IWL_EXPORT_SYMBOL(iwl_trans_wait_txq_empty); + +int iwl_trans_wait_tx_queues_empty(struct iwl_trans *trans, u32 txqs) +{ + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return -EIO; + + return iwl_trans_pcie_wait_txqs_empty(trans, txqs); +} +IWL_EXPORT_SYMBOL(iwl_trans_wait_tx_queues_empty); + +void iwl_trans_freeze_txq_timer(struct iwl_trans *trans, + unsigned long txqs, bool freeze) +{ + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return; + + iwl_pcie_freeze_txq_timer(trans, txqs, freeze); +} +IWL_EXPORT_SYMBOL(iwl_trans_freeze_txq_timer); + +void iwl_trans_txq_set_shared_mode(struct iwl_trans *trans, + int txq_id, bool shared_mode) +{ + iwl_trans_pcie_txq_set_shared_mode(trans, txq_id, shared_mode); +} +IWL_EXPORT_SYMBOL(iwl_trans_txq_set_shared_mode); + +#ifdef CONFIG_IWLWIFI_DEBUGFS +void iwl_trans_debugfs_cleanup(struct iwl_trans *trans) +{ + iwl_trans_pcie_debugfs_cleanup(trans); +} +IWL_EXPORT_SYMBOL(iwl_trans_debugfs_cleanup); +#endif + +void iwl_trans_set_q_ptrs(struct iwl_trans *trans, int queue, int ptr) +{ + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return; + + iwl_pcie_set_q_ptrs(trans, queue, ptr); +} +IWL_EXPORT_SYMBOL(iwl_trans_set_q_ptrs); + +int iwl_trans_txq_alloc(struct iwl_trans *trans, u32 flags, u32 sta_mask, + u8 tid, int size, unsigned int wdg_timeout) +{ + might_sleep(); + + if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, + "bad state = %d\n", trans->state)) + return -EIO; + + return iwl_txq_dyn_alloc(trans, flags, sta_mask, tid, + size, wdg_timeout); +} +IWL_EXPORT_SYMBOL(iwl_trans_txq_alloc); + +void iwl_trans_txq_free(struct iwl_trans *trans, int queue) +{ + iwl_txq_dyn_free(trans, queue); +} +IWL_EXPORT_SYMBOL(iwl_trans_txq_free); + +int iwl_trans_get_rxq_dma_data(struct 
iwl_trans *trans, int queue,
+			       struct iwl_trans_rxq_dma_data *data)
+{
+	return iwl_trans_pcie_rxq_dma_data(trans, queue, data);
+}
+IWL_EXPORT_SYMBOL(iwl_trans_get_rxq_dma_data);
+
+int iwl_trans_load_pnvm(struct iwl_trans *trans,
+			const struct iwl_pnvm_image *pnvm_data,
+			const struct iwl_ucode_capabilities *capa)
+{
+	return iwl_trans_pcie_ctx_info_gen3_load_pnvm(trans, pnvm_data, capa);
+}
+IWL_EXPORT_SYMBOL(iwl_trans_load_pnvm);
+
+void iwl_trans_set_pnvm(struct iwl_trans *trans,
+			const struct iwl_ucode_capabilities *capa)
+{
+	iwl_trans_pcie_ctx_info_gen3_set_pnvm(trans, capa);
+}
+IWL_EXPORT_SYMBOL(iwl_trans_set_pnvm);
+
+int iwl_trans_load_reduce_power(struct iwl_trans *trans,
+				const struct iwl_pnvm_image *payloads,
+				const struct iwl_ucode_capabilities *capa)
+{
+	return iwl_trans_pcie_ctx_info_gen3_load_reduce_power(trans, payloads,
+							      capa);
+}
+IWL_EXPORT_SYMBOL(iwl_trans_load_reduce_power);
+
+void iwl_trans_set_reduce_power(struct iwl_trans *trans,
+				const struct iwl_ucode_capabilities *capa)
+{
+	iwl_trans_pcie_ctx_info_gen3_set_reduce_power(trans, capa);
+}
+IWL_EXPORT_SYMBOL(iwl_trans_set_reduce_power);
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index b93cef7b2330..6148acbac6af 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -26,11 +26,9 @@
 * DOC: Transport layer - what is it ?
 *
 * The transport layer is the layer that deals with the HW directly. It provides
- * an abstraction of the underlying HW to the upper layer. The transport layer
- * doesn't provide any policy, algorithm or anything of this kind, but only
- * mechanisms to make the HW do something. It is not completely stateless but
- * close to it.
- * We will have an implementation for each different supported bus.
+ * the PCIe access to the underlying hardware. The transport layer doesn't
+ * provide any policy, algorithm or anything of this kind, but only mechanisms
+ * to make the HW do something. It is not completely stateless but close to it.
 */

 /**
@@ -122,6 +120,7 @@ enum CMD_MODE {
	CMD_BLOCK_TXQS		= BIT(3),
	CMD_SEND_IN_D3		= BIT(4),
 };
+#define CMD_MODE_BITS 5

 #define DEF_CMD_PAYLOAD_SIZE 320

@@ -131,6 +130,11 @@ enum CMD_MODE {
 * For allocation of the command and tx queues, this establishes the overall
 * size of the largest command we send to uCode, except for commands that
 * aren't fully copied and use other TFD space.
+ *
+ * @hdr: command header
+ * @payload: payload for the command
+ * @hdr_wide: wide command header
+ * @payload_wide: payload for the wide command
 */
 struct iwl_device_cmd {
	union {
@@ -167,12 +171,6 @@ struct iwl_device_tx_cmd {
 */
 #define IWL_MAX_CMD_TBS_PER_TFD	2

-/* We need 2 entries for the TX command and header, and another one might
- * be needed for potential data in the SKB's head. The remaining ones can
- * be used for frags.
- */ -#define IWL_TRANS_MAX_FRAGS(trans) ((trans)->txqs.tfd.max_tbs - 3) - /** * enum iwl_hcmd_dataflag - flag for each one of the chunks of the command * @@ -281,7 +279,7 @@ static inline void iwl_free_rxb(struct iwl_rx_cmd_buffer *r) #define IWL_9000_MAX_RX_HW_QUEUES 1 /** - * enum iwl_wowlan_status - WoWLAN image/device status + * enum iwl_d3_status - WoWLAN image/device status * @IWL_D3_STATUS_ALIVE: firmware is still running after resume * @IWL_D3_STATUS_RESET: device was reset while suspended */ @@ -299,9 +297,6 @@ enum iwl_d3_status { * @STATUS_RFKILL_HW: the actual HW state of the RF-kill switch * @STATUS_RFKILL_OPMODE: RF-kill state reported to opmode * @STATUS_FW_ERROR: the fw is in error state - * @STATUS_TRANS_GOING_IDLE: shutting down the trans, only special commands - * are sent - * @STATUS_TRANS_IDLE: the trans is idle - general commands are not to be sent * @STATUS_TRANS_DEAD: trans is dead - avoid any read/write operation * @STATUS_SUPPRESS_CMD_ERROR_ONCE: suppress "FW error in SYNC CMD" once, * e.g. for testing @@ -314,8 +309,6 @@ enum iwl_trans_status { STATUS_RFKILL_HW, STATUS_RFKILL_OPMODE, STATUS_FW_ERROR, - STATUS_TRANS_GOING_IDLE, - STATUS_TRANS_IDLE, STATUS_TRANS_DEAD, STATUS_SUPPRESS_CMD_ERROR_ONCE, }; @@ -482,183 +475,6 @@ struct iwl_pnvm_image { }; /** - * struct iwl_trans_ops - transport specific operations - * - * All the handlers MUST be implemented - * - * @start_hw: starts the HW. From that point on, the HW can send interrupts. - * May sleep. - * @op_mode_leave: Turn off the HW RF kill indication if on - * May sleep - * @start_fw: allocates and inits all the resources for the transport - * layer. Also kick a fw image. - * May sleep - * @fw_alive: called when the fw sends alive notification. If the fw provides - * the SCD base address in SRAM, then provide it here, or 0 otherwise. - * May sleep - * @stop_device: stops the whole device (embedded CPU put to reset) and stops - * the HW. From that point on, the HW will be stopped but will still issue - * an interrupt if the HW RF kill switch is triggered. - * This callback must do the right thing and not crash even if %start_hw() - * was called but not &start_fw(). May sleep. - * @d3_suspend: put the device into the correct mode for WoWLAN during - * suspend. This is optional, if not implemented WoWLAN will not be - * supported. This callback may sleep. - * @d3_resume: resume the device after WoWLAN, enabling the opmode to - * talk to the WoWLAN image to get its status. This is optional, if not - * implemented WoWLAN will not be supported. This callback may sleep. - * @send_cmd:send a host command. Must return -ERFKILL if RFkill is asserted. - * If RFkill is asserted in the middle of a SYNC host command, it must - * return -ERFKILL straight away. - * May sleep only if CMD_ASYNC is not set - * @tx: send an skb. The transport relies on the op_mode to zero the - * the ieee80211_tx_info->driver_data. If the MPDU is an A-MSDU, all - * the CSUM will be taken care of (TCP CSUM and IP header in case of - * IPv4). If the MPDU is a single MSDU, the op_mode must compute the IP - * header if it is IPv4. - * Must be atomic - * @reclaim: free packet until ssn. Returns a list of freed packets. - * Must be atomic - * @set_q_ptrs: set queue pointers internally, after D3 when HW state changed - * @txq_enable: setup a queue. To setup an AC queue, use the - * iwl_trans_ac_txq_enable wrapper. fw_alive must have been called before - * this one. The op_mode must not configure the HCMD queue. 
The scheduler - * configuration may be %NULL, in which case the hardware will not be - * configured. If true is returned, the operation mode needs to increment - * the sequence number of the packets routed to this queue because of a - * hardware scheduler bug. May sleep. - * @txq_disable: de-configure a Tx queue to send AMPDUs - * Must be atomic - * @txq_alloc: Allocate a new TX queue, may sleep. - * @txq_free: Free a previously allocated TX queue. - * @txq_set_shared_mode: change Tx queue shared/unshared marking - * @wait_tx_queues_empty: wait until tx queues are empty. May sleep. - * @wait_txq_empty: wait until specific tx queue is empty. May sleep. - * @freeze_txq_timer: prevents the timer of the queue from firing until the - * queue is set to awake. Must be atomic. - * @write8: write a u8 to a register at offset ofs from the BAR - * @write32: write a u32 to a register at offset ofs from the BAR - * @read32: read a u32 register at offset ofs from the BAR - * @read_prph: read a DWORD from a periphery register - * @write_prph: write a DWORD to a periphery register - * @read_mem: read device's SRAM in DWORD - * @write_mem: write device's SRAM in DWORD. If %buf is %NULL, then the memory - * will be zeroed. - * @read_config32: read a u32 value from the device's config space at - * the given offset. - * @configure: configure parameters required by the transport layer from - * the op_mode. May be called several times before start_fw, can't be - * called after that. - * @set_pmi: set the power pmi state - * @sw_reset: trigger software reset of the NIC - * @grab_nic_access: wake the NIC to be able to access non-HBUS regs. - * Sleeping is not allowed between grab_nic_access and - * release_nic_access. - * @release_nic_access: let the NIC go to sleep. The "flags" parameter - * must be the same one that was sent before to the grab_nic_access. - * @set_bits_mask: set SRAM register according to value and mask. - * @dump_data: return a vmalloc'ed buffer with debug data, maybe containing last - * TX'ed commands and similar. The buffer will be vfree'd by the caller. - * Note that the transport must fill in the proper file headers. - * @debugfs_cleanup: used in the driver unload flow to make a proper cleanup - * of the trans debugfs - * @sync_nmi: trigger a firmware NMI and wait for it to complete - * @load_pnvm: save the pnvm data in DRAM - * @set_pnvm: set the pnvm data in the prph scratch buffer, inside the - * context info. 
- * @load_reduce_power: copy reduce power table to the corresponding DRAM memory - * @set_reduce_power: set reduce power table addresses in the sratch buffer - * @interrupts: disable/enable interrupts to transport - * @imr_dma_data: set up IMR DMA - * @rxq_dma_data: retrieve RX queue DMA data, see @struct iwl_trans_rxq_dma_data - */ -struct iwl_trans_ops { - - int (*start_hw)(struct iwl_trans *iwl_trans); - void (*op_mode_leave)(struct iwl_trans *iwl_trans); - int (*start_fw)(struct iwl_trans *trans, const struct fw_img *fw, - bool run_in_rfkill); - void (*fw_alive)(struct iwl_trans *trans, u32 scd_addr); - void (*stop_device)(struct iwl_trans *trans); - - int (*d3_suspend)(struct iwl_trans *trans, bool test, bool reset); - int (*d3_resume)(struct iwl_trans *trans, enum iwl_d3_status *status, - bool test, bool reset); - - int (*send_cmd)(struct iwl_trans *trans, struct iwl_host_cmd *cmd); - - int (*tx)(struct iwl_trans *trans, struct sk_buff *skb, - struct iwl_device_tx_cmd *dev_cmd, int queue); - void (*reclaim)(struct iwl_trans *trans, int queue, int ssn, - struct sk_buff_head *skbs, bool is_flush); - - void (*set_q_ptrs)(struct iwl_trans *trans, int queue, int ptr); - - bool (*txq_enable)(struct iwl_trans *trans, int queue, u16 ssn, - const struct iwl_trans_txq_scd_cfg *cfg, - unsigned int queue_wdg_timeout); - void (*txq_disable)(struct iwl_trans *trans, int queue, - bool configure_scd); - /* 22000 functions */ - int (*txq_alloc)(struct iwl_trans *trans, u32 flags, - u32 sta_mask, u8 tid, - int size, unsigned int queue_wdg_timeout); - void (*txq_free)(struct iwl_trans *trans, int queue); - int (*rxq_dma_data)(struct iwl_trans *trans, int queue, - struct iwl_trans_rxq_dma_data *data); - - void (*txq_set_shared_mode)(struct iwl_trans *trans, u32 txq_id, - bool shared); - - int (*wait_tx_queues_empty)(struct iwl_trans *trans, u32 txq_bm); - int (*wait_txq_empty)(struct iwl_trans *trans, int queue); - void (*freeze_txq_timer)(struct iwl_trans *trans, unsigned long txqs, - bool freeze); - - void (*write8)(struct iwl_trans *trans, u32 ofs, u8 val); - void (*write32)(struct iwl_trans *trans, u32 ofs, u32 val); - u32 (*read32)(struct iwl_trans *trans, u32 ofs); - u32 (*read_prph)(struct iwl_trans *trans, u32 ofs); - void (*write_prph)(struct iwl_trans *trans, u32 ofs, u32 val); - int (*read_mem)(struct iwl_trans *trans, u32 addr, - void *buf, int dwords); - int (*write_mem)(struct iwl_trans *trans, u32 addr, - const void *buf, int dwords); - int (*read_config32)(struct iwl_trans *trans, u32 ofs, u32 *val); - void (*configure)(struct iwl_trans *trans, - const struct iwl_trans_config *trans_cfg); - void (*set_pmi)(struct iwl_trans *trans, bool state); - int (*sw_reset)(struct iwl_trans *trans, bool retake_ownership); - bool (*grab_nic_access)(struct iwl_trans *trans); - void (*release_nic_access)(struct iwl_trans *trans); - void (*set_bits_mask)(struct iwl_trans *trans, u32 reg, u32 mask, - u32 value); - - struct iwl_trans_dump_data *(*dump_data)(struct iwl_trans *trans, - u32 dump_mask, - const struct iwl_dump_sanitize_ops *sanitize_ops, - void *sanitize_ctx); - void (*debugfs_cleanup)(struct iwl_trans *trans); - void (*sync_nmi)(struct iwl_trans *trans); - int (*load_pnvm)(struct iwl_trans *trans, - const struct iwl_pnvm_image *pnvm_payloads, - const struct iwl_ucode_capabilities *capa); - void (*set_pnvm)(struct iwl_trans *trans, - const struct iwl_ucode_capabilities *capa); - int (*load_reduce_power)(struct iwl_trans *trans, - const struct iwl_pnvm_image *payloads, - const struct 
iwl_ucode_capabilities *capa); - void (*set_reduce_power)(struct iwl_trans *trans, - const struct iwl_ucode_capabilities *capa); - - void (*interrupts)(struct iwl_trans *trans, bool enable); - int (*imr_dma_data)(struct iwl_trans *trans, - u32 dst_addr, u64 src_addr, - u32 byte_cnt); - -}; - -/** * enum iwl_trans_state - state of the transport layer * * @IWL_TRANS_NO_FW: firmware wasn't started yet, or crashed @@ -897,7 +713,9 @@ struct iwl_dma_ptr { struct iwl_cmd_meta { /* only for SYNC commands, iff the reply skb is wanted */ struct iwl_host_cmd *source; - u32 flags; + u32 flags: CMD_MODE_BITS; + /* sg_offset is valid if it is non-zero */ + u32 sg_offset: PAGE_SHIFT; u32 tbs; }; @@ -934,6 +752,7 @@ struct iwl_pcie_first_tb_buf { * @first_tb_dma: DMA address for the first_tb_bufs start * @entries: transmit entries (driver state) * @lock: queue lock + * @reclaim_lock: reclaim lock * @stuck_timer: timer that fires if queue gets stuck * @trans: pointer back to transport (for timer) * @need_update: indicates need to update read/write index @@ -976,6 +795,8 @@ struct iwl_txq { struct iwl_pcie_txq_entry *entries; /* lock for syncing changes on the queue */ spinlock_t lock; + /* lock to prevent concurrent reclaim */ + spinlock_t reclaim_lock; unsigned long frozen_expiry_remainder; struct timer_list stuck_timer; struct iwl_trans *trans; @@ -999,58 +820,9 @@ struct iwl_txq { }; /** - * struct iwl_trans_txqs - transport tx queues data - * - * @bc_table_dword: true if the BC table expects DWORD (as opposed to bytes) - * @page_offs: offset from skb->cb to mac header page pointer - * @dev_cmd_offs: offset from skb->cb to iwl_device_tx_cmd pointer - * @queue_used: bit mask of used queues - * @queue_stopped: bit mask of stopped queues - * @txq: array of TXQ data structures representing the TXQs - * @scd_bc_tbls: gen1 pointer to the byte count table of the scheduler - * @queue_alloc_cmd_ver: queue allocation command version - * @bc_pool: bytecount DMA allocations pool - * @bc_tbl_size: bytecount table size - * @tso_hdr_page: page allocated (per CPU) for A-MSDU headers when doing TSO - * (and similar usage) - * @tfd: TFD data - * @tfd.max_tbs: max number of buffers per TFD - * @tfd.size: TFD size - * @tfd.addr_size: TFD/TB address size - */ -struct iwl_trans_txqs { - unsigned long queue_used[BITS_TO_LONGS(IWL_MAX_TVQM_QUEUES)]; - unsigned long queue_stopped[BITS_TO_LONGS(IWL_MAX_TVQM_QUEUES)]; - struct iwl_txq *txq[IWL_MAX_TVQM_QUEUES]; - struct dma_pool *bc_pool; - size_t bc_tbl_size; - bool bc_table_dword; - u8 page_offs; - u8 dev_cmd_offs; - struct iwl_tso_hdr_page __percpu *tso_hdr_page; - - struct { - u8 fifo; - u8 q_id; - unsigned int wdg_timeout; - } cmd; - - struct { - u8 max_tbs; - u16 size; - u8 addr_size; - } tfd; - - struct iwl_dma_ptr scd_bc_tbls; - - u8 queue_alloc_cmd_ver; -}; - -/** * struct iwl_trans - transport common data * * @csme_own: true if we couldn't get ownership on the device - * @ops: pointer to iwl_trans_ops * @op_mode: pointer to the op_mode * @trans_cfg: the trans-specific configuration part * @cfg: pointer to the configuration @@ -1099,7 +871,6 @@ struct iwl_trans_txqs { * This mode is set dynamically, depending on the WoWLAN values * configured from the userspace at runtime. * @name: the device name - * @txqs: transport tx queues data. 
* @mbx_addr_0_step: step address data 0 * @mbx_addr_1_step: step address data 1 * @pcie_link_speed: current PCIe link speed (%PCI_EXP_LNKSTA_CLS_*), @@ -1112,7 +883,6 @@ struct iwl_trans_txqs { */ struct iwl_trans { bool csme_own; - const struct iwl_trans_ops *ops; struct iwl_op_mode *op_mode; const struct iwl_cfg_trans_params *trans_cfg; const struct iwl_cfg *cfg; @@ -1169,7 +939,6 @@ struct iwl_trans { enum iwl_plat_pm_mode system_pm_mode; const char *name; - struct iwl_trans_txqs txqs; u32 mbx_addr_0_step; u32 mbx_addr_1_step; @@ -1185,101 +954,29 @@ struct iwl_trans { const char *iwl_get_cmd_string(struct iwl_trans *trans, u32 id); int iwl_cmd_groups_verify_sorted(const struct iwl_trans_config *trans); -static inline void iwl_trans_configure(struct iwl_trans *trans, - const struct iwl_trans_config *trans_cfg) -{ - trans->op_mode = trans_cfg->op_mode; +void iwl_trans_configure(struct iwl_trans *trans, + const struct iwl_trans_config *trans_cfg); - trans->ops->configure(trans, trans_cfg); - WARN_ON(iwl_cmd_groups_verify_sorted(trans_cfg)); -} +int iwl_trans_start_hw(struct iwl_trans *trans); -static inline int iwl_trans_start_hw(struct iwl_trans *trans) -{ - might_sleep(); +void iwl_trans_op_mode_leave(struct iwl_trans *trans); - return trans->ops->start_hw(trans); -} +void iwl_trans_fw_alive(struct iwl_trans *trans, u32 scd_addr); -static inline void iwl_trans_op_mode_leave(struct iwl_trans *trans) -{ - might_sleep(); +int iwl_trans_start_fw(struct iwl_trans *trans, const struct fw_img *fw, + bool run_in_rfkill); - if (trans->ops->op_mode_leave) - trans->ops->op_mode_leave(trans); +void iwl_trans_stop_device(struct iwl_trans *trans); - trans->op_mode = NULL; - - trans->state = IWL_TRANS_NO_FW; -} - -static inline void iwl_trans_fw_alive(struct iwl_trans *trans, u32 scd_addr) -{ - might_sleep(); - - trans->state = IWL_TRANS_FW_ALIVE; - - trans->ops->fw_alive(trans, scd_addr); -} - -static inline int iwl_trans_start_fw(struct iwl_trans *trans, - const struct fw_img *fw, - bool run_in_rfkill) -{ - int ret; +int iwl_trans_d3_suspend(struct iwl_trans *trans, bool test, bool reset); - might_sleep(); +int iwl_trans_d3_resume(struct iwl_trans *trans, enum iwl_d3_status *status, + bool test, bool reset); - WARN_ON_ONCE(!trans->rx_mpdu_cmd); - - clear_bit(STATUS_FW_ERROR, &trans->status); - ret = trans->ops->start_fw(trans, fw, run_in_rfkill); - if (ret == 0) - trans->state = IWL_TRANS_FW_STARTED; - - return ret; -} - -static inline void iwl_trans_stop_device(struct iwl_trans *trans) -{ - might_sleep(); - - trans->ops->stop_device(trans); - - trans->state = IWL_TRANS_NO_FW; -} - -static inline int iwl_trans_d3_suspend(struct iwl_trans *trans, bool test, - bool reset) -{ - might_sleep(); - if (!trans->ops->d3_suspend) - return -EOPNOTSUPP; - - return trans->ops->d3_suspend(trans, test, reset); -} - -static inline int iwl_trans_d3_resume(struct iwl_trans *trans, - enum iwl_d3_status *status, - bool test, bool reset) -{ - might_sleep(); - if (!trans->ops->d3_resume) - return -EOPNOTSUPP; - - return trans->ops->d3_resume(trans, status, test, reset); -} - -static inline struct iwl_trans_dump_data * +struct iwl_trans_dump_data * iwl_trans_dump_data(struct iwl_trans *trans, u32 dump_mask, const struct iwl_dump_sanitize_ops *sanitize_ops, - void *sanitize_ctx) -{ - if (!trans->ops->dump_data) - return NULL; - return trans->ops->dump_data(trans, dump_mask, - sanitize_ops, sanitize_ctx); -} + void *sanitize_ctx); static inline struct iwl_device_tx_cmd * iwl_trans_alloc_tx_cmd(struct iwl_trans *trans) @@ 
-1295,109 +992,31 @@ static inline void iwl_trans_free_tx_cmd(struct iwl_trans *trans, kmem_cache_free(trans->dev_cmd_pool, dev_cmd); } -static inline int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb, - struct iwl_device_tx_cmd *dev_cmd, int queue) -{ - if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) - return -EIO; - - if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return -EIO; - } - - return trans->ops->tx(trans, skb, dev_cmd, queue); -} - -static inline void iwl_trans_reclaim(struct iwl_trans *trans, int queue, - int ssn, struct sk_buff_head *skbs, - bool is_flush) -{ - if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return; - } - - trans->ops->reclaim(trans, queue, ssn, skbs, is_flush); -} - -static inline void iwl_trans_set_q_ptrs(struct iwl_trans *trans, int queue, - int ptr) -{ - if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return; - } - - trans->ops->set_q_ptrs(trans, queue, ptr); -} - -static inline void iwl_trans_txq_disable(struct iwl_trans *trans, int queue, - bool configure_scd) -{ - trans->ops->txq_disable(trans, queue, configure_scd); -} +int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_device_tx_cmd *dev_cmd, int queue); -static inline bool -iwl_trans_txq_enable_cfg(struct iwl_trans *trans, int queue, u16 ssn, - const struct iwl_trans_txq_scd_cfg *cfg, - unsigned int queue_wdg_timeout) -{ - might_sleep(); +void iwl_trans_reclaim(struct iwl_trans *trans, int queue, int ssn, + struct sk_buff_head *skbs, bool is_flush); - if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return false; - } +void iwl_trans_set_q_ptrs(struct iwl_trans *trans, int queue, int ptr); - return trans->ops->txq_enable(trans, queue, ssn, - cfg, queue_wdg_timeout); -} +void iwl_trans_txq_disable(struct iwl_trans *trans, int queue, + bool configure_scd); -static inline int -iwl_trans_get_rxq_dma_data(struct iwl_trans *trans, int queue, - struct iwl_trans_rxq_dma_data *data) -{ - if (WARN_ON_ONCE(!trans->ops->rxq_dma_data)) - return -EOPNOTSUPP; +bool iwl_trans_txq_enable_cfg(struct iwl_trans *trans, int queue, u16 ssn, + const struct iwl_trans_txq_scd_cfg *cfg, + unsigned int queue_wdg_timeout); - return trans->ops->rxq_dma_data(trans, queue, data); -} +int iwl_trans_get_rxq_dma_data(struct iwl_trans *trans, int queue, + struct iwl_trans_rxq_dma_data *data); -static inline void -iwl_trans_txq_free(struct iwl_trans *trans, int queue) -{ - if (WARN_ON_ONCE(!trans->ops->txq_free)) - return; +void iwl_trans_txq_free(struct iwl_trans *trans, int queue); - trans->ops->txq_free(trans, queue); -} - -static inline int -iwl_trans_txq_alloc(struct iwl_trans *trans, - u32 flags, u32 sta_mask, u8 tid, - int size, unsigned int wdg_timeout) -{ - might_sleep(); - - if (WARN_ON_ONCE(!trans->ops->txq_alloc)) - return -EOPNOTSUPP; - - if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return -EIO; - } - - return trans->ops->txq_alloc(trans, flags, sta_mask, tid, - size, wdg_timeout); -} +int iwl_trans_txq_alloc(struct iwl_trans *trans, u32 flags, u32 sta_mask, + u8 tid, int size, unsigned int wdg_timeout); -static inline void iwl_trans_txq_set_shared_mode(struct iwl_trans *trans, - int queue, bool shared_mode) 
-{ - if (trans->ops->txq_set_shared_mode) - trans->ops->txq_set_shared_mode(trans, queue, shared_mode); -} +void iwl_trans_txq_set_shared_mode(struct iwl_trans *trans, + int txq_id, bool shared_mode); static inline void iwl_trans_txq_enable(struct iwl_trans *trans, int queue, int fifo, int sta_id, int tid, @@ -1430,78 +1049,32 @@ void iwl_trans_ac_txq_enable(struct iwl_trans *trans, int queue, int fifo, iwl_trans_txq_enable_cfg(trans, queue, 0, &cfg, queue_wdg_timeout); } -static inline void iwl_trans_freeze_txq_timer(struct iwl_trans *trans, - unsigned long txqs, - bool freeze) -{ - if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return; - } +void iwl_trans_freeze_txq_timer(struct iwl_trans *trans, + unsigned long txqs, bool freeze); - if (trans->ops->freeze_txq_timer) - trans->ops->freeze_txq_timer(trans, txqs, freeze); -} +int iwl_trans_wait_tx_queues_empty(struct iwl_trans *trans, u32 txqs); -static inline int iwl_trans_wait_tx_queues_empty(struct iwl_trans *trans, - u32 txqs) -{ - if (WARN_ON_ONCE(!trans->ops->wait_tx_queues_empty)) - return -EOPNOTSUPP; +int iwl_trans_wait_txq_empty(struct iwl_trans *trans, int queue); - /* No need to wait if the firmware is not alive */ - if (trans->state != IWL_TRANS_FW_ALIVE) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return -EIO; - } +void iwl_trans_write8(struct iwl_trans *trans, u32 ofs, u8 val); - return trans->ops->wait_tx_queues_empty(trans, txqs); -} +void iwl_trans_write32(struct iwl_trans *trans, u32 ofs, u32 val); -static inline int iwl_trans_wait_txq_empty(struct iwl_trans *trans, int queue) -{ - if (WARN_ON_ONCE(!trans->ops->wait_txq_empty)) - return -EOPNOTSUPP; +u32 iwl_trans_read32(struct iwl_trans *trans, u32 ofs); - if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) { - IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state); - return -EIO; - } +u32 iwl_trans_read_prph(struct iwl_trans *trans, u32 ofs); - return trans->ops->wait_txq_empty(trans, queue); -} +void iwl_trans_write_prph(struct iwl_trans *trans, u32 ofs, u32 val); -static inline void iwl_trans_write8(struct iwl_trans *trans, u32 ofs, u8 val) -{ - trans->ops->write8(trans, ofs, val); -} +int iwl_trans_read_mem(struct iwl_trans *trans, u32 addr, + void *buf, int dwords); -static inline void iwl_trans_write32(struct iwl_trans *trans, u32 ofs, u32 val) -{ - trans->ops->write32(trans, ofs, val); -} - -static inline u32 iwl_trans_read32(struct iwl_trans *trans, u32 ofs) -{ - return trans->ops->read32(trans, ofs); -} +int iwl_trans_read_config32(struct iwl_trans *trans, u32 ofs, + u32 *val); -static inline u32 iwl_trans_read_prph(struct iwl_trans *trans, u32 ofs) -{ - return trans->ops->read_prph(trans, ofs); -} - -static inline void iwl_trans_write_prph(struct iwl_trans *trans, u32 ofs, - u32 val) -{ - return trans->ops->write_prph(trans, ofs, val); -} - -static inline int iwl_trans_read_mem(struct iwl_trans *trans, u32 addr, - void *buf, int dwords) -{ - return trans->ops->read_mem(trans, addr, buf, dwords); -} +#ifdef CONFIG_IWLWIFI_DEBUGFS +void iwl_trans_debugfs_cleanup(struct iwl_trans *trans); +#endif #define iwl_trans_read_mem_bytes(trans, addr, buf, bufsize) \ do { \ @@ -1510,14 +1083,8 @@ static inline int iwl_trans_read_mem(struct iwl_trans *trans, u32 addr, iwl_trans_read_mem(trans, addr, buf, (bufsize) / sizeof(u32));\ } while (0) -static inline int iwl_trans_write_imr_mem(struct iwl_trans *trans, - u32 dst_addr, u64 src_addr, - u32 byte_cnt) -{ - if 
(trans->ops->imr_dma_data) - return trans->ops->imr_dma_data(trans, dst_addr, src_addr, byte_cnt); - return 0; -} +int iwl_trans_write_imr_mem(struct iwl_trans *trans, u32 dst_addr, + u64 src_addr, u32 byte_cnt); static inline u32 iwl_trans_read_mem32(struct iwl_trans *trans, u32 addr) { @@ -1529,11 +1096,8 @@ static inline u32 iwl_trans_read_mem32(struct iwl_trans *trans, u32 addr) return value; } -static inline int iwl_trans_write_mem(struct iwl_trans *trans, u32 addr, - const void *buf, int dwords) -{ - return trans->ops->write_mem(trans, addr, buf, dwords); -} +int iwl_trans_write_mem(struct iwl_trans *trans, u32 addr, + const void *buf, int dwords); static inline u32 iwl_trans_write_mem32(struct iwl_trans *trans, u32 addr, u32 val) @@ -1541,36 +1105,21 @@ static inline u32 iwl_trans_write_mem32(struct iwl_trans *trans, u32 addr, return iwl_trans_write_mem(trans, addr, &val, 1); } -static inline void iwl_trans_set_pmi(struct iwl_trans *trans, bool state) -{ - if (trans->ops->set_pmi) - trans->ops->set_pmi(trans, state); -} +void iwl_trans_set_pmi(struct iwl_trans *trans, bool state); -static inline int iwl_trans_sw_reset(struct iwl_trans *trans, - bool retake_ownership) -{ - if (trans->ops->sw_reset) - return trans->ops->sw_reset(trans, retake_ownership); - return 0; -} +int iwl_trans_sw_reset(struct iwl_trans *trans, bool retake_ownership); -static inline void -iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg, u32 mask, u32 value) -{ - trans->ops->set_bits_mask(trans, reg, mask, value); -} +void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg, + u32 mask, u32 value); + +bool _iwl_trans_grab_nic_access(struct iwl_trans *trans); #define iwl_trans_grab_nic_access(trans) \ __cond_lock(nic_access, \ - likely((trans)->ops->grab_nic_access(trans))) + likely(_iwl_trans_grab_nic_access(trans))) -static inline void __releases(nic_access) -iwl_trans_release_nic_access(struct iwl_trans *trans) -{ - trans->ops->release_nic_access(trans); - __release(nic_access); -} +void __releases(nic_access) +iwl_trans_release_nic_access(struct iwl_trans *trans); static inline void iwl_trans_fw_error(struct iwl_trans *trans, bool sync) { @@ -1589,44 +1138,24 @@ static inline bool iwl_trans_fw_running(struct iwl_trans *trans) return trans->state == IWL_TRANS_FW_ALIVE; } -static inline void iwl_trans_sync_nmi(struct iwl_trans *trans) -{ - if (trans->ops->sync_nmi) - trans->ops->sync_nmi(trans); -} +void iwl_trans_sync_nmi(struct iwl_trans *trans); void iwl_trans_sync_nmi_with_addr(struct iwl_trans *trans, u32 inta_addr, u32 sw_err_bit); -static inline int iwl_trans_load_pnvm(struct iwl_trans *trans, - const struct iwl_pnvm_image *pnvm_data, - const struct iwl_ucode_capabilities *capa) -{ - return trans->ops->load_pnvm(trans, pnvm_data, capa); -} +int iwl_trans_load_pnvm(struct iwl_trans *trans, + const struct iwl_pnvm_image *pnvm_data, + const struct iwl_ucode_capabilities *capa); -static inline void iwl_trans_set_pnvm(struct iwl_trans *trans, - const struct iwl_ucode_capabilities *capa) -{ - if (trans->ops->set_pnvm) - trans->ops->set_pnvm(trans, capa); -} +void iwl_trans_set_pnvm(struct iwl_trans *trans, + const struct iwl_ucode_capabilities *capa); -static inline int iwl_trans_load_reduce_power - (struct iwl_trans *trans, - const struct iwl_pnvm_image *payloads, - const struct iwl_ucode_capabilities *capa) -{ - return trans->ops->load_reduce_power(trans, payloads, capa); -} +int iwl_trans_load_reduce_power(struct iwl_trans *trans, + const struct iwl_pnvm_image *payloads, + const struct 
iwl_ucode_capabilities *capa); -static inline void -iwl_trans_set_reduce_power(struct iwl_trans *trans, - const struct iwl_ucode_capabilities *capa) -{ - if (trans->ops->set_reduce_power) - trans->ops->set_reduce_power(trans, capa); -} +void iwl_trans_set_reduce_power(struct iwl_trans *trans, + const struct iwl_ucode_capabilities *capa); static inline bool iwl_trans_dbg_ini_valid(struct iwl_trans *trans) { @@ -1634,18 +1163,13 @@ static inline bool iwl_trans_dbg_ini_valid(struct iwl_trans *trans) trans->dbg.external_ini_cfg != IWL_INI_CFG_STATE_NOT_LOADED; } -static inline void iwl_trans_interrupts(struct iwl_trans *trans, bool enable) -{ - if (trans->ops->interrupts) - trans->ops->interrupts(trans, enable); -} +void iwl_trans_interrupts(struct iwl_trans *trans, bool enable); /***************************************************** * transport helper functions *****************************************************/ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, struct device *dev, - const struct iwl_trans_ops *ops, const struct iwl_cfg_trans_params *cfg_trans); int iwl_trans_init(struct iwl_trans *trans); void iwl_trans_free(struct iwl_trans *trans); @@ -1656,10 +1180,13 @@ static inline bool iwl_trans_is_hw_error_value(u32 val) } /***************************************************** -* driver (transport) register/unregister functions -******************************************************/ + * PCIe handling + *****************************************************/ int __must_check iwl_pci_register_driver(void); void iwl_pci_unregister_driver(void); void iwl_trans_pcie_remove(struct iwl_trans *trans, bool rescan); +int iwl_trans_pcie_send_hcmd(struct iwl_trans *trans, + struct iwl_host_cmd *cmd); + #endif /* __iwl_trans_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h b/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h index 1f3c885aeb65..4900de3cc0d3 100644 --- a/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h +++ b/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2021-2023 Intel Corporation + * Copyright (C) 2021-2024 Intel Corporation */ #ifndef __iwl_mei_h__ @@ -456,8 +456,11 @@ void iwl_mei_device_state(bool up); /** * iwl_mei_pldr_req() - must be called before loading the fw * - * Return: 0 if the PLDR flow was successful and the fw can be loaded, negative - * value otherwise. + * Requests from the ME that it releases its potential bus access to + * the WiFi NIC so that the device can safely undergo product reset. 
+ * + * Return: 0 if the request was successful and the device can be + * reset, a negative error value otherwise */ int iwl_mei_pldr_req(void); @@ -488,7 +491,7 @@ static inline void iwl_mei_set_nic_info(const u8 *mac_address, const u8 *nvm_add static inline void iwl_mei_set_country_code(u16 mcc) {} -static inline void iwl_mei_set_power_limit(__le16 *power_limit) +static inline void iwl_mei_set_power_limit(const __le16 *power_limit) {} static inline int iwl_mei_register(void *priv, diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h index 3cbeaddf4358..c4c1e67b9ac7 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h @@ -23,7 +23,7 @@ #define IWL_MVM_WOWLAN_PS_RX_DATA_TIMEOUT (10 * USEC_PER_MSEC) #define IWL_MVM_SHORT_PS_TX_DATA_TIMEOUT (2 * 1024) /* defined in TU */ #define IWL_MVM_SHORT_PS_RX_DATA_TIMEOUT (40 * 1024) /* defined in TU */ -#define IWL_MVM_P2P_LOWLATENCY_PS_ENABLE 0 +#define IWL_MVM_P2P_LOWLATENCY_PS_ENABLE 1 #define IWL_MVM_UAPSD_RX_DATA_TIMEOUT (50 * USEC_PER_MSEC) #define IWL_MVM_UAPSD_TX_DATA_TIMEOUT (50 * USEC_PER_MSEC) #define IWL_MVM_UAPSD_QUEUES (IEEE80211_WMM_IE_STA_QOSINFO_AC_VO |\ @@ -56,7 +56,6 @@ #define IWL_MVM_RS_80_20_FAR_RANGE_TWEAK 1 #define IWL_MVM_TOF_IS_RESPONDER 0 #define IWL_MVM_HW_CSUM_DISABLE 0 -#define IWL_MVM_PARSE_NVM 0 #define IWL_MVM_ADWELL_ENABLE 1 #define IWL_MVM_ADWELL_MAX_BUDGET 0 #define IWL_MVM_TCM_LOAD_MEDIUM_THRESH 10 /* percentage */ @@ -100,6 +99,7 @@ #define IWL_MVM_FTM_INITIATOR_ALGO IWL_TOF_ALGO_TYPE_MAX_LIKE #define IWL_MVM_FTM_INITIATOR_DYNACK true #define IWL_MVM_FTM_LMR_FEEDBACK_TERMINATE false +#define IWL_MVM_FTM_TEST_INCORRECT_SAC false #define IWL_MVM_FTM_R2I_MAX_REP 7 #define IWL_MVM_FTM_I2R_MAX_REP 7 #define IWL_MVM_FTM_R2I_MAX_STS 1 @@ -114,7 +114,6 @@ #define IWL_MVM_D3_DEBUG false #define IWL_MVM_USE_TWT true #define IWL_MVM_AMPDU_CONSEC_DROPS_DELBA 20 -#define IWL_MVM_USE_NSSN_SYNC 0 #define IWL_MVM_FTM_INITIATOR_ENABLE_SMOOTH false #define IWL_MVM_FTM_INITIATOR_SMOOTH_ALPHA 40 /* 20016 pSec is 6 meter RTT, meaning 3 meter range */ @@ -124,6 +123,7 @@ #define IWL_MVM_DISABLE_AP_FILS false #define IWL_MVM_6GHZ_PASSIVE_SCAN_TIMEOUT 3000 /* in seconds */ #define IWL_MVM_6GHZ_PASSIVE_SCAN_ASSOC_TIMEOUT 60 /* in seconds */ +#define IWL_MVM_MIN_BEACON_INTERVAL_TU 16 #define IWL_MVM_AUTO_EML_ENABLE true #define IWL_MVM_MISSED_BEACONS_EXIT_ESR_THRESH 7 diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c index 54f4acbbd05b..b4d650583ac2 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c @@ -2493,6 +2493,9 @@ static void iwl_mvm_parse_wowlan_info_notif(struct iwl_mvm *mvm, return; } + if (mvm->fast_resume) + return; + iwl_mvm_convert_key_counters_v5(status, &data->gtk[0].sc); iwl_mvm_convert_gtk_v3(status, data->gtk); iwl_mvm_convert_igtk(status, &data->igtk[0]); @@ -3049,7 +3052,7 @@ static bool iwl_mvm_check_rt_status(struct iwl_mvm *mvm, if (iwl_mvm_rt_status(mvm->trans, mvm->trans->dbg.lmac_error_event_table[0], &err_id)) { - if (err_id == RF_KILL_INDICATOR_FOR_WOWLAN) { + if (err_id == RF_KILL_INDICATOR_FOR_WOWLAN && vif) { struct cfg80211_wowlan_wakeup wakeup = { .rfkill_release = true, }; @@ -3366,7 +3369,7 @@ static int iwl_mvm_resume_firmware(struct iwl_mvm *mvm, bool test) return ret; } -#define IWL_MVM_D3_NOTIF_TIMEOUT (HZ / 5) +#define IWL_MVM_D3_NOTIF_TIMEOUT (HZ / 3) static int 
iwl_mvm_d3_notif_wait(struct iwl_mvm *mvm, struct iwl_d3_data *d3_data) @@ -3377,12 +3380,22 @@ static int iwl_mvm_d3_notif_wait(struct iwl_mvm *mvm, WIDE_ID(SCAN_GROUP, OFFLOAD_MATCH_INFO_NOTIF), WIDE_ID(PROT_OFFLOAD_GROUP, D3_END_NOTIFICATION) }; + static const u16 d3_fast_resume_notif[] = { + WIDE_ID(PROT_OFFLOAD_GROUP, D3_END_NOTIFICATION) + }; struct iwl_notification_wait wait_d3_notif; int ret; - iwl_init_notification_wait(&mvm->notif_wait, &wait_d3_notif, - d3_resume_notif, ARRAY_SIZE(d3_resume_notif), - iwl_mvm_wait_d3_notif, d3_data); + if (mvm->fast_resume) + iwl_init_notification_wait(&mvm->notif_wait, &wait_d3_notif, + d3_fast_resume_notif, + ARRAY_SIZE(d3_fast_resume_notif), + iwl_mvm_wait_d3_notif, d3_data); + else + iwl_init_notification_wait(&mvm->notif_wait, &wait_d3_notif, + d3_resume_notif, + ARRAY_SIZE(d3_resume_notif), + iwl_mvm_wait_d3_notif, d3_data); ret = iwl_mvm_resume_firmware(mvm, d3_data->test); if (ret) { @@ -3567,6 +3580,68 @@ void iwl_mvm_set_wakeup(struct ieee80211_hw *hw, bool enabled) device_set_wakeup_enable(mvm->trans->dev, enabled); } +void iwl_mvm_fast_suspend(struct iwl_mvm *mvm) +{ + struct iwl_d3_manager_config d3_cfg_cmd_data = {}; + int ret; + + lockdep_assert_held(&mvm->mutex); + + IWL_DEBUG_WOWLAN(mvm, "Starting fast suspend flow\n"); + + mvm->fast_resume = true; + set_bit(IWL_MVM_STATUS_IN_D3, &mvm->status); + + WARN_ON(iwl_mvm_power_update_device(mvm)); + mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_D3; + ret = iwl_mvm_send_cmd_pdu(mvm, D3_CONFIG_CMD, CMD_SEND_IN_D3, + sizeof(d3_cfg_cmd_data), &d3_cfg_cmd_data); + if (ret) + IWL_ERR(mvm, + "fast suspend: couldn't send D3_CONFIG_CMD %d\n", ret); + + WARN_ON(iwl_mvm_power_update_mac(mvm)); + + ret = iwl_trans_d3_suspend(mvm->trans, false, false); + if (ret) + IWL_ERR(mvm, "fast suspend: trans_d3_suspend failed %d\n", ret); +} + +int iwl_mvm_fast_resume(struct iwl_mvm *mvm) +{ + struct iwl_d3_data d3_data = { + .notif_expected = + IWL_D3_NOTIF_D3_END_NOTIF, + }; + int ret; + + lockdep_assert_held(&mvm->mutex); + + IWL_DEBUG_WOWLAN(mvm, "Starting the fast resume flow\n"); + + mvm->last_reset_or_resume_time_jiffies = jiffies; + iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt); + + if (iwl_mvm_check_rt_status(mvm, NULL)) { + set_bit(STATUS_FW_ERROR, &mvm->trans->status); + iwl_mvm_dump_nic_error_log(mvm); + iwl_dbg_tlv_time_point(&mvm->fwrt, + IWL_FW_INI_TIME_POINT_FW_ASSERT, NULL); + iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert, + false, 0); + return -ENODEV; + } + ret = iwl_mvm_d3_notif_wait(mvm, &d3_data); + clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status); + mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED; + mvm->fast_resume = false; + + if (ret) + IWL_ERR(mvm, "Couldn't get the d3 notif %d\n", ret); + + return ret; +} + #ifdef CONFIG_IWLWIFI_DEBUGFS static int iwl_mvm_d3_test_open(struct inode *inode, struct file *file) { diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c index 17c97dfbc62a..25f07e00db42 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c @@ -692,6 +692,42 @@ static ssize_t iwl_dbgfs_quota_min_read(struct file *file, return simple_read_from_buffer(user_buf, count, ppos, buf, len); } +static ssize_t iwl_dbgfs_max_tx_op_write(struct ieee80211_vif *vif, char *buf, + size_t count, loff_t *ppos) +{ + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); + struct iwl_mvm *mvm = mvmvif->mvm; + u16 value; + int ret; + + ret = 
kstrtou16(buf, 0, &value); + if (ret) + return ret; + + mutex_lock(&mvm->mutex); + mvmvif->max_tx_op = value; + mutex_unlock(&mvm->mutex); + + return count; +} + +static ssize_t iwl_dbgfs_max_tx_op_read(struct file *file, + char __user *user_buf, + size_t count, loff_t *ppos) +{ + struct ieee80211_vif *vif = file->private_data; + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); + struct iwl_mvm *mvm = mvmvif->mvm; + char buf[10]; + int len; + + mutex_lock(&mvm->mutex); + len = scnprintf(buf, sizeof(buf), "%hu\n", mvmvif->max_tx_op); + mutex_unlock(&mvm->mutex); + + return simple_read_from_buffer(user_buf, count, ppos, buf, len); +} + static ssize_t iwl_dbgfs_int_mlo_scan_write(struct ieee80211_vif *vif, char *buf, size_t count, loff_t *ppos) @@ -801,6 +837,7 @@ MVM_DEBUGFS_READ_WRITE_FILE_OPS(uapsd_misbehaving, 20); MVM_DEBUGFS_READ_WRITE_FILE_OPS(rx_phyinfo, 10); MVM_DEBUGFS_READ_WRITE_FILE_OPS(quota_min, 32); MVM_DEBUGFS_READ_FILE_OPS(os_device_timediff); +MVM_DEBUGFS_READ_WRITE_FILE_OPS(max_tx_op, 10); MVM_DEBUGFS_WRITE_FILE_OPS(int_mlo_scan, 32); MVM_DEBUGFS_READ_WRITE_FILE_OPS(esr_disable_reason, 32); @@ -830,6 +867,7 @@ void iwl_mvm_vif_add_debugfs(struct ieee80211_hw *hw, struct ieee80211_vif *vif) MVM_DEBUGFS_ADD_FILE_VIF(rx_phyinfo, mvmvif->dbgfs_dir, 0600); MVM_DEBUGFS_ADD_FILE_VIF(quota_min, mvmvif->dbgfs_dir, 0600); MVM_DEBUGFS_ADD_FILE_VIF(os_device_timediff, mvmvif->dbgfs_dir, 0400); + MVM_DEBUGFS_ADD_FILE_VIF(max_tx_op, mvmvif->dbgfs_dir, 0600); debugfs_create_bool("ftm_unprotected", 0200, mvmvif->dbgfs_dir, &mvmvif->ftm_unprotected); MVM_DEBUGFS_ADD_FILE_VIF(int_mlo_scan, mvmvif->dbgfs_dir, 0200); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c index 8101ecbb478b..91ca830a7b60 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c @@ -151,37 +151,6 @@ static ssize_t iwl_dbgfs_tx_flush_write(struct iwl_mvm *mvm, char *buf, return ret; } -static ssize_t iwl_dbgfs_sta_drain_write(struct iwl_mvm *mvm, char *buf, - size_t count, loff_t *ppos) -{ - struct iwl_mvm_sta *mvmsta; - int sta_id, drain, ret; - - if (!iwl_mvm_firmware_running(mvm) || - mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) - return -EIO; - - if (sscanf(buf, "%d %d", &sta_id, &drain) != 2) - return -EINVAL; - if (sta_id < 0 || sta_id >= mvm->fw->ucode_capa.num_stations) - return -EINVAL; - if (drain < 0 || drain > 1) - return -EINVAL; - - mutex_lock(&mvm->mutex); - - mvmsta = iwl_mvm_sta_from_staid_protected(mvm, sta_id); - - if (!mvmsta) - ret = -ENOENT; - else - ret = iwl_mvm_drain_sta(mvm, mvmsta, drain) ? 
: count; - - mutex_unlock(&mvm->mutex); - - return ret; -} - static ssize_t iwl_dbgfs_sram_read(struct file *file, char __user *user_buf, size_t count, loff_t *ppos) { @@ -568,193 +537,12 @@ static ssize_t iwl_dbgfs_disable_power_off_write(struct iwl_mvm *mvm, char *buf, return ret ?: count; } -static -int iwl_mvm_coex_dump_mbox(struct iwl_bt_coex_profile_notif *notif, char *buf, - int pos, int bufsz) -{ - pos += scnprintf(buf+pos, bufsz-pos, "MBOX dw0:\n"); - - BT_MBOX_PRINT(0, LE_SLAVE_LAT, false); - BT_MBOX_PRINT(0, LE_PROF1, false); - BT_MBOX_PRINT(0, LE_PROF2, false); - BT_MBOX_PRINT(0, LE_PROF_OTHER, false); - BT_MBOX_PRINT(0, CHL_SEQ_N, false); - BT_MBOX_PRINT(0, INBAND_S, false); - BT_MBOX_PRINT(0, LE_MIN_RSSI, false); - BT_MBOX_PRINT(0, LE_SCAN, false); - BT_MBOX_PRINT(0, LE_ADV, false); - BT_MBOX_PRINT(0, LE_MAX_TX_POWER, false); - BT_MBOX_PRINT(0, OPEN_CON_1, true); - - pos += scnprintf(buf+pos, bufsz-pos, "MBOX dw1:\n"); - - BT_MBOX_PRINT(1, BR_MAX_TX_POWER, false); - BT_MBOX_PRINT(1, IP_SR, false); - BT_MBOX_PRINT(1, LE_MSTR, false); - BT_MBOX_PRINT(1, AGGR_TRFC_LD, false); - BT_MBOX_PRINT(1, MSG_TYPE, false); - BT_MBOX_PRINT(1, SSN, true); - - pos += scnprintf(buf+pos, bufsz-pos, "MBOX dw2:\n"); - - BT_MBOX_PRINT(2, SNIFF_ACT, false); - BT_MBOX_PRINT(2, PAG, false); - BT_MBOX_PRINT(2, INQUIRY, false); - BT_MBOX_PRINT(2, CONN, false); - BT_MBOX_PRINT(2, SNIFF_INTERVAL, false); - BT_MBOX_PRINT(2, DISC, false); - BT_MBOX_PRINT(2, SCO_TX_ACT, false); - BT_MBOX_PRINT(2, SCO_RX_ACT, false); - BT_MBOX_PRINT(2, ESCO_RE_TX, false); - BT_MBOX_PRINT(2, SCO_DURATION, true); - - pos += scnprintf(buf+pos, bufsz-pos, "MBOX dw3:\n"); - - BT_MBOX_PRINT(3, SCO_STATE, false); - BT_MBOX_PRINT(3, SNIFF_STATE, false); - BT_MBOX_PRINT(3, A2DP_STATE, false); - BT_MBOX_PRINT(3, A2DP_SRC, false); - BT_MBOX_PRINT(3, ACL_STATE, false); - BT_MBOX_PRINT(3, MSTR_STATE, false); - BT_MBOX_PRINT(3, OBX_STATE, false); - BT_MBOX_PRINT(3, OPEN_CON_2, false); - BT_MBOX_PRINT(3, TRAFFIC_LOAD, false); - BT_MBOX_PRINT(3, CHL_SEQN_LSB, false); - BT_MBOX_PRINT(3, INBAND_P, false); - BT_MBOX_PRINT(3, MSG_TYPE_2, false); - BT_MBOX_PRINT(3, SSN_2, false); - BT_MBOX_PRINT(3, UPDATE_REQUEST, true); - - return pos; -} - -static ssize_t iwl_dbgfs_bt_notif_read(struct file *file, char __user *user_buf, - size_t count, loff_t *ppos) -{ - struct iwl_mvm *mvm = file->private_data; - struct iwl_bt_coex_profile_notif *notif = &mvm->last_bt_notif; - char *buf; - int ret, pos = 0, bufsz = sizeof(char) * 1024; - - buf = kmalloc(bufsz, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - mutex_lock(&mvm->mutex); - - pos += iwl_mvm_coex_dump_mbox(notif, buf, pos, bufsz); - - pos += scnprintf(buf + pos, bufsz - pos, "bt_ci_compliance = %d\n", - notif->bt_ci_compliance); - pos += scnprintf(buf + pos, bufsz - pos, "primary_ch_lut = %d\n", - le32_to_cpu(notif->primary_ch_lut)); - pos += scnprintf(buf + pos, bufsz - pos, "secondary_ch_lut = %d\n", - le32_to_cpu(notif->secondary_ch_lut)); - pos += scnprintf(buf + pos, - bufsz - pos, "bt_activity_grading = %d\n", - le32_to_cpu(notif->bt_activity_grading)); - pos += scnprintf(buf + pos, bufsz - pos, "bt_rrc = %d\n", - notif->rrc_status & 0xF); - pos += scnprintf(buf + pos, bufsz - pos, "bt_ttc = %d\n", - notif->ttc_status & 0xF); - - pos += scnprintf(buf + pos, bufsz - pos, "sync_sco = %d\n", - IWL_MVM_BT_COEX_SYNC2SCO); - pos += scnprintf(buf + pos, bufsz - pos, "mplut = %d\n", - IWL_MVM_BT_COEX_MPLUT); - - mutex_unlock(&mvm->mutex); - - ret = simple_read_from_buffer(user_buf, count, ppos, buf, 
pos); - kfree(buf); - - return ret; -} -#undef BT_MBOX_PRINT - -static ssize_t iwl_dbgfs_bt_cmd_read(struct file *file, char __user *user_buf, - size_t count, loff_t *ppos) -{ - struct iwl_mvm *mvm = file->private_data; - struct iwl_bt_coex_ci_cmd *cmd = &mvm->last_bt_ci_cmd; - char buf[256]; - int bufsz = sizeof(buf); - int pos = 0; - - mutex_lock(&mvm->mutex); - - pos += scnprintf(buf + pos, bufsz - pos, "Channel inhibition CMD\n"); - pos += scnprintf(buf + pos, bufsz - pos, - "\tPrimary Channel Bitmap 0x%016llx\n", - le64_to_cpu(cmd->bt_primary_ci)); - pos += scnprintf(buf + pos, bufsz - pos, - "\tSecondary Channel Bitmap 0x%016llx\n", - le64_to_cpu(cmd->bt_secondary_ci)); - - mutex_unlock(&mvm->mutex); - - return simple_read_from_buffer(user_buf, count, ppos, buf, pos); -} - -static ssize_t -iwl_dbgfs_bt_tx_prio_write(struct iwl_mvm *mvm, char *buf, - size_t count, loff_t *ppos) -{ - u32 bt_tx_prio; - - if (sscanf(buf, "%u", &bt_tx_prio) != 1) - return -EINVAL; - if (bt_tx_prio > 4) - return -EINVAL; - - mvm->bt_tx_prio = bt_tx_prio; - - return count; -} - -static ssize_t -iwl_dbgfs_bt_force_ant_write(struct iwl_mvm *mvm, char *buf, - size_t count, loff_t *ppos) -{ - static const char * const modes_str[BT_FORCE_ANT_MAX] = { - [BT_FORCE_ANT_DIS] = "dis", - [BT_FORCE_ANT_AUTO] = "auto", - [BT_FORCE_ANT_BT] = "bt", - [BT_FORCE_ANT_WIFI] = "wifi", - }; - int ret, bt_force_ant_mode; - - ret = match_string(modes_str, ARRAY_SIZE(modes_str), buf); - if (ret < 0) - return ret; - - bt_force_ant_mode = ret; - ret = 0; - mutex_lock(&mvm->mutex); - if (mvm->bt_force_ant_mode == bt_force_ant_mode) - goto out; - - mvm->bt_force_ant_mode = bt_force_ant_mode; - IWL_DEBUG_COEX(mvm, "Force mode: %s\n", - modes_str[mvm->bt_force_ant_mode]); - - if (iwl_mvm_firmware_running(mvm)) - ret = iwl_mvm_send_bt_init_conf(mvm); - else - ret = 0; - -out: - mutex_unlock(&mvm->mutex); - return ret ?: count; -} - static ssize_t iwl_dbgfs_fw_ver_read(struct file *file, char __user *user_buf, size_t count, loff_t *ppos) { struct iwl_mvm *mvm = file->private_data; char *buff, *pos, *endpos; static const size_t bufsz = 1024; - char _fw_name_pre[FW_NAME_PRE_BUFSIZE]; int ret; buff = kmalloc(bufsz, GFP_KERNEL); @@ -764,8 +552,8 @@ static ssize_t iwl_dbgfs_fw_ver_read(struct file *file, char __user *user_buf, pos = buff; endpos = pos + bufsz; - pos += scnprintf(pos, endpos - pos, "FW prefix: %s\n", - iwl_drv_get_fwname_pre(mvm->trans, _fw_name_pre)); + pos += scnprintf(pos, endpos - pos, "FW id: %s\n", + mvm->fwrt.fw->fw_version); pos += scnprintf(pos, endpos - pos, "FW: %s\n", mvm->fwrt.fw->human_readable); pos += scnprintf(pos, endpos - pos, "Device: %s\n", @@ -1396,6 +1184,8 @@ static ssize_t iwl_dbgfs_fw_nmi_write(struct iwl_mvm *mvm, char *buf, if (!iwl_mvm_firmware_running(mvm)) return -EIO; + IWL_ERR(mvm, "Triggering an NMI from debugfs\n"); + if (count == 6 && !strcmp(buf, "nolog\n")) set_bit(IWL_MVM_STATUS_SUPPRESS_ERROR_LOG_ONCE, &mvm->status); @@ -2164,15 +1954,12 @@ MVM_DEBUGFS_WRITE_FILE_OPS(stop_ctdp, 8); MVM_DEBUGFS_WRITE_FILE_OPS(start_ctdp, 8); MVM_DEBUGFS_WRITE_FILE_OPS(force_ctkill, 8); MVM_DEBUGFS_WRITE_FILE_OPS(tx_flush, 16); -MVM_DEBUGFS_WRITE_FILE_OPS(sta_drain, 8); MVM_DEBUGFS_WRITE_FILE_OPS(send_echo_cmd, 8); MVM_DEBUGFS_READ_WRITE_FILE_OPS(sram, 64); MVM_DEBUGFS_READ_WRITE_FILE_OPS(set_nic_temperature, 64); MVM_DEBUGFS_READ_FILE_OPS(nic_temp); MVM_DEBUGFS_READ_FILE_OPS(stations); MVM_DEBUGFS_READ_LINK_STA_FILE_OPS(rs_data); -MVM_DEBUGFS_READ_FILE_OPS(bt_notif); -MVM_DEBUGFS_READ_FILE_OPS(bt_cmd); 
MVM_DEBUGFS_READ_WRITE_FILE_OPS(disable_power_off, 64); MVM_DEBUGFS_READ_FILE_OPS(fw_rx_stats); MVM_DEBUGFS_READ_FILE_OPS(drv_rx_stats); @@ -2182,8 +1969,6 @@ MVM_DEBUGFS_READ_FILE_OPS(phy_integration_ver); MVM_DEBUGFS_READ_FILE_OPS(tas_get_status); MVM_DEBUGFS_WRITE_FILE_OPS(fw_restart, 10); MVM_DEBUGFS_WRITE_FILE_OPS(fw_nmi, 10); -MVM_DEBUGFS_WRITE_FILE_OPS(bt_tx_prio, 10); -MVM_DEBUGFS_WRITE_FILE_OPS(bt_force_ant, 10); MVM_DEBUGFS_READ_WRITE_FILE_OPS(scan_ant_rxchain, 8); MVM_DEBUGFS_READ_WRITE_FILE_OPS(fw_dbg_conf, 8); MVM_DEBUGFS_WRITE_FILE_OPS(fw_dbg_collect, 64); @@ -2370,7 +2155,6 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm) spin_lock_init(&mvm->drv_stats_lock); MVM_DEBUGFS_ADD_FILE(tx_flush, mvm->debugfs_dir, 0200); - MVM_DEBUGFS_ADD_FILE(sta_drain, mvm->debugfs_dir, 0200); MVM_DEBUGFS_ADD_FILE(sram, mvm->debugfs_dir, 0600); MVM_DEBUGFS_ADD_FILE(set_nic_temperature, mvm->debugfs_dir, 0600); MVM_DEBUGFS_ADD_FILE(nic_temp, mvm->debugfs_dir, 0400); @@ -2379,8 +2163,6 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm) MVM_DEBUGFS_ADD_FILE(start_ctdp, mvm->debugfs_dir, 0200); MVM_DEBUGFS_ADD_FILE(force_ctkill, mvm->debugfs_dir, 0200); MVM_DEBUGFS_ADD_FILE(stations, mvm->debugfs_dir, 0400); - MVM_DEBUGFS_ADD_FILE(bt_notif, mvm->debugfs_dir, 0400); - MVM_DEBUGFS_ADD_FILE(bt_cmd, mvm->debugfs_dir, 0400); MVM_DEBUGFS_ADD_FILE(disable_power_off, mvm->debugfs_dir, 0600); MVM_DEBUGFS_ADD_FILE(fw_ver, mvm->debugfs_dir, 0400); MVM_DEBUGFS_ADD_FILE(fw_rx_stats, mvm->debugfs_dir, 0400); @@ -2388,8 +2170,6 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm) MVM_DEBUGFS_ADD_FILE(fw_system_stats, mvm->debugfs_dir, 0400); MVM_DEBUGFS_ADD_FILE(fw_restart, mvm->debugfs_dir, 0200); MVM_DEBUGFS_ADD_FILE(fw_nmi, mvm->debugfs_dir, 0200); - MVM_DEBUGFS_ADD_FILE(bt_tx_prio, mvm->debugfs_dir, 0200); - MVM_DEBUGFS_ADD_FILE(bt_force_ant, mvm->debugfs_dir, 0200); MVM_DEBUGFS_ADD_FILE(scan_ant_rxchain, mvm->debugfs_dir, 0600); MVM_DEBUGFS_ADD_FILE(prph_reg, mvm->debugfs_dir, 0600); MVM_DEBUGFS_ADD_FILE(fw_dbg_conf, mvm->debugfs_dir, 0600); @@ -2448,6 +2228,9 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm) debugfs_create_file("mem", 0600, mvm->debugfs_dir, mvm, &iwl_dbgfs_mem_ops); + debugfs_create_bool("rx_ts_ptp", 0600, mvm->debugfs_dir, + &mvm->rx_ts_ptp); + /* * Create a symlink with mac80211. It will be removed when mac80211 * exists (before the opmode exists which removes the target.) 
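
The debugfs changes above trade several legacy entries (sta_drain, the bt_notif/bt_cmd coex dumps, bt_tx_prio, bt_force_ant) for leaner knobs: a per-vif max_tx_op value and a driver-wide rx_ts_ptp boolean. The boolean needs no hand-written handlers at all, since debugfs_create_bool() wires the file straight to a bool field. A minimal sketch of that pattern follows; example_priv and example_register_debugfs are illustrative names, not the driver's:

#include <linux/debugfs.h>

/* Illustrative type, not iwlwifi's struct iwl_mvm. */
struct example_priv {
        struct dentry *dbgfs_dir;
        bool rx_ts_ptp;         /* read elsewhere to pick the RX timestamp source */
};

/* debugfs_create_bool() needs no custom file_operations: reads return
 * "Y"/"N" and writes accept 0/1/y/n, updating the bool directly.
 */
static void example_register_debugfs(struct example_priv *priv)
{
        priv->dbgfs_dir = debugfs_create_dir("example", NULL);
        debugfs_create_bool("rx_ts_ptp", 0600, priv->dbgfs_dir,
                            &priv->rx_ts_ptp);
}

Contrast this with the max_tx_op entry above, which goes through the MVM_DEBUGFS_READ_WRITE_FILE_OPS boilerplate because it must parse a u16 with kstrtou16() and take mvm->mutex around the update.
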
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c index 72a3d71f46f0..afd90a52d4ec 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c @@ -40,6 +40,12 @@ struct iwl_mvm_ftm_pasn_entry { u32 flags; }; +struct iwl_mvm_ftm_iter_data { + u8 *cipher; + u8 *bssid; + u8 *tk; +}; + int iwl_mvm_ftm_add_pasn_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif, u8 *addr, u32 cipher, u8 *tk, u32 tk_len, u8 *hltk, u32 hltk_len) @@ -431,47 +437,55 @@ iwl_mvm_ftm_put_target_v2(struct iwl_mvm *mvm, return 0; } -#define FTM_PUT_FLAG(flag) (target->initiator_ap_flags |= \ +#define FTM_SET_FLAG(flag) (*flags |= \ cpu_to_le32(IWL_INITIATOR_AP_FLAGS_##flag)) static void -iwl_mvm_ftm_put_target_common(struct iwl_mvm *mvm, - struct cfg80211_pmsr_request_peer *peer, - struct iwl_tof_range_req_ap_entry_v6 *target) +iwl_mvm_ftm_set_target_flags(struct iwl_mvm *mvm, + struct cfg80211_pmsr_request_peer *peer, + __le32 *flags) { - memcpy(target->bssid, peer->addr, ETH_ALEN); - target->burst_period = - cpu_to_le16(peer->ftm.burst_period); - target->samples_per_burst = peer->ftm.ftms_per_burst; - target->num_of_bursts = peer->ftm.num_bursts_exp; - target->ftmr_max_retries = peer->ftm.ftmr_retries; - target->initiator_ap_flags = cpu_to_le32(0); + *flags = cpu_to_le32(0); if (peer->ftm.asap) - FTM_PUT_FLAG(ASAP); + FTM_SET_FLAG(ASAP); if (peer->ftm.request_lci) - FTM_PUT_FLAG(LCI_REQUEST); + FTM_SET_FLAG(LCI_REQUEST); if (peer->ftm.request_civicloc) - FTM_PUT_FLAG(CIVIC_REQUEST); + FTM_SET_FLAG(CIVIC_REQUEST); if (IWL_MVM_FTM_INITIATOR_DYNACK) - FTM_PUT_FLAG(DYN_ACK); + FTM_SET_FLAG(DYN_ACK); if (IWL_MVM_FTM_INITIATOR_ALGO == IWL_TOF_ALGO_TYPE_LINEAR_REG) - FTM_PUT_FLAG(ALGO_LR); + FTM_SET_FLAG(ALGO_LR); else if (IWL_MVM_FTM_INITIATOR_ALGO == IWL_TOF_ALGO_TYPE_FFT) - FTM_PUT_FLAG(ALGO_FFT); + FTM_SET_FLAG(ALGO_FFT); if (peer->ftm.trigger_based) - FTM_PUT_FLAG(TB); + FTM_SET_FLAG(TB); else if (peer->ftm.non_trigger_based) - FTM_PUT_FLAG(NON_TB); + FTM_SET_FLAG(NON_TB); if ((peer->ftm.trigger_based || peer->ftm.non_trigger_based) && peer->ftm.lmr_feedback) - FTM_PUT_FLAG(LMR_FEEDBACK); + FTM_SET_FLAG(LMR_FEEDBACK); +} + +static void +iwl_mvm_ftm_put_target_common(struct iwl_mvm *mvm, + struct cfg80211_pmsr_request_peer *peer, + struct iwl_tof_range_req_ap_entry_v6 *target) +{ + memcpy(target->bssid, peer->addr, ETH_ALEN); + target->burst_period = + cpu_to_le16(peer->ftm.burst_period); + target->samples_per_burst = peer->ftm.ftms_per_burst; + target->num_of_bursts = peer->ftm.num_bursts_exp; + target->ftmr_max_retries = peer->ftm.ftmr_retries; + iwl_mvm_ftm_set_target_flags(mvm, peer, &target->initiator_ap_flags); } static int @@ -514,21 +528,10 @@ iwl_mvm_ftm_put_target_v4(struct iwl_mvm *mvm, return 0; } -static int -iwl_mvm_ftm_put_target(struct iwl_mvm *mvm, struct ieee80211_vif *vif, - struct cfg80211_pmsr_request_peer *peer, - struct iwl_tof_range_req_ap_entry_v6 *target) +static int iwl_mvm_ftm_set_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif, + struct cfg80211_pmsr_request_peer *peer, + u8 *sta_id, __le32 *flags) { - int ret; - - ret = iwl_mvm_ftm_target_chandef_v2(mvm, peer, &target->channel_num, - &target->format_bw, - &target->ctrl_ch_position); - if (ret) - return ret; - - iwl_mvm_ftm_put_target_common(mvm, peer, target); - if (vif->cfg.assoc) { struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); struct ieee80211_sta *sta; @@ -540,8 +543,8 @@ 
iwl_mvm_ftm_put_target(struct iwl_mvm *mvm, struct ieee80211_vif *vif, if (memcmp(peer->addr, link_conf->bssid, ETH_ALEN)) continue; - target->sta_id = mvmvif->link[link_id]->ap_sta_id; - sta = rcu_dereference(mvm->fw_id_to_mac_id[target->sta_id]); + *sta_id = mvmvif->link[link_id]->ap_sta_id; + sta = rcu_dereference(mvm->fw_id_to_mac_id[*sta_id]); if (WARN_ON_ONCE(IS_ERR_OR_NULL(sta))) { rcu_read_unlock(); return PTR_ERR_OR_ZERO(sta); @@ -549,23 +552,42 @@ iwl_mvm_ftm_put_target(struct iwl_mvm *mvm, struct ieee80211_vif *vif, if (sta->mfp && (peer->ftm.trigger_based || peer->ftm.non_trigger_based)) - FTM_PUT_FLAG(PMF); + FTM_SET_FLAG(PMF); break; } rcu_read_unlock(); #ifdef CONFIG_IWLWIFI_DEBUGFS if (mvmvif->ftm_unprotected) { - target->sta_id = IWL_MVM_INVALID_STA; - target->initiator_ap_flags &= - ~cpu_to_le32(IWL_INITIATOR_AP_FLAGS_PMF); + *sta_id = IWL_MVM_INVALID_STA; + *flags &= ~cpu_to_le32(IWL_INITIATOR_AP_FLAGS_PMF); } - #endif } else { - target->sta_id = IWL_MVM_INVALID_STA; + *sta_id = IWL_MVM_INVALID_STA; } + return 0; +} + +static int +iwl_mvm_ftm_put_target(struct iwl_mvm *mvm, struct ieee80211_vif *vif, + struct cfg80211_pmsr_request_peer *peer, + struct iwl_tof_range_req_ap_entry_v6 *target) +{ + int ret; + + ret = iwl_mvm_ftm_target_chandef_v2(mvm, peer, &target->channel_num, + &target->format_bw, + &target->ctrl_ch_position); + if (ret) + return ret; + + iwl_mvm_ftm_put_target_common(mvm, peer, target); + + iwl_mvm_ftm_set_sta(mvm, vif, peer, &target->sta_id, + &target->initiator_ap_flags); + /* * TODO: Beacon interval is currently unknown, so use the common value * of 100 TUs. @@ -703,27 +725,24 @@ static void iter(struct ieee80211_hw *hw, struct ieee80211_key_conf *key, void *data) { - struct iwl_tof_range_req_ap_entry_v6 *target = data; + struct iwl_mvm_ftm_iter_data *target = data; if (!sta || memcmp(sta->addr, target->bssid, ETH_ALEN)) return; WARN_ON(!sta->mfp); - if (WARN_ON(key->keylen > sizeof(target->tk))) - return; - - memcpy(target->tk, key->key, key->keylen); - target->cipher = iwl_mvm_cipher_to_location_cipher(key->cipher); - WARN_ON(target->cipher == IWL_LOCATION_CIPHER_INVALID); + target->tk = key->key; + *target->cipher = iwl_mvm_cipher_to_location_cipher(key->cipher); + WARN_ON(*target->cipher == IWL_LOCATION_CIPHER_INVALID); } static void iwl_mvm_ftm_set_secured_ranging(struct iwl_mvm *mvm, struct ieee80211_vif *vif, - struct iwl_tof_range_req_ap_entry_v7 *target) + u8 *bssid, u8 *cipher, u8 *hltk, u8 *tk, + u8 *rx_pn, u8 *tx_pn, __le32 *flags) { struct iwl_mvm_ftm_pasn_entry *entry; - u32 flags = le32_to_cpu(target->initiator_ap_flags); #ifdef CONFIG_IWLWIFI_DEBUGFS struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); @@ -731,35 +750,37 @@ iwl_mvm_ftm_set_secured_ranging(struct iwl_mvm *mvm, struct ieee80211_vif *vif, return; #endif - if (!(flags & (IWL_INITIATOR_AP_FLAGS_NON_TB | + if (!(le32_to_cpu(*flags) & (IWL_INITIATOR_AP_FLAGS_NON_TB | IWL_INITIATOR_AP_FLAGS_TB))) return; lockdep_assert_held(&mvm->mutex); list_for_each_entry(entry, &mvm->ftm_initiator.pasn_list, list) { - if (memcmp(entry->addr, target->bssid, sizeof(entry->addr))) + if (memcmp(entry->addr, bssid, sizeof(entry->addr))) continue; - target->cipher = entry->cipher; + *cipher = entry->cipher; if (entry->flags & IWL_MVM_PASN_FLAG_HAS_HLTK) - memcpy(target->hltk, entry->hltk, sizeof(target->hltk)); + memcpy(hltk, entry->hltk, sizeof(entry->hltk)); else - memset(target->hltk, 0, sizeof(target->hltk)); + memset(hltk, 0, sizeof(entry->hltk)); if (vif->cfg.assoc && - 
!memcmp(vif->bss_conf.bssid, target->bssid, - sizeof(target->bssid))) - ieee80211_iter_keys(mvm->hw, vif, iter, target); - else - memcpy(target->tk, entry->tk, sizeof(target->tk)); + !memcmp(vif->bss_conf.bssid, bssid, ETH_ALEN)) { + struct iwl_mvm_ftm_iter_data target; + + target.bssid = bssid; + ieee80211_iter_keys(mvm->hw, vif, iter, &target); + } else { + memcpy(tk, entry->tk, sizeof(entry->tk)); + } - memcpy(target->rx_pn, entry->rx_pn, sizeof(target->rx_pn)); - memcpy(target->tx_pn, entry->tx_pn, sizeof(target->tx_pn)); + memcpy(rx_pn, entry->rx_pn, sizeof(entry->rx_pn)); + memcpy(tx_pn, entry->tx_pn, sizeof(entry->tx_pn)); - target->initiator_ap_flags |= - cpu_to_le32(IWL_INITIATOR_AP_FLAGS_SECURED); + FTM_SET_FLAG(SECURED); return; } } @@ -773,7 +794,11 @@ iwl_mvm_ftm_put_target_v7(struct iwl_mvm *mvm, struct ieee80211_vif *vif, if (err) return err; - iwl_mvm_ftm_set_secured_ranging(mvm, vif, target); + iwl_mvm_ftm_set_secured_ranging(mvm, vif, target->bssid, + &target->cipher, target->hltk, + target->tk, target->rx_pn, + target->tx_pn, + &target->initiator_ap_flags); return err; } @@ -920,6 +945,105 @@ static int iwl_mvm_ftm_start_v13(struct iwl_mvm *mvm, return iwl_mvm_ftm_send_cmd(mvm, &hcmd); } +static int +iwl_mvm_ftm_put_target_v10(struct iwl_mvm *mvm, struct ieee80211_vif *vif, + struct cfg80211_pmsr_request_peer *peer, + struct iwl_tof_range_req_ap_entry_v10 *target) +{ + u32 i2r_max_sts, flags; + int ret; + + ret = iwl_mvm_ftm_target_chandef_v2(mvm, peer, &target->channel_num, + &target->format_bw, + &target->ctrl_ch_position); + if (ret) + return ret; + + memcpy(target->bssid, peer->addr, ETH_ALEN); + target->burst_period = + cpu_to_le16(peer->ftm.burst_period); + target->samples_per_burst = peer->ftm.ftms_per_burst; + target->num_of_bursts = peer->ftm.num_bursts_exp; + iwl_mvm_ftm_set_target_flags(mvm, peer, &target->initiator_ap_flags); + iwl_mvm_ftm_set_sta(mvm, vif, peer, &target->sta_id, + &target->initiator_ap_flags); + iwl_mvm_ftm_set_secured_ranging(mvm, vif, target->bssid, + &target->cipher, target->hltk, + target->tk, target->rx_pn, + target->tx_pn, + &target->initiator_ap_flags); + + i2r_max_sts = IWL_MVM_FTM_I2R_MAX_STS > 1 ? 1 : + IWL_MVM_FTM_I2R_MAX_STS; + + target->r2i_ndp_params = IWL_MVM_FTM_R2I_MAX_REP | + (IWL_MVM_FTM_R2I_MAX_STS << IWL_LOCATION_MAX_STS_POS) | + (IWL_MVM_FTM_R2I_MAX_TOTAL_LTF << IWL_LOCATION_TOTAL_LTF_POS); + target->i2r_ndp_params = IWL_MVM_FTM_I2R_MAX_REP | + (i2r_max_sts << IWL_LOCATION_MAX_STS_POS) | + (IWL_MVM_FTM_I2R_MAX_TOTAL_LTF << IWL_LOCATION_TOTAL_LTF_POS); + + if (peer->ftm.non_trigger_based) { + target->min_time_between_msr = + cpu_to_le16(IWL_MVM_FTM_NON_TB_MIN_TIME_BETWEEN_MSR); + target->burst_period = + cpu_to_le16(IWL_MVM_FTM_NON_TB_MAX_TIME_BETWEEN_MSR); + } else { + target->min_time_between_msr = cpu_to_le16(0); + } + + target->band = + iwl_mvm_phy_band_from_nl80211(peer->chandef.chan->band); + + /* + * TODO: Beacon interval is currently unknown, so use the common value + * of 100 TUs. 
+ */ + target->beacon_interval = cpu_to_le16(100); + + /* + * If secure LTF is turned off, replace the flag with PMF only + */ + flags = le32_to_cpu(target->initiator_ap_flags); + if (flags & IWL_INITIATOR_AP_FLAGS_SECURED) { + if (!IWL_MVM_FTM_INITIATOR_SECURE_LTF) + flags &= ~IWL_INITIATOR_AP_FLAGS_SECURED; + + flags |= IWL_INITIATOR_AP_FLAGS_PMF; + target->initiator_ap_flags = cpu_to_le32(flags); + } + + return 0; +} + +static int iwl_mvm_ftm_start_v14(struct iwl_mvm *mvm, + struct ieee80211_vif *vif, + struct cfg80211_pmsr_request *req) +{ + struct iwl_tof_range_req_cmd_v14 cmd; + struct iwl_host_cmd hcmd = { + .id = WIDE_ID(LOCATION_GROUP, TOF_RANGE_REQ_CMD), + .dataflags[0] = IWL_HCMD_DFL_DUP, + .data[0] = &cmd, + .len[0] = sizeof(cmd), + }; + u8 i; + int err; + + iwl_mvm_ftm_cmd_common(mvm, vif, (void *)&cmd, req); + + for (i = 0; i < cmd.num_of_ap; i++) { + struct cfg80211_pmsr_request_peer *peer = &req->peers[i]; + struct iwl_tof_range_req_ap_entry_v10 *target = &cmd.ap[i]; + + err = iwl_mvm_ftm_put_target_v10(mvm, vif, peer, target); + if (err) + return err; + } + + return iwl_mvm_ftm_send_cmd(mvm, &hcmd); +} + int iwl_mvm_ftm_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, struct cfg80211_pmsr_request *req) { @@ -938,6 +1062,9 @@ int iwl_mvm_ftm_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, IWL_FW_CMD_VER_UNKNOWN); switch (cmd_ver) { + case 14: + err = iwl_mvm_ftm_start_v14(mvm, vif, req); + break; case 13: err = iwl_mvm_ftm_start_v13(mvm, vif, req); break; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c index 8e760300a1ab..e4caa362f597 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* * Copyright (C) 2015-2017 Intel Deutschland GmbH - * Copyright (C) 2018-2023 Intel Corporation + * Copyright (C) 2018-2024 Intel Corporation */ #include <net/cfg80211.h> #include <linux/etherdevice.h> @@ -88,7 +88,7 @@ static int iwl_mvm_ftm_responder_set_bw_v2(struct cfg80211_chan_def *chandef, static void iwl_mvm_ftm_responder_set_ndp(struct iwl_mvm *mvm, - struct iwl_tof_responder_config_cmd_v9 *cmd) + struct iwl_tof_responder_config_cmd *cmd) { /* Up to 2 R2I STS are allowed on the responder */ u32 r2i_max_sts = IWL_MVM_FTM_R2I_MAX_STS < 2 ? @@ -117,7 +117,7 @@ iwl_mvm_ftm_responder_cmd(struct iwl_mvm *mvm, * field interpretation is different), so the same struct can be use * for all cases. 
*/ - struct iwl_tof_responder_config_cmd_v9 cmd = { + struct iwl_tof_responder_config_cmd cmd = { .channel_num = chandef->chan->hw_value, .cmd_valid_fields = cpu_to_le32(IWL_TOF_RESPONDER_CMD_VALID_CHAN_INFO | @@ -131,8 +131,13 @@ iwl_mvm_ftm_responder_cmd(struct iwl_mvm *mvm, lockdep_assert_held(&mvm->mutex); + if (cmd_ver == 10) { + cmd.band = + iwl_mvm_phy_band_from_nl80211(chandef->chan->band); + } + /* Use a default of bss_color=1 for now */ - if (cmd_ver == 9) { + if (cmd_ver >= 9) { cmd.cmd_valid_fields |= cpu_to_le32(IWL_TOF_RESPONDER_CMD_VALID_BSS_COLOR | IWL_TOF_RESPONDER_CMD_VALID_MIN_MAX_TIME_BETWEEN_MSR); @@ -148,7 +153,7 @@ iwl_mvm_ftm_responder_cmd(struct iwl_mvm *mvm, } if (cmd_ver >= 8) - iwl_mvm_ftm_responder_set_ndp(mvm, &cmd); + iwl_mvm_ftm_responder_set_ndp(mvm, (void *)&cmd); if (cmd_ver >= 7) err = iwl_mvm_ftm_responder_set_bw_v2(chandef, &cmd.format_bw, diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c index f4937a100cbe..08c4898c8f1a 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c @@ -28,9 +28,6 @@ #define MVM_UCODE_ALIVE_TIMEOUT (2 * HZ) #define MVM_UCODE_CALIB_TIMEOUT (2 * HZ) -#define IWL_UATS_VLP_AP_SUPPORTED BIT(29) -#define IWL_UATS_AFC_AP_SUPPORTED BIT(30) - struct iwl_mvm_alive_data { bool valid; u32 scd_base_addr; @@ -408,7 +405,7 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, UREG_LMAC2_CURRENT_PC)); } - if (ret == -ETIMEDOUT && !mvm->pldr_sync) + if (ret == -ETIMEDOUT && !mvm->fw_product_reset) iwl_fw_dbg_error_collect(&mvm->fwrt, FW_DBG_TRIGGER_ALIVE_TIMEOUT); @@ -460,12 +457,14 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, #endif /* + * For pre-MLD API (MLD API doesn't use the timestamps): * All the BSSes in the BSS table include the GP2 in the system * at the beacon Rx time, this is of course no longer relevant * since we are resetting the firmware. * Purge all the BSS table. 
*/ - cfg80211_bss_flush(mvm->hw->wiphy); + if (!mvm->mld_api_is_used) + cfg80211_bss_flush(mvm->hw->wiphy); return 0; } @@ -491,17 +490,11 @@ static void iwl_mvm_uats_init(struct iwl_mvm *mvm) .dataflags[0] = IWL_HCMD_DFL_NOCOPY, }; - if (!(mvm->trans->trans_cfg->device_family >= - IWL_DEVICE_FAMILY_AX210)) { + if (mvm->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210) { IWL_DEBUG_RADIO(mvm, "UATS feature is not supported\n"); return; } - if (!mvm->fwrt.uats_enabled) { - IWL_DEBUG_RADIO(mvm, "UATS feature is disabled\n"); - return; - } - cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd.id, IWL_FW_CMD_VER_UNKNOWN); if (cmd_ver != 1) { @@ -513,7 +506,7 @@ static void iwl_mvm_uats_init(struct iwl_mvm *mvm) ret = iwl_uefi_get_uats_table(mvm->trans, &mvm->fwrt); if (ret < 0) { - IWL_ERR(mvm, "failed to read UATS table (%d)\n", ret); + IWL_DEBUG_FW(mvm, "failed to read UATS table (%d)\n", ret); return; } @@ -627,8 +620,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm) if (mvm->trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) { sb_cfg = iwl_read_umac_prph(mvm->trans, SB_MODIFY_CFG_FLAG); /* if needed, we'll reset this on our way out later */ - mvm->pldr_sync = sb_cfg == SB_CFG_RESIDES_IN_ROM; - if (mvm->pldr_sync && iwl_mei_pldr_req()) + mvm->fw_product_reset = sb_cfg == SB_CFG_RESIDES_IN_ROM; + if (mvm->fw_product_reset && iwl_mei_pldr_req()) return -EBUSY; } @@ -647,7 +640,7 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm) IWL_ERR(mvm, "Failed to start RT ucode: %d\n", ret); /* if we needed reset then fail here, but notify and remove */ - if (mvm->pldr_sync) { + if (mvm->fw_product_reset) { iwl_mei_alive_notif(false); iwl_trans_pcie_remove(mvm->trans, true); } @@ -686,14 +679,6 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm) goto error; } - if (IWL_MVM_PARSE_NVM && !mvm->nvm_data) { - ret = iwl_nvm_init(mvm); - if (ret) { - IWL_ERR(mvm, "Failed to read NVM: %d\n", ret); - goto error; - } - } - ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(REGULATORY_AND_NVM_GROUP, NVM_ACCESS_COMPLETE), CMD_SEND_IN_RFKILL, @@ -718,7 +703,7 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm) return ret; /* Read the NVM only at driver load time, no need to do this twice */ - if (!IWL_MVM_PARSE_NVM && !mvm->nvm_data) { + if (!mvm->nvm_data) { mvm->nvm_data = iwl_get_nvm(mvm->trans, mvm->fw, mvm->set_tx_ant, mvm->set_rx_ant); if (IS_ERR(mvm->nvm_data)) { @@ -843,7 +828,7 @@ remove_notif: iwl_remove_notification(&mvm->notif_wait, &calib_wait); out: mvm->rfkill_safe_init_done = false; - if (iwlmvm_mod_params.init_dbg && !mvm->nvm_data) { + if (!mvm->nvm_data) { /* we want to debug INIT and we have no NVM - fake */ mvm->nvm_data = kzalloc(sizeof(struct iwl_nvm_data) + sizeof(struct ieee80211_channel) + @@ -1231,10 +1216,6 @@ static void iwl_mvm_lari_cfg(struct iwl_mvm *mvm) "Failed to send LARI_CONFIG_CHANGE (%d)\n", ret); } - - if (le32_to_cpu(cmd.oem_uhb_allow_bitmap) & IWL_UATS_VLP_AP_SUPPORTED || - le32_to_cpu(cmd.oem_uhb_allow_bitmap) & IWL_UATS_AFC_AP_SUPPORTED) - mvm->fwrt.uats_enabled = true; } void iwl_mvm_get_bios_tables(struct iwl_mvm *mvm) @@ -1376,9 +1357,6 @@ static int iwl_mvm_load_rt_fw(struct iwl_mvm *mvm) if (ret) { IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret); - - if (iwlmvm_mod_params.init_dbg) - return 0; return ret; } @@ -1415,14 +1393,14 @@ int iwl_mvm_up(struct iwl_mvm *mvm) ret = iwl_mvm_load_rt_fw(mvm); if (ret) { IWL_ERR(mvm, "Failed to start RT ucode: %d\n", ret); - if (ret != -ERFKILL && !mvm->pldr_sync) + if (ret != -ERFKILL 
&& !mvm->fw_product_reset) iwl_fw_dbg_error_collect(&mvm->fwrt, FW_DBG_TRIGGER_DRIVER); goto error; } /* FW loaded successfully */ - mvm->pldr_sync = false; + mvm->fw_product_reset = false; iwl_fw_disable_dbg_asserts(&mvm->fwrt); iwl_get_shared_mem_conf(&mvm->fwrt); @@ -1489,8 +1467,6 @@ int iwl_mvm_up(struct iwl_mvm *mvm) for (i = 0; i < IWL_MVM_FW_MAX_LINK_ID + 1; i++) RCU_INIT_POINTER(mvm->link_id_to_link_conf[i], NULL); - memset(&mvm->fw_link_ids_map, 0, sizeof(mvm->fw_link_ids_map)); - mvm->tdls_cs.peer.sta_id = IWL_MVM_INVALID_STA; /* reset quota debouncing buffer - 0xff will yield invalid data */ @@ -1619,8 +1595,7 @@ int iwl_mvm_up(struct iwl_mvm *mvm) IWL_DEBUG_INFO(mvm, "RT uCode started.\n"); return 0; error: - if (!iwlmvm_mod_params.init_dbg || !ret) - iwl_mvm_stop_device(mvm); + iwl_mvm_stop_device(mvm); return ret; } diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/link.c b/drivers/net/wireless/intel/iwlwifi/mvm/link.c index 6ec9a8e21a34..a9929aa49913 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/link.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/link.c @@ -11,6 +11,7 @@ HOW(BLOCKED_TPT) \ HOW(BLOCKED_FW) \ HOW(BLOCKED_NON_BSS) \ + HOW(BLOCKED_ROC) \ HOW(EXIT_MISSED_BEACON) \ HOW(EXIT_LOW_RSSI) \ HOW(EXIT_COEX) \ @@ -50,26 +51,15 @@ static void iwl_mvm_print_esr_state(struct iwl_mvm *mvm, u32 mask) static u32 iwl_mvm_get_free_fw_link_id(struct iwl_mvm *mvm, struct iwl_mvm_vif *mvm_vif) { - u32 link_id; + u32 i; lockdep_assert_held(&mvm->mutex); - link_id = ffz(mvm->fw_link_ids_map); + for (i = 0; i < ARRAY_SIZE(mvm->link_id_to_link_conf); i++) + if (!rcu_access_pointer(mvm->link_id_to_link_conf[i])) + return i; - /* this case can happen if there're deactivated but not removed links */ - if (link_id > IWL_MVM_FW_MAX_LINK_ID) - return IWL_MVM_FW_LINK_ID_INVALID; - - mvm->fw_link_ids_map |= BIT(link_id); - return link_id; -} - -static void iwl_mvm_release_fw_link_id(struct iwl_mvm *mvm, u32 link_id) -{ - lockdep_assert_held(&mvm->mutex); - - if (!WARN_ON(link_id > IWL_MVM_FW_MAX_LINK_ID)) - mvm->fw_link_ids_map &= ~BIT(link_id); + return IWL_MVM_FW_LINK_ID_INVALID; } static int iwl_mvm_link_cmd_send(struct iwl_mvm *mvm, @@ -380,7 +370,6 @@ int iwl_mvm_unset_link_mapping(struct iwl_mvm *mvm, struct ieee80211_vif *vif, RCU_INIT_POINTER(mvm->link_id_to_link_conf[link_info->fw_link_id], NULL); - iwl_mvm_release_fw_link_id(mvm, link_info->fw_link_id); return 0; } @@ -504,17 +493,27 @@ iwl_mvm_get_puncturing_factor(const struct ieee80211_bss_conf *link_conf) static unsigned int iwl_mvm_get_chan_load(struct ieee80211_bss_conf *link_conf) { + struct ieee80211_vif *vif = link_conf->vif; struct iwl_mvm_vif_link_info *mvm_link = iwl_mvm_vif_from_mac80211(link_conf->vif)->link[link_conf->link_id]; const struct element *bss_load_elem; const struct ieee80211_bss_load_elem *bss_load; enum nl80211_band band = link_conf->chanreq.oper.chan->band; + const struct cfg80211_bss_ies *ies; unsigned int chan_load; u32 chan_load_by_us; rcu_read_lock(); - bss_load_elem = ieee80211_bss_get_elem(link_conf->bss, - WLAN_EID_QBSS_LOAD); + if (ieee80211_vif_link_active(vif, link_conf->link_id)) + ies = rcu_dereference(link_conf->bss->beacon_ies); + else + ies = rcu_dereference(link_conf->bss->ies); + + if (ies) + bss_load_elem = cfg80211_find_elem(WLAN_EID_QBSS_LOAD, + ies->data, ies->len); + else + bss_load_elem = NULL; /* If there isn't BSS Load element, take the defaults */ if (!bss_load_elem || @@ -978,6 +977,9 @@ void iwl_mvm_exit_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 
lockdep_assert_held(&mvm->mutex); + if (!IWL_MVM_AUTO_EML_ENABLE) + return; + /* Nothing to do */ if (!mvmvif->esr_active) return; @@ -1025,19 +1027,24 @@ void iwl_mvm_block_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif, lockdep_assert_held(&mvm->mutex); + if (!IWL_MVM_AUTO_EML_ENABLE) + return; + /* This should be called only with disable reasons */ if (WARN_ON(!(reason & IWL_MVM_BLOCK_ESR_REASONS))) return; - if (!(mvmvif->esr_disable_reason & reason)) { - IWL_DEBUG_INFO(mvm, - "Blocking EMLSR mode. reason = %s (0x%x)\n", - iwl_get_esr_state_string(reason), reason); - iwl_mvm_print_esr_state(mvm, mvmvif->esr_disable_reason); - } + if (mvmvif->esr_disable_reason & reason) + return; + + IWL_DEBUG_INFO(mvm, + "Blocking EMLSR mode. reason = %s (0x%x)\n", + iwl_get_esr_state_string(reason), reason); mvmvif->esr_disable_reason |= reason; + iwl_mvm_print_esr_state(mvm, mvmvif->esr_disable_reason); + iwl_mvm_exit_esr(mvm, vif, reason, link_to_keep); } @@ -1082,6 +1089,15 @@ static void iwl_mvm_esr_unblocked(struct iwl_mvm *mvm, IWL_DEBUG_INFO(mvm, "EMLSR is unblocked\n"); + /* If we exited due to an EXIT reason, and the exit was in less than + * 30 seconds, then a MLO scan was scheduled already. + */ + if (!need_new_sel && + !(mvmvif->last_esr_exit.reason & IWL_MVM_BLOCK_ESR_REASONS)) { + IWL_DEBUG_INFO(mvm, "Wait for MLO scan\n"); + return; + } + /* * If EMLSR was blocked for more than 30 seconds, or the last link * selection decided to not enter EMLSR, trigger a new scan. @@ -1111,6 +1127,9 @@ void iwl_mvm_unblock_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif, lockdep_assert_held(&mvm->mutex); + if (!IWL_MVM_AUTO_EML_ENABLE) + return; + /* This should be called only with disable reasons */ if (WARN_ON(!(reason & IWL_MVM_BLOCK_ESR_REASONS))) return; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c index 5144fa0f96b0..dfcc96f18b4f 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c @@ -296,6 +296,7 @@ int iwl_mvm_mac_ctxt_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif) INIT_LIST_HEAD(&mvmvif->time_event_data.list); mvmvif->time_event_data.id = TE_MAX; + mvmvif->roc_activity = ROC_NUM_ACTIVITIES; mvmvif->deflink.bcast_sta.sta_id = IWL_MVM_INVALID_STA; mvmvif->deflink.mcast_sta.sta_id = IWL_MVM_INVALID_STA; @@ -1010,12 +1011,13 @@ static void iwl_mvm_mac_ctxt_set_tx(struct iwl_mvm *mvm, tx->tx_flags = cpu_to_le32(tx_flags); if (!fw_has_capa(&mvm->fw->ucode_capa, - IWL_UCODE_TLV_CAPA_BEACON_ANT_SELECTION)) + IWL_UCODE_TLV_CAPA_BEACON_ANT_SELECTION)) { iwl_mvm_toggle_tx_ant(mvm, &mvm->mgmt_last_antenna_idx); - tx->rate_n_flags = - cpu_to_le32(BIT(mvm->mgmt_last_antenna_idx) << - RATE_MCS_ANT_POS); + tx->rate_n_flags = + cpu_to_le32(BIT(mvm->mgmt_last_antenna_idx) << + RATE_MCS_ANT_POS); + } rate = iwl_mvm_mac_ctxt_get_beacon_rate(mvm, info, vif); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c index dac6155ae1bd..835a05b91833 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c @@ -22,7 +22,7 @@ #include "mvm.h" #include "sta.h" #include "time-event.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "iwl-phy-db.h" #include "testmode.h" #include "fw/error-dump.h" @@ -30,21 +30,28 @@ #include "iwl-nvm-parse.h" #include "time-sync.h" +#define IWL_MVM_LIMITS(ap) \ + { \ + .max = 1, \ + .types = 
BIT(NL80211_IFTYPE_STATION), \ + }, \ + { \ + .max = 1, \ + .types = ap | \ + BIT(NL80211_IFTYPE_P2P_CLIENT) | \ + BIT(NL80211_IFTYPE_P2P_GO), \ + }, \ + { \ + .max = 1, \ + .types = BIT(NL80211_IFTYPE_P2P_DEVICE), \ + } + static const struct ieee80211_iface_limit iwl_mvm_limits[] = { - { - .max = 1, - .types = BIT(NL80211_IFTYPE_STATION), - }, - { - .max = 1, - .types = BIT(NL80211_IFTYPE_AP) | - BIT(NL80211_IFTYPE_P2P_CLIENT) | - BIT(NL80211_IFTYPE_P2P_GO), - }, - { - .max = 1, - .types = BIT(NL80211_IFTYPE_P2P_DEVICE), - }, + IWL_MVM_LIMITS(0) +}; + +static const struct ieee80211_iface_limit iwl_mvm_limits_ap[] = { + IWL_MVM_LIMITS(BIT(NL80211_IFTYPE_AP)) }; static const struct ieee80211_iface_combination iwl_mvm_iface_combinations[] = { @@ -54,6 +61,12 @@ static const struct ieee80211_iface_combination iwl_mvm_iface_combinations[] = { .limits = iwl_mvm_limits, .n_limits = ARRAY_SIZE(iwl_mvm_limits), }, + { + .num_different_channels = 1, + .max_interfaces = 3, + .limits = iwl_mvm_limits_ap, + .n_limits = ARRAY_SIZE(iwl_mvm_limits_ap), + }, }; static const struct cfg80211_pmsr_capabilities iwl_mvm_pmsr_capa = { @@ -138,8 +151,7 @@ struct ieee80211_regdomain *iwl_mvm_get_regdomain(struct wiphy *wiphy, resp->channels, __le16_to_cpu(resp->mcc), __le16_to_cpu(resp->geo_info), - le32_to_cpu(resp->cap), resp_ver, - mvm->fwrt.uats_enabled); + le32_to_cpu(resp->cap), resp_ver); /* Store the return source id */ src_id = resp->source_id; if (IS_ERR_OR_NULL(regd)) { @@ -360,7 +372,7 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm) if (mvm->mld_api_is_used && mvm->nvm_data->sku_cap_11be_enable && !iwlwifi_mod_params.disable_11ax && !iwlwifi_mod_params.disable_11be) { - hw->wiphy->flags |= WIPHY_FLAG_DISABLE_WEXT; + hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_MLO; /* we handle this already earlier, but need it for MLO */ ieee80211_hw_set(hw, HANDLES_QUIET_CSA); } @@ -371,12 +383,6 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm) if (!mvm->mld_api_is_used) ieee80211_hw_set(hw, TIMING_BEACON_ONLY); - /* We should probably have this, but mac80211 - * currently doesn't support it for MLO. 
- */ - if (!(hw->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO)) - ieee80211_hw_set(hw, DEAUTH_NEED_MGD_TX_PREP); - /* * On older devices, enabling TX A-MSDU occasionally leads to * something getting messed up, the command read from the FIFO @@ -579,13 +585,13 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm) hw->wiphy->max_scan_ssids = PROBE_OPTION_MAX; BUILD_BUG_ON(IWL_MVM_SCAN_STOPPING_MASK & IWL_MVM_SCAN_MASK); - BUILD_BUG_ON(IWL_MVM_MAX_UMAC_SCANS > HWEIGHT32(IWL_MVM_SCAN_MASK) || - IWL_MVM_MAX_LMAC_SCANS > HWEIGHT32(IWL_MVM_SCAN_MASK)); + BUILD_BUG_ON(IWL_MAX_UMAC_SCANS > HWEIGHT32(IWL_MVM_SCAN_MASK) || + IWL_MAX_LMAC_SCANS > HWEIGHT32(IWL_MVM_SCAN_MASK)); if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_UMAC_SCAN)) - mvm->max_scans = IWL_MVM_MAX_UMAC_SCANS; + mvm->max_scans = IWL_MAX_UMAC_SCANS; else - mvm->max_scans = IWL_MVM_MAX_LMAC_SCANS; + mvm->max_scans = IWL_MAX_LMAC_SCANS; if (mvm->nvm_data->bands[NL80211_BAND_2GHZ].n_channels) hw->wiphy->bands[NL80211_BAND_2GHZ] = @@ -727,8 +733,6 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm) #ifdef CONFIG_PM_SLEEP if ((unified || mvm->fw->img[IWL_UCODE_WOWLAN].num_sec) && - mvm->trans->ops->d3_suspend && - mvm->trans->ops->d3_resume && device_can_wakeup(mvm->trans->dev)) { mvm->wowlan.flags |= WIPHY_WOWLAN_MAGIC_PKT | WIPHY_WOWLAN_DISCONNECT | @@ -823,7 +827,7 @@ void iwl_mvm_mac_tx(struct ieee80211_hw *hw, } if (offchannel && - !test_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status) && + !test_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status) && !test_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status)) goto drop; @@ -1104,6 +1108,8 @@ static void iwl_mvm_cleanup_iterator(void *data, u8 *mac, iwl_mvm_te_clear_data(mvm, &mvmvif->time_event_data); spin_unlock_bh(&mvm->time_event_lock); + mvmvif->roc_activity = ROC_NUM_ACTIVITIES; + mvmvif->bf_enabled = false; mvmvif->ba_enabled = false; mvmvif->ap_sta = NULL; @@ -1209,6 +1215,7 @@ static void iwl_mvm_restart_cleanup(struct iwl_mvm *mvm) int __iwl_mvm_mac_start(struct iwl_mvm *mvm) { + bool fast_resume = false; int ret; lockdep_assert_held(&mvm->mutex); @@ -1234,6 +1241,30 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm) mvm->nvm_data = NULL; } +#ifdef CONFIG_PM + /* fast_resume will be cleared by iwl_mvm_fast_resume */ + fast_resume = mvm->fast_resume; + + if (fast_resume) { + ret = iwl_mvm_fast_resume(mvm); + if (ret) { + iwl_mvm_stop_device(mvm); + /* iwl_mvm_up() will be called further down */ + } else { + /* + * We clear IWL_MVM_STATUS_FIRMWARE_RUNNING upon + * mac_down() so that debugfs will stop honoring + * requests after we flush all the workers. + * Set the IWL_MVM_STATUS_FIRMWARE_RUNNING bit again + * now that we are back. This is a bit abusing the + * flag since the firmware wasn't really ever stopped, + * but this still serves the purpose. 
+ */ + set_bit(IWL_MVM_STATUS_FIRMWARE_RUNNING, &mvm->status); + } + } +#endif /* CONFIG_PM */ + if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status)) { /* * Now convert the HW_RESTART_REQUESTED flag to IN_HW_RESTART @@ -1244,7 +1275,10 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm) /* Clean up some internal and mac80211 state on restart */ iwl_mvm_restart_cleanup(mvm); } - ret = iwl_mvm_up(mvm); + + /* we also want to load the firmware if fast_resume failed */ + if (!fast_resume || ret) + ret = iwl_mvm_up(mvm); iwl_dbg_tlv_time_point(&mvm->fwrt, IWL_FW_INI_TIME_POINT_POST_INIT, NULL); @@ -1295,7 +1329,7 @@ static void iwl_mvm_restart_complete(struct iwl_mvm *mvm) { int ret; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status); @@ -1311,8 +1345,6 @@ static void iwl_mvm_restart_complete(struct iwl_mvm *mvm) * of packets the FW sent out, so we must reconnect. */ iwl_mvm_teardown_tdls_peers(mvm); - - mutex_unlock(&mvm->mutex); } void iwl_mvm_mac_reconfig_complete(struct ieee80211_hw *hw, @@ -1329,7 +1361,7 @@ void iwl_mvm_mac_reconfig_complete(struct ieee80211_hw *hw, } } -void __iwl_mvm_mac_stop(struct iwl_mvm *mvm) +void __iwl_mvm_mac_stop(struct iwl_mvm *mvm, bool suspend) { lockdep_assert_held(&mvm->mutex); @@ -1345,7 +1377,11 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm) if (!iwl_mvm_has_new_station_api(mvm->fw)) iwl_mvm_rm_aux_sta(mvm); - iwl_mvm_stop_device(mvm); + if (suspend && + mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) + iwl_mvm_fast_suspend(mvm); + else + iwl_mvm_stop_device(mvm); iwl_mvm_async_handlers_purge(mvm); /* async_handlers_list is empty and will stay empty: HW is stopped */ @@ -1378,7 +1414,7 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm) } } -void iwl_mvm_mac_stop(struct ieee80211_hw *hw) +void iwl_mvm_mac_stop(struct ieee80211_hw *hw, bool suspend) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); @@ -1414,7 +1450,7 @@ void iwl_mvm_mac_stop(struct ieee80211_hw *hw) iwl_mvm_mei_set_sw_rfkill_state(mvm); mutex_lock(&mvm->mutex); - __iwl_mvm_mac_stop(mvm); + __iwl_mvm_mac_stop(mvm, suspend); mutex_unlock(&mvm->mutex); /* @@ -1659,9 +1695,8 @@ static void iwl_mvm_prevent_esr_done_wk(struct wiphy *wiphy, struct ieee80211_vif *vif = container_of((void *)mvmvif, struct ieee80211_vif, drv_priv); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_PREVENTION); - mutex_unlock(&mvm->mutex); } static void iwl_mvm_mlo_int_scan_wk(struct wiphy *wiphy, struct wiphy_work *wk) @@ -1671,11 +1706,8 @@ static void iwl_mvm_mlo_int_scan_wk(struct wiphy *wiphy, struct wiphy_work *wk) struct ieee80211_vif *vif = container_of((void *)mvmvif, struct ieee80211_vif, drv_priv); - mutex_lock(&mvmvif->mvm->mutex); - + guard(mvm)(mvmvif->mvm); iwl_mvm_int_mlo_scan(mvmvif->mvm, vif); - - mutex_unlock(&mvmvif->mvm->mutex); } static void iwl_mvm_unblock_esr_tpt(struct wiphy *wiphy, struct wiphy_work *wk) @@ -1686,9 +1718,8 @@ static void iwl_mvm_unblock_esr_tpt(struct wiphy *wiphy, struct wiphy_work *wk) struct ieee80211_vif *vif = container_of((void *)mvmvif, struct ieee80211_vif, drv_priv); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_TPT); - mutex_unlock(&mvm->mutex); } void iwl_mvm_mac_init_mvmvif(struct iwl_mvm *mvm, struct iwl_mvm_vif *mvmvif) @@ -1863,12 +1894,8 @@ void iwl_mvm_prepare_mac_removal(struct iwl_mvm *mvm, cancel_delayed_work_sync(&mvmvif->csa_work); } -/* This function is doing the common part of removing 
the interface for - * both - MLD and non-MLD modes. Returns true if removing the interface - * is done - */ -static bool iwl_mvm_mac_remove_interface_common(struct ieee80211_hw *hw, - struct ieee80211_vif *vif) +static void iwl_mvm_mac_remove_interface(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); @@ -1916,21 +1943,10 @@ static bool iwl_mvm_mac_remove_interface_common(struct ieee80211_hw *hw, mvm->noa_duration = 0; } #endif - return true; + goto out; } iwl_mvm_power_update_mac(mvm); - return false; -} - -static void iwl_mvm_mac_remove_interface(struct ieee80211_hw *hw, - struct ieee80211_vif *vif) -{ - struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); - - if (iwl_mvm_mac_remove_interface_common(hw, vif)) - goto out; /* Before the interface removal, mac80211 would cancel the ROC, and the * ROC worker would be scheduled if needed. The worker would be flushed @@ -2078,7 +2094,7 @@ void iwl_mvm_configure_filter(struct ieee80211_hw *hw, struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); struct iwl_mcast_filter_cmd *cmd = (void *)(unsigned long)multicast; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); /* replace previous configuration */ kfree(mvm->mcast_filter_cmd); @@ -2095,7 +2111,6 @@ void iwl_mvm_configure_filter(struct ieee80211_hw *hw, iwl_mvm_recalc_multicast(mvm); out: - mutex_unlock(&mvm->mutex); *total_flags = 0; } @@ -2115,9 +2130,8 @@ static void iwl_mvm_config_iface_filter(struct ieee80211_hw *hw, !vif->p2p) return; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL); - mutex_unlock(&mvm->mutex); } int iwl_mvm_update_mu_groups(struct iwl_mvm *mvm, struct ieee80211_vif *vif) @@ -2779,6 +2793,13 @@ iwl_mvm_bss_info_changed_station_common(struct iwl_mvm *mvm, if (changes & BSS_CHANGED_BANDWIDTH) iwl_mvm_update_link_smps(vif, link_conf); + + if (changes & BSS_CHANGED_TPE) { + IWL_DEBUG_CALIB(mvm, "Changing TPE\n"); + iwl_mvm_send_ap_tx_power_constraint_cmd(mvm, vif, + link_conf, + false); + } } static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm, @@ -2828,6 +2849,8 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm, if (changes & BSS_CHANGED_ASSOC) { if (vif->cfg.assoc) { + mvmvif->session_prot_connection_loss = false; + /* clear statistics to get clean beacon counter */ iwl_mvm_request_statistics(mvm, true); for_each_mvm_vif_valid_link(mvmvif, i) @@ -3162,7 +3185,7 @@ static void iwl_mvm_stop_ap_ibss(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_stop_ap_ibss_common(mvm, vif); @@ -3192,8 +3215,6 @@ static void iwl_mvm_stop_ap_ibss(struct ieee80211_hw *hw, iwl_mvm_power_update_mac(mvm); iwl_mvm_mac_ctxt_remove(mvm, vif); - - mutex_unlock(&mvm->mutex); } static void iwl_mvm_stop_ap(struct ieee80211_hw *hw, @@ -3248,7 +3269,7 @@ static void iwl_mvm_bss_info_changed(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (changes & BSS_CHANGED_IDLE && !vif->cfg.idle) iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_SCHED, true); @@ -3275,25 +3296,19 @@ static void iwl_mvm_bss_info_changed(struct ieee80211_hw *hw, bss_conf->txpower); iwl_mvm_set_tx_power(mvm, vif, bss_conf->txpower); } - - mutex_unlock(&mvm->mutex); } int iwl_mvm_mac_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct 
ieee80211_scan_request *hw_req) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; if (hw_req->req.n_channels == 0 || hw_req->req.n_channels > mvm->fw->ucode_capa.n_scan_channels) return -EINVAL; - mutex_lock(&mvm->mutex); - ret = iwl_mvm_reg_scan_start(mvm, vif, &hw_req->req, &hw_req->ies); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return iwl_mvm_reg_scan_start(mvm, vif, &hw_req->req, &hw_req->ies); } void iwl_mvm_mac_cancel_hw_scan(struct ieee80211_hw *hw, @@ -3301,7 +3316,7 @@ void iwl_mvm_mac_cancel_hw_scan(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); /* Due to a race condition, it's possible that mac80211 asks * us to stop a hw_scan when it's already stopped. This can @@ -3312,8 +3327,6 @@ void iwl_mvm_mac_cancel_hw_scan(struct ieee80211_hw *hw, */ if (mvm->scan_status & IWL_MVM_SCAN_REGULAR) iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_REGULAR, true); - - mutex_unlock(&mvm->mutex); } void @@ -3482,7 +3495,7 @@ void iwl_mvm_sta_pre_rcu_remove(struct ieee80211_hw *hw, * Since there's mvm->mutex here, no need to have RCU lock for * mvm_sta->link access. */ - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); for (link_id = 0; link_id < ARRAY_SIZE(mvm_sta->link); link_id++) { struct iwl_mvm_link_sta *link_sta; u32 sta_id; @@ -3499,7 +3512,6 @@ void iwl_mvm_sta_pre_rcu_remove(struct ieee80211_hw *hw, RCU_INIT_POINTER(mvm->fw_id_to_link_sta[sta_id], NULL); } } - mutex_unlock(&mvm->mutex); } static void iwl_mvm_check_uapsd(struct iwl_mvm *mvm, struct ieee80211_vif *vif, @@ -3775,8 +3787,6 @@ static void iwl_mvm_rs_rate_init_all_links(struct iwl_mvm *mvm, } } -#define IWL_MVM_MIN_BEACON_INTERVAL_TU 16 - static bool iwl_mvm_vif_conf_from_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif, struct ieee80211_sta *sta) @@ -4246,12 +4256,8 @@ static int iwl_mvm_mac_conf_tx(struct ieee80211_hw *hw, * The exception is P2P_DEVICE interface which needs immediate update. 
*/ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) { - int ret; - - mutex_lock(&mvm->mutex); - ret = iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL); - mutex_unlock(&mvm->mutex); - return ret; + guard(mvm)(mvm); + return iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL); } return 0; } @@ -4260,11 +4266,14 @@ void iwl_mvm_mac_mgd_prepare_tx(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_prep_tx_info *info) { + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + if (info->was_assoc && !mvmvif->session_prot_connection_loss) + return; + + guard(mvm)(mvm); iwl_mvm_protect_assoc(mvm, vif, info->duration, info->link_id); - mutex_unlock(&mvm->mutex); } void iwl_mvm_mac_mgd_complete_tx(struct ieee80211_hw *hw, @@ -4277,9 +4286,8 @@ void iwl_mvm_mac_mgd_complete_tx(struct ieee80211_hw *hw, if (info->success) return; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_stop_session_protection(mvm, vif); - mutex_unlock(&mvm->mutex); } int iwl_mvm_mac_sched_scan_start(struct ieee80211_hw *hw, @@ -4289,20 +4297,12 @@ int iwl_mvm_mac_sched_scan_start(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; - - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); - if (!vif->cfg.idle) { - ret = -EBUSY; - goto out; - } - - ret = iwl_mvm_sched_scan_start(mvm, vif, req, ies, IWL_MVM_SCAN_SCHED); + if (!vif->cfg.idle) + return -EBUSY; -out: - mutex_unlock(&mvm->mutex); - return ret; + return iwl_mvm_sched_scan_start(mvm, vif, req, ies, IWL_MVM_SCAN_SCHED); } int iwl_mvm_mac_sched_scan_stop(struct ieee80211_hw *hw, @@ -4580,13 +4580,9 @@ int iwl_mvm_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, struct ieee80211_key_conf *key) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; - - mutex_lock(&mvm->mutex); - ret = __iwl_mvm_mac_set_key(hw, cmd, vif, sta, key); - mutex_unlock(&mvm->mutex); - return ret; + guard(mvm)(mvm); + return __iwl_mvm_mac_set_key(hw, cmd, vif, sta, key); } void iwl_mvm_mac_update_tkip_key(struct ieee80211_hw *hw, @@ -4808,6 +4804,37 @@ static int iwl_mvm_roc_station(struct iwl_mvm *mvm, return ret; } +static int iwl_mvm_roc_p2p(struct iwl_mvm *mvm, + struct ieee80211_channel *channel, + struct ieee80211_vif *vif, + int duration, + enum ieee80211_roc_type type) +{ + enum iwl_roc_activity activity; + int ret; + + lockdep_assert_held(&mvm->mutex); + + switch (type) { + case IEEE80211_ROC_TYPE_NORMAL: + activity = ROC_ACTIVITY_P2P_DISC; + break; + case IEEE80211_ROC_TYPE_MGMT_TX: + activity = ROC_ACTIVITY_P2P_NEG; + break; + default: + WARN_ONCE(1, "Got an invalid P2P ROC type\n"); + return -EINVAL; + } + + ret = iwl_mvm_mld_add_aux_sta(mvm, + iwl_mvm_get_lmac_id(mvm, channel->band)); + if (ret) + return ret; + + return iwl_mvm_roc_add_cmd(mvm, channel, vif, duration, activity); +} + static int iwl_mvm_p2p_find_phy_ctxt(struct iwl_mvm *mvm, struct ieee80211_vif *vif, struct ieee80211_channel *channel) @@ -4861,6 +4888,7 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif, const struct iwl_mvm_roc_ops *ops) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); + struct ieee80211_vif *bss_vif = iwl_mvm_get_bss_vif(mvm); u32 lmac_id; int ret; @@ -4873,11 +4901,14 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif, */ flush_work(&mvm->roc_done_wk); - ret = iwl_mvm_esr_non_bss_link(mvm, vif, 0, true); - if (ret) - return ret; + if (!IS_ERR_OR_NULL(bss_vif)) { + ret = iwl_mvm_block_esr_sync(mvm, bss_vif, + 
IWL_MVM_ESR_BLOCKED_ROC); + if (ret) + return ret; + } - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); switch (vif->type) { case NL80211_IFTYPE_STATION: @@ -4887,30 +4918,29 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif, ret = ops->add_aux_sta_for_hs20(mvm, lmac_id); if (!ret) ret = iwl_mvm_roc_station(mvm, channel, vif, duration); - goto out_unlock; + return ret; case NL80211_IFTYPE_P2P_DEVICE: /* handle below */ break; default: IWL_ERR(mvm, "ROC: Invalid vif type=%u\n", vif->type); - ret = -EINVAL; - goto out_unlock; + return -EINVAL; } + if (iwl_mvm_has_p2p_over_aux(mvm)) { + ret = iwl_mvm_roc_p2p(mvm, channel, vif, duration, type); + return ret; + } ret = iwl_mvm_p2p_find_phy_ctxt(mvm, vif, channel); if (ret) - goto out_unlock; + return ret; ret = ops->link(mvm, vif); if (ret) - goto out_unlock; + return ret; - ret = iwl_mvm_start_p2p_roc(mvm, vif, duration, type); -out_unlock: - mutex_unlock(&mvm->mutex); - IWL_DEBUG_MAC80211(mvm, "leave\n"); - return ret; + return iwl_mvm_start_p2p_roc(mvm, vif, duration, type); } int iwl_mvm_cancel_roc(struct ieee80211_hw *hw, @@ -4991,13 +5021,9 @@ int iwl_mvm_add_chanctx(struct ieee80211_hw *hw, struct ieee80211_chanctx_conf *ctx) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; - mutex_lock(&mvm->mutex); - ret = __iwl_mvm_add_chanctx(mvm, ctx); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return __iwl_mvm_add_chanctx(mvm, ctx); } static void __iwl_mvm_remove_chanctx(struct iwl_mvm *mvm, @@ -5016,9 +5042,8 @@ void iwl_mvm_remove_chanctx(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); __iwl_mvm_remove_chanctx(mvm, ctx); - mutex_unlock(&mvm->mutex); } void iwl_mvm_change_chanctx(struct ieee80211_hw *hw, @@ -5038,26 +5063,23 @@ void iwl_mvm_change_chanctx(struct ieee80211_hw *hw, phy_ctxt->ref, changed)) return; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); /* we are only changing the min_width, may be a noop */ if (changed == IEEE80211_CHANCTX_CHANGE_MIN_WIDTH) { if (phy_ctxt->width == def->width) - goto out_unlock; + return; /* we are just toggling between 20_NOHT and 20 */ if (phy_ctxt->width <= NL80211_CHAN_WIDTH_20 && def->width <= NL80211_CHAN_WIDTH_20) - goto out_unlock; + return; } iwl_mvm_bt_coex_vif_change(mvm); iwl_mvm_phy_ctxt_changed(mvm, phy_ctxt, def, &ctx->ap, ctx->rx_chains_static, ctx->rx_chains_dynamic); - -out_unlock: - mutex_unlock(&mvm->mutex); } /* @@ -5177,6 +5199,10 @@ static int __iwl_mvm_assign_vif_chanctx(struct iwl_mvm *mvm, } iwl_mvm_update_quotas(mvm, false, NULL); + + iwl_mvm_send_ap_tx_power_constraint_cmd(mvm, vif, + link_conf, + false); } goto out; @@ -5196,13 +5222,9 @@ static int iwl_mvm_assign_vif_chanctx(struct ieee80211_hw *hw, struct ieee80211_chanctx_conf *ctx) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; - - mutex_lock(&mvm->mutex); - ret = __iwl_mvm_assign_vif_chanctx(mvm, vif, link_conf, ctx, false); - mutex_unlock(&mvm->mutex); - return ret; + guard(mvm)(mvm); + return __iwl_mvm_assign_vif_chanctx(mvm, vif, link_conf, ctx, false); } /* @@ -5290,9 +5312,8 @@ static void iwl_mvm_unassign_vif_chanctx(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); __iwl_mvm_unassign_vif_chanctx(mvm, vif, link_conf, ctx, false); - mutex_unlock(&mvm->mutex); } static int @@ -5302,7 +5323,7 @@ iwl_mvm_switch_vif_chanctx_swap(struct iwl_mvm *mvm, { int ret; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); 
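
The mac80211.c churn through this file is largely mechanical: open-coded mutex_lock(&mvm->mutex)/mutex_unlock() pairs become guard(mvm)(mvm), a scope-based lock built on linux/cleanup.h that releases the mutex automatically on every return path, which is why the various out/out_unlock labels can disappear. A minimal sketch of the idiom, assuming a DEFINE_GUARD along the lines the driver presumably adds for its mutex (the demo names are illustrative, not the driver's):

#include <linux/cleanup.h>
#include <linux/mutex.h>

/* Illustrative demo type, not struct iwl_mvm. */
struct demo {
        struct mutex mutex;
        int value;
};

/* Defines guard(demo)(d): takes d->mutex now, drops it at end of scope. */
DEFINE_GUARD(demo, struct demo *,
             mutex_lock(&_T->mutex),
             mutex_unlock(&_T->mutex))

static int demo_get_value(struct demo *d)
{
        guard(demo)(d);         /* unlocked automatically on any return */

        if (d->value < 0)
                return -EINVAL; /* no goto-unlock label needed */

        return d->value;
}

The early-return style this enables is visible directly above: iwl_mvm_switch_vif_chanctx_swap() can now return 0 from its success path instead of funnelling through a shared out: label that performed the unlock.
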
ops->__unassign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf, vifs[0].old_ctx, true); __iwl_mvm_remove_chanctx(mvm, vifs[0].old_ctx); @@ -5325,7 +5346,7 @@ iwl_mvm_switch_vif_chanctx_swap(struct iwl_mvm *mvm, if (iwl_mvm_phy_ctx_count(mvm) > 1) iwl_mvm_teardown_tdls_peers(mvm); - goto out; + return 0; out_remove: __iwl_mvm_remove_chanctx(mvm, vifs[0].new_ctx); @@ -5342,15 +5363,11 @@ out_reassign: goto out_restart; } - goto out; + return ret; out_restart: /* things keep failing, better restart the hw */ iwl_mvm_nic_restart(mvm, false); - -out: - mutex_unlock(&mvm->mutex); - return ret; } @@ -5361,7 +5378,7 @@ iwl_mvm_switch_vif_chanctx_reassign(struct iwl_mvm *mvm, { int ret; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); ops->__unassign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf, vifs[0].old_ctx, true); @@ -5373,7 +5390,7 @@ iwl_mvm_switch_vif_chanctx_reassign(struct iwl_mvm *mvm, goto out_reassign; } - goto out; + return 0; out_reassign: if (ops->__assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf, @@ -5382,15 +5399,11 @@ out_reassign: goto out_restart; } - goto out; + return ret; out_restart: /* things keep failing, better restart the hw */ iwl_mvm_nic_restart(mvm, false); - -out: - mutex_unlock(&mvm->mutex); - return ret; } @@ -5517,13 +5530,9 @@ int iwl_mvm_mac_testmode_cmd(struct ieee80211_hw *hw, void *data, int len) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int err; - mutex_lock(&mvm->mutex); - err = __iwl_mvm_mac_testmode_cmd(mvm, vif, data, len); - mutex_unlock(&mvm->mutex); - - return err; + guard(mvm)(mvm); + return __iwl_mvm_mac_testmode_cmd(mvm, vif, data, len); } #endif @@ -5740,13 +5749,9 @@ static int iwl_mvm_mac_pre_channel_switch(struct ieee80211_hw *hw, struct ieee80211_channel_switch *chsw) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; - mutex_lock(&mvm->mutex); - ret = iwl_mvm_pre_channel_switch(mvm, vif, chsw); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return iwl_mvm_pre_channel_switch(mvm, vif, chsw); } void iwl_mvm_channel_switch_rx_beacon(struct ieee80211_hw *hw, @@ -5798,16 +5803,14 @@ void iwl_mvm_channel_switch_rx_beacon(struct ieee80211_hw *hw, } mvmvif->csa_count = chsw->count; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (mvmvif->csa_failed) - goto out_unlock; + return; WARN_ON(iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(MAC_CONF_GROUP, CHANNEL_SWITCH_TIME_EVENT_CMD), 0, sizeof(cmd), &cmd)); -out_unlock: - mutex_unlock(&mvm->mutex); } static void iwl_mvm_flush_no_vif(struct iwl_mvm *mvm, u32 queues, bool drop) @@ -5816,17 +5819,16 @@ static void iwl_mvm_flush_no_vif(struct iwl_mvm *mvm, u32 queues, bool drop) if (!iwl_mvm_has_new_tx_api(mvm)) { if (drop) { - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_flush_tx_path(mvm, iwl_mvm_flushable_queues(mvm) & queues); - mutex_unlock(&mvm->mutex); } else { iwl_trans_wait_tx_queues_empty(mvm->trans, queues); } return; } - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); for (i = 0; i < mvm->fw->ucode_capa.num_stations; i++) { struct ieee80211_sta *sta; @@ -5841,7 +5843,6 @@ static void iwl_mvm_flush_no_vif(struct iwl_mvm *mvm, u32 queues, bool drop) iwl_mvm_wait_sta_queues_empty(mvm, iwl_mvm_sta_from_mac80211(sta)); } - mutex_unlock(&mvm->mutex); } void iwl_mvm_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, @@ -5924,7 +5925,7 @@ void iwl_mvm_mac_flush_sta(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_link_sta *link_sta; int link_id; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); for_each_sta_active_link(vif, sta, link_sta, 
link_id) { mvm_link_sta = rcu_dereference_protected(mvmsta->link[link_id], lockdep_is_held(&mvm->mutex)); @@ -5935,7 +5936,6 @@ void iwl_mvm_mac_flush_sta(struct ieee80211_hw *hw, struct ieee80211_vif *vif, mvmsta->tfd_queue_msk)) IWL_ERR(mvm, "flush request fail\n"); } - mutex_unlock(&mvm->mutex); } static int iwl_mvm_mac_get_acs_survey(struct iwl_mvm *mvm, int idx, @@ -6001,7 +6001,6 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx, struct survey_info *survey) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret = 0; u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, WIDE_ID(SYSTEM_GROUP, SYSTEM_STATISTICS_CMD), @@ -6021,12 +6020,13 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx, if (idx > 0) return iwl_mvm_mac_get_acs_survey(mvm, idx - 1, survey); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (iwl_mvm_firmware_running(mvm)) { - ret = iwl_mvm_request_statistics(mvm, false); + int ret = iwl_mvm_request_statistics(mvm, false); + if (ret) - goto out; + return ret; } survey->filled = SURVEY_INFO_TIME_RX | @@ -6042,7 +6042,7 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx, /* the new fw api doesn't support the following fields */ if (cmd_ver != IWL_FW_CMD_VER_UNKNOWN) - goto out; + return 0; survey->filled |= SURVEY_INFO_TIME | SURVEY_INFO_TIME_SCAN; @@ -6054,9 +6054,7 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx, mvm->radio_stats.on_time_scan; do_div(survey->time_scan, USEC_PER_MSEC); - out: - mutex_unlock(&mvm->mutex); - return ret; + return 0; } static void iwl_mvm_set_sta_rate(u32 rate_n_flags, struct rate_info *rinfo) @@ -6223,13 +6221,13 @@ void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw, if (!vif->cfg.assoc) return; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (mvmvif->deflink.ap_sta_id != mvmsta->deflink.sta_id) - goto unlock; + return; if (iwl_mvm_request_statistics(mvm, false)) - goto unlock; + return; sinfo->rx_beacon = 0; for_each_mvm_vif_valid_link(mvmvif, i) @@ -6243,8 +6241,6 @@ void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw, mvmvif->deflink.beacon_stats.avg_signal; sinfo->filled |= BIT_ULL(NL80211_STA_INFO_BEACON_SIGNAL_AVG); } - unlock: - mutex_unlock(&mvm->mutex); } static void iwl_mvm_event_mlme_callback_ini(struct iwl_mvm *mvm, @@ -6430,9 +6426,8 @@ void iwl_mvm_sync_rx_queues(struct ieee80211_hw *hw) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_sync_rx_queues_internal(mvm, IWL_MVM_RXQ_EMPTY, true, NULL, 0); - mutex_unlock(&mvm->mutex); } int @@ -6468,13 +6463,9 @@ int iwl_mvm_start_pmsr(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct cfg80211_pmsr_request *request) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; - mutex_lock(&mvm->mutex); - ret = iwl_mvm_ftm_start(mvm, vif, request); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return iwl_mvm_ftm_start(mvm, vif, request); } void iwl_mvm_abort_pmsr(struct ieee80211_hw *hw, struct ieee80211_vif *vif, @@ -6482,9 +6473,8 @@ void iwl_mvm_abort_pmsr(struct ieee80211_hw *hw, struct ieee80211_vif *vif, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_ftm_abort(mvm, request); - mutex_unlock(&mvm->mutex); } static bool iwl_mvm_can_hw_csum(struct sk_buff *skb) @@ -6519,7 +6509,6 @@ int iwl_mvm_set_hw_timestamp(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); u32 protocols = 0; - int ret; /* HW timestamping is only supported for a specific station */ if 
(!hwts->macaddr) @@ -6529,11 +6518,8 @@ int iwl_mvm_set_hw_timestamp(struct ieee80211_hw *hw, protocols = IWL_TIME_SYNC_PROTOCOL_TM | IWL_TIME_SYNC_PROTOCOL_FTM; - mutex_lock(&mvm->mutex); - ret = iwl_mvm_time_sync_config(mvm, hwts->macaddr, protocols); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return iwl_mvm_time_sync_config(mvm, hwts->macaddr, protocols); } const struct ieee80211_ops iwl_mvm_hw_ops = { diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c index fcfd2dd7568e..3c99396ad369 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c @@ -12,7 +12,7 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw, int ret; int i; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_mac_init_mvmvif(mvm, mvmvif); @@ -32,7 +32,7 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw, /* Allocate resources for the MAC context, and add it to the fw */ ret = iwl_mvm_mac_ctxt_init(mvm, vif); if (ret) - goto out_unlock; + return ret; rcu_assign_pointer(mvm->vif_id_to_mac[mvmvif->id], vif); @@ -46,7 +46,7 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw, ret = iwl_mvm_mld_mac_ctxt_add(mvm, vif); if (ret) - goto out_unlock; + return ret; /* beacon filtering */ ret = iwl_mvm_disable_beacon_filter(mvm, vif); @@ -95,7 +95,7 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw, if (vif->p2p || iwl_fw_lookup_cmd_ver(mvm->fw, PHY_CONTEXT_CMD, 1) < 5) vif->driver_flags |= IEEE80211_VIF_IGNORE_OFDMA_WIDER_BW; - goto out_unlock; + return 0; out_free_bf: if (mvm->bf_allowed_vif == mvmvif) { @@ -106,9 +106,6 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw, out_remove_mac: mvmvif->link[0] = NULL; iwl_mvm_mld_mac_ctxt_remove(mvm, vif); - out_unlock: - mutex_unlock(&mvm->mutex); - return ret; } @@ -125,7 +122,7 @@ static void iwl_mvm_mld_mac_remove_interface(struct ieee80211_hw *hw, vif->type == NL80211_IFTYPE_ADHOC)) iwl_mvm_tcm_rm_vif(mvm, vif); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (vif == mvm->csme_vif) { iwl_mei_set_netdev(NULL); @@ -188,8 +185,6 @@ static void iwl_mvm_mld_mac_remove_interface(struct ieee80211_hw *hw, mvm->monitor_on = false; __clear_bit(IEEE80211_HW_RX_INCLUDES_FCS, mvm->hw->flags); } - - mutex_unlock(&mvm->mutex); } static unsigned int iwl_mvm_mld_count_active_links(struct iwl_mvm_vif *mvmvif) @@ -227,6 +222,8 @@ static void iwl_mvm_restart_mpdu_count(struct iwl_mvm *mvm, mvmsta->mpdu_counters[q].window_start = jiffies; spin_unlock_bh(&mvmsta->mpdu_counters[q].lock); } + + IWL_DEBUG_STATS(mvm, "MPDU counters are cleared\n"); } static int iwl_mvm_esr_mode_active(struct iwl_mvm *mvm, @@ -350,6 +347,11 @@ __iwl_mvm_mld_assign_vif_chanctx(struct iwl_mvm *mvm, rcu_read_unlock(); } + if (vif->type == NL80211_IFTYPE_STATION) + iwl_mvm_send_ap_tx_power_constraint_cmd(mvm, vif, + link_conf, + false); + /* then activate */ ret = iwl_mvm_link_changed(mvm, vif, link_conf, LINK_CONTEXT_MODIFY_ACTIVE | @@ -387,10 +389,11 @@ static int iwl_mvm_mld_assign_vif_chanctx(struct ieee80211_hw *hw, struct ieee80211_chanctx_conf *ctx) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; /* update EMLSR mode */ if (ieee80211_vif_type_p2p(vif) != NL80211_IFTYPE_STATION) { + int ret; + ret = iwl_mvm_esr_non_bss_link(mvm, vif, link_conf->link_id, true); /* @@ -401,11 +404,8 @@ static int iwl_mvm_mld_assign_vif_chanctx(struct ieee80211_hw *hw, return ret; } - 
mutex_lock(&mvm->mutex); - ret = __iwl_mvm_mld_assign_vif_chanctx(mvm, vif, link_conf, ctx, false); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return __iwl_mvm_mld_assign_vif_chanctx(mvm, vif, link_conf, ctx, false); } static int iwl_mvm_esr_mode_inactive(struct iwl_mvm *mvm, @@ -531,9 +531,37 @@ static void iwl_mvm_mld_unassign_vif_chanctx(struct ieee80211_hw *hw, } static void +iwl_mvm_tpe_sta_cmd_data(struct iwl_txpower_constraints_cmd *cmd, + const struct ieee80211_bss_conf *bss_info) +{ + u8 i; + + /* + * NOTE: the 0 here is IEEE80211_TPE_CAT_6GHZ_DEFAULT, + * we fully ignore IEEE80211_TPE_CAT_6GHZ_SUBORDINATE + */ + + BUILD_BUG_ON(ARRAY_SIZE(cmd->psd_pwr) != + ARRAY_SIZE(bss_info->tpe.psd_local[0].power)); + + /* if not valid, mac80211 puts default (max value) */ + for (i = 0; i < ARRAY_SIZE(cmd->psd_pwr); i++) + cmd->psd_pwr[i] = min(bss_info->tpe.psd_local[0].power[i], + bss_info->tpe.psd_reg_client[0].power[i]); + + BUILD_BUG_ON(ARRAY_SIZE(cmd->eirp_pwr) != + ARRAY_SIZE(bss_info->tpe.max_local[0].power)); + + for (i = 0; i < ARRAY_SIZE(cmd->eirp_pwr); i++) + cmd->eirp_pwr[i] = min(bss_info->tpe.max_local[0].power[i], + bss_info->tpe.max_reg_client[0].power[i]); +} + +void iwl_mvm_send_ap_tx_power_constraint_cmd(struct iwl_mvm *mvm, struct ieee80211_vif *vif, - struct ieee80211_bss_conf *bss_conf) + struct ieee80211_bss_conf *bss_conf, + bool is_ap) { struct iwl_txpower_constraints_cmd cmd = {}; struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); @@ -553,19 +581,22 @@ iwl_mvm_send_ap_tx_power_constraint_cmd(struct iwl_mvm *mvm, link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID) return; - if (bss_conf->chanreq.oper.chan->band != NL80211_BAND_6GHZ || - bss_conf->chanreq.oper.chan->flags & - IEEE80211_CHAN_NO_6GHZ_VLP_CLIENT) + if (bss_conf->chanreq.oper.chan->band != NL80211_BAND_6GHZ) return; cmd.link_id = cpu_to_le16(link_info->fw_link_id); - /* - * Currently supporting VLP Soft AP only. - */ - cmd.ap_type = cpu_to_le16(IWL_6GHZ_AP_TYPE_VLP); memset(cmd.psd_pwr, DEFAULT_TPE_TX_POWER, sizeof(cmd.psd_pwr)); memset(cmd.eirp_pwr, DEFAULT_TPE_TX_POWER, sizeof(cmd.eirp_pwr)); + if (is_ap) { + cmd.ap_type = cpu_to_le16(IWL_6GHZ_AP_TYPE_VLP); + } else if (bss_conf->power_type == IEEE80211_REG_UNSET_AP) { + return; + } else { + cmd.ap_type = cpu_to_le16(bss_conf->power_type - 1); + iwl_mvm_tpe_sta_cmd_data(&cmd, bss_conf); + } + ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(PHY_OPS_GROUP, AP_TX_POWER_CONSTRAINTS_CMD), @@ -584,15 +615,16 @@ static int iwl_mvm_mld_start_ap_ibss(struct ieee80211_hw *hw, struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); int ret; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (vif->type == NL80211_IFTYPE_AP) - iwl_mvm_send_ap_tx_power_constraint_cmd(mvm, vif, link_conf); + iwl_mvm_send_ap_tx_power_constraint_cmd(mvm, vif, + link_conf, true); /* Send the beacon template */ ret = iwl_mvm_mac_ctxt_beacon_changed(mvm, vif, link_conf); if (ret) - goto out_unlock; + return ret; /* the link should be already activated when assigning chan context */ ret = iwl_mvm_link_changed(mvm, vif, link_conf, @@ -600,11 +632,11 @@ static int iwl_mvm_mld_start_ap_ibss(struct ieee80211_hw *hw, ~LINK_CONTEXT_MODIFY_ACTIVE, true); if (ret) - goto out_unlock; + return ret; ret = iwl_mvm_mld_add_mcast_sta(mvm, vif, link_conf); if (ret) - goto out_unlock; + return ret; /* Send the bcast station. 
At this stage the TBTT and DTIM time * events are added and applied to the scheduler @@ -628,7 +660,7 @@ static int iwl_mvm_mld_start_ap_ibss(struct ieee80211_hw *hw, iwl_mvm_ftm_restart_responder(mvm, vif, link_conf); - goto out_unlock; + return 0; out_failed: iwl_mvm_power_update_mac(mvm); @@ -636,8 +668,6 @@ out_failed: iwl_mvm_mld_rm_bcast_sta(mvm, vif, link_conf); out_rm_mcast: iwl_mvm_mld_rm_mcast_sta(mvm, vif, link_conf); -out_unlock: - mutex_unlock(&mvm->mutex); return ret; } @@ -660,7 +690,7 @@ static void iwl_mvm_mld_stop_ap_ibss(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_stop_ap_ibss_common(mvm, vif); @@ -674,7 +704,6 @@ static void iwl_mvm_mld_stop_ap_ibss(struct ieee80211_hw *hw, iwl_mvm_mld_rm_mcast_sta(mvm, vif, link_conf); iwl_mvm_power_update_mac(mvm); - mutex_unlock(&mvm->mutex); } static void iwl_mvm_mld_stop_ap(struct ieee80211_hw *hw, @@ -844,6 +873,8 @@ static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm, if (changes & BSS_CHANGED_ASSOC) { if (vif->cfg.assoc) { + mvmvif->session_prot_connection_loss = false; + /* clear statistics to get clean beacon counter */ iwl_mvm_request_statistics(mvm, true); iwl_mvm_sf_update(mvm, vif, false); @@ -977,7 +1008,7 @@ static void iwl_mvm_mld_link_info_changed(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); switch (vif->type) { case NL80211_IFTYPE_STATION: @@ -1003,8 +1034,6 @@ static void iwl_mvm_mld_link_info_changed(struct ieee80211_hw *hw, link_conf->txpower); iwl_mvm_set_tx_power(mvm, vif, link_conf->txpower); } - - mutex_unlock(&mvm->mutex); } static void iwl_mvm_mld_vif_cfg_changed(struct ieee80211_hw *hw, @@ -1013,15 +1042,13 @@ static void iwl_mvm_mld_vif_cfg_changed(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (changes & BSS_CHANGED_IDLE && !vif->cfg.idle) iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_SCHED, true); if (vif->type == NL80211_IFTYPE_STATION) iwl_mvm_mld_vif_cfg_changed_station(mvm, vif, changes); - - mutex_unlock(&mvm->mutex); } static int @@ -1054,9 +1081,8 @@ static void iwl_mvm_mld_config_iface_filter(struct ieee80211_hw *hw, !vif->p2p) return; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); iwl_mvm_mld_mac_ctxt_changed(mvm, vif, false); - mutex_unlock(&mvm->mutex); } static int @@ -1078,14 +1104,10 @@ iwl_mvm_mld_mac_conf_tx(struct ieee80211_hw *hw, * The exception is P2P_DEVICE interface which needs immediate update. 
*/ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) { - int ret; - - mutex_lock(&mvm->mutex); - ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf, - LINK_CONTEXT_MODIFY_QOS_PARAMS, - true); - mutex_unlock(&mvm->mutex); - return ret; + guard(mvm)(mvm); + return iwl_mvm_link_changed(mvm, vif, &vif->bss_conf, + LINK_CONTEXT_MODIFY_QOS_PARAMS, + true); } return 0; } @@ -1230,13 +1252,9 @@ iwl_mvm_mld_change_sta_links(struct ieee80211_hw *hw, u16 old_links, u16 new_links) { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); - int ret; - mutex_lock(&mvm->mutex); - ret = iwl_mvm_mld_update_sta_links(mvm, vif, sta, old_links, new_links); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return iwl_mvm_mld_update_sta_links(mvm, vif, sta, old_links, new_links); } bool iwl_mvm_vif_has_esr_cap(struct iwl_mvm *mvm, struct ieee80211_vif *vif) @@ -1264,26 +1282,19 @@ static bool iwl_mvm_mld_can_activate_links(struct ieee80211_hw *hw, { struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); int n_links = hweight16(desired_links); - bool ret = true; if (n_links <= 1) return true; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); /* Check if HW supports the wanted number of links */ - if (n_links > iwl_mvm_max_active_links(mvm, vif)) { - ret = false; - goto unlock; - } + if (n_links > iwl_mvm_max_active_links(mvm, vif)) + return false; /* If it is an eSR device, check that we can enter eSR */ - ret = iwl_mvm_is_esr_supported(mvm->fwrt.trans) && - iwl_mvm_vif_has_esr_cap(mvm, vif); - -unlock: - mutex_unlock(&mvm->mutex); - return ret; + return iwl_mvm_is_esr_supported(mvm->fwrt.trans) && + iwl_mvm_vif_has_esr_cap(mvm, vif); } static enum ieee80211_neg_ttlm_res diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c index 9d139b56e152..d5a204e52076 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c @@ -241,7 +241,7 @@ int iwl_mvm_mld_add_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif, IWL_MAX_TID_COUNT, &wdg_timeout); } -/* Allocate a new station entry for the broadcast station to the given vif, +/* Allocate a new station entry for the multicast station to the given vif, * and send it to the FW. * Note that each AP/GO mac should have its own multicast station. */ @@ -470,7 +470,7 @@ static int iwl_mvm_mld_cfg_sta(struct iwl_mvm *mvm, struct ieee80211_sta *sta, break; } - switch (sta->deflink.smps_mode) { + switch (link_sta->smps_mode) { case IEEE80211_SMPS_AUTOMATIC: case IEEE80211_SMPS_NUM_MODES: WARN_ON(1); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index 0a1959bd4079..22f48b66d79c 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -9,6 +9,7 @@ #include <linux/list.h> #include <linux/spinlock.h> +#include <linux/cleanup.h> #include <linux/leds.h> #include <linux/in6.h> @@ -23,7 +24,7 @@ #include "iwl-op-mode.h" #include "iwl-trans.h" #include "fw/notif-wait.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "fw/file.h" #include "iwl-config.h" #include "sta.h" @@ -82,14 +83,9 @@ extern const struct ieee80211_ops iwl_mvm_mld_hw_ops; /** * struct iwl_mvm_mod_params - module parameters for iwlmvm - * @init_dbg: if true, then the NIC won't be stopped if the INIT fw asserted. - * We will register to mac80211 to have testmode working. The NIC must not - * be up'ed after the INIT fw asserted. 
This is useful to be able to use - * proprietary tools over testmode to debug the INIT fw. * @power_scheme: one of enum iwl_power_scheme */ struct iwl_mvm_mod_params { - bool init_dbg; int power_scheme; }; extern struct iwl_mvm_mod_params iwlmvm_mod_params; @@ -360,7 +356,9 @@ struct iwl_mvm_vif_link_info { * @IWL_MVM_ESR_BLOCKED_WOWLAN: WOWLAN is preventing the enablement of EMLSR * @IWL_MVM_ESR_BLOCKED_TPT: block EMLSR when there is not enough traffic * @IWL_MVM_ESR_BLOCKED_FW: FW didn't recommended/forced exit from EMLSR - * @IWL_MVM_ESR_BLOCKED_NON_BSS: An active non-bssid link's preventing EMLSR + * @IWL_MVM_ESR_BLOCKED_NON_BSS: An active non-BSS interface's link is + * preventing EMLSR + * @IWL_MVM_ESR_BLOCKED_ROC: remain-on-channel is preventing EMLSR * @IWL_MVM_ESR_EXIT_MISSED_BEACON: exited EMLSR due to missed beacons * @IWL_MVM_ESR_EXIT_LOW_RSSI: link is deactivated/not allowed for EMLSR * due to low RSSI. @@ -377,6 +375,7 @@ enum iwl_mvm_esr_state { IWL_MVM_ESR_BLOCKED_TPT = 0x4, IWL_MVM_ESR_BLOCKED_FW = 0x8, IWL_MVM_ESR_BLOCKED_NON_BSS = 0x10, + IWL_MVM_ESR_BLOCKED_ROC = 0x20, IWL_MVM_ESR_EXIT_MISSED_BEACON = 0x10000, IWL_MVM_ESR_EXIT_LOW_RSSI = 0x20000, IWL_MVM_ESR_EXIT_COEX = 0x40000, @@ -426,6 +425,7 @@ struct iwl_mvm_esr_exit { * @csa_bcn_pending: indicates that we are waiting for a beacon on a new channel * @csa_blocks_tx: CSA is blocking TX * @features: hw features active for this vif + * @max_tx_op: max TXOP in usecs for all ACs, zero for no limit. * @ap_beacon_time: AP beacon time for synchronisation (on older FW) * @bf_enabled: indicates if beacon filtering is enabled * @ba_enabled: indicated if beacon abort is enabled @@ -448,6 +448,40 @@ struct iwl_mvm_esr_exit { * @prevent_esr_done_wk: work that should be done when esr prevention ends. * @mlo_int_scan_wk: work for the internal MLO scan. * @unblock_esr_tpt_wk: work for unblocking EMLSR when tpt is high enough. + * @roc_activity: currently running ROC activity for this vif (or + * ROC_NUM_ACTIVITIES if no activity is running). + * @session_prot_connection_loss: the connection was lost due to session + * protection ending without receiving a beacon, so we need to now + * protect the deauth separately + * @ap_early_keys: The firmware cannot install keys before stations etc., + * but higher layers work differently, so we store the keys here for + * later installation. 
+ * @ap_sta: pointer to the AP STA data structure + * @csa_count: CSA counter (old CSA implementation w/o firmware) + * @csa_misbehave: CSA AP misbehaviour flag (old implementation) + * @csa_target_freq: CSA target channel frequency (old implementation) + * @csa_work: CSA work (old implementation) + * @dbgfs_bf: beamforming debugfs data + * @dbgfs_dir: debugfs directory for this vif + * @dbgfs_pm: power management debugfs data + * @dbgfs_quota_min: debugfs value for minimal quota + * @dbgfs_slink: debugfs symlink for this interface + * @ftm_unprotected: unprotected FTM debugfs override + * @hs_time_event_data: hotspot/AUX ROC time event data + * @mac_pwr_cmd: debugfs override for MAC power command + * @target_ipv6_addrs: IPv6 addresses on this interface for offload + * @num_target_ipv6_addrs: number of @target_ipv6_addrs + * @tentative_addrs: bitmap of tentative IPv6 addresses in @target_ipv6_addrs + * @rekey_data: rekeying data for WoWLAN GTK rekey offload + * @seqno: storage for seqno for older firmware D0/D3 transition + * @seqno_valid: indicates @seqno is valid + * @time_event_data: session protection time event data + * @tsf_id: the TSF resource ID assigned in firmware (for firmware needing that) + * @tx_key_idx: WEP transmit key index for D3 + * @uapsd_misbehaving_ap_addr: MLD address/BSSID of U-APSD misbehaving AP, to + * not use U-APSD on reconnection + * @uapsd_nonagg_detected_wk: worker for handling detection of no aggregation + * in U-APSD */ struct iwl_mvm_vif { struct iwl_mvm *mvm; @@ -461,6 +495,7 @@ struct iwl_mvm_vif { bool pm_enabled; bool monitor_active; bool esr_active; + bool session_prot_connection_loss; u8 low_latency: 6; u8 low_latency_actual: 1; @@ -525,6 +560,7 @@ struct iwl_mvm_vif { struct iwl_mvm_time_event_data time_event_data; struct iwl_mvm_time_event_data hs_time_event_data; + enum iwl_roc_activity roc_activity; /* TCP Checksum Offload */ netdev_features_t features; @@ -538,6 +574,8 @@ struct iwl_mvm_vif { struct ieee80211_key_conf __rcu *keys[2]; } bcn_prot; + u16 max_tx_op; + u16 link_selection_res; u8 link_selection_primary; u8 primary_link; @@ -607,7 +645,7 @@ enum iwl_mvm_sched_scan_pass_all_states { }; /** - * struct iwl_mvm_tt_mgnt - Thermal Throttling Management structure + * struct iwl_mvm_tt_mgmt - Thermal Throttling Management structure * @ct_kill_exit: worker to exit thermal kill * @dynamic_smps: Is thermal throttling enabled dynamic_smps? * @tx_backoff: The current thermal throttling tx backoff in uSec.
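The bulk of this series' churn is the conversion above from explicit mutex_lock()/mutex_unlock() pairs to guard(mvm)(mvm), which ties the unlock to scope exit so that error paths become plain returns and the "out_unlock" labels disappear. A minimal sketch of the pattern, assuming only the linux/cleanup.h primitives; the real guard for struct iwl_mvm is the DEFINE_GUARD() added to mvm.h further down in this diff, and the demo type and function here are hypothetical:

	#include <linux/cleanup.h>
	#include <linux/errno.h>
	#include <linux/mutex.h>
	#include <linux/types.h>

	struct demo_dev {
		struct mutex mutex;
	};

	/* same shape as the mvm guard defined in mvm.h below */
	DEFINE_GUARD(demo_dev, struct demo_dev *,
		     mutex_lock(&_T->mutex), mutex_unlock(&_T->mutex))

	static int demo_op(struct demo_dev *dev, bool busy)
	{
		guard(demo_dev)(dev);	/* takes dev->mutex here */

		if (busy)
			return -EBUSY;	/* mutex released automatically */

		return 0;		/* released here too, at scope exit */
	}

With the unlock attached to the scope, the happy path can return the callee's result directly, which is exactly the shape the converted functions above take.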
@@ -730,24 +768,20 @@ struct iwl_mvm_tcm { * struct iwl_mvm_reorder_buffer - per ra/tid/queue reorder buffer * @head_sn: reorder window head sn * @num_stored: number of mpdus stored in the buffer - * @buf_size: the reorder buffer size as set by the last addba request * @queue: queue of this reorder buffer * @last_amsdu: track last ASMDU SN for duplication detection * @last_sub_index: track ASMDU sub frame index for duplication detection * @valid: reordering is valid for this queue * @lock: protect reorder buffer internal state - * @mvm: mvm pointer, needed for frame timer context */ struct iwl_mvm_reorder_buffer { u16 head_sn; u16 num_stored; - u16 buf_size; int queue; u16 last_amsdu; u8 last_sub_index; bool valid; spinlock_t lock; - struct iwl_mvm *mvm; } ____cacheline_aligned_in_smp; /** @@ -769,6 +803,7 @@ __aligned(roundup_pow_of_two(sizeof(struct sk_buff_head))) * @tid: tid of the session * @baid: baid of the session * @timeout: the timeout set in the addba request + * @buf_size: the reorder buffer size as set by the last addba request * @entries_per_queue: # of buffers per queue, this actually gets * aligned up to avoid cache line sharing between queues * @last_rx: last rx jiffies, updated only if timeout passed from last update @@ -785,13 +820,14 @@ struct iwl_mvm_baid_data { u8 tid; u8 baid; u16 timeout; + u16 buf_size; u16 entries_per_queue; unsigned long last_rx; struct timer_list session_timer; struct iwl_mvm_baid_data __rcu **rcu_ptr; struct iwl_mvm *mvm; struct iwl_mvm_reorder_buffer reorder_buf[IWL_MAX_RX_HW_QUEUES]; - struct iwl_mvm_reorder_buf_entry entries[]; + struct iwl_mvm_reorder_buf_entry entries[] ____cacheline_aligned_in_smp; }; static inline struct iwl_mvm_baid_data * @@ -1040,7 +1076,6 @@ struct iwl_mvm { struct iwl_rx_phy_info last_phy_info; struct ieee80211_sta __rcu *fw_id_to_mac_id[IWL_MVM_STATION_COUNT_MAX]; struct ieee80211_link_sta __rcu *fw_id_to_link_sta[IWL_MVM_STATION_COUNT_MAX]; - unsigned long fw_link_ids_map; u8 rx_ba_sessions; /* configured by mac80211 */ @@ -1062,7 +1097,7 @@ struct iwl_mvm { unsigned int max_scans; /* UMAC scan tracking */ - u32 scan_uid_status[IWL_MVM_MAX_UMAC_SCANS]; + u32 scan_uid_status[IWL_MAX_UMAC_SCANS]; /* start time of last scan in TSF of the mac that requested the scan */ u64 scan_start; @@ -1152,6 +1187,7 @@ struct iwl_mvm { struct ieee80211_channel **nd_channels; int n_nd_channels; bool net_detect; + bool fast_resume; u8 offload_tid; #ifdef CONFIG_IWLWIFI_DEBUGFS bool d3_wake_sysassert; @@ -1306,13 +1342,21 @@ struct iwl_mvm { struct iwl_phy_specific_cfg phy_filters; #endif + /* report rx timestamp in ptp clock time */ + bool rx_ts_ptp; + unsigned long last_6ghz_passive_scan_jiffies; unsigned long last_reset_or_resume_time_jiffies; bool sta_remove_requires_queue_remove; bool mld_api_is_used; - bool pldr_sync; + /* + * Indicates that firmware will do a product reset (and then + * therefore fail to load) when we start it (due to OTP burn), + * if so don't dump errors etc. since this is expected. 
+ */ + bool fw_product_reset; struct iwl_time_sync_data time_sync; @@ -1330,11 +1374,14 @@ struct iwl_mvm { #define IWL_MAC80211_GET_MVM(_hw) \ IWL_OP_MODE_GET_MVM((struct iwl_op_mode *)((_hw)->priv)) +DEFINE_GUARD(mvm, struct iwl_mvm *, mutex_lock(&_T->mutex), mutex_unlock(&_T->mutex)) + /** * enum iwl_mvm_status - MVM status bits * @IWL_MVM_STATUS_HW_RFKILL: HW RF-kill is asserted * @IWL_MVM_STATUS_HW_CTKILL: CT-kill is active - * @IWL_MVM_STATUS_ROC_RUNNING: remain-on-channel is running + * @IWL_MVM_STATUS_ROC_P2P_RUNNING: remain-on-channel on P2P is running (when + * P2P is not over AUX) * @IWL_MVM_STATUS_HW_RESTART_REQUESTED: HW restart was requested * @IWL_MVM_STATUS_IN_HW_RESTART: HW restart is active * @IWL_MVM_STATUS_ROC_AUX_RUNNING: AUX remain-on-channel is running @@ -1348,7 +1395,7 @@ struct iwl_mvm { enum iwl_mvm_status { IWL_MVM_STATUS_HW_RFKILL, IWL_MVM_STATUS_HW_CTKILL, - IWL_MVM_STATUS_ROC_RUNNING, + IWL_MVM_STATUS_ROC_P2P_RUNNING, IWL_MVM_STATUS_HW_RESTART_REQUESTED, IWL_MVM_STATUS_IN_HW_RESTART, IWL_MVM_STATUS_ROC_AUX_RUNNING, @@ -1439,7 +1486,8 @@ iwl_mvm_rcu_dereference_vif_id(struct iwl_mvm *mvm, u8 vif_id, bool rcu) static inline struct ieee80211_bss_conf * iwl_mvm_rcu_fw_link_id_to_link_conf(struct iwl_mvm *mvm, u8 link_id, bool rcu) { - if (WARN_ON(link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf))) + if (IWL_FW_CHECK(mvm, link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf), + "erroneous FW link ID: %d\n", link_id)) return NULL; if (rcu) @@ -1724,7 +1772,7 @@ struct iwl_rate_info { u8 ieee; /* MAC header: IWL_RATE_6M_IEEE, etc. */ }; -void __iwl_mvm_mac_stop(struct iwl_mvm *mvm); +void __iwl_mvm_mac_stop(struct iwl_mvm *mvm, bool suspend); int __iwl_mvm_mac_start(struct iwl_mvm *mvm); /****************** @@ -1860,10 +1908,10 @@ static inline u8 iwl_mvm_get_valid_tx_ant(struct iwl_mvm *mvm) static inline u8 iwl_mvm_get_valid_rx_ant(struct iwl_mvm *mvm) { - u8 rx_ant = mvm->fw->valid_tx_ant; + u8 rx_ant = mvm->fw->valid_rx_ant; if (mvm->nvm_data && mvm->nvm_data->valid_rx_ant) - rx_ant &= mvm->nvm_data->valid_tx_ant; + rx_ant &= mvm->nvm_data->valid_rx_ant; if (mvm->set_rx_ant) rx_ant &= mvm->set_rx_ant; @@ -2246,11 +2294,22 @@ extern const struct file_operations iwl_dbgfs_d3_test_ops; #ifdef CONFIG_PM void iwl_mvm_set_last_nonqos_seq(struct iwl_mvm *mvm, struct ieee80211_vif *vif); +void iwl_mvm_fast_suspend(struct iwl_mvm *mvm); +int iwl_mvm_fast_resume(struct iwl_mvm *mvm); #else static inline void iwl_mvm_set_last_nonqos_seq(struct iwl_mvm *mvm, struct ieee80211_vif *vif) { } + +static inline void iwl_mvm_fast_suspend(struct iwl_mvm *mvm) +{ +} + +static inline int iwl_mvm_fast_resume(struct iwl_mvm *mvm) +{ + return 0; +} #endif void iwl_mvm_set_wowlan_qos_seq(struct iwl_mvm_sta *mvm_ap_sta, struct iwl_wowlan_config_cmd *cmd); @@ -2762,6 +2821,13 @@ static inline void iwl_mvm_mei_set_sw_rfkill_state(struct iwl_mvm *mvm) sw_rfkill); } +static inline bool iwl_mvm_has_p2p_over_aux(struct iwl_mvm *mvm) +{ + u32 cmd_id = WIDE_ID(MAC_CONF_GROUP, ROC_CMD); + + return iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 0) >= 4; +} + static inline bool iwl_mvm_mei_filter_scan(struct iwl_mvm *mvm, struct sk_buff *skb) { @@ -2796,7 +2862,7 @@ int iwl_mvm_op_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant); int iwl_mvm_mac_start(struct ieee80211_hw *hw); void iwl_mvm_mac_reconfig_complete(struct ieee80211_hw *hw, enum ieee80211_reconfig_type reconfig_type); -void iwl_mvm_mac_stop(struct ieee80211_hw *hw); +void iwl_mvm_mac_stop(struct ieee80211_hw *hw, bool suspend); static 
inline int iwl_mvm_mac_config(struct ieee80211_hw *hw, u32 changed) { return 0; @@ -2927,7 +2993,7 @@ void iwl_mvm_roc_duration_and_delay(struct ieee80211_vif *vif, int iwl_mvm_roc_add_cmd(struct iwl_mvm *mvm, struct ieee80211_channel *channel, struct ieee80211_vif *vif, - int duration, u32 activity); + int duration, enum iwl_roc_activity activity); /* EMLSR */ bool iwl_mvm_vif_has_esr_cap(struct iwl_mvm *mvm, struct ieee80211_vif *vif); @@ -2954,4 +3020,10 @@ iwl_mvm_bt_coex_calculate_esr_mode(struct iwl_mvm *mvm, bool primary); int iwl_mvm_esr_non_bss_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif, unsigned int link_id, bool active); + +void +iwl_mvm_send_ap_tx_power_constraint_cmd(struct iwl_mvm *mvm, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *bss_conf, + bool is_ap); #endif /* __IWL_MVM_H__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c index ae8177222881..836ca22597bc 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2012-2014, 2018-2019, 2021-2023 Intel Corporation + * Copyright (C) 2012-2014, 2018-2019, 2021-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -9,8 +9,7 @@ #include "iwl-trans.h" #include "iwl-csr.h" #include "mvm.h" -#include "iwl-eeprom-parse.h" -#include "iwl-eeprom-read.h" +#include "iwl-nvm-utils.h" #include "iwl-nvm-parse.h" #include "iwl-prph.h" #include "fw/acpi.h" diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c index d343432474db..b7dcae76a05d 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c @@ -18,7 +18,7 @@ #include "iwl-modparams.h" #include "mvm.h" #include "iwl-phy-db.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "iwl-csr.h" #include "iwl-io.h" #include "iwl-prph.h" @@ -41,12 +41,8 @@ static const struct iwl_op_mode_ops iwl_mvm_ops_mq; struct iwl_mvm_mod_params iwlmvm_mod_params = { .power_scheme = IWL_POWER_SCHEME_BPS, - /* rest of fields are 0 by default */ }; -module_param_named(init_dbg, iwlmvm_mod_params.init_dbg, bool, 0444); -MODULE_PARM_DESC(init_dbg, - "set to true to debug an ASSERT in INIT fw (default: false"); module_param_named(power_scheme, iwlmvm_mod_params.power_scheme, int, 0444); MODULE_PARM_DESC(power_scheme, "power management scheme: 1-active, 2-balanced, 3-low power, default: 2"); @@ -471,7 +467,7 @@ static const struct iwl_rx_handlers iwl_mvm_rx_handlers[] = { iwl_mvm_time_sync_msmt_confirm_event, RX_HANDLER_SYNC, struct iwl_time_msmt_cfm_notify), RX_HANDLER_GRP(MAC_CONF_GROUP, ROC_NOTIF, - iwl_mvm_rx_roc_notif, RX_HANDLER_SYNC, + iwl_mvm_rx_roc_notif, RX_HANDLER_ASYNC_LOCKED, struct iwl_roc_notif), RX_HANDLER_GRP(SCAN_GROUP, CHANNEL_SURVEY_NOTIF, iwl_mvm_rx_channel_survey_notif, RX_HANDLER_ASYNC_LOCKED, @@ -572,6 +568,7 @@ static const struct iwl_hcmd_names iwl_mvm_legacy_names[] = { HCMD_NAME(D0I3_END_CMD), HCMD_NAME(LTR_CONFIG), HCMD_NAME(LDBG_CONFIG_CMD), + HCMD_NAME(DEBUG_LOG_MSG), }; /* Please keep this array *SORTED* by hex value. 
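The HCMD_NAME() tables being extended here (DEBUG_LOG_MSG, SOC_CONFIGURATION_CMD, the new DEBUG_GROUP array and so on) all carry the "keep this array *SORTED* by hex value" warning because, as the neighbouring comment says, lookups are done by binary search on the command ID. A stand-alone sketch of that invariant, assuming the kernel's bsearch() helper; the IDs and names below are placeholders, not real iwlwifi values:

	#include <linux/bsearch.h>
	#include <linux/kernel.h>

	struct demo_hcmd_name {
		u8 cmd_id;		/* command or notification ID */
		const char *name;
	};

	/* must stay sorted by cmd_id, like iwl_mvm_legacy_names[] */
	static const struct demo_hcmd_name demo_names[] = {
		{ 0x01, "ALIVE" },
		{ 0x20, "SOME_CMD" },
		{ 0xf7, "SOME_LATE_CMD" },
	};

	static int demo_hcmd_cmp(const void *key, const void *elt)
	{
		const struct demo_hcmd_name *e = elt;

		return *(const u8 *)key - e->cmd_id;
	}

	static const char *demo_hcmd_name(u8 id)
	{
		const struct demo_hcmd_name *e;

		e = bsearch(&id, demo_names, ARRAY_SIZE(demo_names),
			    sizeof(demo_names[0]), demo_hcmd_cmp);
		return e ? e->name : "UNKNOWN";
	}

An out-of-order entry does not fail the build; it just makes the binary search miss entries at runtime, which is why the comment is repeated above every table.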
@@ -579,6 +576,7 @@ static const struct iwl_hcmd_names iwl_mvm_legacy_names[] = { */ static const struct iwl_hcmd_names iwl_mvm_system_names[] = { HCMD_NAME(SHARED_MEM_CFG_CMD), + HCMD_NAME(SOC_CONFIGURATION_CMD), HCMD_NAME(INIT_EXTENDED_CFG_CMD), HCMD_NAME(FW_ERROR_RECOVERY_CMD), HCMD_NAME(RFI_CONFIG_CMD), @@ -593,8 +591,10 @@ static const struct iwl_hcmd_names iwl_mvm_system_names[] = { * Access is done through binary search */ static const struct iwl_hcmd_names iwl_mvm_mac_conf_names[] = { + HCMD_NAME(LOW_LATENCY_CMD), HCMD_NAME(CHANNEL_SWITCH_TIME_EVENT_CMD), HCMD_NAME(SESSION_PROTECTION_CMD), + HCMD_NAME(CANCEL_CHANNEL_SWITCH_CMD), HCMD_NAME(MAC_CONFIG_CMD), HCMD_NAME(LINK_CONFIG_CMD), HCMD_NAME(STA_CONFIG_CMD), @@ -603,7 +603,10 @@ static const struct iwl_hcmd_names iwl_mvm_mac_conf_names[] = { HCMD_NAME(STA_DISABLE_TX_CMD), HCMD_NAME(ROC_CMD), HCMD_NAME(ROC_NOTIF), + HCMD_NAME(CHANNEL_SWITCH_ERROR_NOTIF), + HCMD_NAME(MISSED_VAP_NOTIF), HCMD_NAME(SESSION_PROTECTION_NOTIF), + HCMD_NAME(PROBE_RESPONSE_DATA_NOTIF), HCMD_NAME(CHANNEL_SWITCH_START_NOTIF), }; @@ -627,6 +630,8 @@ static const struct iwl_hcmd_names iwl_mvm_data_path_names[] = { HCMD_NAME(DQA_ENABLE_CMD), HCMD_NAME(UPDATE_MU_GROUPS_CMD), HCMD_NAME(TRIGGER_RX_QUEUES_NOTIF_CMD), + HCMD_NAME(WNM_PLATFORM_PTM_REQUEST_CMD), + HCMD_NAME(WNM_80211V_TIMING_MEASUREMENT_CONFIG_CMD), HCMD_NAME(STA_HE_CTXT_CMD), HCMD_NAME(RLC_CONFIG_CMD), HCMD_NAME(RFH_QUEUE_CONFIG_CMD), @@ -653,6 +658,21 @@ static const struct iwl_hcmd_names iwl_mvm_statistics_names[] = { /* Please keep this array *SORTED* by hex value. * Access is done through binary search */ +static const struct iwl_hcmd_names iwl_mvm_debug_names[] = { + HCMD_NAME(LMAC_RD_WR), + HCMD_NAME(UMAC_RD_WR), + HCMD_NAME(HOST_EVENT_CFG), + HCMD_NAME(DBGC_SUSPEND_RESUME), + HCMD_NAME(BUFFER_ALLOCATION), + HCMD_NAME(GET_TAS_STATUS), + HCMD_NAME(FW_DUMP_COMPLETE_CMD), + HCMD_NAME(FW_CLEAR_BUFFER), + HCMD_NAME(MFU_ASSERT_DUMP_NTF), +}; + +/* Please keep this array *SORTED* by hex value. 
+ * Access is done through binary search + */ static const struct iwl_hcmd_names iwl_mvm_scan_names[] = { HCMD_NAME(CHANNEL_SURVEY_NOTIF), HCMD_NAME(OFFLOAD_MATCH_INFO_NOTIF), @@ -705,6 +725,7 @@ static const struct iwl_hcmd_arr iwl_mvm_groups[] = { [PROT_OFFLOAD_GROUP] = HCMD_ARR(iwl_mvm_prot_offload_names), [REGULATORY_AND_NVM_GROUP] = HCMD_ARR(iwl_mvm_regulatory_and_nvm_names), + [DEBUG_GROUP] = HCMD_ARR(iwl_mvm_debug_names), [STATISTICS_GROUP] = HCMD_ARR(iwl_mvm_statistics_names), }; @@ -740,20 +761,18 @@ static void iwl_mvm_tx_unblock_dwork(struct work_struct *work) struct ieee80211_vif *tx_blocked_vif; struct iwl_mvm_vif *mvmvif; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); tx_blocked_vif = rcu_dereference_protected(mvm->csa_tx_blocked_vif, lockdep_is_held(&mvm->mutex)); if (!tx_blocked_vif) - goto unlock; + return; mvmvif = iwl_mvm_vif_from_mac80211(tx_blocked_vif); iwl_mvm_modify_all_sta_disable_tx(mvm, mvmvif, false); RCU_INIT_POINTER(mvm->csa_tx_blocked_vif, NULL); -unlock: - mutex_unlock(&mvm->mutex); } static void iwl_mvm_fwrt_dump_start(void *ctx) @@ -770,21 +789,12 @@ static void iwl_mvm_fwrt_dump_end(void *ctx) mutex_unlock(&mvm->mutex); } -static bool iwl_mvm_fwrt_fw_running(void *ctx) -{ - return iwl_mvm_firmware_running(ctx); -} - static int iwl_mvm_fwrt_send_hcmd(void *ctx, struct iwl_host_cmd *host_cmd) { struct iwl_mvm *mvm = (struct iwl_mvm *)ctx; - int ret; - mutex_lock(&mvm->mutex); - ret = iwl_mvm_send_cmd(mvm, host_cmd); - mutex_unlock(&mvm->mutex); - - return ret; + guard(mvm)(mvm); + return iwl_mvm_send_cmd(mvm, host_cmd); } static bool iwl_mvm_d3_debug_enable(void *ctx) @@ -795,7 +805,6 @@ static bool iwl_mvm_d3_debug_enable(void *ctx) static const struct iwl_fw_runtime_ops iwl_mvm_fwrt_ops = { .dump_start = iwl_mvm_fwrt_dump_start, .dump_end = iwl_mvm_fwrt_dump_end, - .fw_running = iwl_mvm_fwrt_fw_running, .send_hcmd = iwl_mvm_fwrt_send_hcmd, .d3_debug_enable = iwl_mvm_d3_debug_enable, }; @@ -851,8 +860,7 @@ get_nvm_from_fw: ret = iwl_mvm_init_mcc(mvm); } - if (!iwlmvm_mod_params.init_dbg || !ret) - iwl_mvm_stop_device(mvm); + iwl_mvm_stop_device(mvm); mutex_unlock(&mvm->mutex); wiphy_unlock(mvm->hw->wiphy); @@ -862,7 +870,7 @@ get_nvm_from_fw: IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret); /* no longer need this regardless of failure or not */ - mvm->pldr_sync = false; + mvm->fw_product_reset = false; return ret; } @@ -1360,24 +1368,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, trans_cfg.no_reclaim_cmds = no_reclaim_cmds; trans_cfg.n_no_reclaim_cmds = ARRAY_SIZE(no_reclaim_cmds); - switch (iwlwifi_mod_params.amsdu_size) { - case IWL_AMSDU_DEF: - trans_cfg.rx_buf_size = IWL_AMSDU_4K; - break; - case IWL_AMSDU_4K: - trans_cfg.rx_buf_size = IWL_AMSDU_4K; - break; - case IWL_AMSDU_8K: - trans_cfg.rx_buf_size = IWL_AMSDU_8K; - break; - case IWL_AMSDU_12K: - trans_cfg.rx_buf_size = IWL_AMSDU_12K; - break; - default: - pr_err("%s: Unsupported amsdu_size: %d\n", KBUILD_MODNAME, - iwlwifi_mod_params.amsdu_size); - trans_cfg.rx_buf_size = IWL_AMSDU_4K; - } + trans_cfg.rx_buf_size = iwl_amsdu_size_to_rxb_size(); trans->wide_cmd_header = true; trans_cfg.bc_table_dword = @@ -1437,9 +1428,6 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, goto out_free; } - IWL_INFO(mvm, "Detected %s, REV=0x%X\n", - mvm->trans->name, mvm->trans->hw_rev); - if (iwlwifi_mod_params.nvm_file) mvm->nvm_file_name = iwlwifi_mod_params.nvm_file; else @@ -1507,8 +1495,6 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct 
iwl_cfg *cfg, iwl_fw_flush_dumps(&mvm->fwrt); iwl_fw_runtime_free(&mvm->fwrt); - if (iwlmvm_mod_params.init_dbg) - return op_mode; iwl_phy_db_free(mvm->phy_db); kfree(mvm->scan_cmd); iwl_trans_op_mode_leave(trans); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c index 568f53c56199..bc363e8427e4 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c @@ -211,19 +211,37 @@ static void iwl_mvm_power_configure_uapsd(struct iwl_mvm *mvm, IWL_MVM_PS_HEAVY_RX_THLD_PERCENT; } -static void iwl_mvm_p2p_standalone_iterator(void *_data, u8 *mac, - struct ieee80211_vif *vif) +struct iwl_allow_uapsd_iface_iterator_data { + struct ieee80211_vif *current_vif; + bool allow_uapsd; +}; + +static void iwl_mvm_allow_uapsd_iterator(void *_data, u8 *mac, + struct ieee80211_vif *vif) { - bool *is_p2p_standalone = _data; + struct iwl_allow_uapsd_iface_iterator_data *data = _data; + struct iwl_mvm_vif *other_mvmvif = iwl_mvm_vif_from_mac80211(vif); + struct iwl_mvm_vif *curr_mvmvif = + iwl_mvm_vif_from_mac80211(data->current_vif); - switch (ieee80211_vif_type_p2p(vif)) { - case NL80211_IFTYPE_P2P_GO: + /* exclude the given vif */ + if (vif == data->current_vif) + return; + + switch (vif->type) { case NL80211_IFTYPE_AP: - *is_p2p_standalone = false; + case NL80211_IFTYPE_ADHOC: + case NL80211_IFTYPE_NAN: + data->allow_uapsd = false; break; case NL80211_IFTYPE_STATION: - if (vif->cfg.assoc) - *is_p2p_standalone = false; + /* allow UAPSD if P2P interface and BSS station interface share + * the same channel. + */ + if (vif->cfg.assoc && other_mvmvif->deflink.phy_ctxt && + curr_mvmvif->deflink.phy_ctxt && + other_mvmvif->deflink.phy_ctxt->id != curr_mvmvif->deflink.phy_ctxt->id) + data->allow_uapsd = false; break; default: @@ -235,6 +253,10 @@ static bool iwl_mvm_power_allow_uapsd(struct iwl_mvm *mvm, struct ieee80211_vif *vif) { struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); + struct iwl_allow_uapsd_iface_iterator_data data = { + .current_vif = vif, + .allow_uapsd = true, + }; if (ether_addr_equal(mvmvif->uapsd_misbehaving_ap_addr, vif->cfg.ap_addr)) @@ -249,88 +271,75 @@ static bool iwl_mvm_power_allow_uapsd(struct iwl_mvm *mvm, IEEE80211_P2P_OPPPS_ENABLE_BIT)) return false; - /* - * Avoid using uAPSD if client is in DCM - - * low latency issue in Miracast - */ - if (iwl_mvm_phy_ctx_count(mvm) >= 2) + if (vif->p2p && !iwl_mvm_is_p2p_scm_uapsd_supported(mvm)) return false; - if (vif->p2p) { - /* Allow U-APSD only if p2p is stand alone */ - bool is_p2p_standalone = true; - - if (!iwl_mvm_is_p2p_scm_uapsd_supported(mvm)) - return false; - - ieee80211_iterate_active_interfaces_atomic(mvm->hw, - IEEE80211_IFACE_ITER_NORMAL, - iwl_mvm_p2p_standalone_iterator, - &is_p2p_standalone); - - if (!is_p2p_standalone) - return false; - } + ieee80211_iterate_active_interfaces_atomic(mvm->hw, + IEEE80211_IFACE_ITER_NORMAL, + iwl_mvm_allow_uapsd_iterator, + &data); - return true; + return data.allow_uapsd; } -static bool iwl_mvm_power_is_radar(struct ieee80211_vif *vif) +static bool iwl_mvm_power_is_radar(struct ieee80211_bss_conf *link_conf) { struct ieee80211_chanctx_conf *chanctx_conf; - struct ieee80211_bss_conf *link_conf; - bool radar_detect = false; - unsigned int link_id; - rcu_read_lock(); - for_each_vif_active_link(vif, link_conf, link_id) { - chanctx_conf = rcu_dereference(link_conf->chanctx_conf); - /* this happens on link switching, just ignore inactive ones */ - if (!chanctx_conf) - continue; 
+ chanctx_conf = rcu_dereference(link_conf->chanctx_conf); - radar_detect = !!(chanctx_conf->def.chan->flags & - IEEE80211_CHAN_RADAR); - if (radar_detect) - goto out; - } + /* this happens on link switching, just ignore inactive ones */ + if (!chanctx_conf) + return false; -out: - rcu_read_unlock(); - return radar_detect; + return chanctx_conf->def.chan->flags & IEEE80211_CHAN_RADAR; } static void iwl_mvm_power_config_skip_dtim(struct iwl_mvm *mvm, struct ieee80211_vif *vif, struct iwl_mac_power_cmd *cmd) { - int dtimper = vif->bss_conf.dtim_period ?: 1; - int skip; + struct ieee80211_bss_conf *link_conf; + unsigned int min_link_skip = ~0; + unsigned int link_id; /* disable, in case we're supposed to override */ cmd->skip_dtim_periods = 0; cmd->flags &= ~cpu_to_le16(POWER_FLAGS_SKIP_OVER_DTIM_MSK); - if (iwl_mvm_power_is_radar(vif)) + if (!test_bit(IWL_MVM_STATUS_IN_D3, &mvm->status)) { + if (iwlmvm_mod_params.power_scheme != IWL_POWER_SCHEME_LP) + return; + cmd->skip_dtim_periods = 2; + cmd->flags |= cpu_to_le16(POWER_FLAGS_SKIP_OVER_DTIM_MSK); return; + } - if (dtimper >= 10) - return; + rcu_read_lock(); + for_each_vif_active_link(vif, link_conf, link_id) { + unsigned int dtimper = link_conf->dtim_period ?: 1; + unsigned int dtimper_tu = dtimper * link_conf->beacon_int; + unsigned int skip; - if (!test_bit(IWL_MVM_STATUS_IN_D3, &mvm->status)) { - if (iwlmvm_mod_params.power_scheme != IWL_POWER_SCHEME_LP) + if (dtimper >= 10 || iwl_mvm_power_is_radar(link_conf)) { + rcu_read_unlock(); return; - skip = 2; - } else { - int dtimper_tu = dtimper * vif->bss_conf.beacon_int; + } if (WARN_ON(!dtimper_tu)) - return; + continue; + /* configure skip over dtim up to 900 TU DTIM interval */ - skip = max_t(u8, 1, 900 / dtimper_tu); + skip = max_t(int, 1, 900 / dtimper_tu); + min_link_skip = min(min_link_skip, skip); } + rcu_read_unlock(); + + /* no WARN_ON, can only happen with WARN_ON above */ + if (min_link_skip == ~0) + return; - cmd->skip_dtim_periods = skip; + cmd->skip_dtim_periods = min_link_skip; cmd->flags |= cpu_to_le16(POWER_FLAGS_SKIP_OVER_DTIM_MSK); } diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c index 3ba62fb2c85e..05715e5af6ab 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c @@ -514,6 +514,8 @@ void iwl_mvm_tlc_update_notif(struct iwl_mvm *mvm, link_sta->agg.max_tid_amsdu_len[i] = 1; } + ieee80211_sta_recalc_aggregates(sta); + IWL_DEBUG_RATE(mvm, "AMSDU update. 
AMSDU size: %d, AMSDU selected size: %d, AMSDU TID bitmap 0x%X\n", le32_to_cpu(notif->amsdu_size), size, @@ -609,6 +611,7 @@ void iwl_mvm_rs_fw_rate_init(struct iwl_mvm *mvm, cpu_to_le16(max_amsdu_len) : 0, }; unsigned int link_id = link_conf->link_id; + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(mvmsta->vif); int cmd_ver; int ret; @@ -652,7 +655,10 @@ void iwl_mvm_rs_fw_rate_init(struct iwl_mvm *mvm, * since TLC offload works with one mode we can assume * that only vht/ht is used and also set it as station max amsdu */ - sta->deflink.agg.max_amsdu_len = max_amsdu_len; + link_sta->agg.max_amsdu_len = max_amsdu_len; + ieee80211_sta_recalc_aggregates(sta); + + cfg_cmd.max_tx_op = cpu_to_le16(mvmvif->max_tx_op); cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 0); IWL_DEBUG_RATE(mvm, "TLC CONFIG CMD, sta_id=%d, max_ch_width=%d, mode=%d\n", diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h index 6cd4ec4d8f34..ea81cb236d5c 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h @@ -3,7 +3,7 @@ * * Copyright(c) 2015 Intel Mobile Communications GmbH * Copyright(c) 2017 Intel Deutschland GmbH - * Copyright (C) 2003 - 2014, 2018 - 2023 Intel Corporation + * Copyright (C) 2003 - 2014, 2018 - 2024 Intel Corporation *****************************************************************************/ #ifndef __rs_h__ @@ -198,11 +198,12 @@ struct rs_rate { /** * struct iwl_lq_sta_rs_fw - rate and related statistics for RS in FW * @last_rate_n_flags: last rate reported by FW + * @pers: persistent fields * @pers.sta_id: the id of the station - * @chains: bitmask of chains reported in %chain_signal - * @chain_signal: per chain signal strength - * @last_rssi: last rssi reported - * @drv: pointer back to the driver data + * @pers.chains: bitmask of chains reported in %chain_signal + * @pers.chain_signal: per chain signal strength + * @pers.last_rssi: last rssi reported + * @pers.drv: pointer back to the driver data */ struct iwl_lq_sta_rs_fw { /* last tx rate_n_flags */ @@ -213,11 +214,11 @@ struct iwl_lq_sta_rs_fw { u32 sta_id; #ifdef CONFIG_MAC80211_DEBUGFS /** - * @dbg_fixed_rate: for debug, use fixed rate if not 0 + * @pers.dbg_fixed_rate: for debug, use fixed rate if not 0 */ u32 dbg_fixed_rate; /** - * @dbg_agg_frame_count_lim: for debug, max number of + * @pers.dbg_agg_frame_count_lim: for debug, max number of * frames in A-MPDU */ u16 dbg_agg_frame_count_lim; @@ -402,7 +403,7 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta, int tid, struct ieee80211_tx_info *info, bool ndp); /** - * iwl_rate_control_register - Register the rate control algorithm callbacks + * iwl_mvm_rate_control_register - Register the rate control algorithm callbacks * * Since the rate control algorithm is hardware specific, there is no need * or reason to place it as a stand alone module. The driver can call @@ -414,7 +415,7 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta, int iwl_mvm_rate_control_register(void); /** - * iwl_rate_control_unregister - Unregister the rate control callbacks + * iwl_mvm_rate_control_unregister - Unregister the rate control callbacks * * This should be called after calling ieee80211_unregister_hw, but before * the driver is unloaded. 
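Stepping back to the power.c rework a little earlier: iwl_mvm_power_config_skip_dtim() now computes the D3 skip-over-DTIM value per link, as max(1, 900 / (dtim_period * beacon_int)) in TU, and programs the minimum across all active links; a link on a radar channel or with a DTIM period of 10 or more disables skipping altogether. A stand-alone sketch of the per-link arithmetic (the helper name is made up for illustration):

	static unsigned int demo_skip_for_link(unsigned int dtim_period,
					       unsigned int beacon_int_tu)
	{
		/* mirror the driver's "dtim_period ?: 1" default */
		unsigned int dtimper_tu = (dtim_period ?: 1) * beacon_int_tu;
		unsigned int skip;

		if (!dtimper_tu)
			return 0;	/* the driver WARNs and skips such a link */

		/* sleep across DTIMs, but never stretch past ~900 TU */
		skip = 900 / dtimper_tu;
		return skip ? skip : 1;	/* i.e. max(1, 900 / dtimper_tu) */
	}

For example, a link with DTIM period 3 and a 100 TU beacon interval yields 900 / 300 = 3 skipped DTIMs, a DTIM-1 link on the same beacon interval yields 9, and an MLO vif holding both programs the conservative minimum, 3.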
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c index 6e933907f985..151289e13308 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c @@ -1009,6 +1009,9 @@ static void iwl_mvm_update_esr_mode_tpt(struct iwl_mvm *mvm) spin_unlock_bh(&mvmsta->mpdu_counters[q].lock); } + IWL_DEBUG_STATS(mvm, "total Tx MPDUs: %ld. total Rx MPDUs: %ld\n", + total_tx, total_rx); + /* If we don't have enough MPDUs - exit EMLSR */ if (total_tx < IWL_MVM_ENTER_ESR_TPT_THRESH && total_rx < IWL_MVM_ENTER_ESR_TPT_THRESH) { diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c index 489cfb0a4ab1..1a210d0c22b3 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c @@ -566,7 +566,7 @@ static void iwl_mvm_release_frames(struct iwl_mvm *mvm, lockdep_assert_held(&reorder_buf->lock); while (ieee80211_sn_less(ssn, nssn)) { - int index = ssn % reorder_buf->buf_size; + int index = ssn % baid_data->buf_size; struct sk_buff_head *skb_list = &entries[index].frames; struct sk_buff *skb; @@ -617,7 +617,7 @@ static void iwl_mvm_del_ba(struct iwl_mvm *mvm, int queue, spin_lock_bh(&reorder_buf->lock); iwl_mvm_release_frames(mvm, sta, NULL, ba_data, reorder_buf, ieee80211_sn_add(reorder_buf->head_sn, - reorder_buf->buf_size)); + ba_data->buf_size)); spin_unlock_bh(&reorder_buf->lock); out: @@ -839,7 +839,7 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm, } /* put in reorder buffer */ - index = sn % buffer->buf_size; + index = sn % baid_data->buf_size; __skb_queue_tail(&entries[index].frames, skb); buffer->num_stored++; @@ -1954,6 +1954,16 @@ static void iwl_mvm_rx_fill_status(struct iwl_mvm *mvm, iwl_mvm_decode_lsig(skb, phy_data); rx_status->device_timestamp = phy_data->gp2_on_air_rise; + + if (mvm->rx_ts_ptp && mvm->monitor_on) { + u64 adj_time = + iwl_mvm_ptp_get_adj_time(mvm, phy_data->gp2_on_air_rise * NSEC_PER_USEC); + + rx_status->mactime = div64_u64(adj_time, NSEC_PER_USEC); + rx_status->flag |= RX_FLAG_MACTIME_IS_RTAP_TS64; + rx_status->flag &= ~RX_FLAG_MACTIME; + } + rx_status->freq = ieee80211_channel_to_frequency(phy_data->channel, rx_status->band); iwl_mvm_get_signal_strength(mvm, rx_status, rate_n_flags, @@ -2032,7 +2042,6 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, u32 len; u32 pkt_len = iwl_rx_packet_payload_len(pkt); struct ieee80211_sta *sta = NULL; - struct ieee80211_link_sta *link_sta = NULL; struct sk_buff *skb; u8 crypt_len = 0; u8 sta_id = le32_get_bits(desc->status, IWL_RX_MPDU_STATUS_STA_ID); @@ -2185,6 +2194,8 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, if (desc->status & cpu_to_le32(IWL_RX_MPDU_STATUS_SRC_STA_FOUND)) { if (!WARN_ON_ONCE(sta_id >= mvm->fw->ucode_capa.num_stations)) { + struct ieee80211_link_sta *link_sta; + sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]); if (IS_ERR(sta)) sta = NULL; @@ -2360,7 +2371,6 @@ void iwl_mvm_rx_monitor_no_data(struct iwl_mvm *mvm, struct napi_struct *napi, struct iwl_rx_packet *pkt = rxb_addr(rxb); struct iwl_rx_no_data_ver_3 *desc = (void *)pkt->data; u32 rssi; - u32 info_type; struct ieee80211_sta *sta = NULL; struct sk_buff *skb; struct iwl_mvm_rx_phy_data phy_data; @@ -2373,7 +2383,6 @@ void iwl_mvm_rx_monitor_no_data(struct iwl_mvm *mvm, struct napi_struct *napi, return; rssi = le32_to_cpu(desc->rssi); - info_type = le32_to_cpu(desc->info) & RX_NO_DATA_INFO_TYPE_MSK; phy_data.d0 = 
desc->phy_info[0]; phy_data.d1 = desc->phy_info[1]; phy_data.phy_info = IWL_RX_MPDU_PHY_TSF_OVERLOAD; @@ -2425,7 +2434,12 @@ void iwl_mvm_rx_monitor_no_data(struct iwl_mvm *mvm, struct napi_struct *napi, /* 0-length PSDU */ rx_status->flag |= RX_FLAG_NO_PSDU; - switch (info_type) { + /* mark as failed PLCP on any errors to skip checks in mac80211 */ + if (le32_get_bits(desc->info, RX_NO_DATA_INFO_ERR_MSK) != + RX_NO_DATA_INFO_ERR_NONE) + rx_status->flag |= RX_FLAG_FAILED_PLCP_CRC; + + switch (le32_get_bits(desc->info, RX_NO_DATA_INFO_TYPE_MSK)) { case RX_NO_DATA_INFO_TYPE_NDP: rx_status->zero_length_psdu_type = IEEE80211_RADIOTAP_ZERO_LEN_PSDU_SOUNDING; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c index e975f5ff17b5..8e0df31f1b3e 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c @@ -208,7 +208,7 @@ static void iwl_mvm_scan_iterator(void *_data, u8 *mac, curr_mvmvif = iwl_mvm_vif_from_mac80211(data->current_vif); - if (vif->type == NL80211_IFTYPE_AP && vif->p2p && + if (ieee80211_vif_type_p2p(vif) == NL80211_IFTYPE_P2P_GO && mvmvif->deflink.phy_ctxt && curr_mvmvif->deflink.phy_ctxt && mvmvif->deflink.phy_ctxt->id != curr_mvmvif->deflink.phy_ctxt->id) data->is_dcm_with_p2p_go = true; @@ -2878,7 +2878,7 @@ static void iwl_mvm_scan_respect_p2p_go_iter(void *_data, u8 *mac, if (vif == data->current_vif) return; - if (vif->type == NL80211_IFTYPE_AP && vif->p2p) { + if (ieee80211_vif_type_p2p(vif) == NL80211_IFTYPE_P2P_GO) { u32 link_id; for (link_id = 0; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index cc79fe991c26..15e64d94d6ea 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -857,12 +857,6 @@ int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm, size = iwl_mvm_get_queue_size(sta); } - /* take the min with bc tbl entries allowed */ - size = min_t(u32, size, mvm->trans->txqs.bc_tbl_size / sizeof(u16)); - - /* size needs to be power of 2 values for calculating read/write pointers */ - size = rounddown_pow_of_two(size); - if (sta) { struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); struct ieee80211_link_sta *link_sta; @@ -887,22 +881,13 @@ int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm, if (!sta_mask) return -EINVAL; - do { - queue = iwl_trans_txq_alloc(mvm->trans, 0, sta_mask, - tid, size, timeout); + queue = iwl_trans_txq_alloc(mvm->trans, 0, sta_mask, + tid, size, timeout); - if (queue < 0) - IWL_DEBUG_TX_QUEUES(mvm, - "Failed allocating TXQ of size %d for sta mask %x tid %d, ret: %d\n", - size, sta_mask, tid, queue); - size /= 2; - } while (queue < 0 && size >= 16); - - if (queue < 0) - return queue; - - IWL_DEBUG_TX_QUEUES(mvm, "Enabling TXQ #%d for sta mask 0x%x tid %d\n", - queue, sta_mask, tid); + if (queue >= 0) + IWL_DEBUG_TX_QUEUES(mvm, + "Enabling TXQ #%d for sta mask 0x%x tid %d\n", + queue, sta_mask, tid); return queue; } @@ -2758,7 +2743,7 @@ static void iwl_mvm_free_reorder(struct iwl_mvm *mvm, */ WARN_ON(1); - for (j = 0; j < reorder_buf->buf_size; j++) + for (j = 0; j < data->buf_size; j++) __skb_queue_purge(&entries[j].frames); spin_unlock_bh(&reorder_buf->lock); @@ -2767,7 +2752,7 @@ static void iwl_mvm_free_reorder(struct iwl_mvm *mvm, static void iwl_mvm_init_reorder_buffer(struct iwl_mvm *mvm, struct iwl_mvm_baid_data *data, - u16 ssn, u16 buf_size) + u16 ssn) { int i; @@ -2780,12 +2765,10 @@ static void 
iwl_mvm_init_reorder_buffer(struct iwl_mvm *mvm, reorder_buf->num_stored = 0; reorder_buf->head_sn = ssn; - reorder_buf->buf_size = buf_size; spin_lock_init(&reorder_buf->lock); - reorder_buf->mvm = mvm; reorder_buf->queue = i; reorder_buf->valid = false; - for (j = 0; j < reorder_buf->buf_size; j++) + for (j = 0; j < data->buf_size; j++) __skb_queue_head_init(&entries[j].frames); } } @@ -2994,13 +2977,14 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta, baid_data->mvm = mvm; baid_data->tid = tid; baid_data->sta_mask = iwl_mvm_sta_fw_id_mask(mvm, sta, -1); + baid_data->buf_size = buf_size; mvm_sta->tid_to_baid[tid] = baid; if (timeout) mod_timer(&baid_data->session_timer, TU_TO_EXP_TIME(timeout * 2)); - iwl_mvm_init_reorder_buffer(mvm, baid_data, ssn, buf_size); + iwl_mvm_init_reorder_buffer(mvm, baid_data, ssn); /* * protect the BA data with RCU to cover a case where our * internal RX sync mechanism will timeout (not that it's @@ -4433,6 +4417,7 @@ void iwl_mvm_count_mpdu(struct iwl_mvm_sta *mvm_sta, u8 fw_sta_id, u32 count, bool tx, int queue) { struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(mvm_sta->vif); + struct iwl_mvm *mvm = mvmvif->mvm; struct iwl_mvm_tpt_counter *queue_counter; struct iwl_mvm_mpdu_counter *link_counter; u32 total_mpdus = 0; @@ -4469,6 +4454,8 @@ void iwl_mvm_count_mpdu(struct iwl_mvm_sta *mvm_sta, u8 fw_sta_id, u32 count, memset(queue_counter->per_link, 0, sizeof(queue_counter->per_link)); queue_counter->window_start = jiffies; + + IWL_DEBUG_STATS(mvm, "MPDU counters are cleared\n"); } for (int i = 0; i < IWL_MVM_FW_MAX_LINK_ID; i++) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h index 754a05a8c189..0dc83d6afb3c 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h @@ -478,7 +478,7 @@ struct iwl_mvm_int_sta { }; /** - * Send the STA info to the FW. + * iwl_mvm_sta_send_to_fw - Send the STA info to the FW. 
* * @mvm: the iwl_mvm* to use * @sta: the STA diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c index e7d5f4ebeb25..3d25ff5cd7e8 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c @@ -2,7 +2,7 @@ /* * Copyright (C) 2014 Intel Mobile Communications GmbH * Copyright (C) 2017 Intel Deutschland GmbH - * Copyright (C) 2018-2020, 2022-2023 Intel Corporation + * Copyright (C) 2018-2020, 2022-2024 Intel Corporation */ #include <linux/etherdevice.h> #include "mvm.h" @@ -151,7 +151,7 @@ void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw, u32 duration = 2 * vif->bss_conf.dtim_period * vif->bss_conf.beacon_int; /* Protect the session to hear the TDLS setup response on the channel */ - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) iwl_mvm_schedule_session_protection(mvm, vif, duration, @@ -159,7 +159,6 @@ void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw, else iwl_mvm_protect_session(mvm, vif, duration, duration, 100, true); - mutex_unlock(&mvm->mutex); } static const char * @@ -460,21 +459,21 @@ void iwl_mvm_tdls_ch_switch_work(struct work_struct *work) int ret; mvm = container_of(work, struct iwl_mvm, tdls_cs.dwork.work); - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); /* called after an active channel switch has finished or timed-out */ iwl_mvm_tdls_update_cs_state(mvm, IWL_MVM_TDLS_SW_IDLE); /* station might be gone, in that case do nothing */ if (mvm->tdls_cs.peer.sta_id == IWL_MVM_INVALID_STA) - goto out; + return; sta = rcu_dereference_protected( mvm->fw_id_to_mac_id[mvm->tdls_cs.peer.sta_id], lockdep_is_held(&mvm->mutex)); /* the station may not be here, but if it is, it must be a TDLS peer */ if (!sta || IS_ERR(sta) || WARN_ON(!sta->tdls)) - goto out; + return; mvmsta = iwl_mvm_sta_from_mac80211(sta); vif = mvmsta->vif; @@ -493,8 +492,6 @@ void iwl_mvm_tdls_ch_switch_work(struct work_struct *work) /* retry after a DTIM if we failed sending now */ delay = TU_TO_MS(vif->bss_conf.dtim_period * vif->bss_conf.beacon_int); schedule_delayed_work(&mvm->tdls_cs.dwork, msecs_to_jiffies(delay)); -out: - mutex_unlock(&mvm->mutex); } int @@ -509,7 +506,7 @@ iwl_mvm_tdls_channel_switch(struct ieee80211_hw *hw, unsigned int delay; int ret; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); IWL_DEBUG_TDLS(mvm, "TDLS channel switch with %pM ch %d width %d\n", sta->addr, chandef->chan->center_freq, chandef->width); @@ -519,8 +516,7 @@ iwl_mvm_tdls_channel_switch(struct ieee80211_hw *hw, IWL_DEBUG_TDLS(mvm, "Existing peer. Can't start switch with %pM\n", sta->addr); - ret = -EBUSY; - goto out; + return -EBUSY; } ret = iwl_mvm_tdls_config_channel_switch(mvm, vif, @@ -529,17 +525,15 @@ iwl_mvm_tdls_channel_switch(struct ieee80211_hw *hw, oper_class, chandef, 0, 0, 0, tmpl_skb, ch_sw_tm_ie); if (ret) - goto out; + return ret; /* * Mark the peer as "in tdls switch" for this vif. We only allow a * single such peer per vif. 
*/ mvm->tdls_cs.peer.skb = skb_copy(tmpl_skb, GFP_KERNEL); - if (!mvm->tdls_cs.peer.skb) { - ret = -ENOMEM; - goto out; - } + if (!mvm->tdls_cs.peer.skb) + return -ENOMEM; mvmsta = iwl_mvm_sta_from_mac80211(sta); mvm->tdls_cs.peer.sta_id = mvmsta->deflink.sta_id; @@ -556,10 +550,7 @@ iwl_mvm_tdls_channel_switch(struct ieee80211_hw *hw, vif->bss_conf.beacon_int); mod_delayed_work(system_wq, &mvm->tdls_cs.dwork, msecs_to_jiffies(delay)); - -out: - mutex_unlock(&mvm->mutex); - return ret; + return 0; } void iwl_mvm_tdls_cancel_channel_switch(struct ieee80211_hw *hw, @@ -626,7 +617,7 @@ iwl_mvm_tdls_recv_channel_switch(struct ieee80211_hw *hw, params->action_code == WLAN_TDLS_CHANNEL_SWITCH_REQUEST ? "REQ" : "RESP"; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); IWL_DEBUG_TDLS(mvm, "Received TDLS ch switch action %s from %pM status %d\n", @@ -670,5 +661,4 @@ retry: 1024 / 1000; mod_delayed_work(system_wq, &mvm->tdls_cs.dwork, msecs_to_jiffies(delay)); - mutex_unlock(&mvm->mutex); } diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c b/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c index f49e3c98b1ba..47b8e7b64ead 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c @@ -208,6 +208,7 @@ static void setup_link_conf(struct kunit *test) bss_load->channel_util = params->channel_util; rcu_assign_pointer(bss.ies, ies); + rcu_assign_pointer(bss.beacon_ies, ies); } static void test_link_grading(struct kunit *test) @@ -393,9 +394,6 @@ static void test_valid_link_pair(struct kunit *test) chandef_a.width = params->cw_a ?: NL80211_CHAN_WIDTH_20; chandef_b.width = params->cw_b ?: NL80211_CHAN_WIDTH_20; -#ifdef CONFIG_IWLWIFI_SUPPORT_DEBUG_OVERRIDES - trans->dbg_cfg = default_dbg_config; -#endif mvm.trans = trans; mvm.last_bt_notif.wifi_loss_low_rssi = params->bt; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c index 31bc80cdcb7d..a8c42ce3b630 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c @@ -47,12 +47,13 @@ void iwl_mvm_te_clear_data(struct iwl_mvm *mvm, static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm) { + struct ieee80211_vif *bss_vif = iwl_mvm_get_bss_vif(mvm); struct ieee80211_vif *vif = mvm->p2p_device_vif; lockdep_assert_held(&mvm->mutex); /* - * Clear the ROC_RUNNING status bit. + * Clear the ROC_P2P_RUNNING status bit. * This will cause the TX path to drop offchannel transmissions. * That would also be done by mac80211, but it is racy, in particular * in the case that the time event actually completed in the firmware. @@ -62,7 +63,7 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm) * won't get stuck on the queue and be transmitted in the next * time event. */ - if (test_and_clear_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status)) { + if (test_and_clear_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status)) { struct iwl_mvm_vif *mvmvif; synchronize_net(); @@ -99,7 +100,14 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm) } } - /* Do the same for AUX ROC */ + /* + * P2P AUX ROC and HS2.0 ROC do not run simultaneously. + * Clear the ROC_AUX_RUNNING status bit. + * This will cause the TX path to drop offchannel transmissions. + * That would also be done by mac80211, but it is racy, in particular + * in the case that the time event actually completed in the firmware + * (which is handled in iwl_mvm_te_handle_notif). 
+ */ if (test_and_clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status)) { synchronize_net(); @@ -119,9 +127,9 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm) iwl_mvm_rm_aux_sta(mvm); } + if (!IS_ERR_OR_NULL(bss_vif)) + iwl_mvm_unblock_esr(mvm, bss_vif, IWL_MVM_ESR_BLOCKED_ROC); mutex_unlock(&mvm->mutex); - if (vif) - iwl_mvm_esr_non_bss_link(mvm, vif, 0, false); } void iwl_mvm_roc_done_wk(struct work_struct *wk) @@ -214,6 +222,8 @@ static bool iwl_mvm_te_check_disconnect(struct iwl_mvm *mvm, iwl_dbg_tlv_time_point(&mvm->fwrt, IWL_FW_INI_TIME_POINT_ASSOC_FAILED, NULL); + + mvmvif->session_prot_connection_loss = true; } iwl_mvm_connection_loss(mvm, vif, errmsg); @@ -378,7 +388,7 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm, te_data->end_jiffies = TU_TO_EXP_TIME(te_data->duration); if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) { - set_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status); + set_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status); ieee80211_ready_on_channel(mvm->hw); } else if (te_data->id == TE_CHANNEL_SWITCH_PERIOD) { iwl_mvm_te_handle_notify_csa(mvm, te_data, notif); @@ -388,14 +398,51 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm, } } +struct iwl_mvm_rx_roc_iterator_data { + u32 activity; + bool end_activity; + bool found; +}; + +static void iwl_mvm_rx_roc_iterator(void *_data, u8 *mac, + struct ieee80211_vif *vif) +{ + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); + struct iwl_mvm_rx_roc_iterator_data *data = _data; + + if (mvmvif->roc_activity == data->activity) { + data->found = true; + if (data->end_activity) + mvmvif->roc_activity = ROC_NUM_ACTIVITIES; + } +} + void iwl_mvm_rx_roc_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb) { struct iwl_rx_packet *pkt = rxb_addr(rxb); struct iwl_roc_notif *notif = (void *)pkt->data; + u32 activity = le32_to_cpu(notif->activity); + bool started = le32_to_cpu(notif->success) && + le32_to_cpu(notif->started); + struct iwl_mvm_rx_roc_iterator_data data = { + .activity = activity, + .end_activity = !started, + }; - if (le32_to_cpu(notif->success) && le32_to_cpu(notif->started) && - le32_to_cpu(notif->activity) == ROC_ACTIVITY_HOTSPOT) { + /* Clear vif roc_activity if done (set to ROC_NUM_ACTIVITIES) */ + ieee80211_iterate_active_interfaces_atomic(mvm->hw, + IEEE80211_IFACE_ITER_NORMAL, + iwl_mvm_rx_roc_iterator, + &data); + /* + * It is possible that the ROC was canceled + * but the notification was already fired. 
+ */ + if (!data.found) + return; + + if (started) { set_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status); ieee80211_ready_on_channel(mvm->hw); } else { @@ -724,6 +771,21 @@ static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm, "Couldn't send the SESSION_PROTECTION_CMD: %d\n", ret); } +static void iwl_mvm_roc_rm_cmd(struct iwl_mvm *mvm, u32 activity) +{ + struct iwl_roc_req roc_cmd = { + .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE), + .activity = cpu_to_le32(activity), + }; + int ret; + + lockdep_assert_held(&mvm->mutex); + ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(MAC_CONF_GROUP, ROC_CMD), 0, + sizeof(roc_cmd), &roc_cmd); + if (ret) + IWL_ERR(mvm, "Couldn't send the ROC_CMD: %d\n", ret); +} + static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm, struct iwl_mvm_time_event_data *te_data, u32 *uid) @@ -733,6 +795,9 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm, struct iwl_mvm_vif *mvmvif; enum nl80211_iftype iftype; s8 link_id; + bool p2p_aux = iwl_mvm_has_p2p_over_aux(mvm); + u8 roc_ver = iwl_fw_lookup_cmd_ver(mvm->fw, + WIDE_ID(MAC_CONF_GROUP, ROC_CMD), 0); if (!vif) return false; @@ -757,14 +822,22 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm, iwl_mvm_te_clear_data(mvm, te_data); spin_unlock_bh(&mvm->time_event_lock); - /* When session protection is used, the te_data->id field - * is reused to save session protection's configuration. - * For AUX ROC, HOT_SPOT_CMD is used and the te_data->id field is set - * to HOT_SPOT_CMD. - */ - if (fw_has_capa(&mvm->fw->ucode_capa, - IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD) && - id != HOT_SPOT_CMD) { + if ((p2p_aux && iftype == NL80211_IFTYPE_P2P_DEVICE) || + (roc_ver >= 3 && mvmvif->roc_activity == ROC_ACTIVITY_HOTSPOT)) { + if (mvmvif->roc_activity < ROC_NUM_ACTIVITIES) { + iwl_mvm_roc_rm_cmd(mvm, mvmvif->roc_activity); + mvmvif->roc_activity = ROC_NUM_ACTIVITIES; + iwl_mvm_roc_finished(mvm); + } + return false; + } else if (fw_has_capa(&mvm->fw->ucode_capa, + IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD) && + id != HOT_SPOT_CMD) { + /* When session protection is used, the te_data->id field + * is reused to save session protection's configuration. + * For AUX ROC, HOT_SPOT_CMD is used and the te_data->id + * field is set to HOT_SPOT_CMD. + */ if (mvmvif && id < SESSION_PROTECT_CONF_MAX_ID) { /* Session protection is still ongoing. 
Cancel it */ iwl_mvm_cancel_session_protection(mvm, vif, id, @@ -965,7 +1038,7 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm, if (WARN_ON(mvmvif->time_event_data.id != le32_to_cpu(notif->conf_id))) goto out_unlock; - set_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status); + set_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status); ieee80211_ready_on_channel(mvm->hw); /* Start TE */ } @@ -984,12 +1057,21 @@ void iwl_mvm_roc_duration_and_delay(struct ieee80211_vif *vif, u32 *duration_tu, u32 *delay) { - u32 dtim_interval = vif->bss_conf.dtim_period * - vif->bss_conf.beacon_int; + struct ieee80211_bss_conf *link_conf; + unsigned int link_id; + u32 dtim_interval = 0; *delay = AUX_ROC_MIN_DELAY; *duration_tu = MSEC_TO_TU(duration_ms); + rcu_read_lock(); + for_each_vif_active_link(vif, link_conf, link_id) { + dtim_interval = + max_t(u32, dtim_interval, + link_conf->dtim_period * link_conf->beacon_int); + } + rcu_read_unlock(); + /* * If we are associated we want the delay time to be at least one * dtim interval so that the FW can wait until after the DTIM and @@ -998,8 +1080,10 @@ void iwl_mvm_roc_duration_and_delay(struct ieee80211_vif *vif, * Since we want to use almost a whole dtim interval we would also * like the delay to be for 2-3 dtim intervals, in case there are * other time events with higher priority. + * dtim_interval should never be 0, it can be 1 if we don't know it + * (we haven't heard any beacon yet). */ - if (vif->cfg.assoc) { + if (vif->cfg.assoc && !WARN_ON(!dtim_interval)) { *delay = min_t(u32, dtim_interval * 3, AUX_ROC_MAX_DELAY); /* We cannot remain off-channel longer than the DTIM interval */ if (dtim_interval <= *duration_tu) { @@ -1014,7 +1098,7 @@ void iwl_mvm_roc_duration_and_delay(struct ieee80211_vif *vif, int iwl_mvm_roc_add_cmd(struct iwl_mvm *mvm, struct ieee80211_channel *channel, struct ieee80211_vif *vif, - int duration, u32 activity) + int duration, enum iwl_roc_activity activity) { int res; u32 duration_tu, delay; @@ -1023,9 +1107,13 @@ int iwl_mvm_roc_add_cmd(struct iwl_mvm *mvm, .activity = cpu_to_le32(activity), .sta_id = cpu_to_le32(mvm->aux_sta.sta_id), }; + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); lockdep_assert_held(&mvm->mutex); + if (WARN_ON(mvmvif->roc_activity != ROC_NUM_ACTIVITIES)) + return -EBUSY; + /* Set the channel info data */ iwl_mvm_set_chan_info(mvm, &roc_req.channel_info, channel->hw_value, @@ -1041,14 +1129,16 @@ int iwl_mvm_roc_add_cmd(struct iwl_mvm *mvm, "\t(requested = %ums, max_delay = %ums)\n", duration, delay); IWL_DEBUG_TE(mvm, - "Requesting to remain on channel %u for %utu\n", - channel->hw_value, duration_tu); + "Requesting to remain on channel %u for %utu. 
activity %u\n", + channel->hw_value, duration_tu, activity); /* Set the node address */ memcpy(roc_req.node_addr, vif->addr, ETH_ALEN); res = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(MAC_CONF_GROUP, ROC_CMD), 0, sizeof(roc_req), &roc_req); + if (!res) + mvmvif->roc_activity = activity; return res; } @@ -1191,62 +1281,40 @@ void iwl_mvm_cleanup_roc_te(struct iwl_mvm *mvm) __iwl_mvm_remove_time_event(mvm, te_data, &uid); } -static void iwl_mvm_roc_rm_cmd(struct iwl_mvm *mvm, u32 activity) -{ - int ret; - struct iwl_roc_req roc_cmd = { - .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE), - .activity = cpu_to_le32(activity), - }; - - lockdep_assert_held(&mvm->mutex); - ret = iwl_mvm_send_cmd_pdu(mvm, - WIDE_ID(MAC_CONF_GROUP, ROC_CMD), - 0, sizeof(roc_cmd), &roc_cmd); - WARN_ON(ret); -} - -static void iwl_mvm_roc_station_remove(struct iwl_mvm *mvm, - struct iwl_mvm_vif *mvmvif) -{ - u32 cmd_id = WIDE_ID(MAC_CONF_GROUP, ROC_CMD); - u8 fw_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, - IWL_FW_CMD_VER_UNKNOWN); - - if (fw_ver == IWL_FW_CMD_VER_UNKNOWN) - iwl_mvm_remove_aux_roc_te(mvm, mvmvif, - &mvmvif->hs_time_event_data); - else if (fw_ver == 3) - iwl_mvm_roc_rm_cmd(mvm, ROC_ACTIVITY_HOTSPOT); - else - IWL_ERR(mvm, "ROC command version %d mismatch!\n", fw_ver); -} - void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif) { - struct iwl_mvm_vif *mvmvif; + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); struct iwl_mvm_time_event_data *te_data; + bool p2p_aux = iwl_mvm_has_p2p_over_aux(mvm); + u8 roc_ver = iwl_fw_lookup_cmd_ver(mvm->fw, + WIDE_ID(MAC_CONF_GROUP, ROC_CMD), 0); + int iftype = vif->type; mutex_lock(&mvm->mutex); - if (fw_has_capa(&mvm->fw->ucode_capa, - IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) { - mvmvif = iwl_mvm_vif_from_mac80211(vif); + if (p2p_aux || (roc_ver >= 3 && iftype != NL80211_IFTYPE_P2P_DEVICE)) { + if (mvmvif->roc_activity < ROC_NUM_ACTIVITIES) { + iwl_mvm_roc_rm_cmd(mvm, mvmvif->roc_activity); + mvmvif->roc_activity = ROC_NUM_ACTIVITIES; + } + goto cleanup_roc; + } else if (fw_has_capa(&mvm->fw->ucode_capa, + IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) { te_data = &mvmvif->time_event_data; - if (vif->type == NL80211_IFTYPE_P2P_DEVICE) { + if (iftype == NL80211_IFTYPE_P2P_DEVICE) { if (te_data->id >= SESSION_PROTECT_CONF_MAX_ID) { IWL_DEBUG_TE(mvm, "No remain on channel event\n"); mutex_unlock(&mvm->mutex); return; } - iwl_mvm_cancel_session_protection(mvm, vif, te_data->id, te_data->link_id); } else { - iwl_mvm_roc_station_remove(mvm, mvmvif); + iwl_mvm_remove_aux_roc_te(mvm, mvmvif, + &mvmvif->hs_time_event_data); } goto cleanup_roc; } @@ -1259,8 +1327,8 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif) } mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); - - if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) + iftype = te_data->vif->type; + if (iftype == NL80211_IFTYPE_P2P_DEVICE) iwl_mvm_remove_time_event(mvm, mvmvif, te_data); else iwl_mvm_remove_aux_roc_te(mvm, mvmvif, te_data); @@ -1271,9 +1339,10 @@ cleanup_roc: * (so the status bit isn't set) set it here so iwl_mvm_cleanup_roc will * cleanup things properly */ - set_bit(vif->type == NL80211_IFTYPE_P2P_DEVICE ? 
- IWL_MVM_STATUS_ROC_RUNNING : IWL_MVM_STATUS_ROC_AUX_RUNNING, - &mvm->status); + if (p2p_aux || iftype != NL80211_IFTYPE_P2P_DEVICE) + set_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status); + else + set_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status); /* Mutex is released inside this function */ iwl_mvm_cleanup_roc(mvm); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c index f8b08f98daa0..ed0796aff722 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2012-2014, 2019-2022 Intel Corporation + * Copyright (C) 2012-2014, 2019-2022, 2024 Intel Corporation * Copyright (C) 2013-2014 Intel Mobile Communications GmbH * Copyright (C) 2015-2016 Intel Deutschland GmbH */ @@ -299,7 +299,7 @@ static void check_exit_ctkill(struct work_struct *work) ret = iwl_mvm_get_temp(mvm, &temp); - __iwl_mvm_mac_stop(mvm); + __iwl_mvm_mac_stop(mvm, false); if (ret) goto reschedule; @@ -618,48 +618,35 @@ static int iwl_mvm_tzone_get_temp(struct thermal_zone_device *device, int ret; int temp; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (!iwl_mvm_firmware_running(mvm) || - mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) { - ret = -ENODATA; - goto out; - } + mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) + return -ENODATA; ret = iwl_mvm_get_temp(mvm, &temp); if (ret) - goto out; + return ret; *temperature = temp * 1000; - -out: - mutex_unlock(&mvm->mutex); - return ret; + return 0; } static int iwl_mvm_tzone_set_trip_temp(struct thermal_zone_device *device, const struct thermal_trip *trip, int temp) { struct iwl_mvm *mvm = thermal_zone_device_priv(device); - int ret; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (!iwl_mvm_firmware_running(mvm) || - mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) { - ret = -EIO; - goto out; - } + mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) + return -EIO; - if ((temp / 1000) > S16_MAX) { - ret = -EINVAL; - goto out; - } + if ((temp / 1000) > S16_MAX) + return -EINVAL; - ret = iwl_mvm_send_temp_report_ths_cmd(mvm); -out: - mutex_unlock(&mvm->mutex); - return ret; + return iwl_mvm_send_temp_report_ths_cmd(mvm); } static struct thermal_zone_device_ops tzone_ops = { @@ -733,27 +720,18 @@ static int iwl_mvm_tcool_set_cur_state(struct thermal_cooling_device *cdev, unsigned long new_state) { struct iwl_mvm *mvm = (struct iwl_mvm *)(cdev->devdata); - int ret; - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (!iwl_mvm_firmware_running(mvm) || - mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) { - ret = -EIO; - goto unlock; - } + mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) + return -EIO; - if (new_state >= ARRAY_SIZE(iwl_mvm_cdev_budgets)) { - ret = -EINVAL; - goto unlock; - } - - ret = iwl_mvm_ctdp_command(mvm, CTDP_CMD_OPERATION_START, - new_state); + if (new_state >= ARRAY_SIZE(iwl_mvm_cdev_budgets)) + return -EINVAL; -unlock: - mutex_unlock(&mvm->mutex); - return ret; + return iwl_mvm_ctdp_command(mvm, CTDP_CMD_OPERATION_START, + new_state); } static const struct thermal_cooling_device_ops tcooling_ops = { diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c index 1d695ece93e9..7ff5ea5e7aca 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c @@ -12,7 +12,7 @@ #include <net/ipv6.h> #include "iwl-trans.h" -#include "iwl-eeprom-parse.h" +#include "iwl-nvm-utils.h" #include "mvm.h" #include "sta.h" #include 
"time-sync.h" @@ -802,10 +802,30 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb) if (info.control.vif) { struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(info.control.vif); + bool p2p_aux = iwl_mvm_has_p2p_over_aux(mvm); - if (info.control.vif->type == NL80211_IFTYPE_P2P_DEVICE || - info.control.vif->type == NL80211_IFTYPE_AP || - info.control.vif->type == NL80211_IFTYPE_ADHOC) { + if ((info.control.vif->type == NL80211_IFTYPE_P2P_DEVICE && + p2p_aux) || + (info.control.vif->type == NL80211_IFTYPE_STATION && + offchannel)) { + /* + * IWL_MVM_OFFCHANNEL_QUEUE is used for ROC packets + * that can be used in 2 different types of vifs, P2P + * Device and STATION. + * P2P Device uses the offchannel queue. + * STATION (HS2.0) uses the auxiliary context of the FW, + * and hence needs to be sent on the aux queue. + * If P2P_DEV_OVER_AUX is supported (p2p_aux = true) + * also P2P Device uses the aux queue. + */ + sta_id = mvm->aux_sta.sta_id; + queue = mvm->aux_queue; + if (WARN_ON(queue == IWL_MVM_INVALID_QUEUE)) + return -1; + } else if (info.control.vif->type == + NL80211_IFTYPE_P2P_DEVICE || + info.control.vif->type == NL80211_IFTYPE_AP || + info.control.vif->type == NL80211_IFTYPE_ADHOC) { u32 link_id = u32_get_bits(info.control.flags, IEEE80211_TX_CTRL_MLO_LINK); struct iwl_mvm_vif_link_info *link; @@ -831,18 +851,6 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb) } else if (info.control.vif->type == NL80211_IFTYPE_MONITOR) { queue = mvm->snif_queue; sta_id = mvm->snif_sta.sta_id; - } else if (info.control.vif->type == NL80211_IFTYPE_STATION && - offchannel) { - /* - * IWL_MVM_OFFCHANNEL_QUEUE is used for ROC packets - * that can be used in 2 different types of vifs, P2P & - * STATION. - * P2P uses the offchannel queue. - * STATION (HS2.0) uses the auxiliary context of the FW, - * and hence needs to be sent on the aux queue. 
- */ - sta_id = mvm->aux_sta.sta_id; - queue = mvm->aux_queue; } } diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c index 47283a358ffd..0e5fa8374103 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c @@ -892,7 +892,7 @@ static void iwl_mvm_tcm_iter(void *_data, u8 *mac, struct ieee80211_vif *vif) static void iwl_mvm_tcm_results(struct iwl_mvm *mvm) { - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); ieee80211_iterate_active_interfaces( mvm->hw, IEEE80211_IFACE_ITER_NORMAL, @@ -900,8 +900,6 @@ static void iwl_mvm_tcm_results(struct iwl_mvm *mvm) if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_UMAC_SCAN)) iwl_mvm_config_scan(mvm); - - mutex_unlock(&mvm->mutex); } static void iwl_mvm_tcm_uapsd_nonagg_detected_wk(struct work_struct *wk) @@ -1130,10 +1128,9 @@ void iwl_mvm_recalc_tcm(struct iwl_mvm *mvm) spin_unlock(&mvm->tcm.lock); if (handle_uapsd && iwl_mvm_has_new_rx_api(mvm)) { - mutex_lock(&mvm->mutex); + guard(mvm)(mvm); if (iwl_mvm_request_statistics(mvm, true)) handle_uapsd = false; - mutex_unlock(&mvm->mutex); } spin_lock(&mvm->tcm.lock); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c index ebf11f276b20..e63efbf809f0 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c @@ -216,7 +216,7 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans, ctxt_info_gen3->cr_tail_idx_arr_base_addr = cpu_to_le64(trans_pcie->prph_info_dma_addr + 3 * PAGE_SIZE / 4); ctxt_info_gen3->mtr_base_addr = - cpu_to_le64(trans->txqs.txq[trans->txqs.cmd.q_id]->dma_addr); + cpu_to_le64(trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]->dma_addr); ctxt_info_gen3->mcr_base_addr = cpu_to_le64(trans_pcie->rxq->used_bd_dma); ctxt_info_gen3->mtr_size = diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c index 0fa92704cd14..344e4d5a1c6e 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* * Copyright (C) 2017 Intel Deutschland GmbH - * Copyright (C) 2018-2023 Intel Corporation + * Copyright (C) 2018-2024 Intel Corporation */ #include "iwl-trans.h" #include "iwl-fh.h" @@ -218,7 +218,7 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans, /* initialize TX command queue */ ctxt_info->hcmd_cfg.cmd_queue_addr = - cpu_to_le64(trans->txqs.txq[trans->txqs.cmd.q_id]->dma_addr); + cpu_to_le64(trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]->dma_addr); ctxt_info->hcmd_cfg.cmd_queue_size = TFD_QUEUE_CB_SIZE(IWL_CMD_QUEUE_SIZE); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c index fed2754be680..9ad43464b702 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -503,7 +503,37 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = { {IWL_PCI_DEVICE(0x2727, PCI_ANY_ID, iwl_bz_trans_cfg)}, {IWL_PCI_DEVICE(0x272D, PCI_ANY_ID, iwl_bz_trans_cfg)}, {IWL_PCI_DEVICE(0x272b, PCI_ANY_ID, iwl_bz_trans_cfg)}, - {IWL_PCI_DEVICE(0xA840, PCI_ANY_ID, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0000, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0090, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0094, 
iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0098, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x009C, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x00C0, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x00C4, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x00E0, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x00E4, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x00E8, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x00EC, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0100, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0110, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0114, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0118, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x011C, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0310, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0314, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0510, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x0A10, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x1671, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x1672, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x1771, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x1772, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x1791, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x1792, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x4090, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x40C4, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x40E0, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x4110, iwl_bz_trans_cfg)}, + {IWL_PCI_DEVICE(0xA840, 0x4314, iwl_bz_trans_cfg)}, {IWL_PCI_DEVICE(0x7740, PCI_ANY_ID, iwl_bz_trans_cfg)}, {IWL_PCI_DEVICE(0x4D40, PCI_ANY_ID, iwl_bz_trans_cfg)}, @@ -997,32 +1027,6 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = { IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_CDB, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_name), -/* Bz */ -/* FIXME: need to change the naming according to the actual CRF */ - _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY, - IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, - iwl_cfg_bz, iwl_fm_name), - - _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY, - IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, - iwl_cfg_bz, iwl_fm_name), - -/* Ga (Gl) */ - _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY, - IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_320, IWL_CFG_ANY, IWL_CFG_NO_CDB, - iwl_cfg_gl, iwl_gl_name), - _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY, - IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY, - IWL_CFG_NO_320, IWL_CFG_ANY, IWL_CFG_NO_CDB, - iwl_cfg_gl, iwl_mtp_name), - /* SoF with JF2 */ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, @@ -1103,6 +1107,32 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = { IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, iwlax210_2ax_cfg_so_jf_b0, iwl9462_name), +/* Bz */ +/* FIXME: need to change the naming according to the actual CRF */ + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY, + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, + iwl_cfg_bz, iwl_fm_name), + + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY, + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, + iwl_cfg_bz, iwl_fm_name), + +/* Ga (Gl) */ + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY, + 
IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_320, IWL_CFG_ANY, IWL_CFG_NO_CDB, + iwl_cfg_gl, iwl_gl_name), + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY, + IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY, + IWL_CFG_NO_320, IWL_CFG_ANY, IWL_CFG_NO_CDB, + iwl_cfg_gl, iwl_mtp_name), + /* Sc */ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_SC, IWL_CFG_ANY, @@ -1476,6 +1506,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (!iwl_trans->name) iwl_trans->name = iwl_trans->cfg->name; + IWL_INFO(iwl_trans, "Detected %s\n", iwl_trans->name); + if (iwl_trans->trans_cfg->mq_rx_supported) { if (WARN_ON(!iwl_trans->cfg->num_rbds)) { ret = -EINVAL; diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h index a7eebe400b5b..b59de4f80b4b 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2003-2015, 2018-2023 Intel Corporation + * Copyright (C) 2003-2015, 2018-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -22,7 +22,6 @@ #include "iwl-io.h" #include "iwl-op-mode.h" #include "iwl-drv.h" -#include "queue/tx.h" #include "iwl-context-info.h" /* @@ -273,7 +272,7 @@ enum iwl_pcie_fw_reset_state { }; /** - * enum wl_pcie_imr_status - imr dma transfer state + * enum iwl_pcie_imr_status - imr dma transfer state * @IMR_D2S_IDLE: default value of the dma transfer * @IMR_D2S_REQUESTED: dma transfer requested * @IMR_D2S_COMPLETED: dma transfer completed @@ -287,6 +286,58 @@ enum iwl_pcie_imr_status { }; /** + * struct iwl_pcie_txqs - TX queues data + * + * @bc_table_dword: true if the BC table expects DWORD (as opposed to bytes) + * @page_offs: offset from skb->cb to mac header page pointer + * @dev_cmd_offs: offset from skb->cb to iwl_device_tx_cmd pointer + * @queue_used: bit mask of used queues + * @queue_stopped: bit mask of stopped queues + * @txq: array of TXQ data structures representing the TXQs + * @scd_bc_tbls: gen1 pointer to the byte count table of the scheduler + * @queue_alloc_cmd_ver: queue allocation command version + * @bc_pool: bytecount DMA allocations pool + * @bc_tbl_size: bytecount table size + * @tso_hdr_page: page allocated (per CPU) for A-MSDU headers when doing TSO + * (and similar usage) + * @cmd: command queue data + * @cmd.fifo: FIFO number + * @cmd.q_id: queue ID + * @cmd.wdg_timeout: watchdog timeout + * @tfd: TFD data + * @tfd.max_tbs: max number of buffers per TFD + * @tfd.size: TFD size + * @tfd.addr_size: TFD/TB address size + */ +struct iwl_pcie_txqs { + unsigned long queue_used[BITS_TO_LONGS(IWL_MAX_TVQM_QUEUES)]; + unsigned long queue_stopped[BITS_TO_LONGS(IWL_MAX_TVQM_QUEUES)]; + struct iwl_txq *txq[IWL_MAX_TVQM_QUEUES]; + struct dma_pool *bc_pool; + size_t bc_tbl_size; + bool bc_table_dword; + u8 page_offs; + u8 dev_cmd_offs; + struct iwl_tso_hdr_page __percpu *tso_hdr_page; + + struct { + u8 fifo; + u8 q_id; + unsigned int wdg_timeout; + } cmd; + + struct { + u8 max_tbs; + u16 size; + u8 addr_size; + } tfd; + + struct iwl_dma_ptr scd_bc_tbls; + + u8 queue_alloc_cmd_ver; +}; + +/** * struct iwl_trans_pcie - PCIe transport specific data * @rxq: all the RX queue data * @rx_pool: initial pool of iwl_rx_mem_buffer for all the queues @@ -367,6 +418,7 @@ enum iwl_pcie_imr_status { * @is_down: 
indicates the NIC is down * @isr_stats: interrupt statistics * @napi_dev: (fake) netdev for NAPI registration + * @txqs: transport tx queues data. */ struct iwl_trans_pcie { struct iwl_rxq *rxq; @@ -464,6 +516,8 @@ struct iwl_trans_pcie { enum iwl_pcie_imr_status imr_status; wait_queue_head_t imr_waitq; char rf_name[32]; + + struct iwl_pcie_txqs txqs; }; static inline struct iwl_trans_pcie * @@ -538,6 +592,33 @@ void iwl_pcie_disable_ict(struct iwl_trans *trans); /***************************************************** * TX / HCMD ******************************************************/ +/* We need 2 entries for the TX command and header, and another one might + * be needed for potential data in the SKB's head. The remaining ones can + * be used for frags. + */ +#define IWL_TRANS_PCIE_MAX_FRAGS(trans_pcie) ((trans_pcie)->txqs.tfd.max_tbs - 3) + +struct iwl_tso_hdr_page { + struct page *page; + u8 *pos; +}; + +/* + * Note that we put this struct *last* in the page. By doing that, we ensure + * that no TB referencing this page can trigger the 32-bit boundary hardware + * bug. + */ +struct iwl_tso_page_info { + dma_addr_t dma_addr; + struct page *next; + refcount_t use_count; +}; + +#define IWL_TSO_PAGE_DATA_SIZE (PAGE_SIZE - sizeof(struct iwl_tso_page_info)) +#define IWL_TSO_PAGE_INFO(addr) \ + ((struct iwl_tso_page_info *)(((unsigned long)addr & PAGE_MASK) + \ + IWL_TSO_PAGE_DATA_SIZE)) + int iwl_pcie_tx_init(struct iwl_trans *trans); void iwl_pcie_tx_start(struct iwl_trans *trans, u32 scd_base_addr); int iwl_pcie_tx_stop(struct iwl_trans *trans); @@ -552,10 +633,170 @@ void iwl_trans_pcie_txq_set_shared_mode(struct iwl_trans *trans, u32 txq_id, int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_device_tx_cmd *dev_cmd, int txq_id); void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans); -int iwl_trans_pcie_send_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); void iwl_pcie_hcmd_complete(struct iwl_trans *trans, struct iwl_rx_cmd_buffer *rxb); void iwl_trans_pcie_tx_reset(struct iwl_trans *trans); +int iwl_pcie_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, + int slots_num, bool cmd_queue); + +dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr); +struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_cmd_meta *cmd_meta, + u8 **hdr, unsigned int hdr_room); + +void iwl_pcie_free_tso_pages(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_cmd_meta *cmd_meta); + +static inline dma_addr_t iwl_pcie_get_tso_page_phys(void *addr) +{ + dma_addr_t res; + + res = IWL_TSO_PAGE_INFO(addr)->dma_addr; + res += (unsigned long)addr & ~PAGE_MASK; + + return res; +} + +static inline dma_addr_t +iwl_txq_get_first_tb_dma(struct iwl_txq *txq, int idx) +{ + return txq->first_tb_dma + + sizeof(struct iwl_pcie_first_tb_buf) * idx; +} + +static inline u16 iwl_txq_get_cmd_index(const struct iwl_txq *q, u32 index) +{ + return index & (q->n_window - 1); +} + +static inline void *iwl_txq_get_tfd(struct iwl_trans *trans, + struct iwl_txq *txq, int idx) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + + if (trans->trans_cfg->gen2) + idx = iwl_txq_get_cmd_index(txq, idx); + + return (u8 *)txq->tfds + trans_pcie->txqs.tfd.size * idx; +} + +/* + * We need this inline in case dma_addr_t is only 32-bits - since the + * hardware is always 64-bit, the issue can still occur in that case, + * so use u64 for 'phys' here to force the addition in 64-bit. 
+ */ +static inline bool iwl_txq_crosses_4g_boundary(u64 phys, u16 len) +{ + return upper_32_bits(phys) != upper_32_bits(phys + len); +} + +int iwl_txq_space(struct iwl_trans *trans, const struct iwl_txq *q); + +static inline void iwl_txq_stop(struct iwl_trans *trans, struct iwl_txq *txq) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + + if (!test_and_set_bit(txq->id, trans_pcie->txqs.queue_stopped)) { + iwl_op_mode_queue_full(trans->op_mode, txq->id); + IWL_DEBUG_TX_QUEUES(trans, "Stop hwq %d\n", txq->id); + } else { + IWL_DEBUG_TX_QUEUES(trans, "hwq %d already stopped\n", + txq->id); + } +} + +/** + * iwl_txq_inc_wrap - increment queue index, wrap back to beginning + * @trans: the transport (for configuration data) + * @index: current index + */ +static inline int iwl_txq_inc_wrap(struct iwl_trans *trans, int index) +{ + return ++index & + (trans->trans_cfg->base_params->max_tfd_queue_size - 1); +} + +/** + * iwl_txq_dec_wrap - decrement queue index, wrap back to end + * @trans: the transport (for configuration data) + * @index: current index + */ +static inline int iwl_txq_dec_wrap(struct iwl_trans *trans, int index) +{ + return --index & + (trans->trans_cfg->base_params->max_tfd_queue_size - 1); +} + +void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq); + +static inline void +iwl_trans_pcie_wake_queue(struct iwl_trans *trans, struct iwl_txq *txq) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + + if (test_and_clear_bit(txq->id, trans_pcie->txqs.queue_stopped)) { + IWL_DEBUG_TX_QUEUES(trans, "Wake hwq %d\n", txq->id); + iwl_op_mode_queue_not_full(trans->op_mode, txq->id); + } +} + +int iwl_txq_gen2_set_tb(struct iwl_trans *trans, + struct iwl_tfh_tfd *tfd, dma_addr_t addr, + u16 len); + +static inline void iwl_txq_set_tfd_invalid_gen2(struct iwl_trans *trans, + struct iwl_tfh_tfd *tfd) +{ + tfd->num_tbs = 0; + + iwl_txq_gen2_set_tb(trans, tfd, trans->invalid_tx_cmd.dma, + trans->invalid_tx_cmd.size); +} + +void iwl_txq_gen2_tfd_unmap(struct iwl_trans *trans, + struct iwl_cmd_meta *meta, + struct iwl_tfh_tfd *tfd); + +int iwl_txq_dyn_alloc(struct iwl_trans *trans, u32 flags, + u32 sta_mask, u8 tid, + int size, unsigned int timeout); + +int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_device_tx_cmd *dev_cmd, int txq_id); + +void iwl_txq_dyn_free(struct iwl_trans *trans, int queue); +void iwl_txq_gen2_tx_free(struct iwl_trans *trans); +int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, + int slots_num, bool cmd_queue); +int iwl_txq_gen2_init(struct iwl_trans *trans, int txq_id, + int queue_size); + +static inline u16 iwl_txq_gen1_tfd_tb_get_len(struct iwl_trans *trans, + void *_tfd, u8 idx) +{ + struct iwl_tfd *tfd; + struct iwl_tfd_tb *tb; + + if (trans->trans_cfg->gen2) { + struct iwl_tfh_tfd *tfh_tfd = _tfd; + struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx]; + + return le16_to_cpu(tfh_tb->tb_len); + } + + tfd = (struct iwl_tfd *)_tfd; + tb = &tfd->tbs[idx]; + + return le16_to_cpu(tb->hi_n_len) >> 4; +} + +void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn, + struct sk_buff_head *skbs, bool is_flush); +void iwl_pcie_set_q_ptrs(struct iwl_trans *trans, int txq_id, int ptr); +void iwl_pcie_freeze_txq_timer(struct iwl_trans *trans, + unsigned long txqs, bool freeze); +int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, int txq_idx); +int iwl_trans_pcie_wait_txqs_empty(struct iwl_trans *trans, u32 txq_bm); 
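[Editor's note] Two ideas from the declarations above can be exercised in isolation: IWL_TSO_PAGE_INFO() masks any address in a page down to the page start and steps over the data area to reach the metadata stored in the page's tail, and the 4 GiB-boundary test must do its addition in 64 bits so a carry out of the low word is not lost. A standalone userspace sketch with stand-in names and a fixed 4 KiB page — assumptions for illustration, not driver code:

#include <stdint.h>
#include <stdio.h>

#define PG_SIZE 4096UL
#define PG_MASK (~(PG_SIZE - 1))

struct page_tail_info {		/* stand-in for struct iwl_tso_page_info */
	uint64_t dma_addr;
	uint32_t use_count;
};

#define DATA_SIZE (PG_SIZE - sizeof(struct page_tail_info))
#define TAIL_INFO(addr) \
	((struct page_tail_info *)(((unsigned long)(addr) & PG_MASK) + DATA_SIZE))

/* add in u64 so a 32-bit dma_addr_t cannot hide the carry past 4 GiB */
static int crosses_4g(uint64_t phys, uint16_t len)
{
	return (phys >> 32) != ((phys + len) >> 32);
}

int main(void)
{
	_Alignas(4096) static unsigned char page[4096];

	TAIL_INFO(page + 100)->use_count = 1;	/* any address in the page works */
	printf("tail info at offset %zu of %lu\n",
	       (size_t)((unsigned char *)TAIL_INFO(page) - page), PG_SIZE);
	printf("0xfffff000 + 0x1800 crosses 4G: %d\n",
	       crosses_4g(0xfffff000ULL, 0x1800));
	return 0;
}

Placing the metadata last in the page, as the comment above explains, guarantees that a transfer buffer referencing only the data area can never straddle the boundary that triggers the hardware bug.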
/***************************************************** * Error handling @@ -822,12 +1063,51 @@ void iwl_trans_pcie_dump_regs(struct iwl_trans *trans); #ifdef CONFIG_IWLWIFI_DEBUGFS void iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans); +void iwl_trans_pcie_debugfs_cleanup(struct iwl_trans *trans); #else static inline void iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans) { } #endif void iwl_pcie_rx_allocator_work(struct work_struct *data); +/* common trans ops for all generations transports */ +void iwl_trans_pcie_configure(struct iwl_trans *trans, + const struct iwl_trans_config *trans_cfg); +int iwl_trans_pcie_start_hw(struct iwl_trans *trans); +void iwl_trans_pcie_op_mode_leave(struct iwl_trans *trans); +void iwl_trans_pcie_write8(struct iwl_trans *trans, u32 ofs, u8 val); +void iwl_trans_pcie_write32(struct iwl_trans *trans, u32 ofs, u32 val); +u32 iwl_trans_pcie_read32(struct iwl_trans *trans, u32 ofs); +u32 iwl_trans_pcie_read_prph(struct iwl_trans *trans, u32 reg); +void iwl_trans_pcie_write_prph(struct iwl_trans *trans, u32 addr, u32 val); +int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr, + void *buf, int dwords); +int iwl_trans_pcie_write_mem(struct iwl_trans *trans, u32 addr, + const void *buf, int dwords); +int iwl_trans_pcie_sw_reset(struct iwl_trans *trans, bool retake_ownership); +struct iwl_trans_dump_data * +iwl_trans_pcie_dump_data(struct iwl_trans *trans, u32 dump_mask, + const struct iwl_dump_sanitize_ops *sanitize_ops, + void *sanitize_ctx); +int iwl_trans_pcie_d3_resume(struct iwl_trans *trans, + enum iwl_d3_status *status, + bool test, bool reset); +int iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test, bool reset); +void iwl_trans_pci_interrupts(struct iwl_trans *trans, bool enable); +void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans); +void iwl_trans_pcie_set_bits_mask(struct iwl_trans *trans, u32 reg, + u32 mask, u32 value); +int iwl_trans_pcie_read_config32(struct iwl_trans *trans, u32 ofs, + u32 *val); +bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans); +void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans); + +/* transport gen 1 exported functions */ +void iwl_trans_pcie_fw_alive(struct iwl_trans *trans, u32 scd_addr); +int iwl_trans_pcie_start_fw(struct iwl_trans *trans, + const struct fw_img *fw, bool run_in_rfkill); +void iwl_trans_pcie_stop_device(struct iwl_trans *trans); + /* common functions that are used by gen2 transport */ int iwl_pcie_gen2_apm_init(struct iwl_trans *trans); void iwl_pcie_apm_config(struct iwl_trans *trans); @@ -849,7 +1129,7 @@ void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans, u8 max_power); /* transport gen 2 exported functions */ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans, const struct fw_img *fw, bool run_in_rfkill); -void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr); +void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans); int iwl_trans_pcie_gen2_send_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); void iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans); @@ -864,5 +1144,7 @@ void iwl_trans_pcie_copy_imr_fh(struct iwl_trans *trans, u32 dst_addr, u64 src_addr, u32 byte_cnt); int iwl_trans_pcie_copy_imr(struct iwl_trans *trans, u32 dst_addr, u64 src_addr, u32 byte_cnt); +int iwl_trans_pcie_rxq_dma_data(struct iwl_trans *trans, int queue, + struct iwl_trans_rxq_dma_data *data); #endif /* __iwl_trans_int_pcie_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 
b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c index 984d7bcd381f..afb88eab8174 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2003-2014, 2018-2023 Intel Corporation + * Copyright (C) 2003-2014, 2018-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -1301,7 +1301,7 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans, int i) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id]; + struct iwl_txq *txq = trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]; bool page_stolen = false; int max_len = trans_pcie->rx_buf_bytes; u32 offset = 0; @@ -1678,6 +1678,7 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id) */ static void iwl_pcie_irq_handle_error(struct iwl_trans *trans) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int i; /* W/A for WiFi/WiMAX coex and WiMAX own the RF */ @@ -1694,9 +1695,9 @@ static void iwl_pcie_irq_handle_error(struct iwl_trans *trans) } for (i = 0; i < trans->trans_cfg->base_params->num_of_queues; i++) { - if (!trans->txqs.txq[i]) + if (!trans_pcie->txqs.txq[i]) continue; - del_timer(&trans->txqs.txq[i]->stuck_timer); + del_timer(&trans_pcie->txqs.txq[i]->stuck_timer); } /* The STATUS_FW_ERROR bit is set in this function. This must happen diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c index a4a4772330cf..18dda89b7985 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* * Copyright (C) 2017 Intel Deutschland GmbH - * Copyright (C) 2018-2023 Intel Corporation + * Copyright (C) 2018-2024 Intel Corporation */ #include "iwl-trans.h" #include "iwl-prph.h" @@ -247,7 +247,7 @@ static int iwl_pcie_gen2_nic_init(struct iwl_trans *trans) return -ENOMEM; /* Allocate or reset and init all Tx and Command queues */ - if (iwl_txq_gen2_init(trans, trans->txqs.cmd.q_id, queue_size)) + if (iwl_txq_gen2_init(trans, trans_pcie->txqs.cmd.q_id, queue_size)) return -ENOMEM; /* enable shadow regs in HW */ @@ -339,16 +339,17 @@ static void iwl_pcie_get_rf_name(struct iwl_trans *trans) pos += scnprintf(buf + pos, buflen - pos, "\n"); } -void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr) +void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); iwl_pcie_reset_ict(trans); /* make sure all queue are not stopped/used */ - memset(trans->txqs.queue_stopped, 0, - sizeof(trans->txqs.queue_stopped)); - memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used)); + memset(trans_pcie->txqs.queue_stopped, 0, + sizeof(trans_pcie->txqs.queue_stopped)); + memset(trans_pcie->txqs.queue_used, 0, + sizeof(trans_pcie->txqs.queue_used)); /* now that we got alive we can free the fw image & the context info. 
* paging memory cannot be freed included since FW will still use it diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c index d5a887b3a4bb..719ddc4b72c5 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2007-2015, 2018-2023 Intel Corporation + * Copyright (C) 2007-2015, 2018-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -127,8 +127,7 @@ out: kfree(buf); } -static int iwl_trans_pcie_sw_reset(struct iwl_trans *trans, - bool retake_ownership) +int iwl_trans_pcie_sw_reset(struct iwl_trans *trans, bool retake_ownership) { /* Reset entire device - do controller reset (results in SHRD_HW_RST) */ if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) { @@ -1336,8 +1335,8 @@ void iwl_pcie_synchronize_irqs(struct iwl_trans *trans) } } -static int iwl_trans_pcie_start_fw(struct iwl_trans *trans, - const struct fw_img *fw, bool run_in_rfkill) +int iwl_trans_pcie_start_fw(struct iwl_trans *trans, + const struct fw_img *fw, bool run_in_rfkill) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); bool hw_rfkill; @@ -1423,7 +1422,7 @@ out: return ret; } -static void iwl_trans_pcie_fw_alive(struct iwl_trans *trans, u32 scd_addr) +void iwl_trans_pcie_fw_alive(struct iwl_trans *trans, u32 scd_addr) { iwl_pcie_reset_ict(trans); iwl_pcie_tx_start(trans, scd_addr); @@ -1458,7 +1457,7 @@ void iwl_trans_pcie_handle_stop_rfkill(struct iwl_trans *trans, iwl_trans_pcie_rf_kill(trans, hw_rfkill, false); } -static void iwl_trans_pcie_stop_device(struct iwl_trans *trans) +void iwl_trans_pcie_stop_device(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); bool was_in_rfkill; @@ -1505,9 +1504,17 @@ void iwl_pcie_d3_complete_suspend(struct iwl_trans *trans, iwl_pcie_synchronize_irqs(trans); - iwl_clear_bit(trans, CSR_GP_CNTRL, - CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ); - iwl_clear_bit(trans, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE); + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) { + iwl_clear_bit(trans, CSR_GP_CNTRL, + CSR_GP_CNTRL_REG_FLAG_BZ_MAC_ACCESS_REQ); + iwl_clear_bit(trans, CSR_GP_CNTRL, + CSR_GP_CNTRL_REG_FLAG_MAC_INIT); + } else { + iwl_clear_bit(trans, CSR_GP_CNTRL, + CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ); + iwl_clear_bit(trans, CSR_GP_CNTRL, + CSR_GP_CNTRL_REG_FLAG_INIT_DONE); + } if (reset) { /* @@ -1552,8 +1559,7 @@ static int iwl_pcie_d3_handshake(struct iwl_trans *trans, bool suspend) return 0; } -static int iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test, - bool reset) +int iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test, bool reset) { int ret; @@ -1571,9 +1577,9 @@ static int iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test, return 0; } -static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans, - enum iwl_d3_status *status, - bool test, bool reset) +int iwl_trans_pcie_d3_resume(struct iwl_trans *trans, + enum iwl_d3_status *status, + bool test, bool reset) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); u32 val; @@ -1586,8 +1592,12 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans, goto out; } - iwl_set_bit(trans, CSR_GP_CNTRL, - CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ); + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) + 
iwl_set_bit(trans, CSR_GP_CNTRL, + CSR_GP_CNTRL_REG_FLAG_BZ_MAC_ACCESS_REQ); + else + iwl_set_bit(trans, CSR_GP_CNTRL, + CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ); ret = iwl_finish_nic_init(trans); if (ret) @@ -1874,7 +1884,7 @@ static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans) return 0; } -static int iwl_trans_pcie_start_hw(struct iwl_trans *trans) +int iwl_trans_pcie_start_hw(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int ret; @@ -1886,7 +1896,7 @@ static int iwl_trans_pcie_start_hw(struct iwl_trans *trans) return ret; } -static void iwl_trans_pcie_op_mode_leave(struct iwl_trans *trans) +void iwl_trans_pcie_op_mode_leave(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -1906,17 +1916,17 @@ static void iwl_trans_pcie_op_mode_leave(struct iwl_trans *trans) iwl_pcie_synchronize_irqs(trans); } -static void iwl_trans_pcie_write8(struct iwl_trans *trans, u32 ofs, u8 val) +void iwl_trans_pcie_write8(struct iwl_trans *trans, u32 ofs, u8 val) { writeb(val, IWL_TRANS_GET_PCIE_TRANS(trans)->hw_base + ofs); } -static void iwl_trans_pcie_write32(struct iwl_trans *trans, u32 ofs, u32 val) +void iwl_trans_pcie_write32(struct iwl_trans *trans, u32 ofs, u32 val) { writel(val, IWL_TRANS_GET_PCIE_TRANS(trans)->hw_base + ofs); } -static u32 iwl_trans_pcie_read32(struct iwl_trans *trans, u32 ofs) +u32 iwl_trans_pcie_read32(struct iwl_trans *trans, u32 ofs) { return readl(IWL_TRANS_GET_PCIE_TRANS(trans)->hw_base + ofs); } @@ -1929,7 +1939,7 @@ static u32 iwl_trans_pcie_prph_msk(struct iwl_trans *trans) return 0x000FFFFF; } -static u32 iwl_trans_pcie_read_prph(struct iwl_trans *trans, u32 reg) +u32 iwl_trans_pcie_read_prph(struct iwl_trans *trans, u32 reg) { u32 mask = iwl_trans_pcie_prph_msk(trans); @@ -1938,8 +1948,7 @@ static u32 iwl_trans_pcie_read_prph(struct iwl_trans *trans, u32 reg) return iwl_trans_pcie_read32(trans, HBUS_TARG_PRPH_RDAT); } -static void iwl_trans_pcie_write_prph(struct iwl_trans *trans, u32 addr, - u32 val) +void iwl_trans_pcie_write_prph(struct iwl_trans *trans, u32 addr, u32 val) { u32 mask = iwl_trans_pcie_prph_msk(trans); @@ -1948,20 +1957,20 @@ static void iwl_trans_pcie_write_prph(struct iwl_trans *trans, u32 addr, iwl_trans_pcie_write32(trans, HBUS_TARG_PRPH_WDAT, val); } -static void iwl_trans_pcie_configure(struct iwl_trans *trans, - const struct iwl_trans_config *trans_cfg) +void iwl_trans_pcie_configure(struct iwl_trans *trans, + const struct iwl_trans_config *trans_cfg) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); /* free all first - we might be reconfigured for a different size */ iwl_pcie_free_rbs_pool(trans); - trans->txqs.cmd.q_id = trans_cfg->cmd_queue; - trans->txqs.cmd.fifo = trans_cfg->cmd_fifo; - trans->txqs.cmd.wdg_timeout = trans_cfg->cmd_q_wdg_timeout; - trans->txqs.page_offs = trans_cfg->cb_data_offs; - trans->txqs.dev_cmd_offs = trans_cfg->cb_data_offs + sizeof(void *); - trans->txqs.queue_alloc_cmd_ver = trans_cfg->queue_alloc_cmd_ver; + trans_pcie->txqs.cmd.q_id = trans_cfg->cmd_queue; + trans_pcie->txqs.cmd.fifo = trans_cfg->cmd_fifo; + trans_pcie->txqs.cmd.wdg_timeout = trans_cfg->cmd_q_wdg_timeout; + trans_pcie->txqs.page_offs = trans_cfg->cb_data_offs; + trans_pcie->txqs.dev_cmd_offs = trans_cfg->cb_data_offs + sizeof(void *); + trans_pcie->txqs.queue_alloc_cmd_ver = trans_cfg->queue_alloc_cmd_ver; if (WARN_ON(trans_cfg->n_no_reclaim_cmds > MAX_NO_RECLAIM_CMDS)) trans_pcie->n_no_reclaim_cmds = 0; @@ -1980,7 +1989,7 @@ 
static void iwl_trans_pcie_configure(struct iwl_trans *trans, if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) trans_pcie->supported_dma_mask = DMA_BIT_MASK(11); - trans->txqs.bc_table_dword = trans_cfg->bc_table_dword; + trans_pcie->txqs.bc_table_dword = trans_cfg->bc_table_dword; trans_pcie->scd_set_active = trans_cfg->scd_set_active; trans->command_groups = trans_cfg->command_groups; @@ -2079,15 +2088,20 @@ void iwl_trans_pcie_free(struct iwl_trans *trans) trans->dev); mutex_destroy(&trans_pcie->mutex); - iwl_trans_free(trans); -} -static void iwl_trans_pcie_set_pmi(struct iwl_trans *trans, bool state) -{ - if (state) - set_bit(STATUS_TPOWER_PMI, &trans->status); - else - clear_bit(STATUS_TPOWER_PMI, &trans->status); + if (trans_pcie->txqs.tso_hdr_page) { + for_each_possible_cpu(i) { + struct iwl_tso_hdr_page *p = + per_cpu_ptr(trans_pcie->txqs.tso_hdr_page, i); + + if (p && p->page) + __free_page(p->page); + } + + free_percpu(trans_pcie->txqs.tso_hdr_page); + } + + iwl_trans_free(trans); } struct iwl_trans_pcie_removal { @@ -2253,7 +2267,7 @@ out: return true; } -static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans) +bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans) { bool ret; @@ -2267,7 +2281,7 @@ static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans) return false; } -static void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans) +void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -2297,8 +2311,8 @@ out: spin_unlock_bh(&trans_pcie->reg_lock); } -static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr, - void *buf, int dwords) +int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr, + void *buf, int dwords) { #define IWL_MAX_HW_ERRS 5 unsigned int num_consec_hw_errors = 0; @@ -2347,8 +2361,8 @@ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr, return 0; } -static int iwl_trans_pcie_write_mem(struct iwl_trans *trans, u32 addr, - const void *buf, int dwords) +int iwl_trans_pcie_write_mem(struct iwl_trans *trans, u32 addr, + const void *buf, int dwords) { int offs, ret = 0; const u32 *vals = buf; @@ -2365,8 +2379,8 @@ static int iwl_trans_pcie_write_mem(struct iwl_trans *trans, u32 addr, return ret; } -static int iwl_trans_pcie_read_config32(struct iwl_trans *trans, u32 ofs, - u32 *val) +int iwl_trans_pcie_read_config32(struct iwl_trans *trans, u32 ofs, + u32 *val) { return pci_read_config_dword(IWL_TRANS_GET_PCIE_TRANS(trans)->pci_dev, ofs, val); @@ -2374,8 +2388,8 @@ static int iwl_trans_pcie_read_config32(struct iwl_trans *trans, u32 ofs, #define IWL_FLUSH_WAIT_MS 2000 -static int iwl_trans_pcie_rxq_dma_data(struct iwl_trans *trans, int queue, - struct iwl_trans_rxq_dma_data *data) +int iwl_trans_pcie_rxq_dma_data(struct iwl_trans *trans, int queue, + struct iwl_trans_rxq_dma_data *data) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -2390,8 +2404,9 @@ static int iwl_trans_pcie_rxq_dma_data(struct iwl_trans *trans, int queue, return 0; } -static int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, int txq_idx) +int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, int txq_idx) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_txq *txq; unsigned long now = jiffies; bool overflow_tx; @@ -2401,11 +2416,11 @@ static int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, int txq_idx) if (test_bit(STATUS_TRANS_DEAD, 
&trans->status)) return -ENODEV; - if (!test_bit(txq_idx, trans->txqs.queue_used)) + if (!test_bit(txq_idx, trans_pcie->txqs.queue_used)) return -EINVAL; IWL_DEBUG_TX_QUEUES(trans, "Emptying queue %d...\n", txq_idx); - txq = trans->txqs.txq[txq_idx]; + txq = trans_pcie->txqs.txq[txq_idx]; spin_lock_bh(&txq->lock); overflow_tx = txq->overflow_tx || @@ -2451,8 +2466,9 @@ static int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, int txq_idx) return 0; } -static int iwl_trans_pcie_wait_txqs_empty(struct iwl_trans *trans, u32 txq_bm) +int iwl_trans_pcie_wait_txqs_empty(struct iwl_trans *trans, u32 txq_bm) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int cnt; int ret = 0; @@ -2461,9 +2477,9 @@ static int iwl_trans_pcie_wait_txqs_empty(struct iwl_trans *trans, u32 txq_bm) cnt < trans->trans_cfg->base_params->num_of_queues; cnt++) { - if (cnt == trans->txqs.cmd.q_id) + if (cnt == trans_pcie->txqs.cmd.q_id) continue; - if (!test_bit(cnt, trans->txqs.queue_used)) + if (!test_bit(cnt, trans_pcie->txqs.queue_used)) continue; if (!(BIT(cnt) & txq_bm)) continue; @@ -2476,8 +2492,8 @@ static int iwl_trans_pcie_wait_txqs_empty(struct iwl_trans *trans, u32 txq_bm) return ret; } -static void iwl_trans_pcie_set_bits_mask(struct iwl_trans *trans, u32 reg, - u32 mask, u32 value) +void iwl_trans_pcie_set_bits_mask(struct iwl_trans *trans, u32 reg, + u32 mask, u32 value) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -2636,12 +2652,13 @@ static int iwl_dbgfs_tx_queue_seq_show(struct seq_file *seq, void *v) struct iwl_dbgfs_tx_queue_priv *priv = seq->private; struct iwl_dbgfs_tx_queue_state *state = v; struct iwl_trans *trans = priv->trans; - struct iwl_txq *txq = trans->txqs.txq[state->pos]; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[state->pos]; seq_printf(seq, "hwq %.3u: used=%d stopped=%d ", (unsigned int)state->pos, - !!test_bit(state->pos, trans->txqs.queue_used), - !!test_bit(state->pos, trans->txqs.queue_stopped)); + !!test_bit(state->pos, trans_pcie->txqs.queue_used), + !!test_bit(state->pos, trans_pcie->txqs.queue_stopped)); if (txq) seq_printf(seq, "read=%u write=%u need_update=%d frozen=%d n_window=%d ampdu=%d", @@ -2651,7 +2668,7 @@ static int iwl_dbgfs_tx_queue_seq_show(struct seq_file *seq, void *v) else seq_puts(seq, "(unallocated)"); - if (state->pos == trans->txqs.cmd.q_id) + if (state->pos == trans_pcie->txqs.cmd.q_id) seq_puts(seq, " (HCMD)"); seq_puts(seq, "\n"); @@ -3055,7 +3072,7 @@ void iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans) DEBUGFS_ADD_FILE(rf, dir, 0400); } -static void iwl_trans_pcie_debugfs_cleanup(struct iwl_trans *trans) +void iwl_trans_pcie_debugfs_cleanup(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct cont_rec *data = &trans_pcie->fw_mon_data; @@ -3068,10 +3085,11 @@ static void iwl_trans_pcie_debugfs_cleanup(struct iwl_trans *trans) static u32 iwl_trans_pcie_get_cmdlen(struct iwl_trans *trans, void *tfd) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); u32 cmdlen = 0; int i; - for (i = 0; i < trans->txqs.tfd.max_tbs; i++) + for (i = 0; i < trans_pcie->txqs.tfd.max_tbs; i++) cmdlen += iwl_txq_gen1_tfd_tb_get_len(trans, tfd, i); return cmdlen; @@ -3332,15 +3350,14 @@ static int iwl_trans_get_fw_monitor_len(struct iwl_trans *trans, u32 *len) return 0; } -static struct iwl_trans_dump_data * -iwl_trans_pcie_dump_data(struct iwl_trans *trans, - u32 dump_mask, 
+struct iwl_trans_dump_data * +iwl_trans_pcie_dump_data(struct iwl_trans *trans, u32 dump_mask, const struct iwl_dump_sanitize_ops *sanitize_ops, void *sanitize_ctx) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_fw_error_dump_data *data; - struct iwl_txq *cmdq = trans->txqs.txq[trans->txqs.cmd.q_id]; + struct iwl_txq *cmdq = trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]; struct iwl_fw_error_dump_txcmd *txcmd; struct iwl_trans_dump_data *dump_data; u32 len, num_rbs = 0, monitor_len = 0; @@ -3407,7 +3424,7 @@ iwl_trans_pcie_dump_data(struct iwl_trans *trans, data = (void *)dump_data->data; if (dump_mask & BIT(IWL_FW_ERROR_DUMP_TXCMD) && cmdq) { - u16 tfd_size = trans->txqs.tfd.size; + u16 tfd_size = trans_pcie->txqs.tfd.size; data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_TXCMD); txcmd = (void *)data->data; @@ -3483,7 +3500,7 @@ iwl_trans_pcie_dump_data(struct iwl_trans *trans, return dump_data; } -static void iwl_trans_pci_interrupts(struct iwl_trans *trans, bool enable) +void iwl_trans_pci_interrupts(struct iwl_trans *trans, bool enable) { if (enable) iwl_enable_interrupts(trans); @@ -3491,7 +3508,7 @@ static void iwl_trans_pci_interrupts(struct iwl_trans *trans, bool enable) iwl_disable_interrupts(trans); } -static void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans) +void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans) { u32 inta_addr, sw_err_bit; struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -3510,81 +3527,6 @@ static void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans) iwl_trans_sync_nmi_with_addr(trans, inta_addr, sw_err_bit); } -#define IWL_TRANS_COMMON_OPS \ - .op_mode_leave = iwl_trans_pcie_op_mode_leave, \ - .write8 = iwl_trans_pcie_write8, \ - .write32 = iwl_trans_pcie_write32, \ - .read32 = iwl_trans_pcie_read32, \ - .read_prph = iwl_trans_pcie_read_prph, \ - .write_prph = iwl_trans_pcie_write_prph, \ - .read_mem = iwl_trans_pcie_read_mem, \ - .write_mem = iwl_trans_pcie_write_mem, \ - .read_config32 = iwl_trans_pcie_read_config32, \ - .configure = iwl_trans_pcie_configure, \ - .set_pmi = iwl_trans_pcie_set_pmi, \ - .sw_reset = iwl_trans_pcie_sw_reset, \ - .grab_nic_access = iwl_trans_pcie_grab_nic_access, \ - .release_nic_access = iwl_trans_pcie_release_nic_access, \ - .set_bits_mask = iwl_trans_pcie_set_bits_mask, \ - .dump_data = iwl_trans_pcie_dump_data, \ - .d3_suspend = iwl_trans_pcie_d3_suspend, \ - .d3_resume = iwl_trans_pcie_d3_resume, \ - .interrupts = iwl_trans_pci_interrupts, \ - .sync_nmi = iwl_trans_pcie_sync_nmi, \ - .imr_dma_data = iwl_trans_pcie_copy_imr \ - -static const struct iwl_trans_ops trans_ops_pcie = { - IWL_TRANS_COMMON_OPS, - .start_hw = iwl_trans_pcie_start_hw, - .fw_alive = iwl_trans_pcie_fw_alive, - .start_fw = iwl_trans_pcie_start_fw, - .stop_device = iwl_trans_pcie_stop_device, - - .send_cmd = iwl_pcie_enqueue_hcmd, - - .tx = iwl_trans_pcie_tx, - .reclaim = iwl_txq_reclaim, - - .txq_disable = iwl_trans_pcie_txq_disable, - .txq_enable = iwl_trans_pcie_txq_enable, - - .txq_set_shared_mode = iwl_trans_pcie_txq_set_shared_mode, - - .wait_tx_queues_empty = iwl_trans_pcie_wait_txqs_empty, - - .freeze_txq_timer = iwl_trans_txq_freeze_timer, -#ifdef CONFIG_IWLWIFI_DEBUGFS - .debugfs_cleanup = iwl_trans_pcie_debugfs_cleanup, -#endif -}; - -static const struct iwl_trans_ops trans_ops_pcie_gen2 = { - IWL_TRANS_COMMON_OPS, - .start_hw = iwl_trans_pcie_start_hw, - .fw_alive = iwl_trans_pcie_gen2_fw_alive, - .start_fw = iwl_trans_pcie_gen2_start_fw, - .stop_device = iwl_trans_pcie_gen2_stop_device, 
- - .send_cmd = iwl_pcie_gen2_enqueue_hcmd, - - .tx = iwl_txq_gen2_tx, - .reclaim = iwl_txq_reclaim, - - .set_q_ptrs = iwl_txq_set_q_ptrs, - - .txq_alloc = iwl_txq_dyn_alloc, - .txq_free = iwl_txq_dyn_free, - .wait_txq_empty = iwl_trans_pcie_wait_txq_empty, - .rxq_dma_data = iwl_trans_pcie_rxq_dma_data, - .load_pnvm = iwl_trans_pcie_ctx_info_gen3_load_pnvm, - .set_pnvm = iwl_trans_pcie_ctx_info_gen3_set_pnvm, - .load_reduce_power = iwl_trans_pcie_ctx_info_gen3_load_reduce_power, - .set_reduce_power = iwl_trans_pcie_ctx_info_gen3_set_reduce_power, -#ifdef CONFIG_IWLWIFI_DEBUGFS - .debugfs_cleanup = iwl_trans_pcie_debugfs_cleanup, -#endif -}; - struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, const struct pci_device_id *ent, const struct iwl_cfg_trans_params *cfg_trans) @@ -3592,13 +3534,9 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, struct iwl_trans_pcie *trans_pcie, **priv; struct iwl_trans *trans; int ret, addr_size; - const struct iwl_trans_ops *ops = &trans_ops_pcie_gen2; void __iomem * const *table; u32 bar0; - if (!cfg_trans->gen2) - ops = &trans_ops_pcie; - /* reassign our BAR 0 if invalid due to possible runtime PM races */ pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &bar0); if (bar0 == PCI_BASE_ADDRESS_MEM_TYPE_64) { @@ -3611,20 +3549,65 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, if (ret) return ERR_PTR(ret); - trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), &pdev->dev, ops, + trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), &pdev->dev, cfg_trans); if (!trans) return ERR_PTR(-ENOMEM); trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + if (trans->trans_cfg->gen2) { + trans_pcie->txqs.tfd.addr_size = 64; + trans_pcie->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS; + trans_pcie->txqs.tfd.size = sizeof(struct iwl_tfh_tfd); + } else { + trans_pcie->txqs.tfd.addr_size = 36; + trans_pcie->txqs.tfd.max_tbs = IWL_NUM_OF_TBS; + trans_pcie->txqs.tfd.size = sizeof(struct iwl_tfd); + } + trans->max_skb_frags = IWL_TRANS_PCIE_MAX_FRAGS(trans_pcie); + + trans_pcie->txqs.tso_hdr_page = alloc_percpu(struct iwl_tso_hdr_page); + if (!trans_pcie->txqs.tso_hdr_page) { + ret = -ENOMEM; + goto out_free_trans; + } + + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) + trans_pcie->txqs.bc_tbl_size = + sizeof(struct iwl_gen3_bc_tbl_entry) * TFD_QUEUE_BC_SIZE_GEN3_BZ; + else if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) + trans_pcie->txqs.bc_tbl_size = + sizeof(struct iwl_gen3_bc_tbl_entry) * TFD_QUEUE_BC_SIZE_GEN3_AX210; + else + trans_pcie->txqs.bc_tbl_size = sizeof(struct iwlagn_scd_bc_tbl); + /* + * For gen2 devices, we use a single allocation for each byte-count + * table, but they're pretty small (1k) so use a DMA pool that we + * allocate here. + */ + if (trans->trans_cfg->gen2) { + trans_pcie->txqs.bc_pool = + dmam_pool_create("iwlwifi:bc", trans->dev, + trans_pcie->txqs.bc_tbl_size, + 256, 0); + if (!trans_pcie->txqs.bc_pool) { + ret = -ENOMEM; + goto out_free_tso; + } + } + + /* Some things must not change even if the config does */ + WARN_ON(trans_pcie->txqs.tfd.addr_size != + (trans->trans_cfg->gen2 ? 64 : 36)); + /* Initialize NAPI here - it should be before registering to mac80211 * in the opmode but after the HW struct is allocated. 
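/*
 * Illustrative sketch, not part of the patch: the TFD geometry split that
 * iwl_trans_pcie_alloc() applies above now that the ops structs are gone.
 * gen2 devices use the TFH format (64-bit DMA addresses, IWL_TFH_NUM_TBS
 * buffers per TFD), older devices the legacy format (36-bit addresses,
 * IWL_NUM_OF_TBS buffers). The two counts are quoted from the driver for
 * illustration; everything else here is a stand-in that compiles standalone.
 */
#include <stdbool.h>
#include <stdio.h>

#define IWL_TFH_NUM_TBS	25	/* gen2, per the driver headers */
#define IWL_NUM_OF_TBS	20	/* gen1, per the driver headers */

struct tfd_geometry {
	unsigned int addr_size;	/* DMA address width in bits */
	unsigned int max_tbs;	/* TBs per TFD */
};

static struct tfd_geometry pick_geometry(bool gen2)
{
	struct tfd_geometry g;

	if (gen2) {
		g.addr_size = 64;
		g.max_tbs = IWL_TFH_NUM_TBS;
	} else {
		g.addr_size = 36;
		g.max_tbs = IWL_NUM_OF_TBS;
	}
	return g;
}

int main(void)
{
	struct tfd_geometry g = pick_geometry(true);

	printf("gen2: %u-bit DMA, %u TBs/TFD\n", g.addr_size, g.max_tbs);
	return 0;
}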
*/ trans_pcie->napi_dev = alloc_netdev_dummy(sizeof(struct iwl_trans_pcie *)); if (!trans_pcie->napi_dev) { ret = -ENOMEM; - goto out_free_trans; + goto out_free_tso; } /* The private struct in netdev is a pointer to struct iwl_trans_pcie */ priv = netdev_priv(trans_pcie->napi_dev); @@ -3663,7 +3646,7 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, pci_set_master(pdev); - addr_size = trans->txqs.tfd.addr_size; + addr_size = trans_pcie->txqs.tfd.addr_size; ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_size)); if (ret) { ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); @@ -3766,6 +3749,8 @@ out_no_pci: destroy_workqueue(trans_pcie->rba.alloc_wq); out_free_ndev: free_netdev(trans_pcie->napi_dev); +out_free_tso: + free_percpu(trans_pcie->txqs.tso_hdr_page); out_free_trans: iwl_trans_free(trans); return ERR_PTR(ret); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c index aabbef114bc2..2e780fb2da42 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* * Copyright (C) 2017 Intel Deutschland GmbH - * Copyright (C) 2018-2020, 2023 Intel Corporation + * Copyright (C) 2018-2020, 2023-2024 Intel Corporation */ #include <net/tso.h> #include <linux/tcp.h> @@ -11,7 +11,1180 @@ #include "iwl-io.h" #include "internal.h" #include "fw/api/tx.h" -#include "queue/tx.h" +#include "fw/api/commands.h" +#include "fw/api/datapath.h" +#include "iwl-scd.h" + +static struct page *get_workaround_page(struct iwl_trans *trans, + struct sk_buff *skb) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_tso_page_info *info; + struct page **page_ptr; + struct page *ret; + dma_addr_t phys; + + page_ptr = (void *)((u8 *)skb->cb + trans_pcie->txqs.page_offs); + + ret = alloc_page(GFP_ATOMIC); + if (!ret) + return NULL; + + info = IWL_TSO_PAGE_INFO(page_address(ret)); + + /* Create a DMA mapping for the page */ + phys = dma_map_page_attrs(trans->dev, ret, 0, PAGE_SIZE, + DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); + if (unlikely(dma_mapping_error(trans->dev, phys))) { + __free_page(ret); + return NULL; + } + + /* Store physical address and set use count */ + info->dma_addr = phys; + refcount_set(&info->use_count, 1); + + /* set the chaining pointer to the previous page if there */ + info->next = *page_ptr; + *page_ptr = ret; + + return ret; +} + +/* + * Add a TB and if needed apply the FH HW bug workaround; + * meta != NULL indicates that it's a page mapping and we + * need to dma_unmap_page() and set the meta->tbs bit in + * this case. + */ +static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans, + struct sk_buff *skb, + struct iwl_tfh_tfd *tfd, + dma_addr_t phys, void *virt, + u16 len, struct iwl_cmd_meta *meta, + bool unmap) +{ + dma_addr_t oldphys = phys; + struct page *page; + int ret; + + if (unlikely(dma_mapping_error(trans->dev, phys))) + return -ENOMEM; + + if (likely(!iwl_txq_crosses_4g_boundary(phys, len))) { + ret = iwl_txq_gen2_set_tb(trans, tfd, phys, len); + + if (ret < 0) + goto unmap; + + if (meta) + meta->tbs |= BIT(ret); + + ret = 0; + goto trace; + } + + /* + * Work around a hardware bug. If (as expressed in the + * condition above) the TB ends on a 32-bit boundary, + * then the next TB may be accessed with the wrong + * address. 
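/*
 * Illustrative sketch, not part of the patch: the boundary test behind the
 * workaround described above. A TB whose buffer ends on or crosses a 2^32
 * boundary can be fetched with a wrong address by affected hardware, so the
 * driver bounces such buffers through a freshly mapped page. This mirrors
 * the iwl_txq_crosses_4g_boundary() check used in the code; the sample
 * addresses are made up.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t upper_32(uint64_t v)
{
	return (uint32_t)(v >> 32);
}

/* true if [addr, addr + len] does not stay within one 4 GiB window */
static int crosses_4g_boundary(uint64_t addr, uint16_t len)
{
	return upper_32(addr) != upper_32(addr + len);
}

int main(void)
{
	/* ends exactly on the boundary: upper bits differ, must bounce */
	printf("%d\n", crosses_4g_boundary(0xFFFFFFF0ull, 0x10));	/* 1 */
	/* safely inside one 4 GiB window: no workaround needed */
	printf("%d\n", crosses_4g_boundary(0x100000000ull, 0x10));	/* 0 */
	return 0;
}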
+ * To work around it, copy the data elsewhere and make + * a new mapping for it so the device will not fail. + */ + + if (WARN_ON(len > IWL_TSO_PAGE_DATA_SIZE)) { + ret = -ENOBUFS; + goto unmap; + } + + page = get_workaround_page(trans, skb); + if (!page) { + ret = -ENOMEM; + goto unmap; + } + + memcpy(page_address(page), virt, len); + + /* + * This is a bit odd, but performance does not matter here, what + * matters are the expectations of the calling code and TB cleanup + * function. + * + * As such, if unmap is set, then create another mapping for the TB + * entry as it will be unmapped later. On the other hand, if it is not + * set, then the TB entry will not be unmapped and instead we simply + * reference and sync the mapping that get_workaround_page() created. + */ + if (unmap) { + phys = dma_map_single(trans->dev, page_address(page), len, + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(trans->dev, phys))) + return -ENOMEM; + } else { + phys = iwl_pcie_get_tso_page_phys(page_address(page)); + dma_sync_single_for_device(trans->dev, phys, len, + DMA_TO_DEVICE); + } + + ret = iwl_txq_gen2_set_tb(trans, tfd, phys, len); + if (ret < 0) { + /* unmap the new allocation as single */ + oldphys = phys; + meta = NULL; + goto unmap; + } + + IWL_DEBUG_TX(trans, + "TB bug workaround: copied %d bytes from 0x%llx to 0x%llx\n", + len, (unsigned long long)oldphys, + (unsigned long long)phys); + + ret = 0; +unmap: + if (!unmap) + goto trace; + + if (meta) + dma_unmap_page(trans->dev, oldphys, len, DMA_TO_DEVICE); + else + dma_unmap_single(trans->dev, oldphys, len, DMA_TO_DEVICE); +trace: + trace_iwlwifi_dev_tx_tb(trans->dev, skb, virt, phys, len); + + return ret; +} + +static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans, + struct sk_buff *skb, + struct iwl_tfh_tfd *tfd, + struct iwl_cmd_meta *out_meta, + int start_len, + u8 hdr_len, + struct iwl_device_tx_cmd *dev_cmd) +{ +#ifdef CONFIG_INET + struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload; + struct ieee80211_hdr *hdr = (void *)skb->data; + unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; + unsigned int mss = skb_shinfo(skb)->gso_size; + dma_addr_t start_hdr_phys; + u16 length, amsdu_pad; + u8 *start_hdr; + struct sg_table *sgt; + struct tso_t tso; + + trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), + &dev_cmd->hdr, start_len, 0); + + ip_hdrlen = skb_network_header_len(skb); + snap_ip_tcp_hdrlen = 8 + ip_hdrlen + tcp_hdrlen(skb); + total_len = skb->len - snap_ip_tcp_hdrlen - hdr_len; + amsdu_pad = 0; + + /* total amount of header we may need for this A-MSDU */ + hdr_room = DIV_ROUND_UP(total_len, mss) * + (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)); + + /* Our device supports 9 segments at most, it will fit in 1 page */ + sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room); + if (!sgt) + return -ENOMEM; + + start_hdr_phys = iwl_pcie_get_tso_page_phys(start_hdr); + + /* + * Pull the ieee80211 header to be able to use TSO core, + * we will restore it for the tx_status flow. + */ + skb_pull(skb, hdr_len); + + /* + * Remove the length of all the headers that we don't actually + * have in the MPDU by themselves, but that we duplicate into + * all the different MSDUs inside the A-MSDU. 
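/*
 * Illustrative sketch, not part of the patch: the worst-case header budget
 * (hdr_room) computed above. Every MSS-sized subframe re-duplicates the
 * subframe header (DA/SA + length + SNAP/IP/TCP) plus up to 3 bytes of
 * pad, so one TSO page must hold DIV_ROUND_UP(total_len, mss) copies.
 * The sample lengths are made up for the demonstration.
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int total_len = 7300;		/* payload after MAC header */
	unsigned int mss = 1460;		/* TCP MSS */
	unsigned int snap_ip_tcp = 8 + 20 + 20;	/* SNAP + IPv4 + TCP */
	unsigned int ethhdr_sz = 14;		/* sizeof(struct ethhdr) */
	unsigned int n_subframes = DIV_ROUND_UP(total_len, mss);
	unsigned int hdr_room = n_subframes * (3 + snap_ip_tcp + ethhdr_sz);

	printf("%u subframes -> %u bytes of header room\n",
	       n_subframes, hdr_room);	/* 5 subframes -> 325 bytes */
	return 0;
}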
+ */ + le16_add_cpu(&tx_cmd->len, -snap_ip_tcp_hdrlen); + + tso_start(skb, &tso); + + while (total_len) { + /* this is the data left for this subframe */ + unsigned int data_left = min_t(unsigned int, mss, total_len); + unsigned int tb_len; + dma_addr_t tb_phys; + u8 *pos_hdr = start_hdr; + + total_len -= data_left; + + memset(pos_hdr, 0, amsdu_pad); + pos_hdr += amsdu_pad; + amsdu_pad = (4 - (sizeof(struct ethhdr) + snap_ip_tcp_hdrlen + + data_left)) & 0x3; + ether_addr_copy(pos_hdr, ieee80211_get_DA(hdr)); + pos_hdr += ETH_ALEN; + ether_addr_copy(pos_hdr, ieee80211_get_SA(hdr)); + pos_hdr += ETH_ALEN; + + length = snap_ip_tcp_hdrlen + data_left; + *((__be16 *)pos_hdr) = cpu_to_be16(length); + pos_hdr += sizeof(length); + + /* + * This will copy the SNAP as well which will be considered + * as MAC header. + */ + tso_build_hdr(skb, pos_hdr, &tso, data_left, !total_len); + + pos_hdr += snap_ip_tcp_hdrlen; + + tb_len = pos_hdr - start_hdr; + tb_phys = iwl_pcie_get_tso_page_phys(start_hdr); + + /* + * No need for _with_wa, this is from the TSO page and + * we leave some space at the end of it so can't hit + * the buggy scenario. + */ + iwl_txq_gen2_set_tb(trans, tfd, tb_phys, tb_len); + trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, + tb_phys, tb_len); + /* add this subframe's headers' length to the tx_cmd */ + le16_add_cpu(&tx_cmd->len, tb_len); + + /* prepare the start_hdr for the next subframe */ + start_hdr = pos_hdr; + + /* put the payload */ + while (data_left) { + int ret; + + tb_len = min_t(unsigned int, tso.size, data_left); + tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data); + /* Not a real mapping error, use direct comparison */ + if (unlikely(tb_phys == DMA_MAPPING_ERROR)) + goto out_err; + + ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, + tb_phys, tso.data, + tb_len, NULL, false); + if (ret) + goto out_err; + + data_left -= tb_len; + tso_build_data(skb, &tso, tb_len); + } + } + + dma_sync_single_for_device(trans->dev, start_hdr_phys, hdr_room, + DMA_TO_DEVICE); + + /* re -add the WiFi header */ + skb_push(skb, hdr_len); + + return 0; + +out_err: +#endif + return -EINVAL; +} + +static struct +iwl_tfh_tfd *iwl_txq_gen2_build_tx_amsdu(struct iwl_trans *trans, + struct iwl_txq *txq, + struct iwl_device_tx_cmd *dev_cmd, + struct sk_buff *skb, + struct iwl_cmd_meta *out_meta, + int hdr_len, + int tx_cmd_len) +{ + int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); + struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx); + dma_addr_t tb_phys; + int len; + void *tb1_addr; + + tb_phys = iwl_txq_get_first_tb_dma(txq, idx); + + /* + * No need for _with_wa, the first TB allocation is aligned up + * to a 64-byte boundary and thus can't be at the end or cross + * a page boundary (much less a 2^32 boundary). 
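/*
 * Illustrative sketch, not part of the patch: the A-MSDU subframe pad rule
 * from the loop above. Each subframe is DA/SA/length (14 bytes, an ethhdr)
 * plus SNAP/IP/TCP headers plus payload, and the *next* subframe must start
 * 4-byte aligned, so the pad is (-subframe_len) mod 4 — unsigned wraparound
 * makes the (4 - len) & 0x3 form come out right.
 */
#include <stdio.h>

static unsigned int amsdu_pad(unsigned int snap_ip_tcp_hdrlen,
			      unsigned int data_left)
{
	unsigned int ethhdr_sz = 14;	/* sizeof(struct ethhdr) */

	return (4 - (ethhdr_sz + snap_ip_tcp_hdrlen + data_left)) & 0x3;
}

int main(void)
{
	/* 14 + 48 + 1461 = 1523 bytes -> 1 byte of pad to reach 1524 */
	printf("pad = %u\n", amsdu_pad(48, 1461));
	return 0;
}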
+ */ + iwl_txq_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); + + /* + * The second TB (tb1) points to the remainder of the TX command + * and the 802.11 header - dword aligned size + * (This calculation modifies the TX command, so do it before the + * setup of the first TB) + */ + len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len - + IWL_FIRST_TB_SIZE; + + /* do not align A-MSDU to dword as the subframe header aligns it */ + + /* map the data for TB1 */ + tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE; + tb_phys = dma_map_single(trans->dev, tb1_addr, len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(trans->dev, tb_phys))) + goto out_err; + /* + * No need for _with_wa(), we ensure (via alignment) that the data + * here can never cross or end at a page boundary. + */ + iwl_txq_gen2_set_tb(trans, tfd, tb_phys, len); + + if (iwl_txq_gen2_build_amsdu(trans, skb, tfd, out_meta, + len + IWL_FIRST_TB_SIZE, hdr_len, dev_cmd)) + goto out_err; + + /* building the A-MSDU might have changed this data, memcpy it now */ + memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE); + return tfd; + +out_err: + iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd); + return NULL; +} + +static int iwl_txq_gen2_tx_add_frags(struct iwl_trans *trans, + struct sk_buff *skb, + struct iwl_tfh_tfd *tfd, + struct iwl_cmd_meta *out_meta) +{ + int i; + + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + dma_addr_t tb_phys; + unsigned int fragsz = skb_frag_size(frag); + int ret; + + if (!fragsz) + continue; + + tb_phys = skb_frag_dma_map(trans->dev, frag, 0, + fragsz, DMA_TO_DEVICE); + ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, + skb_frag_address(frag), + fragsz, out_meta, true); + if (ret) + return ret; + } + + return 0; +} + +static struct +iwl_tfh_tfd *iwl_txq_gen2_build_tx(struct iwl_trans *trans, + struct iwl_txq *txq, + struct iwl_device_tx_cmd *dev_cmd, + struct sk_buff *skb, + struct iwl_cmd_meta *out_meta, + int hdr_len, + int tx_cmd_len, + bool pad) +{ + int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); + struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx); + dma_addr_t tb_phys; + int len, tb1_len, tb2_len; + void *tb1_addr; + struct sk_buff *frag; + + tb_phys = iwl_txq_get_first_tb_dma(txq, idx); + + /* The first TB points to bi-directional DMA data */ + memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE); + + /* + * No need for _with_wa, the first TB allocation is aligned up + * to a 64-byte boundary and thus can't be at the end or cross + * a page boundary (much less a 2^32 boundary). + */ + iwl_txq_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); + + /* + * The second TB (tb1) points to the remainder of the TX command + * and the 802.11 header - dword aligned size + * (This calculation modifies the TX command, so do it before the + * setup of the first TB) + */ + len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len - + IWL_FIRST_TB_SIZE; + + if (pad) + tb1_len = ALIGN(len, 4); + else + tb1_len = len; + + /* map the data for TB1 */ + tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE; + tb_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(trans->dev, tb_phys))) + goto out_err; + /* + * No need for _with_wa(), we ensure (via alignment) that the data + * here can never cross or end at a page boundary. 
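/*
 * Illustrative sketch, not part of the patch: the dword alignment applied
 * to TB1 above when pad is set. ALIGN rounds up to the next multiple of a
 * power of two; A-MSDU frames skip it because the subframe header supplies
 * the alignment instead.
 */
#include <stdio.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned int len = 57;	/* say, cmd remainder + 802.11 header */

	printf("tb1_len = %u\n", ALIGN_UP(len, 4));	/* 60 */
	return 0;
}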
+ */ + iwl_txq_gen2_set_tb(trans, tfd, tb_phys, tb1_len); + trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr, + IWL_FIRST_TB_SIZE + tb1_len, hdr_len); + + /* set up TFD's third entry to point to remainder of skb's head */ + tb2_len = skb_headlen(skb) - hdr_len; + + if (tb2_len > 0) { + int ret; + + tb_phys = dma_map_single(trans->dev, skb->data + hdr_len, + tb2_len, DMA_TO_DEVICE); + ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, + skb->data + hdr_len, tb2_len, + NULL, true); + if (ret) + goto out_err; + } + + if (iwl_txq_gen2_tx_add_frags(trans, skb, tfd, out_meta)) + goto out_err; + + skb_walk_frags(skb, frag) { + int ret; + + tb_phys = dma_map_single(trans->dev, frag->data, + skb_headlen(frag), DMA_TO_DEVICE); + ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, + frag->data, + skb_headlen(frag), NULL, + true); + if (ret) + goto out_err; + if (iwl_txq_gen2_tx_add_frags(trans, frag, tfd, out_meta)) + goto out_err; + } + + return tfd; + +out_err: + iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd); + return NULL; +} + +static +struct iwl_tfh_tfd *iwl_txq_gen2_build_tfd(struct iwl_trans *trans, + struct iwl_txq *txq, + struct iwl_device_tx_cmd *dev_cmd, + struct sk_buff *skb, + struct iwl_cmd_meta *out_meta) +{ + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; + int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); + struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx); + int len, hdr_len; + bool amsdu; + + /* There must be data left over for TB1 or this code must be changed */ + BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen2) < IWL_FIRST_TB_SIZE); + BUILD_BUG_ON(sizeof(struct iwl_cmd_header) + + offsetofend(struct iwl_tx_cmd_gen2, dram_info) > + IWL_FIRST_TB_SIZE); + BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen3) < IWL_FIRST_TB_SIZE); + BUILD_BUG_ON(sizeof(struct iwl_cmd_header) + + offsetofend(struct iwl_tx_cmd_gen3, dram_info) > + IWL_FIRST_TB_SIZE); + + memset(tfd, 0, sizeof(*tfd)); + + if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210) + len = sizeof(struct iwl_tx_cmd_gen2); + else + len = sizeof(struct iwl_tx_cmd_gen3); + + amsdu = ieee80211_is_data_qos(hdr->frame_control) && + (*ieee80211_get_qos_ctl(hdr) & + IEEE80211_QOS_CTL_A_MSDU_PRESENT); + + hdr_len = ieee80211_hdrlen(hdr->frame_control); + + /* + * Only build A-MSDUs here if doing so by GSO, otherwise it may be + * an A-MSDU for other reasons, e.g. NAN or an A-MSDU having been + * built in the higher layers already. + */ + if (amsdu && skb_shinfo(skb)->gso_size) + return iwl_txq_gen2_build_tx_amsdu(trans, txq, dev_cmd, skb, + out_meta, hdr_len, len); + return iwl_txq_gen2_build_tx(trans, txq, dev_cmd, skb, out_meta, + hdr_len, len, !amsdu); +} + +int iwl_txq_space(struct iwl_trans *trans, const struct iwl_txq *q) +{ + unsigned int max; + unsigned int used; + + /* + * To avoid ambiguity between empty and completely full queues, there + * should always be less than max_tfd_queue_size elements in the queue. + * If q->n_window is smaller than max_tfd_queue_size, there is no need + * to reserve any queue entries for this purpose. + */ + if (q->n_window < trans->trans_cfg->base_params->max_tfd_queue_size) + max = q->n_window; + else + max = trans->trans_cfg->base_params->max_tfd_queue_size - 1; + + /* + * max_tfd_queue_size is a power of 2, so the following is equivalent to + * modulo by max_tfd_queue_size and is well defined. 
+ */ + used = (q->write_ptr - q->read_ptr) & + (trans->trans_cfg->base_params->max_tfd_queue_size - 1); + + if (WARN_ON(used > max)) + return 0; + + return max - used; +} + +/* + * iwl_pcie_gen2_update_byte_tbl - Set up entry in Tx byte-count array + */ +static void iwl_pcie_gen2_update_byte_tbl(struct iwl_trans *trans, + struct iwl_txq *txq, u16 byte_cnt, + int num_tbs) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); + u8 filled_tfd_size, num_fetch_chunks; + u16 len = byte_cnt; + __le16 bc_ent; + + if (WARN(idx >= txq->n_window, "%d >= %d\n", idx, txq->n_window)) + return; + + filled_tfd_size = offsetof(struct iwl_tfh_tfd, tbs) + + num_tbs * sizeof(struct iwl_tfh_tb); + /* + * filled_tfd_size contains the number of filled bytes in the TFD. + * Dividing it by 64 will give the number of chunks to fetch + * to SRAM- 0 for one chunk, 1 for 2 and so on. + * If, for example, TFD contains only 3 TBs then 32 bytes + * of the TFD are used, and only one chunk of 64 bytes should + * be fetched + */ + num_fetch_chunks = DIV_ROUND_UP(filled_tfd_size, 64) - 1; + + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) { + struct iwl_gen3_bc_tbl_entry *scd_bc_tbl_gen3 = txq->bc_tbl.addr; + + /* Starting from AX210, the HW expects bytes */ + WARN_ON(trans_pcie->txqs.bc_table_dword); + WARN_ON(len > 0x3FFF); + bc_ent = cpu_to_le16(len | (num_fetch_chunks << 14)); + scd_bc_tbl_gen3[idx].tfd_offset = bc_ent; + } else { + struct iwlagn_scd_bc_tbl *scd_bc_tbl = txq->bc_tbl.addr; + + /* Before AX210, the HW expects DW */ + WARN_ON(!trans_pcie->txqs.bc_table_dword); + len = DIV_ROUND_UP(len, 4); + WARN_ON(len > 0xFFF); + bc_ent = cpu_to_le16(len | (num_fetch_chunks << 12)); + scd_bc_tbl->tfd_offset[idx] = bc_ent; + } +} + +static u8 iwl_txq_gen2_get_num_tbs(struct iwl_tfh_tfd *tfd) +{ + return le16_to_cpu(tfd->num_tbs) & 0x1f; +} + +int iwl_txq_gen2_set_tb(struct iwl_trans *trans, struct iwl_tfh_tfd *tfd, + dma_addr_t addr, u16 len) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + int idx = iwl_txq_gen2_get_num_tbs(tfd); + struct iwl_tfh_tb *tb; + + /* Only WARN here so we know about the issue, but we mess up our + * unmap path because not every place currently checks for errors + * returned from this function - it can only return an error if + * there's no more space, and so when we know there is enough we + * don't always check ... + */ + WARN(iwl_txq_crosses_4g_boundary(addr, len), + "possible DMA problem with iova:0x%llx, len:%d\n", + (unsigned long long)addr, len); + + if (WARN_ON(idx >= IWL_TFH_NUM_TBS)) + return -EINVAL; + tb = &tfd->tbs[idx]; + + /* Each TFD can point to a maximum max_tbs Tx buffers */ + if (le16_to_cpu(tfd->num_tbs) >= trans_pcie->txqs.tfd.max_tbs) { + IWL_ERR(trans, "Error can not send more than %d chunks\n", + trans_pcie->txqs.tfd.max_tbs); + return -EINVAL; + } + + put_unaligned_le64(addr, &tb->addr); + tb->tb_len = cpu_to_le16(len); + + tfd->num_tbs = cpu_to_le16(idx + 1); + + return idx; +} + +void iwl_txq_gen2_tfd_unmap(struct iwl_trans *trans, + struct iwl_cmd_meta *meta, + struct iwl_tfh_tfd *tfd) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + int i, num_tbs; + + /* Sanity check on number of chunks */ + num_tbs = iwl_txq_gen2_get_num_tbs(tfd); + + if (num_tbs > trans_pcie->txqs.tfd.max_tbs) { + IWL_ERR(trans, "Too many chunks: %i\n", num_tbs); + return; + } + + /* TB1 is mapped directly, the rest is the TSO page and SG list. 
*/ + if (meta->sg_offset) + num_tbs = 2; + + /* first TB is never freed - it's the bidirectional DMA data */ + for (i = 1; i < num_tbs; i++) { + if (meta->tbs & BIT(i)) + dma_unmap_page(trans->dev, + le64_to_cpu(tfd->tbs[i].addr), + le16_to_cpu(tfd->tbs[i].tb_len), + DMA_TO_DEVICE); + else + dma_unmap_single(trans->dev, + le64_to_cpu(tfd->tbs[i].addr), + le16_to_cpu(tfd->tbs[i].tb_len), + DMA_TO_DEVICE); + } + + iwl_txq_set_tfd_invalid_gen2(trans, tfd); +} + +static void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq) +{ + /* rd_ptr is bounded by TFD_QUEUE_SIZE_MAX and + * idx is bounded by n_window + */ + int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr); + struct sk_buff *skb; + + lockdep_assert_held(&txq->lock); + + if (!txq->entries) + return; + + iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta, + iwl_txq_get_tfd(trans, txq, idx)); + + skb = txq->entries[idx].skb; + + /* Can be called from irqs-disabled context + * If skb is not NULL, it means that the whole queue is being + * freed and that the queue is not empty - free the skb + */ + if (skb) { + iwl_op_mode_free_skb(trans->op_mode, skb); + txq->entries[idx].skb = NULL; + } +} + +/* + * iwl_txq_inc_wr_ptr - Send new write index to hardware + */ +static void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq) +{ + lockdep_assert_held(&txq->lock); + + IWL_DEBUG_TX(trans, "Q:%d WR: 0x%x\n", txq->id, txq->write_ptr); + + /* + * if not in power-save mode, uCode will never sleep when we're + * trying to tx (during RFKILL, we're not trying to tx). + */ + iwl_write32(trans, HBUS_TARG_WRPTR, txq->write_ptr | (txq->id << 16)); +} + +int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_device_tx_cmd *dev_cmd, int txq_id) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_cmd_meta *out_meta; + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; + u16 cmd_len; + int idx; + void *tfd; + + if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES, + "queue %d out of range", txq_id)) + return -EINVAL; + + if (WARN_ONCE(!test_bit(txq_id, trans_pcie->txqs.queue_used), + "TX on unused queue %d\n", txq_id)) + return -EINVAL; + + if (skb_is_nonlinear(skb) && + skb_shinfo(skb)->nr_frags > IWL_TRANS_PCIE_MAX_FRAGS(trans_pcie) && + __skb_linearize(skb)) + return -ENOMEM; + + spin_lock(&txq->lock); + + if (iwl_txq_space(trans, txq) < txq->high_mark) { + iwl_txq_stop(trans, txq); + + /* don't put the packet on the ring, if there is no room */ + if (unlikely(iwl_txq_space(trans, txq) < 3)) { + struct iwl_device_tx_cmd **dev_cmd_ptr; + + dev_cmd_ptr = (void *)((u8 *)skb->cb + + trans_pcie->txqs.dev_cmd_offs); + + *dev_cmd_ptr = dev_cmd; + __skb_queue_tail(&txq->overflow_q, skb); + spin_unlock(&txq->lock); + return 0; + } + } + + idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); + + /* Set up driver data for this TFD */ + txq->entries[idx].skb = skb; + txq->entries[idx].cmd = dev_cmd; + + dev_cmd->hdr.sequence = + cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) | + INDEX_TO_SEQ(idx))); + + /* Set up first empty entry in queue's array of Tx/cmd buffers */ + out_meta = &txq->entries[idx].meta; + memset(out_meta, 0, sizeof(*out_meta)); + + tfd = iwl_txq_gen2_build_tfd(trans, txq, dev_cmd, skb, out_meta); + if (!tfd) { + spin_unlock(&txq->lock); + return -1; + } + + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) { + struct iwl_tx_cmd_gen3 *tx_cmd_gen3 = + (void *)dev_cmd->payload; + + cmd_len = le16_to_cpu(tx_cmd_gen3->len); + } else { + struct iwl_tx_cmd_gen2 
*tx_cmd_gen2 = + (void *)dev_cmd->payload; + + cmd_len = le16_to_cpu(tx_cmd_gen2->len); + } + + /* Set up entry for this TFD in Tx byte-count array */ + iwl_pcie_gen2_update_byte_tbl(trans, txq, cmd_len, + iwl_txq_gen2_get_num_tbs(tfd)); + + /* start timer if queue currently empty */ + if (txq->read_ptr == txq->write_ptr && txq->wd_timeout) + mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout); + + /* Tell device the write index *just past* this latest filled TFD */ + txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr); + iwl_txq_inc_wr_ptr(trans, txq); + /* + * At this point the frame is "transmitted" successfully + * and we will get a TX status notification eventually. + */ + spin_unlock(&txq->lock); + return 0; +} + +/*************** HOST COMMAND QUEUE FUNCTIONS *****/ + +/* + * iwl_txq_gen2_unmap - Unmap any remaining DMA mappings and free skb's + */ +static void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; + + spin_lock_bh(&txq->reclaim_lock); + spin_lock(&txq->lock); + while (txq->write_ptr != txq->read_ptr) { + IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n", + txq_id, txq->read_ptr); + + if (txq_id != trans_pcie->txqs.cmd.q_id) { + int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr); + struct iwl_cmd_meta *cmd_meta = &txq->entries[idx].meta; + struct sk_buff *skb = txq->entries[idx].skb; + + if (!WARN_ON_ONCE(!skb)) + iwl_pcie_free_tso_pages(trans, skb, cmd_meta); + } + iwl_txq_gen2_free_tfd(trans, txq); + txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr); + } + + while (!skb_queue_empty(&txq->overflow_q)) { + struct sk_buff *skb = __skb_dequeue(&txq->overflow_q); + + iwl_op_mode_free_skb(trans->op_mode, skb); + } + + spin_unlock(&txq->lock); + spin_unlock_bh(&txq->reclaim_lock); + + /* just in case - this queue may have been stopped */ + iwl_trans_pcie_wake_queue(trans, txq); +} + +static void iwl_txq_gen2_free_memory(struct iwl_trans *trans, + struct iwl_txq *txq) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct device *dev = trans->dev; + + /* De-alloc circular buffer of TFDs */ + if (txq->tfds) { + dma_free_coherent(dev, + trans_pcie->txqs.tfd.size * txq->n_window, + txq->tfds, txq->dma_addr); + dma_free_coherent(dev, + sizeof(*txq->first_tb_bufs) * txq->n_window, + txq->first_tb_bufs, txq->first_tb_dma); + } + + kfree(txq->entries); + if (txq->bc_tbl.addr) + dma_pool_free(trans_pcie->txqs.bc_pool, + txq->bc_tbl.addr, txq->bc_tbl.dma); + kfree(txq); +} + +/* + * iwl_pcie_txq_free - Deallocate DMA queue. + * @txq: Transmit queue to deallocate. + * + * Empty queue by removing and destroying all BD's. + * Free all buffers. + * 0-fill, but do not free "txq" descriptor structure. 
+ */ +static void iwl_txq_gen2_free(struct iwl_trans *trans, int txq_id) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq; + int i; + + if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES, + "queue %d out of range", txq_id)) + return; + + txq = trans_pcie->txqs.txq[txq_id]; + + if (WARN_ON(!txq)) + return; + + iwl_txq_gen2_unmap(trans, txq_id); + + /* De-alloc array of command/tx buffers */ + if (txq_id == trans_pcie->txqs.cmd.q_id) + for (i = 0; i < txq->n_window; i++) { + kfree_sensitive(txq->entries[i].cmd); + kfree_sensitive(txq->entries[i].free_buf); + } + del_timer_sync(&txq->stuck_timer); + + iwl_txq_gen2_free_memory(trans, txq); + + trans_pcie->txqs.txq[txq_id] = NULL; + + clear_bit(txq_id, trans_pcie->txqs.queue_used); +} + +static struct iwl_txq * +iwl_txq_dyn_alloc_dma(struct iwl_trans *trans, int size, unsigned int timeout) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + size_t bc_tbl_size, bc_tbl_entries; + struct iwl_txq *txq; + int ret; + + WARN_ON(!trans_pcie->txqs.bc_tbl_size); + + bc_tbl_size = trans_pcie->txqs.bc_tbl_size; + bc_tbl_entries = bc_tbl_size / sizeof(u16); + + if (WARN_ON(size > bc_tbl_entries)) + return ERR_PTR(-EINVAL); + + txq = kzalloc(sizeof(*txq), GFP_KERNEL); + if (!txq) + return ERR_PTR(-ENOMEM); + + txq->bc_tbl.addr = dma_pool_alloc(trans_pcie->txqs.bc_pool, GFP_KERNEL, + &txq->bc_tbl.dma); + if (!txq->bc_tbl.addr) { + IWL_ERR(trans, "Scheduler BC Table allocation failed\n"); + kfree(txq); + return ERR_PTR(-ENOMEM); + } + + ret = iwl_pcie_txq_alloc(trans, txq, size, false); + if (ret) { + IWL_ERR(trans, "Tx queue alloc failed\n"); + goto error; + } + ret = iwl_txq_init(trans, txq, size, false); + if (ret) { + IWL_ERR(trans, "Tx queue init failed\n"); + goto error; + } + + txq->wd_timeout = msecs_to_jiffies(timeout); + + return txq; + +error: + iwl_txq_gen2_free_memory(trans, txq); + return ERR_PTR(ret); +} + +static int iwl_pcie_txq_alloc_response(struct iwl_trans *trans, + struct iwl_txq *txq, + struct iwl_host_cmd *hcmd) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_tx_queue_cfg_rsp *rsp; + int ret, qid; + u32 wr_ptr; + + if (WARN_ON(iwl_rx_packet_payload_len(hcmd->resp_pkt) != + sizeof(*rsp))) { + ret = -EINVAL; + goto error_free_resp; + } + + rsp = (void *)hcmd->resp_pkt->data; + qid = le16_to_cpu(rsp->queue_number); + wr_ptr = le16_to_cpu(rsp->write_pointer); + + if (qid >= ARRAY_SIZE(trans_pcie->txqs.txq)) { + WARN_ONCE(1, "queue index %d unsupported", qid); + ret = -EIO; + goto error_free_resp; + } + + if (test_and_set_bit(qid, trans_pcie->txqs.queue_used)) { + WARN_ONCE(1, "queue %d already used", qid); + ret = -EIO; + goto error_free_resp; + } + + if (WARN_ONCE(trans_pcie->txqs.txq[qid], + "queue %d already allocated\n", qid)) { + ret = -EIO; + goto error_free_resp; + } + + txq->id = qid; + trans_pcie->txqs.txq[qid] = txq; + wr_ptr &= (trans->trans_cfg->base_params->max_tfd_queue_size - 1); + + /* Place first TFD at index corresponding to start sequence number */ + txq->read_ptr = wr_ptr; + txq->write_ptr = wr_ptr; + + IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d\n", qid); + + iwl_free_resp(hcmd); + return qid; + +error_free_resp: + iwl_free_resp(hcmd); + iwl_txq_gen2_free_memory(trans, txq); + return ret; +} + +int iwl_txq_dyn_alloc(struct iwl_trans *trans, u32 flags, u32 sta_mask, + u8 tid, int size, unsigned int timeout) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq; + union { + 
struct iwl_tx_queue_cfg_cmd old; + struct iwl_scd_queue_cfg_cmd new; + } cmd; + struct iwl_host_cmd hcmd = { + .flags = CMD_WANT_SKB, + }; + int ret; + + /* take the min with bytecount table entries allowed */ + size = min_t(u32, size, trans_pcie->txqs.bc_tbl_size / sizeof(u16)); + /* but must be power of 2 values for calculating read/write pointers */ + size = rounddown_pow_of_two(size); + + if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_BZ && + trans->hw_rev_step == SILICON_A_STEP) { + size = 4096; + txq = iwl_txq_dyn_alloc_dma(trans, size, timeout); + } else { + do { + txq = iwl_txq_dyn_alloc_dma(trans, size, timeout); + if (!IS_ERR(txq)) + break; + + IWL_DEBUG_TX_QUEUES(trans, + "Failed allocating TXQ of size %d for sta mask %x tid %d, ret: %ld\n", + size, sta_mask, tid, + PTR_ERR(txq)); + size /= 2; + } while (size >= 16); + } + + if (IS_ERR(txq)) + return PTR_ERR(txq); + + if (trans_pcie->txqs.queue_alloc_cmd_ver == 0) { + memset(&cmd.old, 0, sizeof(cmd.old)); + cmd.old.tfdq_addr = cpu_to_le64(txq->dma_addr); + cmd.old.byte_cnt_addr = cpu_to_le64(txq->bc_tbl.dma); + cmd.old.cb_size = cpu_to_le32(TFD_QUEUE_CB_SIZE(size)); + cmd.old.flags = cpu_to_le16(flags | TX_QUEUE_CFG_ENABLE_QUEUE); + cmd.old.tid = tid; + + if (hweight32(sta_mask) != 1) { + ret = -EINVAL; + goto error; + } + cmd.old.sta_id = ffs(sta_mask) - 1; + + hcmd.id = SCD_QUEUE_CFG; + hcmd.len[0] = sizeof(cmd.old); + hcmd.data[0] = &cmd.old; + } else if (trans_pcie->txqs.queue_alloc_cmd_ver == 3) { + memset(&cmd.new, 0, sizeof(cmd.new)); + cmd.new.operation = cpu_to_le32(IWL_SCD_QUEUE_ADD); + cmd.new.u.add.tfdq_dram_addr = cpu_to_le64(txq->dma_addr); + cmd.new.u.add.bc_dram_addr = cpu_to_le64(txq->bc_tbl.dma); + cmd.new.u.add.cb_size = cpu_to_le32(TFD_QUEUE_CB_SIZE(size)); + cmd.new.u.add.flags = cpu_to_le32(flags); + cmd.new.u.add.sta_mask = cpu_to_le32(sta_mask); + cmd.new.u.add.tid = tid; + + hcmd.id = WIDE_ID(DATA_PATH_GROUP, SCD_QUEUE_CONFIG_CMD); + hcmd.len[0] = sizeof(cmd.new); + hcmd.data[0] = &cmd.new; + } else { + ret = -EOPNOTSUPP; + goto error; + } + + ret = iwl_trans_send_cmd(trans, &hcmd); + if (ret) + goto error; + + return iwl_pcie_txq_alloc_response(trans, txq, &hcmd); + +error: + iwl_txq_gen2_free_memory(trans, txq); + return ret; +} + +void iwl_txq_dyn_free(struct iwl_trans *trans, int queue) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + + if (WARN(queue >= IWL_MAX_TVQM_QUEUES, + "queue %d out of range", queue)) + return; + + /* + * Upon HW Rfkill - we stop the device, and then stop the queues + * in the op_mode. Just for the sake of the simplicity of the op_mode, + * allow the op_mode to call txq_disable after it already called + * stop_device. 
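/*
 * Illustrative sketch, not part of the patch: the size fallback in
 * iwl_txq_dyn_alloc() above. The requested size is clamped to what the
 * byte-count table can index, rounded down to a power of two (the
 * read/write pointer math requires it), then halved on allocation failure
 * down to a floor of 16 entries. try_alloc() is a stand-in for the real
 * DMA allocation.
 */
#include <stdio.h>

static unsigned int rounddown_pow_of_two(unsigned int n)
{
	unsigned int p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

/* pretend allocations above 512 entries fail, for the demonstration */
static int try_alloc(unsigned int size)
{
	return size <= 512;
}

int main(void)
{
	unsigned int bc_entries = 1024, size = 1000;

	size = (size < bc_entries) ? size : bc_entries;
	size = rounddown_pow_of_two(size);	/* 1000 -> 512 */

	while (size >= 16 && !try_alloc(size))
		size /= 2;

	printf("allocated %u entries\n", size >= 16 ? size : 0);
	return 0;
}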
+ */ + if (!test_and_clear_bit(queue, trans_pcie->txqs.queue_used)) { + WARN_ONCE(test_bit(STATUS_DEVICE_ENABLED, &trans->status), + "queue %d not used", queue); + return; + } + + iwl_txq_gen2_free(trans, queue); + + IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", queue); +} + +void iwl_txq_gen2_tx_free(struct iwl_trans *trans) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + int i; + + memset(trans_pcie->txqs.queue_used, 0, + sizeof(trans_pcie->txqs.queue_used)); + + /* Free all TX queues */ + for (i = 0; i < ARRAY_SIZE(trans_pcie->txqs.txq); i++) { + if (!trans_pcie->txqs.txq[i]) + continue; + + iwl_txq_gen2_free(trans, i); + } +} + +int iwl_txq_gen2_init(struct iwl_trans *trans, int txq_id, int queue_size) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *queue; + int ret; + + /* alloc and init the tx queue */ + if (!trans_pcie->txqs.txq[txq_id]) { + queue = kzalloc(sizeof(*queue), GFP_KERNEL); + if (!queue) { + IWL_ERR(trans, "Not enough memory for tx queue\n"); + return -ENOMEM; + } + trans_pcie->txqs.txq[txq_id] = queue; + ret = iwl_pcie_txq_alloc(trans, queue, queue_size, true); + if (ret) { + IWL_ERR(trans, "Tx %d queue init failed\n", txq_id); + goto error; + } + } else { + queue = trans_pcie->txqs.txq[txq_id]; + } + + ret = iwl_txq_init(trans, queue, queue_size, + (txq_id == trans_pcie->txqs.cmd.q_id)); + if (ret) { + IWL_ERR(trans, "Tx %d queue alloc failed\n", txq_id); + goto error; + } + trans_pcie->txqs.txq[txq_id]->id = txq_id; + set_bit(txq_id, trans_pcie->txqs.queue_used); + + return 0; + +error: + iwl_txq_gen2_tx_free(trans); + return ret; +} /*************** HOST COMMAND QUEUE FUNCTIONS *****/ @@ -28,7 +1201,7 @@ int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id]; + struct iwl_txq *txq = trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]; struct iwl_device_cmd *out_cmd; struct iwl_cmd_meta *out_meta; void *dup_buf = NULL; @@ -130,7 +1303,7 @@ int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans, out_cmd = txq->entries[idx].cmd; out_meta = &txq->entries[idx].meta; - /* re-initialize to NULL */ + /* re-initialize, this also marks the SG list as unused */ memset(out_meta, 0, sizeof(*out_meta)); if (cmd->flags & CMD_WANT_SKB) out_meta->source = cmd; @@ -143,7 +1316,7 @@ int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans, cpu_to_le16(cmd_size - sizeof(struct iwl_cmd_header_wide)); out_cmd->hdr_wide.reserved = 0; out_cmd->hdr_wide.sequence = - cpu_to_le16(QUEUE_TO_SEQ(trans->txqs.cmd.q_id) | + cpu_to_le16(QUEUE_TO_SEQ(trans_pcie->txqs.cmd.q_id) | INDEX_TO_SEQ(txq->write_ptr)); cmd_pos = sizeof(struct iwl_cmd_header_wide); @@ -191,7 +1364,7 @@ int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans, "Sending command %s (%.2x.%.2x), seq: 0x%04X, %d bytes at %d[%d]:%d\n", iwl_get_cmd_string(trans, cmd->id), group_id, out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence), - cmd_size, txq->write_ptr, idx, trans->txqs.cmd.q_id); + cmd_size, txq->write_ptr, idx, trans_pcie->txqs.cmd.q_id); /* start the TFD with the minimum copy bytes */ tb0_size = min_t(int, copy_size, IWL_FIRST_TB_SIZE); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c index fa8eba47dc4c..22d482ae53d9 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c @@ -1,16 +1,22 @@ 
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2003-2014, 2018-2021, 2023 Intel Corporation + * Copyright (C) 2003-2014, 2018-2021, 2023-2024 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ #include <linux/etherdevice.h> #include <linux/ieee80211.h> +#include <linux/dmapool.h> #include <linux/slab.h> #include <linux/sched.h> +#include <linux/tcp.h> #include <net/ip6_checksum.h> #include <net/tso.h> +#include "fw/api/commands.h" +#include "fw/api/datapath.h" +#include "fw/api/debug.h" +#include "iwl-fh.h" #include "iwl-debug.h" #include "iwl-csr.h" #include "iwl-prph.h" @@ -72,6 +78,7 @@ void iwl_pcie_free_dma_ptr(struct iwl_trans *trans, struct iwl_dma_ptr *ptr) static void iwl_pcie_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); u32 reg = 0; int txq_id = txq->id; @@ -84,7 +91,7 @@ static void iwl_pcie_txq_inc_wr_ptr(struct iwl_trans *trans, * 3. there is a chance that the NIC is asleep */ if (!trans->trans_cfg->base_params->shadow_reg_enable && - txq_id != trans->txqs.cmd.q_id && + txq_id != trans_pcie->txqs.cmd.q_id && test_bit(STATUS_TPOWER_PMI, &trans->status)) { /* * wake up nic if it's powered down ... @@ -115,12 +122,13 @@ static void iwl_pcie_txq_inc_wr_ptr(struct iwl_trans *trans, void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int i; for (i = 0; i < trans->trans_cfg->base_params->num_of_queues; i++) { - struct iwl_txq *txq = trans->txqs.txq[i]; + struct iwl_txq *txq = trans_pcie->txqs.txq[i]; - if (!test_bit(i, trans->txqs.queue_used)) + if (!test_bit(i, trans_pcie->txqs.queue_used)) continue; spin_lock_bh(&txq->lock); @@ -132,23 +140,43 @@ void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans) } } +static inline void iwl_pcie_gen1_tfd_set_tb(struct iwl_tfd *tfd, + u8 idx, dma_addr_t addr, u16 len) +{ + struct iwl_tfd_tb *tb = &tfd->tbs[idx]; + u16 hi_n_len = len << 4; + + put_unaligned_le32(addr, &tb->lo); + hi_n_len |= iwl_get_dma_hi_addr(addr); + + tb->hi_n_len = cpu_to_le16(hi_n_len); + + tfd->num_tbs = idx + 1; +} + +static inline u8 iwl_txq_gen1_tfd_get_num_tbs(struct iwl_tfd *tfd) +{ + return tfd->num_tbs & 0x1f; +} + static int iwl_pcie_txq_build_tfd(struct iwl_trans *trans, struct iwl_txq *txq, dma_addr_t addr, u16 len, bool reset) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); void *tfd; u32 num_tbs; - tfd = (u8 *)txq->tfds + trans->txqs.tfd.size * txq->write_ptr; + tfd = (u8 *)txq->tfds + trans_pcie->txqs.tfd.size * txq->write_ptr; if (reset) - memset(tfd, 0, trans->txqs.tfd.size); + memset(tfd, 0, trans_pcie->txqs.tfd.size); - num_tbs = iwl_txq_gen1_tfd_get_num_tbs(trans, tfd); + num_tbs = iwl_txq_gen1_tfd_get_num_tbs(tfd); /* Each TFD can point to a maximum max_tbs Tx buffers */ - if (num_tbs >= trans->txqs.tfd.max_tbs) { + if (num_tbs >= trans_pcie->txqs.tfd.max_tbs) { IWL_ERR(trans, "Error can not send more than %d chunks\n", - trans->txqs.tfd.max_tbs); + trans_pcie->txqs.tfd.max_tbs); return -EINVAL; } @@ -156,7 +184,7 @@ static int iwl_pcie_txq_build_tfd(struct iwl_trans *trans, struct iwl_txq *txq, "Unaligned address = %llx\n", (unsigned long long)addr)) return -EINVAL; - iwl_pcie_gen1_tfd_set_tb(trans, tfd, num_tbs, addr, len); + iwl_pcie_gen1_tfd_set_tb(tfd, num_tbs, addr, len); return num_tbs; } @@ -181,36 +209,206 @@ static void 
iwl_pcie_clear_cmd_in_flight(struct iwl_trans *trans) spin_unlock(&trans_pcie->reg_lock); } +static void iwl_pcie_free_and_unmap_tso_page(struct iwl_trans *trans, + struct page *page) +{ + struct iwl_tso_page_info *info = IWL_TSO_PAGE_INFO(page_address(page)); + + /* Decrease internal use count and unmap/free page if needed */ + if (refcount_dec_and_test(&info->use_count)) { + dma_unmap_page(trans->dev, info->dma_addr, PAGE_SIZE, + DMA_TO_DEVICE); + + __free_page(page); + } +} + +void iwl_pcie_free_tso_pages(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_cmd_meta *cmd_meta) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct page **page_ptr; + struct page *next; + + page_ptr = (void *)((u8 *)skb->cb + trans_pcie->txqs.page_offs); + next = *page_ptr; + *page_ptr = NULL; + + while (next) { + struct iwl_tso_page_info *info; + struct page *tmp = next; + + info = IWL_TSO_PAGE_INFO(page_address(next)); + next = info->next; + + /* Unmap the scatter gather list that is on the last page */ + if (!next && cmd_meta->sg_offset) { + struct sg_table *sgt; + + sgt = (void *)((u8 *)page_address(tmp) + + cmd_meta->sg_offset); + + dma_unmap_sgtable(trans->dev, sgt, DMA_TO_DEVICE, 0); + } + + iwl_pcie_free_and_unmap_tso_page(trans, tmp); + } +} + +static inline dma_addr_t +iwl_txq_gen1_tfd_tb_get_addr(struct iwl_tfd *tfd, u8 idx) +{ + struct iwl_tfd_tb *tb = &tfd->tbs[idx]; + dma_addr_t addr; + dma_addr_t hi_len; + + addr = get_unaligned_le32(&tb->lo); + + if (sizeof(dma_addr_t) <= sizeof(u32)) + return addr; + + hi_len = le16_to_cpu(tb->hi_n_len) & 0xF; + + /* + * shift by 16 twice to avoid warnings on 32-bit + * (where this code never runs anyway due to the + * if statement above) + */ + return addr | ((hi_len << 16) << 16); +} + +static void iwl_txq_set_tfd_invalid_gen1(struct iwl_trans *trans, + struct iwl_tfd *tfd) +{ + tfd->num_tbs = 0; + + iwl_pcie_gen1_tfd_set_tb(tfd, 0, trans->invalid_tx_cmd.dma, + trans->invalid_tx_cmd.size); +} + +static void iwl_txq_gen1_tfd_unmap(struct iwl_trans *trans, + struct iwl_cmd_meta *meta, + struct iwl_txq *txq, int index) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + int i, num_tbs; + struct iwl_tfd *tfd = iwl_txq_get_tfd(trans, txq, index); + + /* Sanity check on number of chunks */ + num_tbs = iwl_txq_gen1_tfd_get_num_tbs(tfd); + + if (num_tbs > trans_pcie->txqs.tfd.max_tbs) { + IWL_ERR(trans, "Too many chunks: %i\n", num_tbs); + /* @todo issue fatal error, it is quite serious situation */ + return; + } + + /* TB1 is mapped directly, the rest is the TSO page and SG list. 
*/ + if (meta->sg_offset) + num_tbs = 2; + + /* first TB is never freed - it's the bidirectional DMA data */ + + for (i = 1; i < num_tbs; i++) { + if (meta->tbs & BIT(i)) + dma_unmap_page(trans->dev, + iwl_txq_gen1_tfd_tb_get_addr(tfd, i), + iwl_txq_gen1_tfd_tb_get_len(trans, + tfd, i), + DMA_TO_DEVICE); + else + dma_unmap_single(trans->dev, + iwl_txq_gen1_tfd_tb_get_addr(tfd, i), + iwl_txq_gen1_tfd_tb_get_len(trans, + tfd, i), + DMA_TO_DEVICE); + } + + meta->tbs = 0; + + iwl_txq_set_tfd_invalid_gen1(trans, tfd); +} + +/** + * iwl_txq_free_tfd - Free all chunks referenced by TFD [txq->q.read_ptr] + * @trans: transport private data + * @txq: tx queue + * @read_ptr: the TXQ read_ptr to free + * + * Does NOT advance any TFD circular buffer read/write indexes + * Does NOT free the TFD itself (which is within circular buffer) + */ +static void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq, + int read_ptr) +{ + /* rd_ptr is bounded by TFD_QUEUE_SIZE_MAX and + * idx is bounded by n_window + */ + int idx = iwl_txq_get_cmd_index(txq, read_ptr); + struct sk_buff *skb; + + lockdep_assert_held(&txq->reclaim_lock); + + if (!txq->entries) + return; + + /* We have only q->n_window txq->entries, but we use + * TFD_QUEUE_SIZE_MAX tfds + */ + if (trans->trans_cfg->gen2) + iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta, + iwl_txq_get_tfd(trans, txq, read_ptr)); + else + iwl_txq_gen1_tfd_unmap(trans, &txq->entries[idx].meta, + txq, read_ptr); + + /* free SKB */ + skb = txq->entries[idx].skb; + + /* Can be called from irqs-disabled context + * If skb is not NULL, it means that the whole queue is being + * freed and that the queue is not empty - free the skb + */ + if (skb) { + iwl_op_mode_free_skb(trans->op_mode, skb); + txq->entries[idx].skb = NULL; + } +} + /* * iwl_pcie_txq_unmap - Unmap any remaining DMA mappings and free skb's */ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id) { - struct iwl_txq *txq = trans->txqs.txq[txq_id]; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; if (!txq) { IWL_ERR(trans, "Trying to free a queue that wasn't allocated?\n"); return; } - spin_lock_bh(&txq->lock); + spin_lock_bh(&txq->reclaim_lock); + spin_lock(&txq->lock); while (txq->write_ptr != txq->read_ptr) { IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n", txq_id, txq->read_ptr); - if (txq_id != trans->txqs.cmd.q_id) { + if (txq_id != trans_pcie->txqs.cmd.q_id) { struct sk_buff *skb = txq->entries[txq->read_ptr].skb; + struct iwl_cmd_meta *cmd_meta = + &txq->entries[txq->read_ptr].meta; if (WARN_ON_ONCE(!skb)) continue; - iwl_txq_free_tso_page(trans, skb); + iwl_pcie_free_tso_pages(trans, skb, cmd_meta); } - iwl_txq_free_tfd(trans, txq); + iwl_txq_free_tfd(trans, txq, txq->read_ptr); txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr); if (txq->read_ptr == txq->write_ptr && - txq_id == trans->txqs.cmd.q_id) + txq_id == trans_pcie->txqs.cmd.q_id) iwl_pcie_clear_cmd_in_flight(trans); } @@ -220,10 +418,11 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id) iwl_op_mode_free_skb(trans->op_mode, skb); } - spin_unlock_bh(&txq->lock); + spin_unlock(&txq->lock); + spin_unlock_bh(&txq->reclaim_lock); /* just in case - this queue may have been stopped */ - iwl_wake_queue(trans, txq); + iwl_trans_pcie_wake_queue(trans, txq); } /* @@ -236,7 +435,8 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id) */ static void iwl_pcie_txq_free(struct iwl_trans *trans, int txq_id) { - struct 
iwl_txq *txq = trans->txqs.txq[txq_id]; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; struct device *dev = trans->dev; int i; @@ -246,7 +446,7 @@ static void iwl_pcie_txq_free(struct iwl_trans *trans, int txq_id) iwl_pcie_txq_unmap(trans, txq_id); /* De-alloc array of command/tx buffers */ - if (txq_id == trans->txqs.cmd.q_id) + if (txq_id == trans_pcie->txqs.cmd.q_id) for (i = 0; i < txq->n_window; i++) { kfree_sensitive(txq->entries[i].cmd); kfree_sensitive(txq->entries[i].free_buf); @@ -255,7 +455,7 @@ static void iwl_pcie_txq_free(struct iwl_trans *trans, int txq_id) /* De-alloc circular buffer of TFDs */ if (txq->tfds) { dma_free_coherent(dev, - trans->txqs.tfd.size * + trans_pcie->txqs.tfd.size * trans->trans_cfg->base_params->max_tfd_queue_size, txq->tfds, txq->dma_addr); txq->dma_addr = 0; @@ -285,9 +485,10 @@ void iwl_pcie_tx_start(struct iwl_trans *trans, u32 scd_base_addr) SCD_CONTEXT_MEM_LOWER_BOUND) / sizeof(u32); /* make sure all queue are not stopped/used */ - memset(trans->txqs.queue_stopped, 0, - sizeof(trans->txqs.queue_stopped)); - memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used)); + memset(trans_pcie->txqs.queue_stopped, 0, + sizeof(trans_pcie->txqs.queue_stopped)); + memset(trans_pcie->txqs.queue_used, 0, + sizeof(trans_pcie->txqs.queue_used)); trans_pcie->scd_base_addr = iwl_read_prph(trans, SCD_SRAM_BASE_ADDR); @@ -301,7 +502,7 @@ void iwl_pcie_tx_start(struct iwl_trans *trans, u32 scd_base_addr) NULL, clear_dwords); iwl_write_prph(trans, SCD_DRAM_BASE_ADDR, - trans->txqs.scd_bc_tbls.dma >> 10); + trans_pcie->txqs.scd_bc_tbls.dma >> 10); /* The chain extension of the SCD doesn't work well. This feature is * enabled by default by the HW, so we need to disable it manually. @@ -309,9 +510,9 @@ void iwl_pcie_tx_start(struct iwl_trans *trans, u32 scd_base_addr) if (trans->trans_cfg->base_params->scd_chain_ext_wa) iwl_write_prph(trans, SCD_CHAINEXT_EN, 0); - iwl_trans_ac_txq_enable(trans, trans->txqs.cmd.q_id, - trans->txqs.cmd.fifo, - trans->txqs.cmd.wdg_timeout); + iwl_trans_ac_txq_enable(trans, trans_pcie->txqs.cmd.q_id, + trans_pcie->txqs.cmd.fifo, + trans_pcie->txqs.cmd.wdg_timeout); /* Activate all Tx DMA/FIFO channels */ iwl_scd_activate_fifos(trans); @@ -347,7 +548,7 @@ void iwl_trans_pcie_tx_reset(struct iwl_trans *trans) for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues; txq_id++) { - struct iwl_txq *txq = trans->txqs.txq[txq_id]; + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; if (trans->trans_cfg->gen2) iwl_write_direct64(trans, FH_MEM_CBBC_QUEUE(trans, txq_id), @@ -422,9 +623,10 @@ int iwl_pcie_tx_stop(struct iwl_trans *trans) * queues. This happens when we have an rfkill interrupt. * Since we stop Tx altogether - mark the queues as stopped. 
*/ - memset(trans->txqs.queue_stopped, 0, - sizeof(trans->txqs.queue_stopped)); - memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used)); + memset(trans_pcie->txqs.queue_stopped, 0, + sizeof(trans_pcie->txqs.queue_stopped)); + memset(trans_pcie->txqs.queue_used, 0, + sizeof(trans_pcie->txqs.queue_used)); /* This can happen: start_hw, stop_device */ if (!trans_pcie->txq_memory) @@ -448,7 +650,8 @@ void iwl_pcie_tx_free(struct iwl_trans *trans) int txq_id; struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used)); + memset(trans_pcie->txqs.queue_used, 0, + sizeof(trans_pcie->txqs.queue_used)); /* Tx queues */ if (trans_pcie->txq_memory) { @@ -456,7 +659,7 @@ void iwl_pcie_tx_free(struct iwl_trans *trans) txq_id < trans->trans_cfg->base_params->num_of_queues; txq_id++) { iwl_pcie_txq_free(trans, txq_id); - trans->txqs.txq[txq_id] = NULL; + trans_pcie->txqs.txq[txq_id] = NULL; } } @@ -465,7 +668,135 @@ void iwl_pcie_tx_free(struct iwl_trans *trans) iwl_pcie_free_dma_ptr(trans, &trans_pcie->kw); - iwl_pcie_free_dma_ptr(trans, &trans->txqs.scd_bc_tbls); + iwl_pcie_free_dma_ptr(trans, &trans_pcie->txqs.scd_bc_tbls); +} + +void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq) +{ + u32 txq_id = txq->id; + u32 status; + bool active; + u8 fifo; + + if (trans->trans_cfg->gen2) { + IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id, + txq->read_ptr, txq->write_ptr); + /* TODO: access new SCD registers and dump them */ + return; + } + + status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(txq_id)); + fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7; + active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE)); + + IWL_ERR(trans, + "Queue %d is %sactive on fifo %d and stuck for %u ms. SW [%d, %d] HW [%d, %d] FH TRB=0x0%x\n", + txq_id, active ? "" : "in", fifo, + jiffies_to_msecs(txq->wd_timeout), + txq->read_ptr, txq->write_ptr, + iwl_read_prph(trans, SCD_QUEUE_RDPTR(txq_id)) & + (trans->trans_cfg->base_params->max_tfd_queue_size - 1), + iwl_read_prph(trans, SCD_QUEUE_WRPTR(txq_id)) & + (trans->trans_cfg->base_params->max_tfd_queue_size - 1), + iwl_read_direct32(trans, FH_TX_TRB_REG(fifo))); +} + +static void iwl_txq_stuck_timer(struct timer_list *t) +{ + struct iwl_txq *txq = from_timer(txq, t, stuck_timer); + struct iwl_trans *trans = txq->trans; + + spin_lock(&txq->lock); + /* check if triggered erroneously */ + if (txq->read_ptr == txq->write_ptr) { + spin_unlock(&txq->lock); + return; + } + spin_unlock(&txq->lock); + + iwl_txq_log_scd_error(trans, txq); + + iwl_force_nmi(trans); +} + +int iwl_pcie_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, + int slots_num, bool cmd_queue) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + size_t num_entries = trans->trans_cfg->gen2 ? 
+ slots_num : trans->trans_cfg->base_params->max_tfd_queue_size; + size_t tfd_sz; + size_t tb0_buf_sz; + int i; + + if (WARN_ONCE(slots_num <= 0, "Invalid slots num:%d\n", slots_num)) + return -EINVAL; + + if (WARN_ON(txq->entries || txq->tfds)) + return -EINVAL; + + tfd_sz = trans_pcie->txqs.tfd.size * num_entries; + + timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0); + txq->trans = trans; + + txq->n_window = slots_num; + + txq->entries = kcalloc(slots_num, + sizeof(struct iwl_pcie_txq_entry), + GFP_KERNEL); + + if (!txq->entries) + goto error; + + if (cmd_queue) + for (i = 0; i < slots_num; i++) { + txq->entries[i].cmd = + kmalloc(sizeof(struct iwl_device_cmd), + GFP_KERNEL); + if (!txq->entries[i].cmd) + goto error; + } + + /* Circular buffer of transmit frame descriptors (TFDs), + * shared with device + */ + txq->tfds = dma_alloc_coherent(trans->dev, tfd_sz, + &txq->dma_addr, GFP_KERNEL); + if (!txq->tfds) + goto error; + + BUILD_BUG_ON(sizeof(*txq->first_tb_bufs) != IWL_FIRST_TB_SIZE_ALIGN); + + tb0_buf_sz = sizeof(*txq->first_tb_bufs) * slots_num; + + txq->first_tb_bufs = dma_alloc_coherent(trans->dev, tb0_buf_sz, + &txq->first_tb_dma, + GFP_KERNEL); + if (!txq->first_tb_bufs) + goto err_free_tfds; + + for (i = 0; i < num_entries; i++) { + void *tfd = iwl_txq_get_tfd(trans, txq, i); + + if (trans->trans_cfg->gen2) + iwl_txq_set_tfd_invalid_gen2(trans, tfd); + else + iwl_txq_set_tfd_invalid_gen1(trans, tfd); + } + + return 0; +err_free_tfds: + dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->dma_addr); + txq->tfds = NULL; +error: + if (txq->entries && cmd_queue) + for (i = 0; i < slots_num; i++) + kfree(txq->entries[i].cmd); + kfree(txq->entries); + txq->entries = NULL; + + return -ENOMEM; } /* @@ -491,7 +822,7 @@ static int iwl_pcie_tx_alloc(struct iwl_trans *trans) goto error; } - ret = iwl_pcie_alloc_dma_ptr(trans, &trans->txqs.scd_bc_tbls, + ret = iwl_pcie_alloc_dma_ptr(trans, &trans_pcie->txqs.scd_bc_tbls, bc_tbls_size); if (ret) { IWL_ERR(trans, "Scheduler BC Table allocation failed\n"); @@ -517,7 +848,7 @@ static int iwl_pcie_tx_alloc(struct iwl_trans *trans) /* Alloc and init all Tx queues, including the command queue (#4/#9) */ for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues; txq_id++) { - bool cmd_queue = (txq_id == trans->txqs.cmd.q_id); + bool cmd_queue = (txq_id == trans_pcie->txqs.cmd.q_id); if (cmd_queue) slots_num = max_t(u32, IWL_CMD_QUEUE_SIZE, @@ -525,14 +856,14 @@ static int iwl_pcie_tx_alloc(struct iwl_trans *trans) else slots_num = max_t(u32, IWL_DEFAULT_QUEUE_SIZE, trans->cfg->min_ba_txq_size); - trans->txqs.txq[txq_id] = &trans_pcie->txq_memory[txq_id]; - ret = iwl_txq_alloc(trans, trans->txqs.txq[txq_id], slots_num, - cmd_queue); + trans_pcie->txqs.txq[txq_id] = &trans_pcie->txq_memory[txq_id]; + ret = iwl_pcie_txq_alloc(trans, trans_pcie->txqs.txq[txq_id], + slots_num, cmd_queue); if (ret) { IWL_ERR(trans, "Tx %d queue alloc failed\n", txq_id); goto error; } - trans->txqs.txq[txq_id]->id = txq_id; + trans_pcie->txqs.txq[txq_id]->id = txq_id; } return 0; @@ -543,6 +874,69 @@ error: return ret; } +/* + * iwl_queue_init - Initialize queue's high/low-water and read/write indexes + */ +static int iwl_queue_init(struct iwl_txq *q, int slots_num) +{ + q->n_window = slots_num; + + /* slots_num must be power-of-two size, otherwise + * iwl_txq_get_cmd_index is broken. 
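+ * iwl_txq_get_cmd_index() masks with (n_window - 1): e.g. with + * n_window = 64, ptr 200 maps to 200 & 63 = 8 (= 200 % 64), while a + * non-power-of-two size such as 100 would give 200 & 99 = 64 instead + * of 200 % 100 = 0.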
+ */ + if (WARN_ON(!is_power_of_2(slots_num))) + return -EINVAL; + + q->low_mark = q->n_window / 4; + if (q->low_mark < 4) + q->low_mark = 4; + + q->high_mark = q->n_window / 8; + if (q->high_mark < 2) + q->high_mark = 2; + + q->write_ptr = 0; + q->read_ptr = 0; + + return 0; +} + +int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, + int slots_num, bool cmd_queue) +{ + u32 tfd_queue_max_size = + trans->trans_cfg->base_params->max_tfd_queue_size; + int ret; + + txq->need_update = false; + + /* max_tfd_queue_size must be power-of-two size, otherwise + * iwl_txq_inc_wrap and iwl_txq_dec_wrap are broken. + */ + if (WARN_ONCE(tfd_queue_max_size & (tfd_queue_max_size - 1), + "Max tfd queue size must be a power of two, but is %d", + tfd_queue_max_size)) + return -EINVAL; + + /* Initialize queue's high/low-water marks, and head/tail indexes */ + ret = iwl_queue_init(txq, slots_num); + if (ret) + return ret; + + spin_lock_init(&txq->lock); + spin_lock_init(&txq->reclaim_lock); + + if (cmd_queue) { + static struct lock_class_key iwl_txq_cmd_queue_lock_class; + + lockdep_set_class(&txq->lock, &iwl_txq_cmd_queue_lock_class); + } + + __skb_queue_head_init(&txq->overflow_q); + + return 0; +} + int iwl_pcie_tx_init(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -571,7 +965,7 @@ int iwl_pcie_tx_init(struct iwl_trans *trans) /* Alloc and init all Tx queues, including the command queue (#4/#9) */ for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues; txq_id++) { - bool cmd_queue = (txq_id == trans->txqs.cmd.q_id); + bool cmd_queue = (txq_id == trans_pcie->txqs.cmd.q_id); if (cmd_queue) slots_num = max_t(u32, IWL_CMD_QUEUE_SIZE, @@ -579,7 +973,7 @@ int iwl_pcie_tx_init(struct iwl_trans *trans) else slots_num = max_t(u32, IWL_DEFAULT_QUEUE_SIZE, trans->cfg->min_ba_txq_size); - ret = iwl_txq_init(trans, trans->txqs.txq[txq_id], slots_num, + ret = iwl_txq_init(trans, trans_pcie->txqs.txq[txq_id], slots_num, cmd_queue); if (ret) { IWL_ERR(trans, "Tx %d queue init failed\n", txq_id); @@ -593,7 +987,7 @@ int iwl_pcie_tx_init(struct iwl_trans *trans) * Circular buffer (TFD queue in DRAM) physical base address */ iwl_write_direct32(trans, FH_MEM_CBBC_QUEUE(trans, txq_id), - trans->txqs.txq[txq_id]->dma_addr >> 8); + trans_pcie->txqs.txq[txq_id]->dma_addr >> 8); } iwl_set_bits_prph(trans, SCD_GP_CTRL, SCD_GP_CTRL_AUTO_ACTIVE_MODE); @@ -641,6 +1035,42 @@ static int iwl_pcie_set_cmd_in_flight(struct iwl_trans *trans, return 0; } +static void iwl_txq_progress(struct iwl_txq *txq) +{ + lockdep_assert_held(&txq->lock); + + if (!txq->wd_timeout) + return; + + /* + * station is asleep and we send data - that must + * be uAPSD or PS-Poll. Don't rearm the timer. + */ + if (txq->frozen) + return; + + /* + * if empty delete timer, otherwise move timer forward + * since we're making progress on this queue + */ + if (txq->read_ptr == txq->write_ptr) + del_timer(&txq->stuck_timer); + else + mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout); +} + +static inline bool iwl_txq_used(const struct iwl_txq *q, int i, + int read_ptr, int write_ptr) +{ + int index = iwl_txq_get_cmd_index(q, i); + int r = iwl_txq_get_cmd_index(q, read_ptr); + int w = iwl_txq_get_cmd_index(q, write_ptr); + + return w >= r ? 
+ (index >= r && index < w) : + !(index < r && index >= w); +} + /* * iwl_pcie_cmdq_reclaim - Reclaim TX command queue entries already Tx'd * @@ -650,7 +1080,8 @@ static int iwl_pcie_set_cmd_in_flight(struct iwl_trans *trans, */ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx) { - struct iwl_txq *txq = trans->txqs.txq[txq_id]; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; int nfreed = 0; u16 r; @@ -660,8 +1091,8 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx) r = iwl_txq_get_cmd_index(txq, txq->read_ptr); if (idx >= trans->trans_cfg->base_params->max_tfd_queue_size || - (!iwl_txq_used(txq, idx))) { - WARN_ONCE(test_bit(txq_id, trans->txqs.queue_used), + (!iwl_txq_used(txq, idx, txq->read_ptr, txq->write_ptr))) { + WARN_ONCE(test_bit(txq_id, trans_pcie->txqs.queue_used), "%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n", __func__, txq_id, idx, trans->trans_cfg->base_params->max_tfd_queue_size, @@ -720,11 +1151,11 @@ bool iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, u16 ssn, unsigned int wdg_timeout) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - struct iwl_txq *txq = trans->txqs.txq[txq_id]; + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; int fifo = -1; bool scd_bug = false; - if (test_and_set_bit(txq_id, trans->txqs.queue_used)) + if (test_and_set_bit(txq_id, trans_pcie->txqs.queue_used)) WARN_ONCE(1, "queue %d already used - expect issues", txq_id); txq->wd_timeout = msecs_to_jiffies(wdg_timeout); @@ -733,7 +1164,7 @@ bool iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, u16 ssn, fifo = cfg->fifo; /* Disable the scheduler prior configuring the cmd queue */ - if (txq_id == trans->txqs.cmd.q_id && + if (txq_id == trans_pcie->txqs.cmd.q_id && trans_pcie->scd_set_active) iwl_scd_enable_set_active(trans, 0); @@ -741,7 +1172,7 @@ bool iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, u16 ssn, iwl_scd_txq_set_inactive(trans, txq_id); /* Set this queue as a chain-building queue unless it is CMD */ - if (txq_id != trans->txqs.cmd.q_id) + if (txq_id != trans_pcie->txqs.cmd.q_id) iwl_scd_txq_set_chain(trans, txq_id); if (cfg->aggregate) { @@ -811,7 +1242,7 @@ bool iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, u16 ssn, SCD_QUEUE_STTS_REG_MSK); /* enable the scheduler for this queue (only) */ - if (txq_id == trans->txqs.cmd.q_id && + if (txq_id == trans_pcie->txqs.cmd.q_id && trans_pcie->scd_set_active) iwl_scd_enable_set_active(trans, BIT(txq_id)); @@ -830,7 +1261,8 @@ bool iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, u16 ssn, void iwl_trans_pcie_txq_set_shared_mode(struct iwl_trans *trans, u32 txq_id, bool shared_mode) { - struct iwl_txq *txq = trans->txqs.txq[txq_id]; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; txq->ampdu = !shared_mode; } @@ -843,8 +1275,8 @@ void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int txq_id, SCD_TX_STTS_QUEUE_OFFSET(txq_id); static const u32 zero_val[4] = {}; - trans->txqs.txq[txq_id]->frozen_expiry_remainder = 0; - trans->txqs.txq[txq_id]->frozen = false; + trans_pcie->txqs.txq[txq_id]->frozen_expiry_remainder = 0; + trans_pcie->txqs.txq[txq_id]->frozen = false; /* * Upon HW Rfkill - we stop the device, and then stop the queues @@ -852,7 +1284,7 @@ void iwl_trans_pcie_txq_disable(struct 
iwl_trans *trans, int txq_id, * allow the op_mode to call txq_disable after it already called * stop_device. */ - if (!test_and_clear_bit(txq_id, trans->txqs.queue_used)) { + if (!test_and_clear_bit(txq_id, trans_pcie->txqs.queue_used)) { WARN_ONCE(test_bit(STATUS_DEVICE_ENABLED, &trans->status), "queue %d not used", txq_id); return; @@ -866,7 +1298,7 @@ void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int txq_id, } iwl_pcie_txq_unmap(trans, txq_id); - trans->txqs.txq[txq_id]->ampdu = false; + trans_pcie->txqs.txq[txq_id]->ampdu = false; IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", txq_id); } @@ -875,12 +1307,13 @@ void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int txq_id, static void iwl_trans_pcie_block_txq_ptrs(struct iwl_trans *trans, bool block) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int i; for (i = 0; i < trans->trans_cfg->base_params->num_of_queues; i++) { - struct iwl_txq *txq = trans->txqs.txq[i]; + struct iwl_txq *txq = trans_pcie->txqs.txq[i]; - if (i == trans->txqs.cmd.q_id) + if (i == trans_pcie->txqs.cmd.q_id) continue; /* we skip the command queue (obviously) so it's OK to nest */ @@ -912,7 +1345,8 @@ static void iwl_trans_pcie_block_txq_ptrs(struct iwl_trans *trans, bool block) int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd) { - struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id]; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]; struct iwl_device_cmd *out_cmd; struct iwl_cmd_meta *out_meta; void *dup_buf = NULL; @@ -1024,7 +1458,8 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans, out_cmd = txq->entries[idx].cmd; out_meta = &txq->entries[idx].meta; - memset(out_meta, 0, sizeof(*out_meta)); /* re-initialize to NULL */ + /* re-initialize, this also marks the SG list as unused */ + memset(out_meta, 0, sizeof(*out_meta)); if (cmd->flags & CMD_WANT_SKB) out_meta->source = cmd; @@ -1038,7 +1473,7 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans, sizeof(struct iwl_cmd_header_wide)); out_cmd->hdr_wide.reserved = 0; out_cmd->hdr_wide.sequence = - cpu_to_le16(QUEUE_TO_SEQ(trans->txqs.cmd.q_id) | + cpu_to_le16(QUEUE_TO_SEQ(trans_pcie->txqs.cmd.q_id) | INDEX_TO_SEQ(txq->write_ptr)); cmd_pos = sizeof(struct iwl_cmd_header_wide); @@ -1046,7 +1481,7 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans, } else { out_cmd->hdr.cmd = iwl_cmd_opcode(cmd->id); out_cmd->hdr.sequence = - cpu_to_le16(QUEUE_TO_SEQ(trans->txqs.cmd.q_id) | + cpu_to_le16(QUEUE_TO_SEQ(trans_pcie->txqs.cmd.q_id) | INDEX_TO_SEQ(txq->write_ptr)); out_cmd->hdr.group_id = 0; @@ -1097,7 +1532,7 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans, iwl_get_cmd_string(trans, cmd->id), group_id, out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence), - cmd_size, txq->write_ptr, idx, trans->txqs.cmd.q_id); + cmd_size, txq->write_ptr, idx, trans_pcie->txqs.cmd.q_id); /* start the TFD with the minimum copy bytes */ tb0_size = min_t(int, copy_size, IWL_FIRST_TB_SIZE); @@ -1196,14 +1631,14 @@ void iwl_pcie_hcmd_complete(struct iwl_trans *trans, struct iwl_device_cmd *cmd; struct iwl_cmd_meta *meta; struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id]; + struct iwl_txq *txq = trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]; /* If a Tx command is being handled and it isn't in the actual * command queue then there a command routing bug has been introduced * in the 
queue management code. */ - if (WARN(txq_id != trans->txqs.cmd.q_id, + if (WARN(txq_id != trans_pcie->txqs.cmd.q_id, "wrong command queue %d (should be %d), sequence 0x%X readp=%d writep=%d\n", - txq_id, trans->txqs.cmd.q_id, sequence, txq->read_ptr, + txq_id, trans_pcie->txqs.cmd.q_id, sequence, txq->read_ptr, txq->write_ptr)) { iwl_print_hex_error(trans, pkt, 32); return; @@ -1306,19 +1741,169 @@ static int iwl_fill_data_tbs(struct iwl_trans *trans, struct sk_buff *skb, } #ifdef CONFIG_INET +static void *iwl_pcie_get_page_hdr(struct iwl_trans *trans, + size_t len, struct sk_buff *skb) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_tso_hdr_page *p = this_cpu_ptr(trans_pcie->txqs.tso_hdr_page); + struct iwl_tso_page_info *info; + struct page **page_ptr; + dma_addr_t phys; + void *ret; + + page_ptr = (void *)((u8 *)skb->cb + trans_pcie->txqs.page_offs); + + if (WARN_ON(*page_ptr)) + return NULL; + + if (!p->page) + goto alloc; + + /* + * Check if there's enough room on this page + * + * Note that we put a page chaining pointer *last* in the + * page - we need it somewhere, and if it's there then we + * avoid DMA mapping the last bits of the page which may + * trigger the 32-bit boundary hardware bug. + * + * (see also get_workaround_page() in tx-gen2.c) + */ + if (((unsigned long)p->pos & ~PAGE_MASK) + len < IWL_TSO_PAGE_DATA_SIZE) { + info = IWL_TSO_PAGE_INFO(page_address(p->page)); + goto out; + } + + /* We don't have enough room on this page, get a new one. */ + iwl_pcie_free_and_unmap_tso_page(trans, p->page); + +alloc: + p->page = alloc_page(GFP_ATOMIC); + if (!p->page) + return NULL; + p->pos = page_address(p->page); + + info = IWL_TSO_PAGE_INFO(page_address(p->page)); + + /* set the chaining pointer to NULL */ + info->next = NULL; + + /* Create a DMA mapping for the page */ + phys = dma_map_page_attrs(trans->dev, p->page, 0, PAGE_SIZE, + DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); + if (unlikely(dma_mapping_error(trans->dev, phys))) { + __free_page(p->page); + p->page = NULL; + + return NULL; + } + + /* Store physical address and set use count */ + info->dma_addr = phys; + refcount_set(&info->use_count, 1); +out: + *page_ptr = p->page; + /* Return an internal reference for the caller */ + refcount_inc(&info->use_count); + ret = p->pos; + p->pos += len; + + return ret; +} + +/** + * iwl_pcie_get_sgt_tb_phys - Find TB address in mapped SG list + * @sgt: scatter gather table + * @addr: Virtual address + * + * Find the entry that includes the address for the given address and return + * correct physical address for the TB entry. + * + * Returns: Address for TB entry + */ +dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr) +{ + struct scatterlist *sg; + int i; + + for_each_sgtable_dma_sg(sgt, sg, i) { + if (addr >= sg_virt(sg) && + (u8 *)addr < (u8 *)sg_virt(sg) + sg_dma_len(sg)) + return sg_dma_address(sg) + + ((unsigned long)addr - (unsigned long)sg_virt(sg)); + } + + WARN_ON_ONCE(1); + + return DMA_MAPPING_ERROR; +} + +/** + * iwl_pcie_prep_tso - Prepare TSO page and SKB for sending + * @trans: transport private data + * @skb: the SKB to map + * @cmd_meta: command meta to store the scatter list information for unmapping + * @hdr: output argument for TSO headers + * @hdr_room: requested length for TSO headers + * + * Allocate space for a scatter gather list and TSO headers and map the SKB + * using the scatter gather list. The SKB is unmapped again when the page is + * free'ed again at the end of the operation. 
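+ * The single allocation on the TSO header page is laid out as + * [hdr_room bytes of headers][pad to __alignof__(struct sg_table)] + * [struct sg_table][nr_frags + 1 struct scatterlist entries], which is + * why the length passed to iwl_pcie_get_page_hdr() adds those three + * terms to @hdr_room.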
+ * + * Returns: newly allocated and mapped scatter gather table with list + */ +struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, + struct iwl_cmd_meta *cmd_meta, + u8 **hdr, unsigned int hdr_room) +{ + struct sg_table *sgt; + + if (WARN_ON_ONCE(skb_has_frag_list(skb))) + return NULL; + + *hdr = iwl_pcie_get_page_hdr(trans, + hdr_room + __alignof__(struct sg_table) + + sizeof(struct sg_table) + + (skb_shinfo(skb)->nr_frags + 1) * + sizeof(struct scatterlist), + skb); + if (!*hdr) + return NULL; + + sgt = (void *)PTR_ALIGN(*hdr + hdr_room, __alignof__(struct sg_table)); + sgt->sgl = (void *)(sgt + 1); + + sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1); + + sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, 0, skb->len); + if (WARN_ON_ONCE(sgt->orig_nents <= 0)) + return NULL; + + /* And map the entire SKB */ + if (dma_map_sgtable(trans->dev, sgt, DMA_TO_DEVICE, 0) < 0) + return NULL; + + /* Store non-zero (i.e. valid) offset for unmapping */ + cmd_meta->sg_offset = (unsigned long) sgt & ~PAGE_MASK; + + return sgt; +} + static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_txq *txq, u8 hdr_len, struct iwl_cmd_meta *out_meta, struct iwl_device_tx_cmd *dev_cmd, u16 tb1_len) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; struct ieee80211_hdr *hdr = (void *)skb->data; unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; unsigned int mss = skb_shinfo(skb)->gso_size; u16 length, iv_len, amsdu_pad; - u8 *start_hdr; - struct iwl_tso_hdr_page *hdr_page; + dma_addr_t start_hdr_phys; + u8 *start_hdr, *pos_hdr; + struct sg_table *sgt; struct tso_t tso; /* if the packet is protected, then it must be CCMP or GCMP */ @@ -1328,7 +1913,7 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, trace_iwlwifi_dev_tx(trans->dev, skb, iwl_txq_get_tfd(trans, txq, txq->write_ptr), - trans->txqs.tfd.size, + trans_pcie->txqs.tfd.size, &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len, 0); ip_hdrlen = skb_network_header_len(skb); @@ -1341,13 +1926,14 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len; /* Our device supports 9 segments at most, it will fit in 1 page */ - hdr_page = get_page_hdr(trans, hdr_room, skb); - if (!hdr_page) + sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room); + if (!sgt) return -ENOMEM; - start_hdr = hdr_page->pos; - memcpy(hdr_page->pos, skb->data + hdr_len, iv_len); - hdr_page->pos += iv_len; + start_hdr_phys = iwl_pcie_get_tso_page_phys(start_hdr); + pos_hdr = start_hdr; + memcpy(pos_hdr, skb->data + hdr_len, iv_len); + pos_hdr += iv_len; /* * Pull the ieee80211 header + IV to be able to use TSO core, @@ -1370,45 +1956,43 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, min_t(unsigned int, mss, total_len); unsigned int hdr_tb_len; dma_addr_t hdr_tb_phys; - u8 *subf_hdrs_start = hdr_page->pos; + u8 *subf_hdrs_start = pos_hdr; total_len -= data_left; - memset(hdr_page->pos, 0, amsdu_pad); - hdr_page->pos += amsdu_pad; + memset(pos_hdr, 0, amsdu_pad); + pos_hdr += amsdu_pad; amsdu_pad = (4 - (sizeof(struct ethhdr) + snap_ip_tcp_hdrlen + data_left)) & 0x3; - ether_addr_copy(hdr_page->pos, ieee80211_get_DA(hdr)); - hdr_page->pos += ETH_ALEN; - ether_addr_copy(hdr_page->pos, ieee80211_get_SA(hdr)); - hdr_page->pos += ETH_ALEN; + ether_addr_copy(pos_hdr, 
ieee80211_get_DA(hdr)); + pos_hdr += ETH_ALEN; + ether_addr_copy(pos_hdr, ieee80211_get_SA(hdr)); + pos_hdr += ETH_ALEN; length = snap_ip_tcp_hdrlen + data_left; - *((__be16 *)hdr_page->pos) = cpu_to_be16(length); - hdr_page->pos += sizeof(length); + *((__be16 *)pos_hdr) = cpu_to_be16(length); + pos_hdr += sizeof(length); /* * This will copy the SNAP as well which will be considered * as MAC header. */ - tso_build_hdr(skb, hdr_page->pos, &tso, data_left, !total_len); + tso_build_hdr(skb, pos_hdr, &tso, data_left, !total_len); - hdr_page->pos += snap_ip_tcp_hdrlen; + pos_hdr += snap_ip_tcp_hdrlen; + + hdr_tb_len = pos_hdr - start_hdr; + hdr_tb_phys = iwl_pcie_get_tso_page_phys(start_hdr); - hdr_tb_len = hdr_page->pos - start_hdr; - hdr_tb_phys = dma_map_single(trans->dev, start_hdr, - hdr_tb_len, DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(trans->dev, hdr_tb_phys))) - return -EINVAL; iwl_pcie_txq_build_tfd(trans, txq, hdr_tb_phys, hdr_tb_len, false); trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, hdr_tb_phys, hdr_tb_len); /* add this subframe's headers' length to the tx_cmd */ - le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start); + le16_add_cpu(&tx_cmd->len, pos_hdr - subf_hdrs_start); /* prepare the start_hdr for the next subframe */ - start_hdr = hdr_page->pos; + start_hdr = pos_hdr; /* put the payload */ while (data_left) { @@ -1416,9 +2000,9 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, data_left); dma_addr_t tb_phys; - tb_phys = dma_map_single(trans->dev, tso.data, - size, DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) + tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data); + /* Not a real mapping error, use direct comparison */ + if (unlikely(tb_phys == DMA_MAPPING_ERROR)) return -EINVAL; iwl_pcie_txq_build_tfd(trans, txq, tb_phys, @@ -1431,6 +2015,9 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, } } + dma_sync_single_for_device(trans->dev, start_hdr_phys, hdr_room, + DMA_TO_DEVICE); + /* re -add the WiFi header and IV */ skb_push(skb, hdr_len + iv_len); @@ -1450,9 +2037,61 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, } #endif /* CONFIG_INET */ +#define IWL_TX_CRC_SIZE 4 +#define IWL_TX_DELIMITER_SIZE 4 + +/* + * iwl_txq_gen1_update_byte_cnt_tbl - Set up entry in Tx byte-count array + */ +static void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans, + struct iwl_txq *txq, u16 byte_cnt, + int num_tbs) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwlagn_scd_bc_tbl *scd_bc_tbl; + int write_ptr = txq->write_ptr; + int txq_id = txq->id; + u8 sec_ctl = 0; + u16 len = byte_cnt + IWL_TX_CRC_SIZE + IWL_TX_DELIMITER_SIZE; + __le16 bc_ent; + struct iwl_device_tx_cmd *dev_cmd = txq->entries[txq->write_ptr].cmd; + struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; + u8 sta_id = tx_cmd->sta_id; + + scd_bc_tbl = trans_pcie->txqs.scd_bc_tbls.addr; + + sec_ctl = tx_cmd->sec_ctl; + + switch (sec_ctl & TX_CMD_SEC_MSK) { + case TX_CMD_SEC_CCM: + len += IEEE80211_CCMP_MIC_LEN; + break; + case TX_CMD_SEC_TKIP: + len += IEEE80211_TKIP_ICV_LEN; + break; + case TX_CMD_SEC_WEP: + len += IEEE80211_WEP_IV_LEN + IEEE80211_WEP_ICV_LEN; + break; + } + if (trans_pcie->txqs.bc_table_dword) + len = DIV_ROUND_UP(len, 4); + + if (WARN_ON(len > 0xFFF || write_ptr >= TFD_QUEUE_SIZE_MAX)) + return; + + bc_ent = cpu_to_le16(len | (sta_id << 12)); + + scd_bc_tbl[txq_id].tfd_offset[write_ptr] = bc_ent; + + if (write_ptr < 
TFD_QUEUE_SIZE_BC_DUP) + scd_bc_tbl[txq_id].tfd_offset[TFD_QUEUE_SIZE_MAX + write_ptr] = + bc_ent; +} + int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_device_tx_cmd *dev_cmd, int txq_id) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct ieee80211_hdr *hdr; struct iwl_tx_cmd *tx_cmd = (struct iwl_tx_cmd *)dev_cmd->payload; struct iwl_cmd_meta *out_meta; @@ -1467,14 +2106,14 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, u16 wifi_seq; bool amsdu; - txq = trans->txqs.txq[txq_id]; + txq = trans_pcie->txqs.txq[txq_id]; - if (WARN_ONCE(!test_bit(txq_id, trans->txqs.queue_used), + if (WARN_ONCE(!test_bit(txq_id, trans_pcie->txqs.queue_used), "TX on unused queue %d\n", txq_id)) return -EINVAL; if (skb_is_nonlinear(skb) && - skb_shinfo(skb)->nr_frags > IWL_TRANS_MAX_FRAGS(trans) && + skb_shinfo(skb)->nr_frags > IWL_TRANS_PCIE_MAX_FRAGS(trans_pcie) && __skb_linearize(skb)) return -ENOMEM; @@ -1495,7 +2134,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_device_tx_cmd **dev_cmd_ptr; dev_cmd_ptr = (void *)((u8 *)skb->cb + - trans->txqs.dev_cmd_offs); + trans_pcie->txqs.dev_cmd_offs); *dev_cmd_ptr = dev_cmd; __skb_queue_tail(&txq->overflow_q, skb); @@ -1533,7 +2172,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, /* Set up first empty entry in queue's array of Tx/cmd buffers */ out_meta = &txq->entries[txq->write_ptr].meta; - out_meta->flags = 0; + memset(out_meta, 0, sizeof(*out_meta)); /* * The second TB (tb1) points to the remainder of the TX command @@ -1578,7 +2217,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, trace_iwlwifi_dev_tx(trans->dev, skb, iwl_txq_get_tfd(trans, txq, txq->write_ptr), - trans->txqs.tfd.size, + trans_pcie->txqs.tfd.size, &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len, hdr_len); @@ -1613,8 +2252,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, tfd = iwl_txq_get_tfd(trans, txq, txq->write_ptr); /* Set up entry for this TFD in Tx byte-count array */ iwl_txq_gen1_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len), - iwl_txq_gen1_tfd_get_num_tbs(trans, - tfd)); + iwl_txq_gen1_tfd_get_num_tbs(tfd)); wait_write_ptr = ieee80211_has_morefrags(fc); @@ -1649,3 +2287,379 @@ out_err: spin_unlock(&txq->lock); return -1; } + +static void iwl_txq_gen1_inval_byte_cnt_tbl(struct iwl_trans *trans, + struct iwl_txq *txq, + int read_ptr) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwlagn_scd_bc_tbl *scd_bc_tbl = trans_pcie->txqs.scd_bc_tbls.addr; + int txq_id = txq->id; + u8 sta_id = 0; + __le16 bc_ent; + struct iwl_device_tx_cmd *dev_cmd = txq->entries[read_ptr].cmd; + struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; + + WARN_ON(read_ptr >= TFD_QUEUE_SIZE_MAX); + + if (txq_id != trans_pcie->txqs.cmd.q_id) + sta_id = tx_cmd->sta_id; + + bc_ent = cpu_to_le16(1 | (sta_id << 12)); + + scd_bc_tbl[txq_id].tfd_offset[read_ptr] = bc_ent; + + if (read_ptr < TFD_QUEUE_SIZE_BC_DUP) + scd_bc_tbl[txq_id].tfd_offset[TFD_QUEUE_SIZE_MAX + read_ptr] = + bc_ent; +} + +/* Frees buffers until index _not_ inclusive */ +void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn, + struct sk_buff_head *skbs, bool is_flush) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; + int tfd_num, read_ptr, last_to_free; + int txq_read_ptr, txq_write_ptr; + + /* This function is not meant to release cmd 
queue*/ + if (WARN_ON(txq_id == trans_pcie->txqs.cmd.q_id)) + return; + + if (WARN_ON(!txq)) + return; + + tfd_num = iwl_txq_get_cmd_index(txq, ssn); + + spin_lock_bh(&txq->reclaim_lock); + + spin_lock(&txq->lock); + txq_read_ptr = txq->read_ptr; + txq_write_ptr = txq->write_ptr; + spin_unlock(&txq->lock); + + read_ptr = iwl_txq_get_cmd_index(txq, txq_read_ptr); + + if (!test_bit(txq_id, trans_pcie->txqs.queue_used)) { + IWL_DEBUG_TX_QUEUES(trans, "Q %d inactive - ignoring idx %d\n", + txq_id, ssn); + goto out; + } + + if (read_ptr == tfd_num) + goto out; + + IWL_DEBUG_TX_REPLY(trans, "[Q %d] %d (%d) -> %d (%d)\n", + txq_id, read_ptr, txq_read_ptr, tfd_num, ssn); + + /* Since we free until index _not_ inclusive, the one before index is + * the last we will free. This one must be used + */ + last_to_free = iwl_txq_dec_wrap(trans, tfd_num); + + if (!iwl_txq_used(txq, last_to_free, txq_read_ptr, txq_write_ptr)) { + IWL_ERR(trans, + "%s: Read index for txq id (%d), last_to_free %d is out of range [0-%d] %d %d.\n", + __func__, txq_id, last_to_free, + trans->trans_cfg->base_params->max_tfd_queue_size, + txq_write_ptr, txq_read_ptr); + + iwl_op_mode_time_point(trans->op_mode, + IWL_FW_INI_TIME_POINT_FAKE_TX, + NULL); + goto out; + } + + if (WARN_ON(!skb_queue_empty(skbs))) + goto out; + + for (; + read_ptr != tfd_num; + txq_read_ptr = iwl_txq_inc_wrap(trans, txq_read_ptr), + read_ptr = iwl_txq_get_cmd_index(txq, txq_read_ptr)) { + struct iwl_cmd_meta *cmd_meta = &txq->entries[read_ptr].meta; + struct sk_buff *skb = txq->entries[read_ptr].skb; + + if (WARN_ONCE(!skb, "no SKB at %d (%d) on queue %d\n", + read_ptr, txq_read_ptr, txq_id)) + continue; + + iwl_pcie_free_tso_pages(trans, skb, cmd_meta); + + __skb_queue_tail(skbs, skb); + + txq->entries[read_ptr].skb = NULL; + + if (!trans->trans_cfg->gen2) + iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq, + txq_read_ptr); + + iwl_txq_free_tfd(trans, txq, txq_read_ptr); + } + + spin_lock(&txq->lock); + txq->read_ptr = txq_read_ptr; + + iwl_txq_progress(txq); + + if (iwl_txq_space(trans, txq) > txq->low_mark && + test_bit(txq_id, trans_pcie->txqs.queue_stopped)) { + struct sk_buff_head overflow_skbs; + struct sk_buff *skb; + + __skb_queue_head_init(&overflow_skbs); + skb_queue_splice_init(&txq->overflow_q, + is_flush ? skbs : &overflow_skbs); + + /* + * We are going to transmit from the overflow queue. + * Remember this state so that wait_for_txq_empty will know we + * are adding more packets to the TFD queue. It cannot rely on + * the state of &txq->overflow_q, as we just emptied it, but + * haven't TXed the content yet. + */ + txq->overflow_tx = true; + + /* + * This is tricky: we are in reclaim path and are holding + * reclaim_lock, so noone will try to access the txq data + * from that path. We stopped tx, so we can't have tx as well. + * Bottom line, we can unlock and re-lock later. + */ + spin_unlock(&txq->lock); + + while ((skb = __skb_dequeue(&overflow_skbs))) { + struct iwl_device_tx_cmd *dev_cmd_ptr; + + dev_cmd_ptr = *(void **)((u8 *)skb->cb + + trans_pcie->txqs.dev_cmd_offs); + + /* + * Note that we can very well be overflowing again. + * In that case, iwl_txq_space will be small again + * and we won't wake mac80211's queue. 
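+ * The space check after this loop catches that case: mac80211 is + * only woken once the queue is back above low_mark.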
+ */ + iwl_trans_tx(trans, skb, dev_cmd_ptr, txq_id); + } + + if (iwl_txq_space(trans, txq) > txq->low_mark) + iwl_trans_pcie_wake_queue(trans, txq); + + spin_lock(&txq->lock); + txq->overflow_tx = false; + } + + spin_unlock(&txq->lock); +out: + spin_unlock_bh(&txq->reclaim_lock); +} + +/* Set wr_ptr of specific device and txq */ +void iwl_pcie_set_q_ptrs(struct iwl_trans *trans, int txq_id, int ptr) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id]; + + spin_lock_bh(&txq->lock); + + txq->write_ptr = ptr; + txq->read_ptr = txq->write_ptr; + + spin_unlock_bh(&txq->lock); +} + +void iwl_pcie_freeze_txq_timer(struct iwl_trans *trans, + unsigned long txqs, bool freeze) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + int queue; + + for_each_set_bit(queue, &txqs, BITS_PER_LONG) { + struct iwl_txq *txq = trans_pcie->txqs.txq[queue]; + unsigned long now; + + spin_lock_bh(&txq->lock); + + now = jiffies; + + if (txq->frozen == freeze) + goto next_queue; + + IWL_DEBUG_TX_QUEUES(trans, "%s TXQ %d\n", + freeze ? "Freezing" : "Waking", queue); + + txq->frozen = freeze; + + if (txq->read_ptr == txq->write_ptr) + goto next_queue; + + if (freeze) { + if (unlikely(time_after(now, + txq->stuck_timer.expires))) { + /* + * The timer should have fired, maybe it is + * spinning right now on the lock. + */ + goto next_queue; + } + /* remember how long until the timer fires */ + txq->frozen_expiry_remainder = + txq->stuck_timer.expires - now; + del_timer(&txq->stuck_timer); + goto next_queue; + } + + /* + * Wake a non-empty queue -> arm timer with the + * remainder before it froze + */ + mod_timer(&txq->stuck_timer, + now + txq->frozen_expiry_remainder); + +next_queue: + spin_unlock_bh(&txq->lock); + } +} + +#define HOST_COMPLETE_TIMEOUT (2 * HZ) + +static int iwl_trans_pcie_send_hcmd_sync(struct iwl_trans *trans, + struct iwl_host_cmd *cmd) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + const char *cmd_str = iwl_get_cmd_string(trans, cmd->id); + struct iwl_txq *txq = trans_pcie->txqs.txq[trans_pcie->txqs.cmd.q_id]; + int cmd_idx; + int ret; + + IWL_DEBUG_INFO(trans, "Attempting to send sync command %s\n", cmd_str); + + if (WARN(test_and_set_bit(STATUS_SYNC_HCMD_ACTIVE, + &trans->status), + "Command %s: a command is already active!\n", cmd_str)) + return -EIO; + + IWL_DEBUG_INFO(trans, "Setting HCMD_ACTIVE for command %s\n", cmd_str); + + if (trans->trans_cfg->gen2) + cmd_idx = iwl_pcie_gen2_enqueue_hcmd(trans, cmd); + else + cmd_idx = iwl_pcie_enqueue_hcmd(trans, cmd); + + if (cmd_idx < 0) { + ret = cmd_idx; + clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status); + IWL_ERR(trans, "Error sending %s: enqueue_hcmd failed: %d\n", + cmd_str, ret); + return ret; + } + + ret = wait_event_timeout(trans->wait_command_queue, + !test_bit(STATUS_SYNC_HCMD_ACTIVE, + &trans->status), + HOST_COMPLETE_TIMEOUT); + if (!ret) { + IWL_ERR(trans, "Error sending %s: time out after %dms.\n", + cmd_str, jiffies_to_msecs(HOST_COMPLETE_TIMEOUT)); + + IWL_ERR(trans, "Current CMD queue read_ptr %d write_ptr %d\n", + txq->read_ptr, txq->write_ptr); + + clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status); + IWL_DEBUG_INFO(trans, "Clearing HCMD_ACTIVE for command %s\n", + cmd_str); + ret = -ETIMEDOUT; + + iwl_trans_sync_nmi(trans); + goto cancel; + } + + if (test_bit(STATUS_FW_ERROR, &trans->status)) { + if (!test_and_clear_bit(STATUS_SUPPRESS_CMD_ERROR_ONCE, + &trans->status)) { + IWL_ERR(trans, "FW error in SYNC 
CMD %s\n", cmd_str); + dump_stack(); + } + ret = -EIO; + goto cancel; + } + + if (!(cmd->flags & CMD_SEND_IN_RFKILL) && + test_bit(STATUS_RFKILL_OPMODE, &trans->status)) { + IWL_DEBUG_RF_KILL(trans, "RFKILL in SYNC CMD... no rsp\n"); + ret = -ERFKILL; + goto cancel; + } + + if ((cmd->flags & CMD_WANT_SKB) && !cmd->resp_pkt) { + IWL_ERR(trans, "Error: Response NULL in '%s'\n", cmd_str); + ret = -EIO; + goto cancel; + } + + return 0; + +cancel: + if (cmd->flags & CMD_WANT_SKB) { + /* + * Cancel the CMD_WANT_SKB flag for the cmd in the + * TX cmd queue. Otherwise in case the cmd comes + * in later, it will possibly set an invalid + * address (cmd->meta.source). + */ + txq->entries[cmd_idx].meta.flags &= ~CMD_WANT_SKB; + } + + if (cmd->resp_pkt) { + iwl_free_resp(cmd); + cmd->resp_pkt = NULL; + } + + return ret; +} + +int iwl_trans_pcie_send_hcmd(struct iwl_trans *trans, + struct iwl_host_cmd *cmd) +{ + /* Make sure the NIC is still alive in the bus */ + if (test_bit(STATUS_TRANS_DEAD, &trans->status)) + return -ENODEV; + + if (!(cmd->flags & CMD_SEND_IN_RFKILL) && + test_bit(STATUS_RFKILL_OPMODE, &trans->status)) { + IWL_DEBUG_RF_KILL(trans, "Dropping CMD 0x%x: RF KILL\n", + cmd->id); + return -ERFKILL; + } + + if (unlikely(trans->system_pm_mode == IWL_PLAT_PM_MODE_D3 && + !(cmd->flags & CMD_SEND_IN_D3))) { + IWL_DEBUG_WOWLAN(trans, "Dropping CMD 0x%x: D3\n", cmd->id); + return -EHOSTDOWN; + } + + if (cmd->flags & CMD_ASYNC) { + int ret; + + /* An asynchronous command can not expect an SKB to be set. */ + if (WARN_ON(cmd->flags & CMD_WANT_SKB)) + return -EINVAL; + + if (trans->trans_cfg->gen2) + ret = iwl_pcie_gen2_enqueue_hcmd(trans, cmd); + else + ret = iwl_pcie_enqueue_hcmd(trans, cmd); + + if (ret < 0) { + IWL_ERR(trans, + "Error sending %s: enqueue_hcmd failed: %d\n", + iwl_get_cmd_string(trans, cmd->id), ret); + return ret; + } + return 0; + } + + return iwl_trans_pcie_send_hcmd_sync(trans, cmd); +} +IWL_EXPORT_SYMBOL(iwl_trans_pcie_send_hcmd); diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c deleted file mode 100644 index 6229c785c845..000000000000 --- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c +++ /dev/null @@ -1,1900 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause -/* - * Copyright (C) 2020-2024 Intel Corporation - */ -#include <net/tso.h> -#include <linux/tcp.h> - -#include "iwl-debug.h" -#include "iwl-io.h" -#include "fw/api/commands.h" -#include "fw/api/tx.h" -#include "fw/api/datapath.h" -#include "fw/api/debug.h" -#include "queue/tx.h" -#include "iwl-fh.h" -#include "iwl-scd.h" -#include <linux/dmapool.h> - -/* - * iwl_txq_update_byte_tbl - Set up entry in Tx byte-count array - */ -static void iwl_pcie_gen2_update_byte_tbl(struct iwl_trans *trans, - struct iwl_txq *txq, u16 byte_cnt, - int num_tbs) -{ - int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); - u8 filled_tfd_size, num_fetch_chunks; - u16 len = byte_cnt; - __le16 bc_ent; - - if (WARN(idx >= txq->n_window, "%d >= %d\n", idx, txq->n_window)) - return; - - filled_tfd_size = offsetof(struct iwl_tfh_tfd, tbs) + - num_tbs * sizeof(struct iwl_tfh_tb); - /* - * filled_tfd_size contains the number of filled bytes in the TFD. - * Dividing it by 64 will give the number of chunks to fetch - * to SRAM- 0 for one chunk, 1 for 2 and so on. 
- * If, for example, TFD contains only 3 TBs then 32 bytes - * of the TFD are used, and only one chunk of 64 bytes should - * be fetched - */ - num_fetch_chunks = DIV_ROUND_UP(filled_tfd_size, 64) - 1; - - if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) { - struct iwl_gen3_bc_tbl_entry *scd_bc_tbl_gen3 = txq->bc_tbl.addr; - - /* Starting from AX210, the HW expects bytes */ - WARN_ON(trans->txqs.bc_table_dword); - WARN_ON(len > 0x3FFF); - bc_ent = cpu_to_le16(len | (num_fetch_chunks << 14)); - scd_bc_tbl_gen3[idx].tfd_offset = bc_ent; - } else { - struct iwlagn_scd_bc_tbl *scd_bc_tbl = txq->bc_tbl.addr; - - /* Before AX210, the HW expects DW */ - WARN_ON(!trans->txqs.bc_table_dword); - len = DIV_ROUND_UP(len, 4); - WARN_ON(len > 0xFFF); - bc_ent = cpu_to_le16(len | (num_fetch_chunks << 12)); - scd_bc_tbl->tfd_offset[idx] = bc_ent; - } -} - -/* - * iwl_txq_inc_wr_ptr - Send new write index to hardware - */ -void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq) -{ - lockdep_assert_held(&txq->lock); - - IWL_DEBUG_TX(trans, "Q:%d WR: 0x%x\n", txq->id, txq->write_ptr); - - /* - * if not in power-save mode, uCode will never sleep when we're - * trying to tx (during RFKILL, we're not trying to tx). - */ - iwl_write32(trans, HBUS_TARG_WRPTR, txq->write_ptr | (txq->id << 16)); -} - -static u8 iwl_txq_gen2_get_num_tbs(struct iwl_trans *trans, - struct iwl_tfh_tfd *tfd) -{ - return le16_to_cpu(tfd->num_tbs) & 0x1f; -} - -int iwl_txq_gen2_set_tb(struct iwl_trans *trans, struct iwl_tfh_tfd *tfd, - dma_addr_t addr, u16 len) -{ - int idx = iwl_txq_gen2_get_num_tbs(trans, tfd); - struct iwl_tfh_tb *tb; - - /* Only WARN here so we know about the issue, but we mess up our - * unmap path because not every place currently checks for errors - * returned from this function - it can only return an error if - * there's no more space, and so when we know there is enough we - * don't always check ... 
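- * (The boundary WARN below fires when a buffer straddles a 2^32 - * address mark, e.g. a TB at iova 0xffffff00 with len 0x200 ends at - * 0x100000100 and crosses it.)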
- */ - WARN(iwl_txq_crosses_4g_boundary(addr, len), - "possible DMA problem with iova:0x%llx, len:%d\n", - (unsigned long long)addr, len); - - if (WARN_ON(idx >= IWL_TFH_NUM_TBS)) - return -EINVAL; - tb = &tfd->tbs[idx]; - - /* Each TFD can point to a maximum max_tbs Tx buffers */ - if (le16_to_cpu(tfd->num_tbs) >= trans->txqs.tfd.max_tbs) { - IWL_ERR(trans, "Error can not send more than %d chunks\n", - trans->txqs.tfd.max_tbs); - return -EINVAL; - } - - put_unaligned_le64(addr, &tb->addr); - tb->tb_len = cpu_to_le16(len); - - tfd->num_tbs = cpu_to_le16(idx + 1); - - return idx; -} - -static void iwl_txq_set_tfd_invalid_gen2(struct iwl_trans *trans, - struct iwl_tfh_tfd *tfd) -{ - tfd->num_tbs = 0; - - iwl_txq_gen2_set_tb(trans, tfd, trans->invalid_tx_cmd.dma, - trans->invalid_tx_cmd.size); -} - -void iwl_txq_gen2_tfd_unmap(struct iwl_trans *trans, struct iwl_cmd_meta *meta, - struct iwl_tfh_tfd *tfd) -{ - int i, num_tbs; - - /* Sanity check on number of chunks */ - num_tbs = iwl_txq_gen2_get_num_tbs(trans, tfd); - - if (num_tbs > trans->txqs.tfd.max_tbs) { - IWL_ERR(trans, "Too many chunks: %i\n", num_tbs); - return; - } - - /* first TB is never freed - it's the bidirectional DMA data */ - for (i = 1; i < num_tbs; i++) { - if (meta->tbs & BIT(i)) - dma_unmap_page(trans->dev, - le64_to_cpu(tfd->tbs[i].addr), - le16_to_cpu(tfd->tbs[i].tb_len), - DMA_TO_DEVICE); - else - dma_unmap_single(trans->dev, - le64_to_cpu(tfd->tbs[i].addr), - le16_to_cpu(tfd->tbs[i].tb_len), - DMA_TO_DEVICE); - } - - iwl_txq_set_tfd_invalid_gen2(trans, tfd); -} - -void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq) -{ - /* rd_ptr is bounded by TFD_QUEUE_SIZE_MAX and - * idx is bounded by n_window - */ - int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr); - struct sk_buff *skb; - - lockdep_assert_held(&txq->lock); - - if (!txq->entries) - return; - - iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta, - iwl_txq_get_tfd(trans, txq, idx)); - - skb = txq->entries[idx].skb; - - /* Can be called from irqs-disabled context - * If skb is not NULL, it means that the whole queue is being - * freed and that the queue is not empty - free the skb - */ - if (skb) { - iwl_op_mode_free_skb(trans->op_mode, skb); - txq->entries[idx].skb = NULL; - } -} - -static struct page *get_workaround_page(struct iwl_trans *trans, - struct sk_buff *skb) -{ - struct page **page_ptr; - struct page *ret; - - page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs); - - ret = alloc_page(GFP_ATOMIC); - if (!ret) - return NULL; - - /* set the chaining pointer to the previous page if there */ - *(void **)((u8 *)page_address(ret) + PAGE_SIZE - sizeof(void *)) = *page_ptr; - *page_ptr = ret; - - return ret; -} - -/* - * Add a TB and if needed apply the FH HW bug workaround; - * meta != NULL indicates that it's a page mapping and we - * need to dma_unmap_page() and set the meta->tbs bit in - * this case. - */ -static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans, - struct sk_buff *skb, - struct iwl_tfh_tfd *tfd, - dma_addr_t phys, void *virt, - u16 len, struct iwl_cmd_meta *meta) -{ - dma_addr_t oldphys = phys; - struct page *page; - int ret; - - if (unlikely(dma_mapping_error(trans->dev, phys))) - return -ENOMEM; - - if (likely(!iwl_txq_crosses_4g_boundary(phys, len))) { - ret = iwl_txq_gen2_set_tb(trans, tfd, phys, len); - - if (ret < 0) - goto unmap; - - if (meta) - meta->tbs |= BIT(ret); - - ret = 0; - goto trace; - } - - /* - * Work around a hardware bug. 
If (as expressed in the - * condition above) the TB ends on a 32-bit boundary, - * then the next TB may be accessed with the wrong - * address. - * To work around it, copy the data elsewhere and make - * a new mapping for it so the device will not fail. - */ - - if (WARN_ON(len > PAGE_SIZE - sizeof(void *))) { - ret = -ENOBUFS; - goto unmap; - } - - page = get_workaround_page(trans, skb); - if (!page) { - ret = -ENOMEM; - goto unmap; - } - - memcpy(page_address(page), virt, len); - - phys = dma_map_single(trans->dev, page_address(page), len, - DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(trans->dev, phys))) - return -ENOMEM; - ret = iwl_txq_gen2_set_tb(trans, tfd, phys, len); - if (ret < 0) { - /* unmap the new allocation as single */ - oldphys = phys; - meta = NULL; - goto unmap; - } - IWL_DEBUG_TX(trans, - "TB bug workaround: copied %d bytes from 0x%llx to 0x%llx\n", - len, (unsigned long long)oldphys, - (unsigned long long)phys); - - ret = 0; -unmap: - if (meta) - dma_unmap_page(trans->dev, oldphys, len, DMA_TO_DEVICE); - else - dma_unmap_single(trans->dev, oldphys, len, DMA_TO_DEVICE); -trace: - trace_iwlwifi_dev_tx_tb(trans->dev, skb, virt, phys, len); - - return ret; -} - -#ifdef CONFIG_INET -struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len, - struct sk_buff *skb) -{ - struct iwl_tso_hdr_page *p = this_cpu_ptr(trans->txqs.tso_hdr_page); - struct page **page_ptr; - - page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs); - - if (WARN_ON(*page_ptr)) - return NULL; - - if (!p->page) - goto alloc; - - /* - * Check if there's enough room on this page - * - * Note that we put a page chaining pointer *last* in the - * page - we need it somewhere, and if it's there then we - * avoid DMA mapping the last bits of the page which may - * trigger the 32-bit boundary hardware bug. - * - * (see also get_workaround_page() in tx-gen2.c) - */ - if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE - - sizeof(void *)) - goto out; - - /* We don't have enough room on this page, get a new one. 
*/ - __free_page(p->page); - -alloc: - p->page = alloc_page(GFP_ATOMIC); - if (!p->page) - return NULL; - p->pos = page_address(p->page); - /* set the chaining pointer to NULL */ - *(void **)((u8 *)page_address(p->page) + PAGE_SIZE - sizeof(void *)) = NULL; -out: - *page_ptr = p->page; - get_page(p->page); - return p; -} -#endif - -static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans, - struct sk_buff *skb, - struct iwl_tfh_tfd *tfd, int start_len, - u8 hdr_len, - struct iwl_device_tx_cmd *dev_cmd) -{ -#ifdef CONFIG_INET - struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload; - struct ieee80211_hdr *hdr = (void *)skb->data; - unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; - unsigned int mss = skb_shinfo(skb)->gso_size; - u16 length, amsdu_pad; - u8 *start_hdr; - struct iwl_tso_hdr_page *hdr_page; - struct tso_t tso; - - trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), - &dev_cmd->hdr, start_len, 0); - - ip_hdrlen = skb_network_header_len(skb); - snap_ip_tcp_hdrlen = 8 + ip_hdrlen + tcp_hdrlen(skb); - total_len = skb->len - snap_ip_tcp_hdrlen - hdr_len; - amsdu_pad = 0; - - /* total amount of header we may need for this A-MSDU */ - hdr_room = DIV_ROUND_UP(total_len, mss) * - (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)); - - /* Our device supports 9 segments at most, it will fit in 1 page */ - hdr_page = get_page_hdr(trans, hdr_room, skb); - if (!hdr_page) - return -ENOMEM; - - start_hdr = hdr_page->pos; - - /* - * Pull the ieee80211 header to be able to use TSO core, - * we will restore it for the tx_status flow. - */ - skb_pull(skb, hdr_len); - - /* - * Remove the length of all the headers that we don't actually - * have in the MPDU by themselves, but that we duplicate into - * all the different MSDUs inside the A-MSDU. - */ - le16_add_cpu(&tx_cmd->len, -snap_ip_tcp_hdrlen); - - tso_start(skb, &tso); - - while (total_len) { - /* this is the data left for this subframe */ - unsigned int data_left = min_t(unsigned int, mss, total_len); - unsigned int tb_len; - dma_addr_t tb_phys; - u8 *subf_hdrs_start = hdr_page->pos; - - total_len -= data_left; - - memset(hdr_page->pos, 0, amsdu_pad); - hdr_page->pos += amsdu_pad; - amsdu_pad = (4 - (sizeof(struct ethhdr) + snap_ip_tcp_hdrlen + - data_left)) & 0x3; - ether_addr_copy(hdr_page->pos, ieee80211_get_DA(hdr)); - hdr_page->pos += ETH_ALEN; - ether_addr_copy(hdr_page->pos, ieee80211_get_SA(hdr)); - hdr_page->pos += ETH_ALEN; - - length = snap_ip_tcp_hdrlen + data_left; - *((__be16 *)hdr_page->pos) = cpu_to_be16(length); - hdr_page->pos += sizeof(length); - - /* - * This will copy the SNAP as well which will be considered - * as MAC header. - */ - tso_build_hdr(skb, hdr_page->pos, &tso, data_left, !total_len); - - hdr_page->pos += snap_ip_tcp_hdrlen; - - tb_len = hdr_page->pos - start_hdr; - tb_phys = dma_map_single(trans->dev, start_hdr, - tb_len, DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) - goto out_err; - /* - * No need for _with_wa, this is from the TSO page and - * we leave some space at the end of it so can't hit - * the buggy scenario. 
- */ - iwl_txq_gen2_set_tb(trans, tfd, tb_phys, tb_len); - trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, - tb_phys, tb_len); - /* add this subframe's headers' length to the tx_cmd */ - le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start); - - /* prepare the start_hdr for the next subframe */ - start_hdr = hdr_page->pos; - - /* put the payload */ - while (data_left) { - int ret; - - tb_len = min_t(unsigned int, tso.size, data_left); - tb_phys = dma_map_single(trans->dev, tso.data, - tb_len, DMA_TO_DEVICE); - ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, - tb_phys, tso.data, - tb_len, NULL); - if (ret) - goto out_err; - - data_left -= tb_len; - tso_build_data(skb, &tso, tb_len); - } - } - - /* re -add the WiFi header */ - skb_push(skb, hdr_len); - - return 0; - -out_err: -#endif - return -EINVAL; -} - -static struct -iwl_tfh_tfd *iwl_txq_gen2_build_tx_amsdu(struct iwl_trans *trans, - struct iwl_txq *txq, - struct iwl_device_tx_cmd *dev_cmd, - struct sk_buff *skb, - struct iwl_cmd_meta *out_meta, - int hdr_len, - int tx_cmd_len) -{ - int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); - struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx); - dma_addr_t tb_phys; - int len; - void *tb1_addr; - - tb_phys = iwl_txq_get_first_tb_dma(txq, idx); - - /* - * No need for _with_wa, the first TB allocation is aligned up - * to a 64-byte boundary and thus can't be at the end or cross - * a page boundary (much less a 2^32 boundary). - */ - iwl_txq_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); - - /* - * The second TB (tb1) points to the remainder of the TX command - * and the 802.11 header - dword aligned size - * (This calculation modifies the TX command, so do it before the - * setup of the first TB) - */ - len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len - - IWL_FIRST_TB_SIZE; - - /* do not align A-MSDU to dword as the subframe header aligns it */ - - /* map the data for TB1 */ - tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE; - tb_phys = dma_map_single(trans->dev, tb1_addr, len, DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) - goto out_err; - /* - * No need for _with_wa(), we ensure (via alignment) that the data - * here can never cross or end at a page boundary. 
- */ - iwl_txq_gen2_set_tb(trans, tfd, tb_phys, len); - - if (iwl_txq_gen2_build_amsdu(trans, skb, tfd, len + IWL_FIRST_TB_SIZE, - hdr_len, dev_cmd)) - goto out_err; - - /* building the A-MSDU might have changed this data, memcpy it now */ - memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE); - return tfd; - -out_err: - iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd); - return NULL; -} - -static int iwl_txq_gen2_tx_add_frags(struct iwl_trans *trans, - struct sk_buff *skb, - struct iwl_tfh_tfd *tfd, - struct iwl_cmd_meta *out_meta) -{ - int i; - - for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { - const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; - dma_addr_t tb_phys; - unsigned int fragsz = skb_frag_size(frag); - int ret; - - if (!fragsz) - continue; - - tb_phys = skb_frag_dma_map(trans->dev, frag, 0, - fragsz, DMA_TO_DEVICE); - ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, - skb_frag_address(frag), - fragsz, out_meta); - if (ret) - return ret; - } - - return 0; -} - -static struct -iwl_tfh_tfd *iwl_txq_gen2_build_tx(struct iwl_trans *trans, - struct iwl_txq *txq, - struct iwl_device_tx_cmd *dev_cmd, - struct sk_buff *skb, - struct iwl_cmd_meta *out_meta, - int hdr_len, - int tx_cmd_len, - bool pad) -{ - int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); - struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx); - dma_addr_t tb_phys; - int len, tb1_len, tb2_len; - void *tb1_addr; - struct sk_buff *frag; - - tb_phys = iwl_txq_get_first_tb_dma(txq, idx); - - /* The first TB points to bi-directional DMA data */ - memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE); - - /* - * No need for _with_wa, the first TB allocation is aligned up - * to a 64-byte boundary and thus can't be at the end or cross - * a page boundary (much less a 2^32 boundary). - */ - iwl_txq_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); - - /* - * The second TB (tb1) points to the remainder of the TX command - * and the 802.11 header - dword aligned size - * (This calculation modifies the TX command, so do it before the - * setup of the first TB) - */ - len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len - - IWL_FIRST_TB_SIZE; - - if (pad) - tb1_len = ALIGN(len, 4); - else - tb1_len = len; - - /* map the data for TB1 */ - tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE; - tb_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) - goto out_err; - /* - * No need for _with_wa(), we ensure (via alignment) that the data - * here can never cross or end at a page boundary. 
- */ - iwl_txq_gen2_set_tb(trans, tfd, tb_phys, tb1_len); - trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr, - IWL_FIRST_TB_SIZE + tb1_len, hdr_len); - - /* set up TFD's third entry to point to remainder of skb's head */ - tb2_len = skb_headlen(skb) - hdr_len; - - if (tb2_len > 0) { - int ret; - - tb_phys = dma_map_single(trans->dev, skb->data + hdr_len, - tb2_len, DMA_TO_DEVICE); - ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, - skb->data + hdr_len, tb2_len, - NULL); - if (ret) - goto out_err; - } - - if (iwl_txq_gen2_tx_add_frags(trans, skb, tfd, out_meta)) - goto out_err; - - skb_walk_frags(skb, frag) { - int ret; - - tb_phys = dma_map_single(trans->dev, frag->data, - skb_headlen(frag), DMA_TO_DEVICE); - ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, - frag->data, - skb_headlen(frag), NULL); - if (ret) - goto out_err; - if (iwl_txq_gen2_tx_add_frags(trans, frag, tfd, out_meta)) - goto out_err; - } - - return tfd; - -out_err: - iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd); - return NULL; -} - -static -struct iwl_tfh_tfd *iwl_txq_gen2_build_tfd(struct iwl_trans *trans, - struct iwl_txq *txq, - struct iwl_device_tx_cmd *dev_cmd, - struct sk_buff *skb, - struct iwl_cmd_meta *out_meta) -{ - struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; - int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr); - struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx); - int len, hdr_len; - bool amsdu; - - /* There must be data left over for TB1 or this code must be changed */ - BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen2) < IWL_FIRST_TB_SIZE); - BUILD_BUG_ON(sizeof(struct iwl_cmd_header) + - offsetofend(struct iwl_tx_cmd_gen2, dram_info) > - IWL_FIRST_TB_SIZE); - BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen3) < IWL_FIRST_TB_SIZE); - BUILD_BUG_ON(sizeof(struct iwl_cmd_header) + - offsetofend(struct iwl_tx_cmd_gen3, dram_info) > - IWL_FIRST_TB_SIZE); - - memset(tfd, 0, sizeof(*tfd)); - - if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210) - len = sizeof(struct iwl_tx_cmd_gen2); - else - len = sizeof(struct iwl_tx_cmd_gen3); - - amsdu = ieee80211_is_data_qos(hdr->frame_control) && - (*ieee80211_get_qos_ctl(hdr) & - IEEE80211_QOS_CTL_A_MSDU_PRESENT); - - hdr_len = ieee80211_hdrlen(hdr->frame_control); - - /* - * Only build A-MSDUs here if doing so by GSO, otherwise it may be - * an A-MSDU for other reasons, e.g. NAN or an A-MSDU having been - * built in the higher layers already. - */ - if (amsdu && skb_shinfo(skb)->gso_size) - return iwl_txq_gen2_build_tx_amsdu(trans, txq, dev_cmd, skb, - out_meta, hdr_len, len); - return iwl_txq_gen2_build_tx(trans, txq, dev_cmd, skb, out_meta, - hdr_len, len, !amsdu); -} - -int iwl_txq_space(struct iwl_trans *trans, const struct iwl_txq *q) -{ - unsigned int max; - unsigned int used; - - /* - * To avoid ambiguity between empty and completely full queues, there - * should always be less than max_tfd_queue_size elements in the queue. - * If q->n_window is smaller than max_tfd_queue_size, there is no need - * to reserve any queue entries for this purpose. - */ - if (q->n_window < trans->trans_cfg->base_params->max_tfd_queue_size) - max = q->n_window; - else - max = trans->trans_cfg->base_params->max_tfd_queue_size - 1; - - /* - * max_tfd_queue_size is a power of 2, so the following is equivalent to - * modulo by max_tfd_queue_size and is well defined. 
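- * For example, with a queue size of 256, write_ptr = 3 and - * read_ptr = 250: (3 - 250) & 255 = 9, i.e. the nine entries - * 250..255 and 0..2 are in use.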
-
-int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
-		    struct iwl_device_tx_cmd *dev_cmd, int txq_id)
-{
-	struct iwl_cmd_meta *out_meta;
-	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-	u16 cmd_len;
-	int idx;
-	void *tfd;
-
-	if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES,
-		      "queue %d out of range", txq_id))
-		return -EINVAL;
-
-	if (WARN_ONCE(!test_bit(txq_id, trans->txqs.queue_used),
-		      "TX on unused queue %d\n", txq_id))
-		return -EINVAL;
-
-	if (skb_is_nonlinear(skb) &&
-	    skb_shinfo(skb)->nr_frags > IWL_TRANS_MAX_FRAGS(trans) &&
-	    __skb_linearize(skb))
-		return -ENOMEM;
-
-	spin_lock(&txq->lock);
-
-	if (iwl_txq_space(trans, txq) < txq->high_mark) {
-		iwl_txq_stop(trans, txq);
-
-		/* don't put the packet on the ring, if there is no room */
-		if (unlikely(iwl_txq_space(trans, txq) < 3)) {
-			struct iwl_device_tx_cmd **dev_cmd_ptr;
-
-			dev_cmd_ptr = (void *)((u8 *)skb->cb +
-					       trans->txqs.dev_cmd_offs);
-
-			*dev_cmd_ptr = dev_cmd;
-			__skb_queue_tail(&txq->overflow_q, skb);
-			spin_unlock(&txq->lock);
-			return 0;
-		}
-	}
-
-	idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
-
-	/* Set up driver data for this TFD */
-	txq->entries[idx].skb = skb;
-	txq->entries[idx].cmd = dev_cmd;
-
-	dev_cmd->hdr.sequence =
-		cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
-			    INDEX_TO_SEQ(idx)));
-
-	/* Set up first empty entry in queue's array of Tx/cmd buffers */
-	out_meta = &txq->entries[idx].meta;
-	out_meta->flags = 0;
-
-	tfd = iwl_txq_gen2_build_tfd(trans, txq, dev_cmd, skb, out_meta);
-	if (!tfd) {
-		spin_unlock(&txq->lock);
-		return -1;
-	}
-
-	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
-		struct iwl_tx_cmd_gen3 *tx_cmd_gen3 =
-			(void *)dev_cmd->payload;
-
-		cmd_len = le16_to_cpu(tx_cmd_gen3->len);
-	} else {
-		struct iwl_tx_cmd_gen2 *tx_cmd_gen2 =
-			(void *)dev_cmd->payload;
-
-		cmd_len = le16_to_cpu(tx_cmd_gen2->len);
-	}
-
-	/* Set up entry for this TFD in Tx byte-count array */
-	iwl_pcie_gen2_update_byte_tbl(trans, txq, cmd_len,
-				      iwl_txq_gen2_get_num_tbs(trans, tfd));
-
-	/* start timer if queue currently empty */
-	if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
-		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
-
-	/* Tell device the write index *just past* this latest filled TFD */
-	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
-	iwl_txq_inc_wr_ptr(trans, txq);
-	/*
-	 * At this point the frame is "transmitted" successfully
-	 * and we will get a TX status notification eventually.
-	 */
-	spin_unlock(&txq->lock);
-	return 0;
-}
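When the ring is almost full, iwl_txq_gen2_tx() above parks the frame on the overflow queue instead of dropping it. A sketch (offsets and types simplified) of the skb->cb stash it uses:

/* Sketch: park the prepared TX command in the skb's 48-byte control
 * buffer so the reclaim path can replay the frame later. dev_cmd_offs
 * is the transport-configured offset into skb->cb, assumed to leave
 * room for one pointer.
 */
static void stash_dev_cmd(struct sk_buff *skb, size_t dev_cmd_offs,
			  void *dev_cmd)
{
	void **slot = (void **)((u8 *)skb->cb + dev_cmd_offs);

	*slot = dev_cmd;	/* read back with *(void **)(...) on replay */
}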
-
-/*************** HOST COMMAND QUEUE FUNCTIONS *****/
-
-/*
- * iwl_txq_gen2_unmap - Unmap any remaining DMA mappings and free skb's
- */
-void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
-{
-	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-
-	spin_lock_bh(&txq->lock);
-	while (txq->write_ptr != txq->read_ptr) {
-		IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
-				   txq_id, txq->read_ptr);
-
-		if (txq_id != trans->txqs.cmd.q_id) {
-			int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
-			struct sk_buff *skb = txq->entries[idx].skb;
-
-			if (!WARN_ON_ONCE(!skb))
-				iwl_txq_free_tso_page(trans, skb);
-		}
-		iwl_txq_gen2_free_tfd(trans, txq);
-		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
-	}
-
-	while (!skb_queue_empty(&txq->overflow_q)) {
-		struct sk_buff *skb = __skb_dequeue(&txq->overflow_q);
-
-		iwl_op_mode_free_skb(trans->op_mode, skb);
-	}
-
-	spin_unlock_bh(&txq->lock);
-
-	/* just in case - this queue may have been stopped */
-	iwl_wake_queue(trans, txq);
-}
-
-static void iwl_txq_gen2_free_memory(struct iwl_trans *trans,
-				     struct iwl_txq *txq)
-{
-	struct device *dev = trans->dev;
-
-	/* De-alloc circular buffer of TFDs */
-	if (txq->tfds) {
-		dma_free_coherent(dev,
-				  trans->txqs.tfd.size * txq->n_window,
-				  txq->tfds, txq->dma_addr);
-		dma_free_coherent(dev,
-				  sizeof(*txq->first_tb_bufs) * txq->n_window,
-				  txq->first_tb_bufs, txq->first_tb_dma);
-	}
-
-	kfree(txq->entries);
-	if (txq->bc_tbl.addr)
-		dma_pool_free(trans->txqs.bc_pool,
-			      txq->bc_tbl.addr, txq->bc_tbl.dma);
-	kfree(txq);
-}
-
-/*
- * iwl_txq_gen2_free - Deallocate DMA queue.
- * @txq: Transmit queue to deallocate.
- *
- * Empty queue by removing and destroying all BD's.
- * Free all buffers.
- * 0-fill, but do not free "txq" descriptor structure.
- */
-static void iwl_txq_gen2_free(struct iwl_trans *trans, int txq_id)
-{
-	struct iwl_txq *txq;
-	int i;
-
-	if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES,
-		      "queue %d out of range", txq_id))
-		return;
-
-	txq = trans->txqs.txq[txq_id];
-
-	if (WARN_ON(!txq))
-		return;
-
-	iwl_txq_gen2_unmap(trans, txq_id);
-
-	/* De-alloc array of command/tx buffers */
-	if (txq_id == trans->txqs.cmd.q_id)
-		for (i = 0; i < txq->n_window; i++) {
-			kfree_sensitive(txq->entries[i].cmd);
-			kfree_sensitive(txq->entries[i].free_buf);
-		}
-	del_timer_sync(&txq->stuck_timer);
-
-	iwl_txq_gen2_free_memory(trans, txq);
-
-	trans->txqs.txq[txq_id] = NULL;
-
-	clear_bit(txq_id, trans->txqs.queue_used);
-}
-
-/*
- * iwl_queue_init - Initialize queue's high/low-water and read/write indexes
- */
-static int iwl_queue_init(struct iwl_txq *q, int slots_num)
-{
-	q->n_window = slots_num;
-
-	/* slots_num must be power-of-two size, otherwise
-	 * iwl_txq_get_cmd_index is broken. */
-	if (WARN_ON(!is_power_of_2(slots_num)))
-		return -EINVAL;
-
-	q->low_mark = q->n_window / 4;
-	if (q->low_mark < 4)
-		q->low_mark = 4;
-
-	q->high_mark = q->n_window / 8;
-	if (q->high_mark < 2)
-		q->high_mark = 2;
-
-	q->write_ptr = 0;
-	q->read_ptr = 0;
-
-	return 0;
-}
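iwl_queue_init() only accepts power-of-two slot counts because index reduction elsewhere is a mask, not a division; a one-liner sketch of the rule:

/* Sketch: why n_window must be a power of two -- iwl_txq_get_cmd_index()
 * (tx.h, further below) reduces ring indexes with a mask.
 */
static u16 cmd_index(u32 index, u32 n_window)
{
	return index & (n_window - 1);	/* == index % n_window only for 2^n */
}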
-int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
-		 bool cmd_queue)
-{
-	int ret;
-	u32 tfd_queue_max_size =
-		trans->trans_cfg->base_params->max_tfd_queue_size;
-
-	txq->need_update = false;
-
-	/* max_tfd_queue_size must be power-of-two size, otherwise
-	 * iwl_txq_inc_wrap and iwl_txq_dec_wrap are broken.
-	 */
-	if (WARN_ONCE(tfd_queue_max_size & (tfd_queue_max_size - 1),
-		      "Max tfd queue size must be a power of two, but is %d",
-		      tfd_queue_max_size))
-		return -EINVAL;
-
-	/* Initialize queue's high/low-water marks, and head/tail indexes */
-	ret = iwl_queue_init(txq, slots_num);
-	if (ret)
-		return ret;
-
-	spin_lock_init(&txq->lock);
-
-	if (cmd_queue) {
-		static struct lock_class_key iwl_txq_cmd_queue_lock_class;
-
-		lockdep_set_class(&txq->lock, &iwl_txq_cmd_queue_lock_class);
-	}
-
-	__skb_queue_head_init(&txq->overflow_q);
-
-	return 0;
-}
-
-void iwl_txq_free_tso_page(struct iwl_trans *trans, struct sk_buff *skb)
-{
-	struct page **page_ptr;
-	struct page *next;
-
-	page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs);
-	next = *page_ptr;
-	*page_ptr = NULL;
-
-	while (next) {
-		struct page *tmp = next;
-
-		next = *(void **)((u8 *)page_address(next) + PAGE_SIZE -
-				  sizeof(void *));
-		__free_page(tmp);
-	}
-}
-
-void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq)
-{
-	u32 txq_id = txq->id;
-	u32 status;
-	bool active;
-	u8 fifo;
-
-	if (trans->trans_cfg->gen2) {
-		IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id,
-			txq->read_ptr, txq->write_ptr);
-		/* TODO: access new SCD registers and dump them */
-		return;
-	}
-
-	status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(txq_id));
-	fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7;
-	active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE));
-
-	IWL_ERR(trans,
-		"Queue %d is %sactive on fifo %d and stuck for %u ms. SW [%d, %d] HW [%d, %d] FH TRB=0x0%x\n",
-		txq_id, active ? "" : "in", fifo,
-		jiffies_to_msecs(txq->wd_timeout),
-		txq->read_ptr, txq->write_ptr,
-		iwl_read_prph(trans, SCD_QUEUE_RDPTR(txq_id)) &
-			(trans->trans_cfg->base_params->max_tfd_queue_size - 1),
-		iwl_read_prph(trans, SCD_QUEUE_WRPTR(txq_id)) &
-			(trans->trans_cfg->base_params->max_tfd_queue_size - 1),
-		iwl_read_direct32(trans, FH_TX_TRB_REG(fifo)));
-}
-
-static void iwl_txq_stuck_timer(struct timer_list *t)
-{
-	struct iwl_txq *txq = from_timer(txq, t, stuck_timer);
-	struct iwl_trans *trans = txq->trans;
-
-	spin_lock(&txq->lock);
-	/* check if triggered erroneously */
-	if (txq->read_ptr == txq->write_ptr) {
-		spin_unlock(&txq->lock);
-		return;
-	}
-	spin_unlock(&txq->lock);
-
-	iwl_txq_log_scd_error(trans, txq);
-
-	iwl_force_nmi(trans);
-}
-
-static void iwl_txq_set_tfd_invalid_gen1(struct iwl_trans *trans,
-					 struct iwl_tfd *tfd)
-{
-	tfd->num_tbs = 0;
-
-	iwl_pcie_gen1_tfd_set_tb(trans, tfd, 0, trans->invalid_tx_cmd.dma,
-				 trans->invalid_tx_cmd.size);
-}
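iwl_txq_free_tso_page() above walks a chain of pages whose link pointer lives in the last sizeof(void *) bytes of each page. A sketch of a matching producer side (the front-insertion order is an assumption for illustration, not taken from this patch):

#include <linux/mm.h>

/* Sketch: link a freshly allocated TSO header page into a chain whose
 * "next" pointer is stored at the very end of each page.
 */
static void chain_tso_page(struct page **head, struct page *new_page)
{
	void **next_slot = (void **)((u8 *)page_address(new_page) +
				     PAGE_SIZE - sizeof(void *));

	*next_slot = *head;
	*head = new_page;
}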
-int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
-		  bool cmd_queue)
-{
-	size_t num_entries = trans->trans_cfg->gen2 ?
-		slots_num : trans->trans_cfg->base_params->max_tfd_queue_size;
-	size_t tfd_sz;
-	size_t tb0_buf_sz;
-	int i;
-
-	if (WARN_ONCE(slots_num <= 0, "Invalid slots num:%d\n", slots_num))
-		return -EINVAL;
-
-	if (WARN_ON(txq->entries || txq->tfds))
-		return -EINVAL;
-
-	tfd_sz = trans->txqs.tfd.size * num_entries;
-
-	timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0);
-	txq->trans = trans;
-
-	txq->n_window = slots_num;
-
-	txq->entries = kcalloc(slots_num,
-			       sizeof(struct iwl_pcie_txq_entry),
-			       GFP_KERNEL);
-
-	if (!txq->entries)
-		goto error;
-
-	if (cmd_queue)
-		for (i = 0; i < slots_num; i++) {
-			txq->entries[i].cmd =
-				kmalloc(sizeof(struct iwl_device_cmd),
-					GFP_KERNEL);
-			if (!txq->entries[i].cmd)
-				goto error;
-		}
-
-	/* Circular buffer of transmit frame descriptors (TFDs),
-	 * shared with device */
-	txq->tfds = dma_alloc_coherent(trans->dev, tfd_sz,
-				       &txq->dma_addr, GFP_KERNEL);
-	if (!txq->tfds)
-		goto error;
-
-	BUILD_BUG_ON(sizeof(*txq->first_tb_bufs) != IWL_FIRST_TB_SIZE_ALIGN);
-
-	tb0_buf_sz = sizeof(*txq->first_tb_bufs) * slots_num;
-
-	txq->first_tb_bufs = dma_alloc_coherent(trans->dev, tb0_buf_sz,
-						&txq->first_tb_dma,
-						GFP_KERNEL);
-	if (!txq->first_tb_bufs)
-		goto err_free_tfds;
-
-	for (i = 0; i < num_entries; i++) {
-		void *tfd = iwl_txq_get_tfd(trans, txq, i);
-
-		if (trans->trans_cfg->gen2)
-			iwl_txq_set_tfd_invalid_gen2(trans, tfd);
-		else
-			iwl_txq_set_tfd_invalid_gen1(trans, tfd);
-	}
-
-	return 0;
-err_free_tfds:
-	dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->dma_addr);
-	txq->tfds = NULL;
-error:
-	if (txq->entries && cmd_queue)
-		for (i = 0; i < slots_num; i++)
-			kfree(txq->entries[i].cmd);
-	kfree(txq->entries);
-	txq->entries = NULL;
-
-	return -ENOMEM;
-}
-
-static struct iwl_txq *
-iwl_txq_dyn_alloc_dma(struct iwl_trans *trans, int size, unsigned int timeout)
-{
-	size_t bc_tbl_size, bc_tbl_entries;
-	struct iwl_txq *txq;
-	int ret;
-
-	WARN_ON(!trans->txqs.bc_tbl_size);
-
-	bc_tbl_size = trans->txqs.bc_tbl_size;
-	bc_tbl_entries = bc_tbl_size / sizeof(u16);
-
-	if (WARN_ON(size > bc_tbl_entries))
-		return ERR_PTR(-EINVAL);
-
-	txq = kzalloc(sizeof(*txq), GFP_KERNEL);
-	if (!txq)
-		return ERR_PTR(-ENOMEM);
-
-	txq->bc_tbl.addr = dma_pool_alloc(trans->txqs.bc_pool, GFP_KERNEL,
-					  &txq->bc_tbl.dma);
-	if (!txq->bc_tbl.addr) {
-		IWL_ERR(trans, "Scheduler BC Table allocation failed\n");
-		kfree(txq);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	ret = iwl_txq_alloc(trans, txq, size, false);
-	if (ret) {
-		IWL_ERR(trans, "Tx queue alloc failed\n");
-		goto error;
-	}
-	ret = iwl_txq_init(trans, txq, size, false);
-	if (ret) {
-		IWL_ERR(trans, "Tx queue init failed\n");
-		goto error;
-	}
-
-	txq->wd_timeout = msecs_to_jiffies(timeout);
-
-	return txq;
-
-error:
-	iwl_txq_gen2_free_memory(trans, txq);
-	return ERR_PTR(ret);
-}
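The byte-count tables in iwl_txq_dyn_alloc_dma() come from a dma_pool rather than per-queue dma_alloc_coherent() calls, which suits many small, identically sized coherent buffers. A generic sketch (the pool name and alignment here are assumptions, not the driver's values):

#include <linux/dmapool.h>

/* Sketch: one pool, many same-sized coherent allocations. */
static void *bc_tbl_get(struct device *dev, size_t bc_tbl_size,
			struct dma_pool **pool, dma_addr_t *dma)
{
	if (!*pool)
		*pool = dma_pool_create("bc-tbl", dev, bc_tbl_size,
					256 /* alignment, assumed */, 0);
	if (!*pool)
		return NULL;

	return dma_pool_alloc(*pool, GFP_KERNEL, dma);
	/* release with dma_pool_free(*pool, addr, *dma) */
}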
-static int iwl_txq_alloc_response(struct iwl_trans *trans, struct iwl_txq *txq,
-				  struct iwl_host_cmd *hcmd)
-{
-	struct iwl_tx_queue_cfg_rsp *rsp;
-	int ret, qid;
-	u32 wr_ptr;
-
-	if (WARN_ON(iwl_rx_packet_payload_len(hcmd->resp_pkt) !=
-		    sizeof(*rsp))) {
-		ret = -EINVAL;
-		goto error_free_resp;
-	}
-
-	rsp = (void *)hcmd->resp_pkt->data;
-	qid = le16_to_cpu(rsp->queue_number);
-	wr_ptr = le16_to_cpu(rsp->write_pointer);
-
-	if (qid >= ARRAY_SIZE(trans->txqs.txq)) {
-		WARN_ONCE(1, "queue index %d unsupported", qid);
-		ret = -EIO;
-		goto error_free_resp;
-	}
-
-	if (test_and_set_bit(qid, trans->txqs.queue_used)) {
-		WARN_ONCE(1, "queue %d already used", qid);
-		ret = -EIO;
-		goto error_free_resp;
-	}
-
-	if (WARN_ONCE(trans->txqs.txq[qid],
-		      "queue %d already allocated\n", qid)) {
-		ret = -EIO;
-		goto error_free_resp;
-	}
-
-	txq->id = qid;
-	trans->txqs.txq[qid] = txq;
-	wr_ptr &= (trans->trans_cfg->base_params->max_tfd_queue_size - 1);
-
-	/* Place first TFD at index corresponding to start sequence number */
-	txq->read_ptr = wr_ptr;
-	txq->write_ptr = wr_ptr;
-
-	IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d\n", qid);
-
-	iwl_free_resp(hcmd);
-	return qid;
-
-error_free_resp:
-	iwl_free_resp(hcmd);
-	iwl_txq_gen2_free_memory(trans, txq);
-	return ret;
-}
-
-int iwl_txq_dyn_alloc(struct iwl_trans *trans, u32 flags, u32 sta_mask,
-		      u8 tid, int size, unsigned int timeout)
-{
-	struct iwl_txq *txq;
-	union {
-		struct iwl_tx_queue_cfg_cmd old;
-		struct iwl_scd_queue_cfg_cmd new;
-	} cmd;
-	struct iwl_host_cmd hcmd = {
-		.flags = CMD_WANT_SKB,
-	};
-	int ret;
-
-	if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_BZ &&
-	    trans->hw_rev_step == SILICON_A_STEP)
-		size = 4096;
-
-	txq = iwl_txq_dyn_alloc_dma(trans, size, timeout);
-	if (IS_ERR(txq))
-		return PTR_ERR(txq);
-
-	if (trans->txqs.queue_alloc_cmd_ver == 0) {
-		memset(&cmd.old, 0, sizeof(cmd.old));
-		cmd.old.tfdq_addr = cpu_to_le64(txq->dma_addr);
-		cmd.old.byte_cnt_addr = cpu_to_le64(txq->bc_tbl.dma);
-		cmd.old.cb_size = cpu_to_le32(TFD_QUEUE_CB_SIZE(size));
-		cmd.old.flags = cpu_to_le16(flags | TX_QUEUE_CFG_ENABLE_QUEUE);
-		cmd.old.tid = tid;
-
-		if (hweight32(sta_mask) != 1) {
-			ret = -EINVAL;
-			goto error;
-		}
-		cmd.old.sta_id = ffs(sta_mask) - 1;
-
-		hcmd.id = SCD_QUEUE_CFG;
-		hcmd.len[0] = sizeof(cmd.old);
-		hcmd.data[0] = &cmd.old;
-	} else if (trans->txqs.queue_alloc_cmd_ver == 3) {
-		memset(&cmd.new, 0, sizeof(cmd.new));
-		cmd.new.operation = cpu_to_le32(IWL_SCD_QUEUE_ADD);
-		cmd.new.u.add.tfdq_dram_addr = cpu_to_le64(txq->dma_addr);
-		cmd.new.u.add.bc_dram_addr = cpu_to_le64(txq->bc_tbl.dma);
-		cmd.new.u.add.cb_size = cpu_to_le32(TFD_QUEUE_CB_SIZE(size));
-		cmd.new.u.add.flags = cpu_to_le32(flags);
-		cmd.new.u.add.sta_mask = cpu_to_le32(sta_mask);
-		cmd.new.u.add.tid = tid;
-
-		hcmd.id = WIDE_ID(DATA_PATH_GROUP, SCD_QUEUE_CONFIG_CMD);
-		hcmd.len[0] = sizeof(cmd.new);
-		hcmd.data[0] = &cmd.new;
-	} else {
-		ret = -EOPNOTSUPP;
-		goto error;
-	}
-
-	ret = iwl_trans_send_cmd(trans, &hcmd);
-	if (ret)
-		goto error;
-
-	return iwl_txq_alloc_response(trans, txq, &hcmd);
-
-error:
-	iwl_txq_gen2_free_memory(trans, txq);
-	return ret;
-}
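Note the version split in iwl_txq_dyn_alloc(): the ver-0 command carries a single station id, so a one-bit mask is converted with ffs(); only the ver-3 command can express a real multi-station mask. In miniature:

/* Sketch: collapse a single-bit station mask to an id for the old
 * command format; reject anything else.
 */
static int sta_mask_to_id(u32 sta_mask)
{
	if (hweight32(sta_mask) != 1)
		return -EINVAL;		/* needs the ver-3 command */

	return ffs(sta_mask) - 1;	/* BIT(5) -> 5 */
}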
-void iwl_txq_dyn_free(struct iwl_trans *trans, int queue)
-{
-	if (WARN(queue >= IWL_MAX_TVQM_QUEUES,
-		 "queue %d out of range", queue))
-		return;
-
-	/*
-	 * Upon HW Rfkill - we stop the device, and then stop the queues
-	 * in the op_mode. Just for the sake of the simplicity of the op_mode,
-	 * allow the op_mode to call txq_disable after it already called
-	 * stop_device.
-	 */
-	if (!test_and_clear_bit(queue, trans->txqs.queue_used)) {
-		WARN_ONCE(test_bit(STATUS_DEVICE_ENABLED, &trans->status),
-			  "queue %d not used", queue);
-		return;
-	}
-
-	iwl_txq_gen2_free(trans, queue);
-
-	IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", queue);
-}
-
-void iwl_txq_gen2_tx_free(struct iwl_trans *trans)
-{
-	int i;
-
-	memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used));
-
-	/* Free all TX queues */
-	for (i = 0; i < ARRAY_SIZE(trans->txqs.txq); i++) {
-		if (!trans->txqs.txq[i])
-			continue;
-
-		iwl_txq_gen2_free(trans, i);
-	}
-}
-
-int iwl_txq_gen2_init(struct iwl_trans *trans, int txq_id, int queue_size)
-{
-	struct iwl_txq *queue;
-	int ret;
-
-	/* alloc and init the tx queue */
-	if (!trans->txqs.txq[txq_id]) {
-		queue = kzalloc(sizeof(*queue), GFP_KERNEL);
-		if (!queue) {
-			IWL_ERR(trans, "Not enough memory for tx queue\n");
-			return -ENOMEM;
-		}
-		trans->txqs.txq[txq_id] = queue;
-		ret = iwl_txq_alloc(trans, queue, queue_size, true);
-		if (ret) {
-			IWL_ERR(trans, "Tx %d queue alloc failed\n", txq_id);
-			goto error;
-		}
-	} else {
-		queue = trans->txqs.txq[txq_id];
-	}
-
-	ret = iwl_txq_init(trans, queue, queue_size,
-			   (txq_id == trans->txqs.cmd.q_id));
-	if (ret) {
-		IWL_ERR(trans, "Tx %d queue init failed\n", txq_id);
-		goto error;
-	}
-	trans->txqs.txq[txq_id]->id = txq_id;
-	set_bit(txq_id, trans->txqs.queue_used);
-
-	return 0;
-
-error:
-	iwl_txq_gen2_tx_free(trans);
-	return ret;
-}
-
-static inline dma_addr_t iwl_txq_gen1_tfd_tb_get_addr(struct iwl_trans *trans,
-						      struct iwl_tfd *tfd, u8 idx)
-{
-	struct iwl_tfd_tb *tb = &tfd->tbs[idx];
-	dma_addr_t addr;
-	dma_addr_t hi_len;
-
-	addr = get_unaligned_le32(&tb->lo);
-
-	if (sizeof(dma_addr_t) <= sizeof(u32))
-		return addr;
-
-	hi_len = le16_to_cpu(tb->hi_n_len) & 0xF;
-
-	/*
-	 * shift by 16 twice to avoid warnings on 32-bit
-	 * (where this code never runs anyway due to the
-	 * if statement above)
-	 */
-	return addr | ((hi_len << 16) << 16);
-}
-
-void iwl_txq_gen1_tfd_unmap(struct iwl_trans *trans,
-			    struct iwl_cmd_meta *meta,
-			    struct iwl_txq *txq, int index)
-{
-	int i, num_tbs;
-	struct iwl_tfd *tfd = iwl_txq_get_tfd(trans, txq, index);
-
-	/* Sanity check on number of chunks */
-	num_tbs = iwl_txq_gen1_tfd_get_num_tbs(trans, tfd);
-
-	if (num_tbs > trans->txqs.tfd.max_tbs) {
-		IWL_ERR(trans, "Too many chunks: %i\n", num_tbs);
-		/* @todo issue fatal error, this is quite a serious situation */
-		return;
-	}
-
-	/* first TB is never freed - it's the bidirectional DMA data */
-
-	for (i = 1; i < num_tbs; i++) {
-		if (meta->tbs & BIT(i))
-			dma_unmap_page(trans->dev,
-				       iwl_txq_gen1_tfd_tb_get_addr(trans,
-								    tfd, i),
-				       iwl_txq_gen1_tfd_tb_get_len(trans,
-								   tfd, i),
-				       DMA_TO_DEVICE);
-		else
-			dma_unmap_single(trans->dev,
-					 iwl_txq_gen1_tfd_tb_get_addr(trans,
-								      tfd, i),
-					 iwl_txq_gen1_tfd_tb_get_len(trans,
-								     tfd, i),
-					 DMA_TO_DEVICE);
-	}
-
-	meta->tbs = 0;
-
-	iwl_txq_set_tfd_invalid_gen1(trans, tfd);
-}
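iwl_txq_gen1_tfd_tb_get_addr() above reassembles a 36-bit DMA address from 32 low bits plus a high nibble packed under the 12-bit length in hi_n_len; the inverse packing appears in iwl_pcie_gen1_tfd_set_tb() in tx.h below. A plain sketch of the split:

/* Sketch: gen1 TB address/length packing -- 32 low address bits in
 * 'lo', 4 high address bits in the low nibble of 'hi_n_len', length
 * in the remaining 12 bits.
 */
static void split_addr36(u64 addr, u16 len, u32 *lo, u16 *hi_n_len)
{
	*lo = lower_32_bits(addr);
	*hi_n_len = ((addr >> 32) & 0xf) | (len << 4);
}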
-#define IWL_TX_CRC_SIZE 4
-#define IWL_TX_DELIMITER_SIZE 4
-
-/*
- * iwl_txq_gen1_update_byte_cnt_tbl - Set up entry in Tx byte-count array
- */
-void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans,
-				      struct iwl_txq *txq, u16 byte_cnt,
-				      int num_tbs)
-{
-	struct iwlagn_scd_bc_tbl *scd_bc_tbl;
-	int write_ptr = txq->write_ptr;
-	int txq_id = txq->id;
-	u8 sec_ctl = 0;
-	u16 len = byte_cnt + IWL_TX_CRC_SIZE + IWL_TX_DELIMITER_SIZE;
-	__le16 bc_ent;
-	struct iwl_device_tx_cmd *dev_cmd = txq->entries[txq->write_ptr].cmd;
-	struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload;
-	u8 sta_id = tx_cmd->sta_id;
-
-	scd_bc_tbl = trans->txqs.scd_bc_tbls.addr;
-
-	sec_ctl = tx_cmd->sec_ctl;
-
-	switch (sec_ctl & TX_CMD_SEC_MSK) {
-	case TX_CMD_SEC_CCM:
-		len += IEEE80211_CCMP_MIC_LEN;
-		break;
-	case TX_CMD_SEC_TKIP:
-		len += IEEE80211_TKIP_ICV_LEN;
-		break;
-	case TX_CMD_SEC_WEP:
-		len += IEEE80211_WEP_IV_LEN + IEEE80211_WEP_ICV_LEN;
-		break;
-	}
-	if (trans->txqs.bc_table_dword)
-		len = DIV_ROUND_UP(len, 4);
-
-	if (WARN_ON(len > 0xFFF || write_ptr >= TFD_QUEUE_SIZE_MAX))
-		return;
-
-	bc_ent = cpu_to_le16(len | (sta_id << 12));
-
-	scd_bc_tbl[txq_id].tfd_offset[write_ptr] = bc_ent;
-
-	if (write_ptr < TFD_QUEUE_SIZE_BC_DUP)
-		scd_bc_tbl[txq_id].tfd_offset[TFD_QUEUE_SIZE_MAX + write_ptr] =
-			bc_ent;
-}
-
-void iwl_txq_gen1_inval_byte_cnt_tbl(struct iwl_trans *trans,
-				     struct iwl_txq *txq)
-{
-	struct iwlagn_scd_bc_tbl *scd_bc_tbl = trans->txqs.scd_bc_tbls.addr;
-	int txq_id = txq->id;
-	int read_ptr = txq->read_ptr;
-	u8 sta_id = 0;
-	__le16 bc_ent;
-	struct iwl_device_tx_cmd *dev_cmd = txq->entries[read_ptr].cmd;
-	struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload;
-
-	WARN_ON(read_ptr >= TFD_QUEUE_SIZE_MAX);
-
-	if (txq_id != trans->txqs.cmd.q_id)
-		sta_id = tx_cmd->sta_id;
-
-	bc_ent = cpu_to_le16(1 | (sta_id << 12));
-
-	scd_bc_tbl[txq_id].tfd_offset[read_ptr] = bc_ent;
-
-	if (read_ptr < TFD_QUEUE_SIZE_BC_DUP)
-		scd_bc_tbl[txq_id].tfd_offset[TFD_QUEUE_SIZE_MAX + read_ptr] =
-			bc_ent;
-}
-
-/*
- * iwl_txq_free_tfd - Free all chunks referenced by TFD [txq->q.read_ptr]
- * @trans - transport private data
- * @txq - tx queue
- *
- * Does NOT advance any TFD circular buffer read/write indexes
- * Does NOT free the TFD itself (which is within circular buffer)
- */
-void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
-{
-	/* rd_ptr is bounded by TFD_QUEUE_SIZE_MAX and
-	 * idx is bounded by n_window
-	 */
-	int rd_ptr = txq->read_ptr;
-	int idx = iwl_txq_get_cmd_index(txq, rd_ptr);
-	struct sk_buff *skb;
-
-	lockdep_assert_held(&txq->lock);
-
-	if (!txq->entries)
-		return;
-
-	/* We have only q->n_window txq->entries, but we use
-	 * TFD_QUEUE_SIZE_MAX tfds
-	 */
-	if (trans->trans_cfg->gen2)
-		iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta,
-				       iwl_txq_get_tfd(trans, txq, rd_ptr));
-	else
-		iwl_txq_gen1_tfd_unmap(trans, &txq->entries[idx].meta,
-				       txq, rd_ptr);
-
-	/* free SKB */
-	skb = txq->entries[idx].skb;
-
-	/* Can be called from irqs-disabled context
-	 * If skb is not NULL, it means that the whole queue is being
-	 * freed and that the queue is not empty - free the skb
-	 */
-	if (skb) {
-		iwl_op_mode_free_skb(trans->op_mode, skb);
-		txq->entries[idx].skb = NULL;
-	}
-}
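Both byte-count helpers above mirror the first TFD_QUEUE_SIZE_BC_DUP entries past TFD_QUEUE_SIZE_MAX -- presumably so that hardware reading slightly past the end of the table still sees valid counts (the rationale is an inference, not stated in the patch). The shared pattern:

/* Sketch: write one byte-count entry, duplicating the low entries
 * past the ring end as the two functions above both do.
 */
static void bc_tbl_write(__le16 *tfd_offset, int ptr, __le16 bc_ent)
{
	tfd_offset[ptr] = bc_ent;
	if (ptr < TFD_QUEUE_SIZE_BC_DUP)
		tfd_offset[TFD_QUEUE_SIZE_MAX + ptr] = bc_ent;
}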
-
-void iwl_txq_progress(struct iwl_txq *txq)
-{
-	lockdep_assert_held(&txq->lock);
-
-	if (!txq->wd_timeout)
-		return;
-
-	/*
-	 * station is asleep and we send data - that must
-	 * be uAPSD or PS-Poll. Don't rearm the timer.
-	 */
-	if (txq->frozen)
-		return;
-
-	/*
-	 * if empty delete timer, otherwise move timer forward
-	 * since we're making progress on this queue
-	 */
-	if (txq->read_ptr == txq->write_ptr)
-		del_timer(&txq->stuck_timer);
-	else
-		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
-}
-
-/* Frees buffers until index _not_ inclusive */
-void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
-		     struct sk_buff_head *skbs, bool is_flush)
-{
-	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-	int tfd_num, read_ptr, last_to_free;
-
-	/* This function is not meant to release the cmd queue */
-	if (WARN_ON(txq_id == trans->txqs.cmd.q_id))
-		return;
-
-	if (WARN_ON(!txq))
-		return;
-
-	tfd_num = iwl_txq_get_cmd_index(txq, ssn);
-
-	spin_lock_bh(&txq->lock);
-	read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr);
-
-	if (!test_bit(txq_id, trans->txqs.queue_used)) {
-		IWL_DEBUG_TX_QUEUES(trans, "Q %d inactive - ignoring idx %d\n",
-				    txq_id, ssn);
-		goto out;
-	}
-
-	if (read_ptr == tfd_num)
-		goto out;
-
-	IWL_DEBUG_TX_REPLY(trans, "[Q %d] %d (%d) -> %d (%d)\n",
-			   txq_id, read_ptr, txq->read_ptr, tfd_num, ssn);
-
-	/* Since we free until index _not_ inclusive, the one before index is
-	 * the last we will free. This one must be used */
-	last_to_free = iwl_txq_dec_wrap(trans, tfd_num);
-
-	if (!iwl_txq_used(txq, last_to_free)) {
-		IWL_ERR(trans,
-			"%s: Read index for txq id (%d), last_to_free %d is out of range [0-%d] %d %d.\n",
-			__func__, txq_id, last_to_free,
-			trans->trans_cfg->base_params->max_tfd_queue_size,
-			txq->write_ptr, txq->read_ptr);
-
-		iwl_op_mode_time_point(trans->op_mode,
-				       IWL_FW_INI_TIME_POINT_FAKE_TX,
-				       NULL);
-		goto out;
-	}
-
-	if (WARN_ON(!skb_queue_empty(skbs)))
-		goto out;
-
-	for (;
-	     read_ptr != tfd_num;
-	     txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr),
-	     read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr)) {
-		struct sk_buff *skb = txq->entries[read_ptr].skb;
-
-		if (WARN_ONCE(!skb, "no SKB at %d (%d) on queue %d\n",
-			      read_ptr, txq->read_ptr, txq_id))
-			continue;
-
-		iwl_txq_free_tso_page(trans, skb);
-
-		__skb_queue_tail(skbs, skb);
-
-		txq->entries[read_ptr].skb = NULL;
-
-		if (!trans->trans_cfg->gen2)
-			iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq);
-
-		iwl_txq_free_tfd(trans, txq);
-	}
-
-	iwl_txq_progress(txq);
-
-	if (iwl_txq_space(trans, txq) > txq->low_mark &&
-	    test_bit(txq_id, trans->txqs.queue_stopped)) {
-		struct sk_buff_head overflow_skbs;
-		struct sk_buff *skb;
-
-		__skb_queue_head_init(&overflow_skbs);
-		skb_queue_splice_init(&txq->overflow_q,
-				      is_flush ? skbs : &overflow_skbs);
-
-		/*
-		 * We are going to transmit from the overflow queue.
-		 * Remember this state so that wait_for_txq_empty will know we
-		 * are adding more packets to the TFD queue. It cannot rely on
-		 * the state of &txq->overflow_q, as we just emptied it, but
-		 * haven't TXed the content yet.
-		 */
-		txq->overflow_tx = true;
-
-		/*
-		 * This is tricky: we are in the reclaim path, which is not
-		 * re-entrant, so no one will try to access the txq data
-		 * from that path. We stopped tx, so we can't have tx as well.
-		 * Bottom line, we can unlock and re-lock later.
-		 */
-		spin_unlock_bh(&txq->lock);
-
-		while ((skb = __skb_dequeue(&overflow_skbs))) {
-			struct iwl_device_tx_cmd *dev_cmd_ptr;
-
-			dev_cmd_ptr = *(void **)((u8 *)skb->cb +
-						 trans->txqs.dev_cmd_offs);
-
-			/*
-			 * Note that we can very well be overflowing again.
-			 * In that case, iwl_txq_space will be small again
-			 * and we won't wake mac80211's queue.
-			 */
-			iwl_trans_tx(trans, skb, dev_cmd_ptr, txq_id);
-		}
-
-		if (iwl_txq_space(trans, txq) > txq->low_mark)
-			iwl_wake_queue(trans, txq);
-
-		spin_lock_bh(&txq->lock);
-		txq->overflow_tx = false;
-	}
-
-out:
-	spin_unlock_bh(&txq->lock);
-}
-
-/* Set wr_ptr of specific device and txq */
-void iwl_txq_set_q_ptrs(struct iwl_trans *trans, int txq_id, int ptr)
-{
-	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-
-	spin_lock_bh(&txq->lock);
-
-	txq->write_ptr = ptr;
-	txq->read_ptr = txq->write_ptr;
-
-	spin_unlock_bh(&txq->lock);
-}
-
-void iwl_trans_txq_freeze_timer(struct iwl_trans *trans, unsigned long txqs,
-				bool freeze)
-{
-	int queue;
-
-	for_each_set_bit(queue, &txqs, BITS_PER_LONG) {
-		struct iwl_txq *txq = trans->txqs.txq[queue];
-		unsigned long now;
-
-		spin_lock_bh(&txq->lock);
-
-		now = jiffies;
-
-		if (txq->frozen == freeze)
-			goto next_queue;
-
-		IWL_DEBUG_TX_QUEUES(trans, "%s TXQ %d\n",
-				    freeze ? "Freezing" : "Waking", queue);
-
-		txq->frozen = freeze;
-
-		if (txq->read_ptr == txq->write_ptr)
-			goto next_queue;
-
-		if (freeze) {
-			if (unlikely(time_after(now,
-						txq->stuck_timer.expires))) {
-				/*
-				 * The timer should have fired, maybe it is
-				 * spinning right now on the lock.
-				 */
-				goto next_queue;
-			}
-			/* remember how long until the timer fires */
-			txq->frozen_expiry_remainder =
-				txq->stuck_timer.expires - now;
-			del_timer(&txq->stuck_timer);
-			goto next_queue;
-		}
-
-		/*
-		 * Wake a non-empty queue -> arm timer with the
-		 * remainder before it froze
-		 */
-		mod_timer(&txq->stuck_timer,
-			  now + txq->frozen_expiry_remainder);
-
-next_queue:
-		spin_unlock_bh(&txq->lock);
-	}
-}
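The freeze path above is a small state machine around one pending timer: on freeze it banks the time left, on wake it re-arms with that remainder. Distilled (illustrative only):

#include <linux/timer.h>
#include <linux/jiffies.h>

/* Sketch: pause and resume a pending watchdog without losing the
 * time it still had to run.
 */
static void watchdog_freeze(struct timer_list *t, unsigned long *remainder)
{
	*remainder = t->expires - jiffies;	/* caller checked it's pending */
	del_timer(t);
}

static void watchdog_wake(struct timer_list *t, unsigned long remainder)
{
	mod_timer(t, jiffies + remainder);
}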
-
-#define HOST_COMPLETE_TIMEOUT	(2 * HZ)
-
-static int iwl_trans_txq_send_hcmd_sync(struct iwl_trans *trans,
-					struct iwl_host_cmd *cmd)
-{
-	const char *cmd_str = iwl_get_cmd_string(trans, cmd->id);
-	struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id];
-	int cmd_idx;
-	int ret;
-
-	IWL_DEBUG_INFO(trans, "Attempting to send sync command %s\n", cmd_str);
-
-	if (WARN(test_and_set_bit(STATUS_SYNC_HCMD_ACTIVE,
-				  &trans->status),
-		 "Command %s: a command is already active!\n", cmd_str))
-		return -EIO;
-
-	IWL_DEBUG_INFO(trans, "Setting HCMD_ACTIVE for command %s\n", cmd_str);
-
-	cmd_idx = trans->ops->send_cmd(trans, cmd);
-	if (cmd_idx < 0) {
-		ret = cmd_idx;
-		clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status);
-		IWL_ERR(trans, "Error sending %s: enqueue_hcmd failed: %d\n",
-			cmd_str, ret);
-		return ret;
-	}
-
-	ret = wait_event_timeout(trans->wait_command_queue,
-				 !test_bit(STATUS_SYNC_HCMD_ACTIVE,
-					   &trans->status),
-				 HOST_COMPLETE_TIMEOUT);
-	if (!ret) {
-		IWL_ERR(trans, "Error sending %s: timed out after %dms.\n",
-			cmd_str, jiffies_to_msecs(HOST_COMPLETE_TIMEOUT));
-
-		IWL_ERR(trans, "Current CMD queue read_ptr %d write_ptr %d\n",
-			txq->read_ptr, txq->write_ptr);
-
-		clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status);
-		IWL_DEBUG_INFO(trans, "Clearing HCMD_ACTIVE for command %s\n",
-			       cmd_str);
-		ret = -ETIMEDOUT;
-
-		iwl_trans_sync_nmi(trans);
-		goto cancel;
-	}
-
-	if (test_bit(STATUS_FW_ERROR, &trans->status)) {
-		if (!test_and_clear_bit(STATUS_SUPPRESS_CMD_ERROR_ONCE,
-					&trans->status)) {
-			IWL_ERR(trans, "FW error in SYNC CMD %s\n", cmd_str);
-			dump_stack();
-		}
-		ret = -EIO;
-		goto cancel;
-	}
-
-	if (!(cmd->flags & CMD_SEND_IN_RFKILL) &&
-	    test_bit(STATUS_RFKILL_OPMODE, &trans->status)) {
-		IWL_DEBUG_RF_KILL(trans, "RFKILL in SYNC CMD... no rsp\n");
-		ret = -ERFKILL;
-		goto cancel;
-	}
-
-	if ((cmd->flags & CMD_WANT_SKB) && !cmd->resp_pkt) {
-		IWL_ERR(trans, "Error: Response NULL in '%s'\n", cmd_str);
-		ret = -EIO;
-		goto cancel;
-	}
-
-	return 0;
-
-cancel:
-	if (cmd->flags & CMD_WANT_SKB) {
-		/*
-		 * Cancel the CMD_WANT_SKB flag for the cmd in the
-		 * TX cmd queue. Otherwise in case the cmd comes
-		 * in later, it will possibly set an invalid
-		 * address (cmd->meta.source).
-		 */
-		txq->entries[cmd_idx].meta.flags &= ~CMD_WANT_SKB;
-	}
-
-	if (cmd->resp_pkt) {
-		iwl_free_resp(cmd);
-		cmd->resp_pkt = NULL;
-	}
-
-	return ret;
-}
-
-int iwl_trans_txq_send_hcmd(struct iwl_trans *trans,
-			    struct iwl_host_cmd *cmd)
-{
-	/* Make sure the NIC is still alive in the bus */
-	if (test_bit(STATUS_TRANS_DEAD, &trans->status))
-		return -ENODEV;
-
-	if (!(cmd->flags & CMD_SEND_IN_RFKILL) &&
-	    test_bit(STATUS_RFKILL_OPMODE, &trans->status)) {
-		IWL_DEBUG_RF_KILL(trans, "Dropping CMD 0x%x: RF KILL\n",
-				  cmd->id);
-		return -ERFKILL;
-	}
-
-	if (unlikely(trans->system_pm_mode == IWL_PLAT_PM_MODE_D3 &&
-		     !(cmd->flags & CMD_SEND_IN_D3))) {
-		IWL_DEBUG_WOWLAN(trans, "Dropping CMD 0x%x: D3\n", cmd->id);
-		return -EHOSTDOWN;
-	}
-
-	if (cmd->flags & CMD_ASYNC) {
-		int ret;
-
-		/* An asynchronous command cannot expect an SKB to be set. */
-		if (WARN_ON(cmd->flags & CMD_WANT_SKB))
-			return -EINVAL;
-
-		ret = trans->ops->send_cmd(trans, cmd);
-		if (ret < 0) {
-			IWL_ERR(trans,
-				"Error sending %s: enqueue_hcmd failed: %d\n",
-				iwl_get_cmd_string(trans, cmd->id), ret);
-			return ret;
-		}
-		return 0;
-	}
-
-	return iwl_trans_txq_send_hcmd_sync(trans, cmd);
-}
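The synchronous-command flow above follows a classic claim/enqueue/wait shape. A simplified, self-contained sketch (the 'sync_ctx' type and 'enqueue' callback are stand-ins for the transport):

#include <linux/wait.h>
#include <linux/bitops.h>

struct sync_ctx {
	unsigned long status;		/* bit 0: command active */
	wait_queue_head_t waitq;	/* woken by the completion path */
};

/* Sketch: one sync command at a time; the response handler is expected
 * to clear bit 0 and wake the queue.
 */
static int send_sync(struct sync_ctx *ctx, int (*enqueue)(void))
{
	int idx;

	if (test_and_set_bit(0, &ctx->status))
		return -EIO;

	idx = enqueue();
	if (idx < 0) {
		clear_bit(0, &ctx->status);
		return idx;
	}

	if (!wait_event_timeout(ctx->waitq, !test_bit(0, &ctx->status),
				2 * HZ))
		return -ETIMEDOUT;	/* real code also cancels/cleans up */

	return 0;
}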
diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.h b/drivers/net/wireless/intel/iwlwifi/queue/tx.h
deleted file mode 100644
index 124b29aac4a1..000000000000
--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.h
+++ /dev/null
@@ -1,191 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
-/*
- * Copyright (C) 2020-2023 Intel Corporation
- */
-#ifndef __iwl_trans_queue_tx_h__
-#define __iwl_trans_queue_tx_h__
-#include "iwl-fh.h"
-#include "fw/api/tx.h"
-
-struct iwl_tso_hdr_page {
-	struct page *page;
-	u8 *pos;
-};
-
-static inline dma_addr_t
-iwl_txq_get_first_tb_dma(struct iwl_txq *txq, int idx)
-{
-	return txq->first_tb_dma +
-	       sizeof(struct iwl_pcie_first_tb_buf) * idx;
-}
-
-static inline u16 iwl_txq_get_cmd_index(const struct iwl_txq *q, u32 index)
-{
-	return index & (q->n_window - 1);
-}
-
-void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id);
-
-static inline void iwl_wake_queue(struct iwl_trans *trans,
-				  struct iwl_txq *txq)
-{
-	if (test_and_clear_bit(txq->id, trans->txqs.queue_stopped)) {
-		IWL_DEBUG_TX_QUEUES(trans, "Wake hwq %d\n", txq->id);
-		iwl_op_mode_queue_not_full(trans->op_mode, txq->id);
-	}
-}
-
-static inline void *iwl_txq_get_tfd(struct iwl_trans *trans,
-				    struct iwl_txq *txq, int idx)
-{
-	if (trans->trans_cfg->gen2)
-		idx = iwl_txq_get_cmd_index(txq, idx);
-
-	return (u8 *)txq->tfds + trans->txqs.tfd.size * idx;
-}
-
-int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
-		  bool cmd_queue);
-/*
- * We need this inline in case dma_addr_t is only 32-bits - since the
- * hardware is always 64-bit, the issue can still occur in that case,
- * so use u64 for 'phys' here to force the addition in 64-bit.
- */
-static inline bool iwl_txq_crosses_4g_boundary(u64 phys, u16 len)
-{
-	return upper_32_bits(phys) != upper_32_bits(phys + len);
-}
-
-int iwl_txq_space(struct iwl_trans *trans, const struct iwl_txq *q);
-
-static inline void iwl_txq_stop(struct iwl_trans *trans, struct iwl_txq *txq)
-{
-	if (!test_and_set_bit(txq->id, trans->txqs.queue_stopped)) {
-		iwl_op_mode_queue_full(trans->op_mode, txq->id);
-		IWL_DEBUG_TX_QUEUES(trans, "Stop hwq %d\n", txq->id);
-	} else {
-		IWL_DEBUG_TX_QUEUES(trans, "hwq %d already stopped\n",
-				    txq->id);
-	}
-}
-
-/**
- * iwl_txq_inc_wrap - increment queue index, wrap back to beginning
- * @trans: the transport (for configuration data)
- * @index: current index
- */
-static inline int iwl_txq_inc_wrap(struct iwl_trans *trans, int index)
-{
-	return ++index &
-		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
-}
-
-/**
- * iwl_txq_dec_wrap - decrement queue index, wrap back to end
- * @trans: the transport (for configuration data)
- * @index: current index
- */
-static inline int iwl_txq_dec_wrap(struct iwl_trans *trans, int index)
-{
-	return --index &
-		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
-}
-
-static inline bool iwl_txq_used(const struct iwl_txq *q, int i)
-{
-	int index = iwl_txq_get_cmd_index(q, i);
-	int r = iwl_txq_get_cmd_index(q, q->read_ptr);
-	int w = iwl_txq_get_cmd_index(q, q->write_ptr);
-
-	return w >= r ?
-		(index >= r && index < w) :
-		!(index < r && index >= w);
-}
-
-void iwl_txq_free_tso_page(struct iwl_trans *trans, struct sk_buff *skb);
-
-void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq);
-
-int iwl_txq_gen2_set_tb(struct iwl_trans *trans,
-			struct iwl_tfh_tfd *tfd, dma_addr_t addr,
-			u16 len);
-
-void iwl_txq_gen2_tfd_unmap(struct iwl_trans *trans,
-			    struct iwl_cmd_meta *meta,
-			    struct iwl_tfh_tfd *tfd);
-
-int iwl_txq_dyn_alloc(struct iwl_trans *trans, u32 flags,
-		      u32 sta_mask, u8 tid,
-		      int size, unsigned int timeout);
-
-int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
-		    struct iwl_device_tx_cmd *dev_cmd, int txq_id);
-
-void iwl_txq_dyn_free(struct iwl_trans *trans, int queue);
-void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq);
-void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq);
-void iwl_txq_gen2_tx_free(struct iwl_trans *trans);
-int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
-		 bool cmd_queue);
-int iwl_txq_gen2_init(struct iwl_trans *trans, int txq_id, int queue_size);
-#ifdef CONFIG_INET
-struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len,
-				      struct sk_buff *skb);
-#endif
-static inline u8 iwl_txq_gen1_tfd_get_num_tbs(struct iwl_trans *trans,
-					      struct iwl_tfd *tfd)
-{
-	return tfd->num_tbs & 0x1f;
-}
-
-static inline u16 iwl_txq_gen1_tfd_tb_get_len(struct iwl_trans *trans,
-					      void *_tfd, u8 idx)
-{
-	struct iwl_tfd *tfd;
-	struct iwl_tfd_tb *tb;
-
-	if (trans->trans_cfg->gen2) {
-		struct iwl_tfh_tfd *tfh_tfd = _tfd;
-		struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx];
-
-		return le16_to_cpu(tfh_tb->tb_len);
-	}
-
-	tfd = (struct iwl_tfd *)_tfd;
-	tb = &tfd->tbs[idx];
-
-	return le16_to_cpu(tb->hi_n_len) >> 4;
-}
-
-static inline void iwl_pcie_gen1_tfd_set_tb(struct iwl_trans *trans,
-					    struct iwl_tfd *tfd,
-					    u8 idx, dma_addr_t addr, u16 len)
-{
-	struct iwl_tfd_tb *tb = &tfd->tbs[idx];
-	u16 hi_n_len = len << 4;
-
-	put_unaligned_le32(addr, &tb->lo);
-	hi_n_len |= iwl_get_dma_hi_addr(addr);
-
-	tb->hi_n_len = cpu_to_le16(hi_n_len);
-
-	tfd->num_tbs = idx + 1;
-}
-
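Two quick cases for iwl_txq_crosses_4g_boundary() from the header above; the u64 parameter keeps the addition 64-bit even where dma_addr_t is only 32 bits:

/* Sketch: a buffer may end on the last byte below 4 GiB, but must not
 * cross into the next 4 GiB window.
 */
static void crosses_4g_examples(void)
{
	WARN_ON(iwl_txq_crosses_4g_boundary(0xfffff000ULL, 0x0fff));  /* ends at 0xffffffff */
	WARN_ON(!iwl_txq_crosses_4g_boundary(0xfffff000ULL, 0x1000)); /* reaches 0x100000000 */
}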
-void iwl_txq_gen1_tfd_unmap(struct iwl_trans *trans,
-			    struct iwl_cmd_meta *meta,
-			    struct iwl_txq *txq, int index);
-void iwl_txq_gen1_inval_byte_cnt_tbl(struct iwl_trans *trans,
-				     struct iwl_txq *txq);
-void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans,
-				      struct iwl_txq *txq, u16 byte_cnt,
-				      int num_tbs);
-void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
-		     struct sk_buff_head *skbs, bool is_flush);
-void iwl_txq_set_q_ptrs(struct iwl_trans *trans, int txq_id, int ptr);
-void iwl_trans_txq_freeze_timer(struct iwl_trans *trans, unsigned long txqs,
-				bool freeze);
-void iwl_txq_progress(struct iwl_txq *txq);
-void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq);
-int iwl_trans_txq_send_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd);
-#endif /* __iwl_trans_queue_tx_h__ */