author | Linus Torvalds <torvalds@linux-foundation.org> | 2019-03-09 14:57:08 -0800
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2019-03-09 14:57:08 -0800
commit | 2901752c14b8e1b7dd898d2e5245c93e531aa624 (patch) |
tree | 98780bc17593a3d79e7b3fe2ecf23f2e8882a39a /drivers/pci |
parent | 96a6de1a541c86e9e67b9c310c14db4099bd1cbc (diff) |
parent | dd92b6677e3d0d78e261a7f00f28e753bab41d24 (diff) |
Merge tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- Use match_string() instead of reimplementing it (Andy Shevchenko); a
  usage sketch follows this list
- Enable SERR# forwarding for all bridges (Bharat Kumar Gogada)
- Use Latency Tolerance Reporting if already enabled by platform (Bjorn
Helgaas)
- Save/restore LTR info for suspend/resume (Bjorn Helgaas)
- Fix DPC use of uninitialized data (Dongdong Liu)
- Probe bridge window attributes only once at enumeration-time to fix
device accesses during rescan (Bjorn Helgaas)
- Return BAR size (not "size - 1") from pci_size() to simplify code (Du
Changbin)
- Use config header type (not class code) to identify bridges more
reliably (Honghui Zhang)
- Work around Intel Denverton incorrect Trace Hub BAR size reporting
(Alexander Shishkin)
- Reorder pciehp cached state/hardware state updates to avoid missed
interrupts (Mika Westerberg)
- Turn ibmphp semaphores into completions or mutexes (Arnd Bergmann)
- Mark expected switch fall-through (Mathieu Malaterre)
- Use of_node_name_eq() for node name comparisons (Rob Herring); see the
  short example below the list
- Add ACS and pciehp quirks for HXT SD4800 (Shunyong Yang)
- Consolidate Rohm Vendor ID definitions (Andy Shevchenko)
- Use u32 (not __u32) for things not exposed to userspace (Logan
Gunthorpe)
- Fix locking semantics of bus and slot reset interfaces (Alex
Williamson)
- Update PCIEPORTBUS Kconfig help text (Hou Zhiqiang)
- Allow portdrv to claim subtractive decode Ports so PCIe services will
work for them (Honghui Zhang)
- Report PCIe links that become degraded at run-time (Alexandru
Gagniuc)
- Blacklist Gigabyte X299 Root Port power management to fix Thunderbolt
hotplug (Mika Westerberg)
- Revert runtime PM suspend/resume callbacks that broke PME on network
cable plug (Mika Westerberg)
- Disable Data Link State Changed interrupts to prevent wakeup
immediately after suspend (Mika Westerberg)
- Extend altera to support Stratix 10 (Ley Foon Tan)
- Allow building altera driver on ARM64 (Ley Foon Tan)
- Replace Douglas with Tom Joseph as Cadence PCI host/endpoint
maintainer (Lorenzo Pieralisi)
- Add DT support for R-Car RZ/G2E (R8A774C0) (Fabrizio Castro)
- Add dra72x/dra74x/dra76x SoC compatible strings (Kishon Vijay Abraham I)
- Enable x2 mode support for dra72x/dra74x/dra76x SoC (Kishon Vijay
Abraham I)
- Configure dra7xx PHY to PCIe mode (Kishon Vijay Abraham I)
- Simplify dwc (remove unnecessary header includes, name variables
consistently, reduce inverted logic, etc) (Gustavo Pimentel)
- Add i.MX8MQ support (Andrey Smirnov)
- Add message to help debug dwc MSI-X mask bit errors (Gustavo
Pimentel)
- Work around imx7d PCIe PLL erratum (Trent Piepho)
- Don't assert qcom reset GPIO during probe (Bjorn Andersson)
- Skip dwc MSI init if MSIs have been disabled (Lucas Stach)
- Use memcpy_fromio()/memcpy_toio() instead of plain memcpy() in PCI
endpoint framework (Wen Yang); a sketch follows the list
- Add interface to discover supported endpoint features to replace a
bitfield that wasn't flexible enough (Kishon Vijay Abraham I); an
implementation sketch follows the list
- Implement the new supported-feature interface for designware-plat,
dra7xx, rockchip, cadence (Kishon Vijay Abraham I)
- Fix issues with 64-bit BAR in endpoints (Kishon Vijay Abraham I)
- Add layerscape endpoint mode support (Xiaowei Bao)
- Remove duplicate struct hv_vp_set in favor of struct hv_vpset (Maya
Nakamura)
- Rework hv_irq_unmask() to use cpumask_to_vpset() instead of
open-coded reimplementation (Maya Nakamura)
- Align Hyper-V struct retarget_msi_interrupt arguments (Maya Nakamura)
- Fix mediatek MMIO size computation to enable full size of available
MMIO space (Honghui Zhang)
- Fix mediatek DMA window size computation to allow endpoint DMA access
to full DRAM address range (Honghui Zhang)
- Fix mvebu prefetchable BAR regression caused by common bridge
emulation that assumed all bridges had prefetchable windows (Thomas
Petazzoni)
- Make advk_pci_bridge_emul_ops static (Wei Yongjun)
- Configure MPS settings for VMD root ports (Jon Derrick)
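
For the match_string() item above, a minimal usage sketch. Only the
match_string() call itself (from <linux/string.h>) is the real API; the mode
table and wrapper function are hypothetical.

```c
#include <linux/kernel.h>	/* ARRAY_SIZE() */
#include <linux/string.h>	/* match_string() */

/* Hypothetical table of mode names a driver might accept. */
static const char * const example_modes[] = {
	"host", "endpoint",
};

static int example_parse_mode(const char *mode)
{
	/*
	 * match_string() returns the index of the matching entry,
	 * or -EINVAL if nothing matches, so no hand-rolled loop is needed.
	 */
	return match_string(example_modes, ARRAY_SIZE(example_modes), mode);
}
```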
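For the of_node_name_eq() item: a sketch assuming a hypothetical check for
device-tree nodes named "pcie". of_node_name_eq() lives in <linux/of.h> and
compares the node name while ignoring the unit address.

```c
#include <linux/of.h>
#include <linux/types.h>

/* Hypothetical predicate: is this DT node a "pcie" node? */
static bool example_is_pcie_node(const struct device_node *np)
{
	/* Matches "pcie" and "pcie@40000000" alike, unlike a raw strcmp(). */
	return of_node_name_eq(np, "pcie");
}
```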
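For the memcpy_fromio()/memcpy_toio() item: a sketch of copying through an
__iomem mapping with the dedicated helpers from <linux/io.h> instead of plain
memcpy(). The function and parameter names are illustrative, not taken from
the endpoint framework patch.

```c
#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical copy helpers for a BAR obtained via ioremap(). */
static void example_read_bar(void *dst, const void __iomem *bar, size_t len)
{
	memcpy_fromio(dst, bar, len);	/* device memory -> RAM */
}

static void example_write_bar(void __iomem *bar, const void *src, size_t len)
{
	memcpy_toio(bar, src, len);	/* RAM -> device memory */
}
```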
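For the endpoint supported-features interface: a sketch of how a
DesignWare-based endpoint driver advertises its capabilities, modeled on the
dra7xx and layerscape hunks in the diff below. Everything named example_* is
hypothetical; struct pci_epc_features and the .get_features callback are the
new interface itself.

```c
#include <linux/pci-epc.h>

#include "pcie-designware.h"

/* Capabilities this (hypothetical) controller supports. */
static const struct pci_epc_features example_epc_features = {
	.linkup_notifier = false,
	.msi_capable	 = true,
	.msix_capable	 = false,
};

static const struct pci_epc_features *
example_pcie_get_features(struct dw_pcie_ep *ep)
{
	return &example_epc_features;
}

static struct dw_pcie_ep_ops example_ep_ops = {
	/* .ep_init and .raise_irq would also be set by a real driver. */
	.get_features = example_pcie_get_features,
};
```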
* tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (92 commits)
PCI: Update PCIEPORTBUS Kconfig help text
PCI: Fix "try" semantics of bus and slot reset
PCI/LINK: Report degraded links via link bandwidth notification
dt-bindings: PCI: altera: Add altr,pcie-root-port-2.0
PCI: altera: Enable driver on ARM64
PCI: altera: Add Stratix 10 PCIe support
PCI/PME: Fix possible use-after-free on remove
PCI: aardvark: Make symbol 'advk_pci_bridge_emul_ops' static
PCI: dwc: skip MSI init if MSIs have been explicitly disabled
PCI: hv: Refactor hv_irq_unmask() to use cpumask_to_vpset()
PCI: hv: Replace hv_vp_set with hv_vpset
PCI: hv: Add __aligned(8) to struct retarget_msi_interrupt
PCI: mediatek: Enlarge PCIe2AHB window size to support 4GB DRAM
PCI: mediatek: Fix memory mapped IO range size computation
PCI: dwc: Remove superfluous shifting in definitions
PCI: dwc: Make use of GENMASK/FIELD_PREP
PCI: dwc: Make use of BIT() in constant definitions
PCI: dwc: Share code for dw_pcie_rd/wr_other_conf()
PCI: dwc: Make use of IS_ALIGNED()
PCI: imx6: Add code to request/control "pcie_aux" clock for i.MX8MQ
...
Diffstat (limited to 'drivers/pci')
44 files changed, 1514 insertions, 479 deletions
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig index 6671946dbf66..6012f3059acd 100644 --- a/drivers/pci/controller/Kconfig +++ b/drivers/pci/controller/Kconfig @@ -175,7 +175,7 @@ config PCIE_IPROC_MSI config PCIE_ALTERA bool "Altera PCIe controller" - depends on ARM || NIOS2 || COMPILE_TEST + depends on ARM || NIOS2 || ARM64 || COMPILE_TEST help Say Y here if you want to enable PCIe controller support on Altera FPGA. diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig index 548c58223868..6ea74b1c0d94 100644 --- a/drivers/pci/controller/dwc/Kconfig +++ b/drivers/pci/controller/dwc/Kconfig @@ -89,8 +89,8 @@ config PCI_EXYNOS select PCIE_DW_HOST config PCI_IMX6 - bool "Freescale i.MX6/7 PCIe controller" - depends on SOC_IMX6Q || SOC_IMX7D || (ARM && COMPILE_TEST) + bool "Freescale i.MX6/7/8 PCIe controller" + depends on SOC_IMX6Q || SOC_IMX7D || (ARM64 && ARCH_MXC) || COMPILE_TEST depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile index 7bcdcdf5024e..b5f3b83cc2b3 100644 --- a/drivers/pci/controller/dwc/Makefile +++ b/drivers/pci/controller/dwc/Makefile @@ -8,7 +8,7 @@ obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o obj-$(CONFIG_PCI_IMX6) += pci-imx6.o obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o -obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o +obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o pci-layerscape-ep.o obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c index a32d6dde7a57..ae84a69ae63a 100644 --- a/drivers/pci/controller/dwc/pci-dra7xx.c +++ b/drivers/pci/controller/dwc/pci-dra7xx.c @@ -81,6 +81,10 @@ #define MSI_REQ_GRANT BIT(0) #define MSI_VECTOR_SHIFT 7 +#define PCIE_1LANE_2LANE_SELECTION BIT(13) +#define PCIE_B1C0_MODE_SEL BIT(2) +#define PCIE_B0_B1_TSYNCEN BIT(0) + struct dra7xx_pcie { struct dw_pcie *pci; void __iomem *base; /* DT ti_conf */ @@ -93,6 +97,7 @@ struct dra7xx_pcie { struct dra7xx_pcie_of_data { enum dw_pcie_device_mode mode; + u32 b1co_mode_sel_mask; }; #define to_dra7xx_pcie(x) dev_get_drvdata((x)->dev) @@ -389,9 +394,22 @@ static int dra7xx_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no, return 0; } +static const struct pci_epc_features dra7xx_pcie_epc_features = { + .linkup_notifier = true, + .msi_capable = true, + .msix_capable = false, +}; + +static const struct pci_epc_features* +dra7xx_pcie_get_features(struct dw_pcie_ep *ep) +{ + return &dra7xx_pcie_epc_features; +} + static struct dw_pcie_ep_ops pcie_ep_ops = { .ep_init = dra7xx_pcie_ep_init, .raise_irq = dra7xx_pcie_raise_irq, + .get_features = dra7xx_pcie_get_features, }; static int __init dra7xx_add_pcie_ep(struct dra7xx_pcie *dra7xx, @@ -499,6 +517,10 @@ static int dra7xx_pcie_enable_phy(struct dra7xx_pcie *dra7xx) int i; for (i = 0; i < phy_count; i++) { + ret = phy_set_mode(dra7xx->phy[i], PHY_MODE_PCIE); + if (ret < 0) + goto err_phy; + ret = phy_init(dra7xx->phy[i]); if (ret < 0) goto err_phy; @@ -529,6 +551,26 @@ static const struct dra7xx_pcie_of_data dra7xx_pcie_ep_of_data = { .mode = DW_PCIE_EP_TYPE, }; +static const struct dra7xx_pcie_of_data dra746_pcie_rc_of_data = { + .b1co_mode_sel_mask = BIT(2), + .mode = DW_PCIE_RC_TYPE, +}; + +static const struct dra7xx_pcie_of_data dra726_pcie_rc_of_data = { + .b1co_mode_sel_mask = 
GENMASK(3, 2), + .mode = DW_PCIE_RC_TYPE, +}; + +static const struct dra7xx_pcie_of_data dra746_pcie_ep_of_data = { + .b1co_mode_sel_mask = BIT(2), + .mode = DW_PCIE_EP_TYPE, +}; + +static const struct dra7xx_pcie_of_data dra726_pcie_ep_of_data = { + .b1co_mode_sel_mask = GENMASK(3, 2), + .mode = DW_PCIE_EP_TYPE, +}; + static const struct of_device_id of_dra7xx_pcie_match[] = { { .compatible = "ti,dra7-pcie", @@ -538,6 +580,22 @@ static const struct of_device_id of_dra7xx_pcie_match[] = { .compatible = "ti,dra7-pcie-ep", .data = &dra7xx_pcie_ep_of_data, }, + { + .compatible = "ti,dra746-pcie-rc", + .data = &dra746_pcie_rc_of_data, + }, + { + .compatible = "ti,dra726-pcie-rc", + .data = &dra726_pcie_rc_of_data, + }, + { + .compatible = "ti,dra746-pcie-ep", + .data = &dra746_pcie_ep_of_data, + }, + { + .compatible = "ti,dra726-pcie-ep", + .data = &dra726_pcie_ep_of_data, + }, {}, }; @@ -583,6 +641,34 @@ static int dra7xx_pcie_unaligned_memaccess(struct device *dev) return ret; } +static int dra7xx_pcie_configure_two_lane(struct device *dev, + u32 b1co_mode_sel_mask) +{ + struct device_node *np = dev->of_node; + struct regmap *pcie_syscon; + unsigned int pcie_reg; + u32 mask; + u32 val; + + pcie_syscon = syscon_regmap_lookup_by_phandle(np, "ti,syscon-lane-sel"); + if (IS_ERR(pcie_syscon)) { + dev_err(dev, "unable to get ti,syscon-lane-sel\n"); + return -EINVAL; + } + + if (of_property_read_u32_index(np, "ti,syscon-lane-sel", 1, + &pcie_reg)) { + dev_err(dev, "couldn't get lane selection reg offset\n"); + return -EINVAL; + } + + mask = b1co_mode_sel_mask | PCIE_B0_B1_TSYNCEN; + val = PCIE_B1C0_MODE_SEL | PCIE_B0_B1_TSYNCEN; + regmap_update_bits(pcie_syscon, pcie_reg, mask, val); + + return 0; +} + static int __init dra7xx_pcie_probe(struct platform_device *pdev) { u32 reg; @@ -603,6 +689,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) const struct of_device_id *match; const struct dra7xx_pcie_of_data *data; enum dw_pcie_device_mode mode; + u32 b1co_mode_sel_mask; match = of_match_device(of_match_ptr(of_dra7xx_pcie_match), dev); if (!match) @@ -610,6 +697,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) data = (struct dra7xx_pcie_of_data *)match->data; mode = (enum dw_pcie_device_mode)data->mode; + b1co_mode_sel_mask = data->b1co_mode_sel_mask; dra7xx = devm_kzalloc(dev, sizeof(*dra7xx), GFP_KERNEL); if (!dra7xx) @@ -665,6 +753,12 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev) dra7xx->pci = pci; dra7xx->phy_count = phy_count; + if (phy_count == 2) { + ret = dra7xx_pcie_configure_two_lane(dev, b1co_mode_sel_mask); + if (ret < 0) + dra7xx->phy_count = 1; /* Fallback to x1 lane mode */ + } + ret = dra7xx_pcie_enable_phy(dra7xx); if (ret) { dev_err(dev, "failed to enable phy\n"); diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c index 80f843030e36..3d627f94a166 100644 --- a/drivers/pci/controller/dwc/pci-imx6.c +++ b/drivers/pci/controller/dwc/pci-imx6.c @@ -8,6 +8,7 @@ * Author: Sean Cross <xobs@kosagi.com> */ +#include <linux/bitfield.h> #include <linux/clk.h> #include <linux/delay.h> #include <linux/gpio.h> @@ -18,6 +19,7 @@ #include <linux/module.h> #include <linux/of_gpio.h> #include <linux/of_device.h> +#include <linux/of_address.h> #include <linux/pci.h> #include <linux/platform_device.h> #include <linux/regmap.h> @@ -32,6 +34,12 @@ #include "pcie-designware.h" +#define IMX8MQ_GPR_PCIE_REF_USE_PAD BIT(9) +#define IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN BIT(10) +#define 
IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE BIT(11) +#define IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE GENMASK(11, 8) +#define IMX8MQ_PCIE2_BASE_ADDR 0x33c00000 + #define to_imx6_pcie(x) dev_get_drvdata((x)->dev) enum imx6_pcie_variants { @@ -39,6 +47,15 @@ enum imx6_pcie_variants { IMX6SX, IMX6QP, IMX7D, + IMX8MQ, +}; + +#define IMX6_PCIE_FLAG_IMX6_PHY BIT(0) +#define IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE BIT(1) + +struct imx6_pcie_drvdata { + enum imx6_pcie_variants variant; + u32 flags; }; struct imx6_pcie { @@ -49,11 +66,12 @@ struct imx6_pcie { struct clk *pcie_phy; struct clk *pcie_inbound_axi; struct clk *pcie; + struct clk *pcie_aux; struct regmap *iomuxc_gpr; + u32 controller_id; struct reset_control *pciephy_reset; struct reset_control *apps_reset; struct reset_control *turnoff_reset; - enum imx6_pcie_variants variant; u32 tx_deemph_gen1; u32 tx_deemph_gen2_3p5db; u32 tx_deemph_gen2_6db; @@ -61,11 +79,13 @@ struct imx6_pcie { u32 tx_swing_low; int link_gen; struct regulator *vpcie; + void __iomem *phy_base; /* power domain for pcie */ struct device *pd_pcie; /* power domain for pcie phy */ struct device *pd_pcie_phy; + const struct imx6_pcie_drvdata *drvdata; }; /* Parameters for the waiting for PCIe PHY PLL to lock on i.MX7 */ @@ -101,7 +121,6 @@ struct imx6_pcie { #define PCIE_PHY_STAT_ACK_LOC 16 #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C -#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) /* PHY registers (not memory-mapped) */ #define PCIE_PHY_ATEOVRD 0x10 @@ -117,6 +136,23 @@ struct imx6_pcie { #define PCIE_PHY_RX_ASIC_OUT 0x100D #define PCIE_PHY_RX_ASIC_OUT_VALID (1 << 0) +/* iMX7 PCIe PHY registers */ +#define PCIE_PHY_CMN_REG4 0x14 +/* These are probably the bits that *aren't* DCC_FB_EN */ +#define PCIE_PHY_CMN_REG4_DCC_FB_EN 0x29 + +#define PCIE_PHY_CMN_REG15 0x54 +#define PCIE_PHY_CMN_REG15_DLY_4 BIT(2) +#define PCIE_PHY_CMN_REG15_PLL_PD BIT(5) +#define PCIE_PHY_CMN_REG15_OVRD_PLL_PD BIT(7) + +#define PCIE_PHY_CMN_REG24 0x90 +#define PCIE_PHY_CMN_REG24_RX_EQ BIT(6) +#define PCIE_PHY_CMN_REG24_RX_EQ_SEL BIT(3) + +#define PCIE_PHY_CMN_REG26 0x98 +#define PCIE_PHY_CMN_REG26_ATT_MODE 0xBC + #define PHY_RX_OVRD_IN_LO 0x1005 #define PHY_RX_OVRD_IN_LO_RX_DATA_EN (1 << 5) #define PHY_RX_OVRD_IN_LO_RX_PLL_EN (1 << 3) @@ -251,6 +287,9 @@ static void imx6_pcie_reset_phy(struct imx6_pcie *imx6_pcie) { u32 tmp; + if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY)) + return; + pcie_phy_read(imx6_pcie, PHY_RX_OVRD_IN_LO, &tmp); tmp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN | PHY_RX_OVRD_IN_LO_RX_PLL_EN); @@ -264,6 +303,7 @@ static void imx6_pcie_reset_phy(struct imx6_pcie *imx6_pcie) pcie_phy_write(imx6_pcie, PHY_RX_OVRD_IN_LO, tmp); } +#ifdef CONFIG_ARM /* Added for PCI abort handling */ static int imx6q_pcie_abort_handler(unsigned long addr, unsigned int fsr, struct pt_regs *regs) @@ -297,6 +337,7 @@ static int imx6q_pcie_abort_handler(unsigned long addr, return 1; } +#endif static int imx6_pcie_attach_pd(struct device *dev) { @@ -342,8 +383,9 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie) { struct device *dev = imx6_pcie->pci->dev; - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { case IMX7D: + case IMX8MQ: reset_control_assert(imx6_pcie->pciephy_reset); reset_control_assert(imx6_pcie->apps_reset); break; @@ -378,13 +420,20 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie) } } +static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie) +{ + WARN_ON(imx6_pcie->drvdata->variant != IMX8MQ); + return imx6_pcie->controller_id == 
1 ? IOMUXC_GPR16 : IOMUXC_GPR14; +} + static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie) { struct dw_pcie *pci = imx6_pcie->pci; struct device *dev = pci->dev; + unsigned int offset; int ret = 0; - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { case IMX6SX: ret = clk_prepare_enable(imx6_pcie->pcie_inbound_axi); if (ret) { @@ -412,6 +461,25 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie) break; case IMX7D: break; + case IMX8MQ: + ret = clk_prepare_enable(imx6_pcie->pcie_aux); + if (ret) { + dev_err(dev, "unable to enable pcie_aux clock\n"); + break; + } + + offset = imx6_pcie_grp_offset(imx6_pcie); + /* + * Set the over ride low and enabled + * make sure that REF_CLK is turned on. + */ + regmap_update_bits(imx6_pcie->iomuxc_gpr, offset, + IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE, + 0); + regmap_update_bits(imx6_pcie->iomuxc_gpr, offset, + IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN, + IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN); + break; } return ret; @@ -487,9 +555,32 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie) !imx6_pcie->gpio_active_high); } - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { + case IMX8MQ: + reset_control_deassert(imx6_pcie->pciephy_reset); + break; case IMX7D: reset_control_deassert(imx6_pcie->pciephy_reset); + + /* Workaround for ERR010728, failure of PCI-e PLL VCO to + * oscillate, especially when cold. This turns off "Duty-cycle + * Corrector" and other mysterious undocumented things. + */ + if (likely(imx6_pcie->phy_base)) { + /* De-assert DCC_FB_EN */ + writel(PCIE_PHY_CMN_REG4_DCC_FB_EN, + imx6_pcie->phy_base + PCIE_PHY_CMN_REG4); + /* Assert RX_EQS and RX_EQS_SEL */ + writel(PCIE_PHY_CMN_REG24_RX_EQ_SEL + | PCIE_PHY_CMN_REG24_RX_EQ, + imx6_pcie->phy_base + PCIE_PHY_CMN_REG24); + /* Assert ATT_MODE */ + writel(PCIE_PHY_CMN_REG26_ATT_MODE, + imx6_pcie->phy_base + PCIE_PHY_CMN_REG26); + } else { + dev_warn(dev, "Unable to apply ERR010728 workaround. 
DT missing fsl,imx7d-pcie-phy phandle ?\n"); + } + imx7d_pcie_wait_for_phy_pll_lock(imx6_pcie); break; case IMX6SX: @@ -523,9 +614,37 @@ err_pcie_phy: } } +static void imx6_pcie_configure_type(struct imx6_pcie *imx6_pcie) +{ + unsigned int mask, val; + + if (imx6_pcie->drvdata->variant == IMX8MQ && + imx6_pcie->controller_id == 1) { + mask = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE; + val = FIELD_PREP(IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE, + PCI_EXP_TYPE_ROOT_PORT); + } else { + mask = IMX6Q_GPR12_DEVICE_TYPE; + val = FIELD_PREP(IMX6Q_GPR12_DEVICE_TYPE, + PCI_EXP_TYPE_ROOT_PORT); + } + + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, mask, val); +} + static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie) { - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { + case IMX8MQ: + /* + * TODO: Currently this code assumes external + * oscillator is being used + */ + regmap_update_bits(imx6_pcie->iomuxc_gpr, + imx6_pcie_grp_offset(imx6_pcie), + IMX8MQ_GPR_PCIE_REF_USE_PAD, + IMX8MQ_GPR_PCIE_REF_USE_PAD); + break; case IMX7D: regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, 0); @@ -561,8 +680,7 @@ static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie) break; } - regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, - IMX6Q_GPR12_DEVICE_TYPE, PCI_EXP_TYPE_ROOT_PORT << 12); + imx6_pcie_configure_type(imx6_pcie); } static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie) @@ -571,6 +689,9 @@ static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie) int mult, div; u32 val; + if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY)) + return 0; + switch (phy_rate) { case 125000000: /* @@ -647,7 +768,7 @@ static void imx6_pcie_ltssm_enable(struct device *dev) { struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { case IMX6Q: case IMX6SX: case IMX6QP: @@ -656,6 +777,7 @@ static void imx6_pcie_ltssm_enable(struct device *dev) IMX6Q_GPR12_PCIE_CTL_2); break; case IMX7D: + case IMX8MQ: reset_control_deassert(imx6_pcie->apps_reset); break; } @@ -700,7 +822,8 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie) tmp |= PORT_LOGIC_SPEED_CHANGE; dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp); - if (imx6_pcie->variant != IMX7D) { + if (imx6_pcie->drvdata->flags & + IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE) { /* * On i.MX7, DIRECT_SPEED_CHANGE behaves differently * from i.MX6 family when no link speed transition @@ -797,7 +920,7 @@ static void imx6_pcie_ltssm_disable(struct device *dev) { struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { case IMX6SX: case IMX6QP: regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, @@ -823,7 +946,7 @@ static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie) } /* Others poke directly at IOMUXC registers */ - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { case IMX6SX: regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX6SX_GPR12_PCIE_PM_TURN_OFF, @@ -853,7 +976,7 @@ static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie) clk_disable_unprepare(imx6_pcie->pcie_phy); clk_disable_unprepare(imx6_pcie->pcie_bus); - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { case IMX6SX: clk_disable_unprepare(imx6_pcie->pcie_inbound_axi); break; @@ -862,6 +985,9 @@ static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie) IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, IMX7D_GPR12_PCIE_PHY_REFCLK_SEL); 
break; + case IMX8MQ: + clk_disable_unprepare(imx6_pcie->pcie_aux); + break; default: break; } @@ -869,8 +995,8 @@ static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie) static inline bool imx6_pcie_supports_suspend(struct imx6_pcie *imx6_pcie) { - return (imx6_pcie->variant == IMX7D || - imx6_pcie->variant == IMX6SX); + return (imx6_pcie->drvdata->variant == IMX7D || + imx6_pcie->drvdata->variant == IMX6SX); } static int imx6_pcie_suspend_noirq(struct device *dev) @@ -919,6 +1045,7 @@ static int imx6_pcie_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct dw_pcie *pci; struct imx6_pcie *imx6_pcie; + struct device_node *np; struct resource *dbi_base; struct device_node *node = dev->of_node; int ret; @@ -936,8 +1063,24 @@ static int imx6_pcie_probe(struct platform_device *pdev) pci->ops = &dw_pcie_ops; imx6_pcie->pci = pci; - imx6_pcie->variant = - (enum imx6_pcie_variants)of_device_get_match_data(dev); + imx6_pcie->drvdata = of_device_get_match_data(dev); + + /* Find the PHY if one is defined, only imx7d uses it */ + np = of_parse_phandle(node, "fsl,imx7d-pcie-phy", 0); + if (np) { + struct resource res; + + ret = of_address_to_resource(np, 0, &res); + if (ret) { + dev_err(dev, "Unable to map PCIe PHY\n"); + return ret; + } + imx6_pcie->phy_base = devm_ioremap_resource(dev, &res); + if (IS_ERR(imx6_pcie->phy_base)) { + dev_err(dev, "Unable to map PCIe PHY\n"); + return PTR_ERR(imx6_pcie->phy_base); + } + } dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0); pci->dbi_base = devm_ioremap_resource(dev, dbi_base); @@ -981,7 +1124,7 @@ static int imx6_pcie_probe(struct platform_device *pdev) return PTR_ERR(imx6_pcie->pcie); } - switch (imx6_pcie->variant) { + switch (imx6_pcie->drvdata->variant) { case IMX6SX: imx6_pcie->pcie_inbound_axi = devm_clk_get(dev, "pcie_inbound_axi"); @@ -990,7 +1133,17 @@ static int imx6_pcie_probe(struct platform_device *pdev) return PTR_ERR(imx6_pcie->pcie_inbound_axi); } break; + case IMX8MQ: + imx6_pcie->pcie_aux = devm_clk_get(dev, "pcie_aux"); + if (IS_ERR(imx6_pcie->pcie_aux)) { + dev_err(dev, "pcie_aux clock source missing or invalid\n"); + return PTR_ERR(imx6_pcie->pcie_aux); + } + /* fall through */ case IMX7D: + if (dbi_base->start == IMX8MQ_PCIE2_BASE_ADDR) + imx6_pcie->controller_id = 1; + imx6_pcie->pciephy_reset = devm_reset_control_get_exclusive(dev, "pciephy"); if (IS_ERR(imx6_pcie->pciephy_reset)) { @@ -1087,11 +1240,36 @@ static void imx6_pcie_shutdown(struct platform_device *pdev) imx6_pcie_assert_core_reset(imx6_pcie); } +static const struct imx6_pcie_drvdata drvdata[] = { + [IMX6Q] = { + .variant = IMX6Q, + .flags = IMX6_PCIE_FLAG_IMX6_PHY | + IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE, + }, + [IMX6SX] = { + .variant = IMX6SX, + .flags = IMX6_PCIE_FLAG_IMX6_PHY | + IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE, + }, + [IMX6QP] = { + .variant = IMX6QP, + .flags = IMX6_PCIE_FLAG_IMX6_PHY | + IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE, + }, + [IMX7D] = { + .variant = IMX7D, + }, + [IMX8MQ] = { + .variant = IMX8MQ, + }, +}; + static const struct of_device_id imx6_pcie_of_match[] = { - { .compatible = "fsl,imx6q-pcie", .data = (void *)IMX6Q, }, - { .compatible = "fsl,imx6sx-pcie", .data = (void *)IMX6SX, }, - { .compatible = "fsl,imx6qp-pcie", .data = (void *)IMX6QP, }, - { .compatible = "fsl,imx7d-pcie", .data = (void *)IMX7D, }, + { .compatible = "fsl,imx6q-pcie", .data = &drvdata[IMX6Q], }, + { .compatible = "fsl,imx6sx-pcie", .data = &drvdata[IMX6SX], }, + { .compatible = "fsl,imx6qp-pcie", .data = &drvdata[IMX6QP], }, + { .compatible = 
"fsl,imx7d-pcie", .data = &drvdata[IMX7D], }, + { .compatible = "fsl,imx8mq-pcie", .data = &drvdata[IMX8MQ], } , {}, }; @@ -1108,6 +1286,7 @@ static struct platform_driver imx6_pcie_driver = { static int __init imx6_pcie_init(void) { +#ifdef CONFIG_ARM /* * Since probe() can be deferred we need to make sure that * hook_fault_code is not called after __init memory is freed @@ -1117,6 +1296,7 @@ static int __init imx6_pcie_init(void) */ hook_fault_code(8, imx6q_pcie_abort_handler, SIGBUS, 0, "external abort on non-linefetch"); +#endif return platform_driver_register(&imx6_pcie_driver); } diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c new file mode 100644 index 000000000000..a42c9c3ae1cc --- /dev/null +++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c @@ -0,0 +1,156 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCIe controller EP driver for Freescale Layerscape SoCs + * + * Copyright (C) 2018 NXP Semiconductor. + * + * Author: Xiaowei Bao <xiaowei.bao@nxp.com> + */ + +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/of_pci.h> +#include <linux/of_platform.h> +#include <linux/of_address.h> +#include <linux/pci.h> +#include <linux/platform_device.h> +#include <linux/resource.h> + +#include "pcie-designware.h" + +#define PCIE_DBI2_OFFSET 0x1000 /* DBI2 base address*/ + +struct ls_pcie_ep { + struct dw_pcie *pci; +}; + +#define to_ls_pcie_ep(x) dev_get_drvdata((x)->dev) + +static int ls_pcie_establish_link(struct dw_pcie *pci) +{ + return 0; +} + +static const struct dw_pcie_ops ls_pcie_ep_ops = { + .start_link = ls_pcie_establish_link, +}; + +static const struct of_device_id ls_pcie_ep_of_match[] = { + { .compatible = "fsl,ls-pcie-ep",}, + { }, +}; + +static const struct pci_epc_features ls_pcie_epc_features = { + .linkup_notifier = false, + .msi_capable = true, + .msix_capable = false, +}; + +static const struct pci_epc_features* +ls_pcie_ep_get_features(struct dw_pcie_ep *ep) +{ + return &ls_pcie_epc_features; +} + +static void ls_pcie_ep_init(struct dw_pcie_ep *ep) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + enum pci_barno bar; + + for (bar = BAR_0; bar <= BAR_5; bar++) + dw_pcie_ep_reset_bar(pci, bar); +} + +static int ls_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, + enum pci_epc_irq_type type, u16 interrupt_num) +{ + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + + switch (type) { + case PCI_EPC_IRQ_LEGACY: + return dw_pcie_ep_raise_legacy_irq(ep, func_no); + case PCI_EPC_IRQ_MSI: + return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); + case PCI_EPC_IRQ_MSIX: + return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num); + default: + dev_err(pci->dev, "UNKNOWN IRQ type\n"); + return -EINVAL; + } +} + +static struct dw_pcie_ep_ops pcie_ep_ops = { + .ep_init = ls_pcie_ep_init, + .raise_irq = ls_pcie_ep_raise_irq, + .get_features = ls_pcie_ep_get_features, +}; + +static int __init ls_add_pcie_ep(struct ls_pcie_ep *pcie, + struct platform_device *pdev) +{ + struct dw_pcie *pci = pcie->pci; + struct device *dev = pci->dev; + struct dw_pcie_ep *ep; + struct resource *res; + int ret; + + ep = &pci->ep; + ep->ops = &pcie_ep_ops; + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); + if (!res) + return -EINVAL; + + ep->phys_base = res->start; + ep->addr_size = resource_size(res); + + ret = dw_pcie_ep_init(ep); + if (ret) { + dev_err(dev, "failed to initialize endpoint\n"); + return ret; + } + + return 0; +} + +static int __init ls_pcie_ep_probe(struct platform_device 
*pdev) +{ + struct device *dev = &pdev->dev; + struct dw_pcie *pci; + struct ls_pcie_ep *pcie; + struct resource *dbi_base; + int ret; + + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); + if (!pcie) + return -ENOMEM; + + pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL); + if (!pci) + return -ENOMEM; + + dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs"); + pci->dbi_base = devm_pci_remap_cfg_resource(dev, dbi_base); + if (IS_ERR(pci->dbi_base)) + return PTR_ERR(pci->dbi_base); + + pci->dbi_base2 = pci->dbi_base + PCIE_DBI2_OFFSET; + pci->dev = dev; + pci->ops = &ls_pcie_ep_ops; + pcie->pci = pci; + + platform_set_drvdata(pdev, pcie); + + ret = ls_add_pcie_ep(pcie, pdev); + + return ret; +} + +static struct platform_driver ls_pcie_ep_driver = { + .driver = { + .name = "layerscape-pcie-ep", + .of_match_table = ls_pcie_ep_of_match, + .suppress_bind_attrs = true, + }, +}; +builtin_platform_driver_probe(ls_pcie_ep_driver, ls_pcie_ep_probe); diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c index a543c45c7224..24f5a775ad34 100644 --- a/drivers/pci/controller/dwc/pcie-designware-ep.c +++ b/drivers/pci/controller/dwc/pcie-designware-ep.c @@ -355,6 +355,17 @@ static int dw_pcie_ep_start(struct pci_epc *epc) return pci->ops->start_link(pci); } +static const struct pci_epc_features* +dw_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) +{ + struct dw_pcie_ep *ep = epc_get_drvdata(epc); + + if (!ep->ops->get_features) + return NULL; + + return ep->ops->get_features(ep); +} + static const struct pci_epc_ops epc_ops = { .write_header = dw_pcie_ep_write_header, .set_bar = dw_pcie_ep_set_bar, @@ -368,6 +379,7 @@ static const struct pci_epc_ops epc_ops = { .raise_irq = dw_pcie_ep_raise_irq, .start = dw_pcie_ep_start, .stop = dw_pcie_ep_stop, + .get_features = dw_pcie_ep_get_features, }; int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no) @@ -465,8 +477,10 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no, iounmap(msix_tbl); - if (vec_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT) + if (vec_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT) { + dev_dbg(pci->dev, "MSI-X entry ctrl set\n"); return -EPERM; + } ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, epc->mem->page_size); diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c index 721d60a5d9e4..25087d3c9a82 100644 --- a/drivers/pci/controller/dwc/pcie-designware-host.c +++ b/drivers/pci/controller/dwc/pcie-designware-host.c @@ -120,9 +120,9 @@ static void dw_chained_msi_isr(struct irq_desc *desc) chained_irq_exit(chip, desc); } -static void dw_pci_setup_msi_msg(struct irq_data *data, struct msi_msg *msg) +static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg) { - struct pcie_port *pp = irq_data_get_irq_chip_data(data); + struct pcie_port *pp = irq_data_get_irq_chip_data(d); struct dw_pcie *pci = to_dw_pcie_from_pp(pp); u64 msi_target; @@ -135,61 +135,61 @@ static void dw_pci_setup_msi_msg(struct irq_data *data, struct msi_msg *msg) msg->address_hi = upper_32_bits(msi_target); if (pp->ops->get_msi_data) - msg->data = pp->ops->get_msi_data(pp, data->hwirq); + msg->data = pp->ops->get_msi_data(pp, d->hwirq); else - msg->data = data->hwirq; + msg->data = d->hwirq; dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n", - (int)data->hwirq, msg->address_hi, msg->address_lo); + (int)d->hwirq, msg->address_hi, msg->address_lo); } -static int 
dw_pci_msi_set_affinity(struct irq_data *irq_data, +static int dw_pci_msi_set_affinity(struct irq_data *d, const struct cpumask *mask, bool force) { return -EINVAL; } -static void dw_pci_bottom_mask(struct irq_data *data) +static void dw_pci_bottom_mask(struct irq_data *d) { - struct pcie_port *pp = irq_data_get_irq_chip_data(data); + struct pcie_port *pp = irq_data_get_irq_chip_data(d); unsigned int res, bit, ctrl; unsigned long flags; raw_spin_lock_irqsave(&pp->lock, flags); if (pp->ops->msi_clear_irq) { - pp->ops->msi_clear_irq(pp, data->hwirq); + pp->ops->msi_clear_irq(pp, d->hwirq); } else { - ctrl = data->hwirq / MAX_MSI_IRQS_PER_CTRL; + ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; - bit = data->hwirq % MAX_MSI_IRQS_PER_CTRL; + bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; - pp->irq_status[ctrl] &= ~(1 << bit); + pp->irq_mask[ctrl] |= BIT(bit); dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4, - ~pp->irq_status[ctrl]); + pp->irq_mask[ctrl]); } raw_spin_unlock_irqrestore(&pp->lock, flags); } -static void dw_pci_bottom_unmask(struct irq_data *data) +static void dw_pci_bottom_unmask(struct irq_data *d) { - struct pcie_port *pp = irq_data_get_irq_chip_data(data); + struct pcie_port *pp = irq_data_get_irq_chip_data(d); unsigned int res, bit, ctrl; unsigned long flags; raw_spin_lock_irqsave(&pp->lock, flags); if (pp->ops->msi_set_irq) { - pp->ops->msi_set_irq(pp, data->hwirq); + pp->ops->msi_set_irq(pp, d->hwirq); } else { - ctrl = data->hwirq / MAX_MSI_IRQS_PER_CTRL; + ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; - bit = data->hwirq % MAX_MSI_IRQS_PER_CTRL; + bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; - pp->irq_status[ctrl] |= 1 << bit; + pp->irq_mask[ctrl] &= ~BIT(bit); dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4, - ~pp->irq_status[ctrl]); + pp->irq_mask[ctrl]); } raw_spin_unlock_irqrestore(&pp->lock, flags); @@ -207,7 +207,7 @@ static void dw_pci_bottom_ack(struct irq_data *d) raw_spin_lock_irqsave(&pp->lock, flags); - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + res, 4, 1 << bit); + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + res, 4, BIT(bit)); if (pp->ops->msi_irq_ack) pp->ops->msi_irq_ack(d->hwirq, pp); @@ -255,13 +255,13 @@ static int dw_pcie_irq_domain_alloc(struct irq_domain *domain, static void dw_pcie_irq_domain_free(struct irq_domain *domain, unsigned int virq, unsigned int nr_irqs) { - struct irq_data *data = irq_domain_get_irq_data(domain, virq); - struct pcie_port *pp = irq_data_get_irq_chip_data(data); + struct irq_data *d = irq_domain_get_irq_data(domain, virq); + struct pcie_port *pp = irq_data_get_irq_chip_data(d); unsigned long flags; raw_spin_lock_irqsave(&pp->lock, flags); - bitmap_release_region(pp->msi_irq_in_use, data->hwirq, + bitmap_release_region(pp->msi_irq_in_use, d->hwirq, order_base_2(nr_irqs)); raw_spin_unlock_irqrestore(&pp->lock, flags); @@ -439,7 +439,7 @@ int dw_pcie_host_init(struct pcie_port *pp) if (ret) pci->num_viewport = 2; - if (IS_ENABLED(CONFIG_PCI_MSI)) { + if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) { /* * If a specific SoC driver needs to change the * default number of vectors, it needs to implement @@ -512,8 +512,9 @@ error: return ret; } -static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, - u32 devfn, int where, int size, u32 *val) +static int dw_pcie_access_other_conf(struct pcie_port *pp, struct pci_bus *bus, + u32 devfn, int where, int size, u32 *val, + bool write) { int ret, type; u32 busdev, cfg_size; @@ -521,9 +522,6 @@ 
static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, void __iomem *va_cfg_base; struct dw_pcie *pci = to_dw_pcie_from_pp(pp); - if (pp->ops->rd_other_conf) - return pp->ops->rd_other_conf(pp, bus, devfn, where, size, val); - busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | PCIE_ATU_FUNC(PCI_FUNC(devfn)); @@ -542,7 +540,11 @@ static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1, type, cpu_addr, busdev, cfg_size); - ret = dw_pcie_read(va_cfg_base + where, size, val); + if (write) + ret = dw_pcie_write(va_cfg_base + where, size, *val); + else + ret = dw_pcie_read(va_cfg_base + where, size, val); + if (pci->num_viewport <= 2) dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1, PCIE_ATU_TYPE_IO, pp->io_base, @@ -551,43 +553,26 @@ static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, return ret; } +static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, + u32 devfn, int where, int size, u32 *val) +{ + if (pp->ops->rd_other_conf) + return pp->ops->rd_other_conf(pp, bus, devfn, where, + size, val); + + return dw_pcie_access_other_conf(pp, bus, devfn, where, size, val, + false); +} + static int dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, u32 devfn, int where, int size, u32 val) { - int ret, type; - u32 busdev, cfg_size; - u64 cpu_addr; - void __iomem *va_cfg_base; - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); - if (pp->ops->wr_other_conf) - return pp->ops->wr_other_conf(pp, bus, devfn, where, size, val); - - busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | - PCIE_ATU_FUNC(PCI_FUNC(devfn)); + return pp->ops->wr_other_conf(pp, bus, devfn, where, + size, val); - if (bus->parent->number == pp->root_bus_nr) { - type = PCIE_ATU_TYPE_CFG0; - cpu_addr = pp->cfg0_base; - cfg_size = pp->cfg0_size; - va_cfg_base = pp->va_cfg0_base; - } else { - type = PCIE_ATU_TYPE_CFG1; - cpu_addr = pp->cfg1_base; - cfg_size = pp->cfg1_size; - va_cfg_base = pp->va_cfg1_base; - } - - dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1, - type, cpu_addr, - busdev, cfg_size); - ret = dw_pcie_write(va_cfg_base + where, size, val); - if (pci->num_viewport <= 2) - dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1, - PCIE_ATU_TYPE_IO, pp->io_base, - pp->io_bus_addr, pp->io_size); - - return ret; + return dw_pcie_access_other_conf(pp, bus, devfn, where, size, &val, + true); } static int dw_pcie_valid_device(struct pcie_port *pp, struct pci_bus *bus, @@ -665,13 +650,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp) /* Initialize IRQ Status array */ for (ctrl = 0; ctrl < num_ctrls; ctrl++) { + pp->irq_mask[ctrl] = ~0; dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + (ctrl * MSI_REG_CTRL_BLOCK_SIZE), - 4, ~0); + 4, pp->irq_mask[ctrl]); dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 4, ~0); - pp->irq_status[ctrl] = 0; } /* Setup RC BARs */ diff --git a/drivers/pci/controller/dwc/pcie-designware-plat.c b/drivers/pci/controller/dwc/pcie-designware-plat.c index c12bf794d69c..932dbd0b34b6 100644 --- a/drivers/pci/controller/dwc/pcie-designware-plat.c +++ b/drivers/pci/controller/dwc/pcie-designware-plat.c @@ -13,11 +13,9 @@ #include <linux/kernel.h> #include <linux/init.h> #include <linux/of_device.h> -#include <linux/of_gpio.h> #include <linux/pci.h> #include <linux/platform_device.h> #include <linux/resource.h> -#include <linux/signal.h> #include <linux/types.h> #include <linux/regmap.h> @@ -70,14 
+68,10 @@ static const struct dw_pcie_ops dw_pcie_ops = { static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep) { struct dw_pcie *pci = to_dw_pcie_from_ep(ep); - struct pci_epc *epc = ep->epc; enum pci_barno bar; for (bar = BAR_0; bar <= BAR_5; bar++) dw_pcie_ep_reset_bar(pci, bar); - - epc->features |= EPC_FEATURE_NO_LINKUP_NOTIFIER; - epc->features |= EPC_FEATURE_MSIX_AVAILABLE; } static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, @@ -100,9 +94,22 @@ static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, return 0; } +static const struct pci_epc_features dw_plat_pcie_epc_features = { + .linkup_notifier = false, + .msi_capable = true, + .msix_capable = true, +}; + +static const struct pci_epc_features* +dw_plat_pcie_get_features(struct dw_pcie_ep *ep) +{ + return &dw_plat_pcie_epc_features; +} + static struct dw_pcie_ep_ops pcie_ep_ops = { .ep_init = dw_plat_pcie_ep_init, .raise_irq = dw_plat_pcie_ep_raise_irq, + .get_features = dw_plat_pcie_get_features, }; static int dw_plat_add_pcie_port(struct dw_plat_pcie *dw_plat_pcie, diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c index 93ef8c31fb39..31f6331ca46f 100644 --- a/drivers/pci/controller/dwc/pcie-designware.c +++ b/drivers/pci/controller/dwc/pcie-designware.c @@ -22,7 +22,7 @@ int dw_pcie_read(void __iomem *addr, int size, u32 *val) { - if ((uintptr_t)addr & (size - 1)) { + if (!IS_ALIGNED((uintptr_t)addr, size)) { *val = 0; return PCIBIOS_BAD_REGISTER_NUMBER; } @@ -43,7 +43,7 @@ int dw_pcie_read(void __iomem *addr, int size, u32 *val) int dw_pcie_write(void __iomem *addr, int size, u32 val) { - if ((uintptr_t)addr & (size - 1)) + if (!IS_ALIGNED((uintptr_t)addr, size)) return PCIBIOS_BAD_REGISTER_NUMBER; if (size == 4) @@ -306,7 +306,7 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, int index, } dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index); - dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~PCIE_ATU_ENABLE); + dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, (u32)~PCIE_ATU_ENABLE); } int dw_pcie_wait_for_link(struct dw_pcie *pci) diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h index 9943d8c68335..377f4c0b52da 100644 --- a/drivers/pci/controller/dwc/pcie-designware.h +++ b/drivers/pci/controller/dwc/pcie-designware.h @@ -11,6 +11,7 @@ #ifndef _PCIE_DESIGNWARE_H #define _PCIE_DESIGNWARE_H +#include <linux/bitfield.h> #include <linux/dma-mapping.h> #include <linux/irq.h> #include <linux/msi.h> @@ -30,23 +31,25 @@ /* Synopsys-specific PCIe configuration registers */ #define PCIE_PORT_LINK_CONTROL 0x710 -#define PORT_LINK_MODE_MASK (0x3f << 16) -#define PORT_LINK_MODE_1_LANES (0x1 << 16) -#define PORT_LINK_MODE_2_LANES (0x3 << 16) -#define PORT_LINK_MODE_4_LANES (0x7 << 16) -#define PORT_LINK_MODE_8_LANES (0xf << 16) +#define PORT_LINK_MODE_MASK GENMASK(21, 16) +#define PORT_LINK_MODE(n) FIELD_PREP(PORT_LINK_MODE_MASK, n) +#define PORT_LINK_MODE_1_LANES PORT_LINK_MODE(0x1) +#define PORT_LINK_MODE_2_LANES PORT_LINK_MODE(0x3) +#define PORT_LINK_MODE_4_LANES PORT_LINK_MODE(0x7) +#define PORT_LINK_MODE_8_LANES PORT_LINK_MODE(0xf) #define PCIE_PORT_DEBUG0 0x728 #define PORT_LOGIC_LTSSM_STATE_MASK 0x1f #define PORT_LOGIC_LTSSM_STATE_L0 0x11 #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C -#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) -#define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8) -#define PORT_LOGIC_LINK_WIDTH_1_LANES (0x1 << 8) -#define PORT_LOGIC_LINK_WIDTH_2_LANES (0x2 << 8) -#define 
PORT_LOGIC_LINK_WIDTH_4_LANES (0x4 << 8) -#define PORT_LOGIC_LINK_WIDTH_8_LANES (0x8 << 8) +#define PORT_LOGIC_SPEED_CHANGE BIT(17) +#define PORT_LOGIC_LINK_WIDTH_MASK GENMASK(12, 8) +#define PORT_LOGIC_LINK_WIDTH(n) FIELD_PREP(PORT_LOGIC_LINK_WIDTH_MASK, n) +#define PORT_LOGIC_LINK_WIDTH_1_LANES PORT_LOGIC_LINK_WIDTH(0x1) +#define PORT_LOGIC_LINK_WIDTH_2_LANES PORT_LOGIC_LINK_WIDTH(0x2) +#define PORT_LOGIC_LINK_WIDTH_4_LANES PORT_LOGIC_LINK_WIDTH(0x4) +#define PORT_LOGIC_LINK_WIDTH_8_LANES PORT_LOGIC_LINK_WIDTH(0x8) #define PCIE_MSI_ADDR_LO 0x820 #define PCIE_MSI_ADDR_HI 0x824 @@ -55,30 +58,30 @@ #define PCIE_MSI_INTR0_STATUS 0x830 #define PCIE_ATU_VIEWPORT 0x900 -#define PCIE_ATU_REGION_INBOUND (0x1 << 31) -#define PCIE_ATU_REGION_OUTBOUND (0x0 << 31) -#define PCIE_ATU_REGION_INDEX2 (0x2 << 0) -#define PCIE_ATU_REGION_INDEX1 (0x1 << 0) -#define PCIE_ATU_REGION_INDEX0 (0x0 << 0) +#define PCIE_ATU_REGION_INBOUND BIT(31) +#define PCIE_ATU_REGION_OUTBOUND 0 +#define PCIE_ATU_REGION_INDEX2 0x2 +#define PCIE_ATU_REGION_INDEX1 0x1 +#define PCIE_ATU_REGION_INDEX0 0x0 #define PCIE_ATU_CR1 0x904 -#define PCIE_ATU_TYPE_MEM (0x0 << 0) -#define PCIE_ATU_TYPE_IO (0x2 << 0) -#define PCIE_ATU_TYPE_CFG0 (0x4 << 0) -#define PCIE_ATU_TYPE_CFG1 (0x5 << 0) +#define PCIE_ATU_TYPE_MEM 0x0 +#define PCIE_ATU_TYPE_IO 0x2 +#define PCIE_ATU_TYPE_CFG0 0x4 +#define PCIE_ATU_TYPE_CFG1 0x5 #define PCIE_ATU_CR2 0x908 -#define PCIE_ATU_ENABLE (0x1 << 31) -#define PCIE_ATU_BAR_MODE_ENABLE (0x1 << 30) +#define PCIE_ATU_ENABLE BIT(31) +#define PCIE_ATU_BAR_MODE_ENABLE BIT(30) #define PCIE_ATU_LOWER_BASE 0x90C #define PCIE_ATU_UPPER_BASE 0x910 #define PCIE_ATU_LIMIT 0x914 #define PCIE_ATU_LOWER_TARGET 0x918 -#define PCIE_ATU_BUS(x) (((x) & 0xff) << 24) -#define PCIE_ATU_DEV(x) (((x) & 0x1f) << 19) -#define PCIE_ATU_FUNC(x) (((x) & 0x7) << 16) +#define PCIE_ATU_BUS(x) FIELD_PREP(GENMASK(31, 24), x) +#define PCIE_ATU_DEV(x) FIELD_PREP(GENMASK(23, 19), x) +#define PCIE_ATU_FUNC(x) FIELD_PREP(GENMASK(18, 16), x) #define PCIE_ATU_UPPER_TARGET 0x91C #define PCIE_MISC_CONTROL_1_OFF 0x8BC -#define PCIE_DBI_RO_WR_EN (0x1 << 0) +#define PCIE_DBI_RO_WR_EN BIT(0) /* * iATU Unroll-specific register definitions @@ -105,7 +108,7 @@ ((region) << 9) #define PCIE_GET_ATU_INB_UNR_REG_OFFSET(region) \ - (((region) << 9) | (0x1 << 8)) + (((region) << 9) | BIT(8)) #define MAX_MSI_IRQS 256 #define MAX_MSI_IRQS_PER_CTRL 32 @@ -177,7 +180,7 @@ struct pcie_port { struct irq_domain *msi_domain; dma_addr_t msi_data; u32 num_vectors; - u32 irq_status[MAX_MSI_CTRLS]; + u32 irq_mask[MAX_MSI_CTRLS]; raw_spinlock_t lock; DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS); }; @@ -192,6 +195,7 @@ struct dw_pcie_ep_ops { void (*ep_init)(struct dw_pcie_ep *ep); int (*raise_irq)(struct dw_pcie_ep *ep, u8 func_no, enum pci_epc_irq_type type, u16 interrupt_num); + const struct pci_epc_features* (*get_features)(struct dw_pcie_ep *ep); }; struct dw_pcie_ep { diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c index d185ea5fe996..a7f703556790 100644 --- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -1228,7 +1228,7 @@ static int qcom_pcie_probe(struct platform_device *pdev) pcie->ops = of_device_get_match_data(dev); - pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW); + pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH); if (IS_ERR(pcie->reset)) { ret = PTR_ERR(pcie->reset); goto err_pm_runtime_put; diff --git a/drivers/pci/controller/pci-aardvark.c 
b/drivers/pci/controller/pci-aardvark.c index 750081c1cb48..eb58dfdaba1b 100644 --- a/drivers/pci/controller/pci-aardvark.c +++ b/drivers/pci/controller/pci-aardvark.c @@ -466,7 +466,7 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, } } -struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = { +static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = { .read_pcie = advk_pci_bridge_emul_pcie_conf_read, .write_pcie = advk_pci_bridge_emul_pcie_conf_write, }; @@ -499,7 +499,7 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie) bridge->data = pcie; bridge->ops = &advk_pci_bridge_emul_ops; - pci_bridge_emul_init(bridge); + pci_bridge_emul_init(bridge, 0); } diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 9ba4d12c179c..95441a35eceb 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -391,14 +391,6 @@ struct hv_interrupt_entry { u32 data; }; -#define HV_VP_SET_BANK_COUNT_MAX 5 /* current implementation limit */ - -struct hv_vp_set { - u64 format; /* 0 (HvGenericSetSparse4k) */ - u64 valid_banks; - u64 masks[HV_VP_SET_BANK_COUNT_MAX]; -}; - /* * flags for hv_device_interrupt_target.flags */ @@ -410,7 +402,7 @@ struct hv_device_interrupt_target { u32 flags; union { u64 vp_mask; - struct hv_vp_set vp_set; + struct hv_vpset vp_set; }; }; @@ -420,7 +412,7 @@ struct retarget_msi_interrupt { struct hv_interrupt_entry int_entry; u64 reserved2; struct hv_device_interrupt_target int_target; -} __packed; +} __packed __aligned(8); /* * Driver specific state. @@ -460,12 +452,16 @@ struct hv_pcibus_device { struct msi_controller msi_chip; struct irq_domain *irq_domain; - /* hypercall arg, must not cross page boundary */ - struct retarget_msi_interrupt retarget_msi_interrupt_params; - spinlock_t retarget_msi_interrupt_lock; struct workqueue_struct *wq; + + /* hypercall arg, must not cross page boundary */ + struct retarget_msi_interrupt retarget_msi_interrupt_params; + + /* + * Don't put anything here: retarget_msi_interrupt_params must be last + */ }; /* @@ -910,12 +906,12 @@ static void hv_irq_unmask(struct irq_data *data) struct retarget_msi_interrupt *params; struct hv_pcibus_device *hbus; struct cpumask *dest; + cpumask_var_t tmp; struct pci_bus *pbus; struct pci_dev *pdev; unsigned long flags; u32 var_size = 0; - int cpu_vmbus; - int cpu; + int cpu, nr_bank; u64 res; dest = irq_data_get_effective_affinity_mask(data); @@ -955,28 +951,27 @@ static void hv_irq_unmask(struct irq_data *data) */ params->int_target.flags |= HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET; - params->int_target.vp_set.valid_banks = - (1ull << HV_VP_SET_BANK_COUNT_MAX) - 1; - /* - * var-sized hypercall, var-size starts after vp_mask (thus - * vp_set.format does not count, but vp_set.valid_banks does). 
- */ - var_size = 1 + HV_VP_SET_BANK_COUNT_MAX; + if (!alloc_cpumask_var(&tmp, GFP_ATOMIC)) { + res = 1; + goto exit_unlock; + } - for_each_cpu_and(cpu, dest, cpu_online_mask) { - cpu_vmbus = hv_cpu_number_to_vp_number(cpu); + cpumask_and(tmp, dest, cpu_online_mask); + nr_bank = cpumask_to_vpset(¶ms->int_target.vp_set, tmp); + free_cpumask_var(tmp); - if (cpu_vmbus >= HV_VP_SET_BANK_COUNT_MAX * 64) { - dev_err(&hbus->hdev->device, - "too high CPU %d", cpu_vmbus); - res = 1; - goto exit_unlock; - } - - params->int_target.vp_set.masks[cpu_vmbus / 64] |= - (1ULL << (cpu_vmbus & 63)); + if (nr_bank <= 0) { + res = 1; + goto exit_unlock; } + + /* + * var-sized hypercall, var-size starts after vp_mask (thus + * vp_set.format does not count, but vp_set.valid_bank_mask + * does). + */ + var_size = 1 + nr_bank; } else { for_each_cpu_and(cpu, dest, cpu_online_mask) { params->int_target.vp_mask |= diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c index fa0fc46edb0c..d3a0419e42f2 100644 --- a/drivers/pci/controller/pci-mvebu.c +++ b/drivers/pci/controller/pci-mvebu.c @@ -583,7 +583,7 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port) bridge->data = port; bridge->ops = &mvebu_pci_bridge_emul_ops; - pci_bridge_emul_init(bridge); + pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR); } static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys) diff --git a/drivers/pci/controller/pcie-altera.c b/drivers/pci/controller/pcie-altera.c index 7d05e51205b3..27edcebd1726 100644 --- a/drivers/pci/controller/pcie-altera.c +++ b/drivers/pci/controller/pcie-altera.c @@ -11,6 +11,7 @@ #include <linux/irqchip/chained_irq.h> #include <linux/init.h> #include <linux/of_address.h> +#include <linux/of_device.h> #include <linux/of_irq.h> #include <linux/of_pci.h> #include <linux/pci.h> @@ -37,7 +38,12 @@ #define RP_LTSSM_MASK 0x1f #define LTSSM_L0 0xf -#define PCIE_CAP_OFFSET 0x80 +#define S10_RP_TX_CNTRL 0x2004 +#define S10_RP_RXCPL_REG 0x2008 +#define S10_RP_RXCPL_STATUS 0x200C +#define S10_RP_CFG_ADDR(pcie, reg) \ + (((pcie)->hip_base) + (reg) + (1 << 20)) + /* TLP configuration type 0 and 1 */ #define TLP_FMTTYPE_CFGRD0 0x04 /* Configuration Read Type 0 */ #define TLP_FMTTYPE_CFGWR0 0x44 /* Configuration Write Type 0 */ @@ -49,18 +55,19 @@ #define RP_DEVFN 0 #define TLP_REQ_ID(bus, devfn) (((bus) << 8) | (devfn)) #define TLP_CFGRD_DW0(pcie, bus) \ - ((((bus == pcie->root_bus_nr) ? TLP_FMTTYPE_CFGRD0 \ - : TLP_FMTTYPE_CFGRD1) << 24) | \ - TLP_PAYLOAD_SIZE) + ((((bus == pcie->root_bus_nr) ? pcie->pcie_data->cfgrd0 \ + : pcie->pcie_data->cfgrd1) << 24) | \ + TLP_PAYLOAD_SIZE) #define TLP_CFGWR_DW0(pcie, bus) \ - ((((bus == pcie->root_bus_nr) ? TLP_FMTTYPE_CFGWR0 \ - : TLP_FMTTYPE_CFGWR1) << 24) | \ - TLP_PAYLOAD_SIZE) + ((((bus == pcie->root_bus_nr) ? 
pcie->pcie_data->cfgwr0 \ + : pcie->pcie_data->cfgwr1) << 24) | \ + TLP_PAYLOAD_SIZE) #define TLP_CFG_DW1(pcie, tag, be) \ - (((TLP_REQ_ID(pcie->root_bus_nr, RP_DEVFN)) << 16) | (tag << 8) | (be)) + (((TLP_REQ_ID(pcie->root_bus_nr, RP_DEVFN)) << 16) | (tag << 8) | (be)) #define TLP_CFG_DW2(bus, devfn, offset) \ (((bus) << 24) | ((devfn) << 16) | (offset)) #define TLP_COMP_STATUS(s) (((s) >> 13) & 7) +#define TLP_BYTE_COUNT(s) (((s) >> 0) & 0xfff) #define TLP_HDR_SIZE 3 #define TLP_LOOP 500 @@ -69,14 +76,47 @@ #define DWORD_MASK 3 +#define S10_TLP_FMTTYPE_CFGRD0 0x05 +#define S10_TLP_FMTTYPE_CFGRD1 0x04 +#define S10_TLP_FMTTYPE_CFGWR0 0x45 +#define S10_TLP_FMTTYPE_CFGWR1 0x44 + +enum altera_pcie_version { + ALTERA_PCIE_V1 = 0, + ALTERA_PCIE_V2, +}; + struct altera_pcie { struct platform_device *pdev; - void __iomem *cra_base; /* DT Cra */ + void __iomem *cra_base; + void __iomem *hip_base; int irq; u8 root_bus_nr; struct irq_domain *irq_domain; struct resource bus_range; struct list_head resources; + const struct altera_pcie_data *pcie_data; +}; + +struct altera_pcie_ops { + int (*tlp_read_pkt)(struct altera_pcie *pcie, u32 *value); + void (*tlp_write_pkt)(struct altera_pcie *pcie, u32 *headers, + u32 data, bool align); + bool (*get_link_status)(struct altera_pcie *pcie); + int (*rp_read_cfg)(struct altera_pcie *pcie, int where, + int size, u32 *value); + int (*rp_write_cfg)(struct altera_pcie *pcie, u8 busno, + int where, int size, u32 value); +}; + +struct altera_pcie_data { + const struct altera_pcie_ops *ops; + enum altera_pcie_version version; + u32 cap_offset; /* PCIe capability structure register offset */ + u32 cfgrd0; + u32 cfgrd1; + u32 cfgwr0; + u32 cfgwr1; }; struct tlp_rp_regpair_t { @@ -101,6 +141,15 @@ static bool altera_pcie_link_up(struct altera_pcie *pcie) return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0); } +static bool s10_altera_pcie_link_up(struct altera_pcie *pcie) +{ + void __iomem *addr = S10_RP_CFG_ADDR(pcie, + pcie->pcie_data->cap_offset + + PCI_EXP_LNKSTA); + + return !!(readw(addr) & PCI_EXP_LNKSTA_DLLLA); +} + /* * Altera PCIe port uses BAR0 of RC's configuration space as the translation * from PCI bus to native BUS. 
Entire DDR region is mapped into PCIe space @@ -128,12 +177,18 @@ static void tlp_write_tx(struct altera_pcie *pcie, cra_writel(pcie, tlp_rp_regdata->ctrl, RP_TX_CNTRL); } +static void s10_tlp_write_tx(struct altera_pcie *pcie, u32 reg0, u32 ctrl) +{ + cra_writel(pcie, reg0, RP_TX_REG0); + cra_writel(pcie, ctrl, S10_RP_TX_CNTRL); +} + static bool altera_pcie_valid_device(struct altera_pcie *pcie, struct pci_bus *bus, int dev) { /* If there is no link, then there is no device */ if (bus->number != pcie->root_bus_nr) { - if (!altera_pcie_link_up(pcie)) + if (!pcie->pcie_data->ops->get_link_status(pcie)) return false; } @@ -183,6 +238,53 @@ static int tlp_read_packet(struct altera_pcie *pcie, u32 *value) return PCIBIOS_DEVICE_NOT_FOUND; } +static int s10_tlp_read_packet(struct altera_pcie *pcie, u32 *value) +{ + u32 ctrl; + u32 comp_status; + u32 dw[4]; + u32 count; + struct device *dev = &pcie->pdev->dev; + + for (count = 0; count < TLP_LOOP; count++) { + ctrl = cra_readl(pcie, S10_RP_RXCPL_STATUS); + if (ctrl & RP_RXCPL_SOP) { + /* Read first DW */ + dw[0] = cra_readl(pcie, S10_RP_RXCPL_REG); + break; + } + + udelay(5); + } + + /* SOP detection failed, return error */ + if (count == TLP_LOOP) + return PCIBIOS_DEVICE_NOT_FOUND; + + count = 1; + + /* Poll for EOP */ + while (count < ARRAY_SIZE(dw)) { + ctrl = cra_readl(pcie, S10_RP_RXCPL_STATUS); + dw[count++] = cra_readl(pcie, S10_RP_RXCPL_REG); + if (ctrl & RP_RXCPL_EOP) { + comp_status = TLP_COMP_STATUS(dw[1]); + if (comp_status) + return PCIBIOS_DEVICE_NOT_FOUND; + + if (value && TLP_BYTE_COUNT(dw[1]) == sizeof(u32) && + count == 4) + *value = dw[3]; + + return PCIBIOS_SUCCESSFUL; + } + } + + dev_warn(dev, "Malformed TLP packet\n"); + + return PCIBIOS_DEVICE_NOT_FOUND; +} + static void tlp_write_packet(struct altera_pcie *pcie, u32 *headers, u32 data, bool align) { @@ -210,6 +312,15 @@ static void tlp_write_packet(struct altera_pcie *pcie, u32 *headers, tlp_write_tx(pcie, &tlp_rp_regdata); } +static void s10_tlp_write_packet(struct altera_pcie *pcie, u32 *headers, + u32 data, bool dummy) +{ + s10_tlp_write_tx(pcie, headers[0], RP_TX_SOP); + s10_tlp_write_tx(pcie, headers[1], 0); + s10_tlp_write_tx(pcie, headers[2], 0); + s10_tlp_write_tx(pcie, data, RP_TX_EOP); +} + static int tlp_cfg_dword_read(struct altera_pcie *pcie, u8 bus, u32 devfn, int where, u8 byte_en, u32 *value) { @@ -219,9 +330,9 @@ static int tlp_cfg_dword_read(struct altera_pcie *pcie, u8 bus, u32 devfn, headers[1] = TLP_CFG_DW1(pcie, TLP_READ_TAG, byte_en); headers[2] = TLP_CFG_DW2(bus, devfn, where); - tlp_write_packet(pcie, headers, 0, false); + pcie->pcie_data->ops->tlp_write_pkt(pcie, headers, 0, false); - return tlp_read_packet(pcie, value); + return pcie->pcie_data->ops->tlp_read_pkt(pcie, value); } static int tlp_cfg_dword_write(struct altera_pcie *pcie, u8 bus, u32 devfn, @@ -236,11 +347,13 @@ static int tlp_cfg_dword_write(struct altera_pcie *pcie, u8 bus, u32 devfn, /* check alignment to Qword */ if ((where & 0x7) == 0) - tlp_write_packet(pcie, headers, value, true); + pcie->pcie_data->ops->tlp_write_pkt(pcie, headers, + value, true); else - tlp_write_packet(pcie, headers, value, false); + pcie->pcie_data->ops->tlp_write_pkt(pcie, headers, + value, false); - ret = tlp_read_packet(pcie, NULL); + ret = pcie->pcie_data->ops->tlp_read_pkt(pcie, NULL); if (ret != PCIBIOS_SUCCESSFUL) return ret; @@ -254,6 +367,53 @@ static int tlp_cfg_dword_write(struct altera_pcie *pcie, u8 bus, u32 devfn, return PCIBIOS_SUCCESSFUL; } +static int s10_rp_read_cfg(struct altera_pcie 
*pcie, int where, + int size, u32 *value) +{ + void __iomem *addr = S10_RP_CFG_ADDR(pcie, where); + + switch (size) { + case 1: + *value = readb(addr); + break; + case 2: + *value = readw(addr); + break; + default: + *value = readl(addr); + break; + } + + return PCIBIOS_SUCCESSFUL; +} + +static int s10_rp_write_cfg(struct altera_pcie *pcie, u8 busno, + int where, int size, u32 value) +{ + void __iomem *addr = S10_RP_CFG_ADDR(pcie, where); + + switch (size) { + case 1: + writeb(value, addr); + break; + case 2: + writew(value, addr); + break; + default: + writel(value, addr); + break; + } + + /* + * Monitor changes to PCI_PRIMARY_BUS register on root port + * and update local copy of root bus number accordingly. + */ + if (busno == pcie->root_bus_nr && where == PCI_PRIMARY_BUS) + pcie->root_bus_nr = value & 0xff; + + return PCIBIOS_SUCCESSFUL; +} + static int _altera_pcie_cfg_read(struct altera_pcie *pcie, u8 busno, unsigned int devfn, int where, int size, u32 *value) @@ -262,6 +422,10 @@ static int _altera_pcie_cfg_read(struct altera_pcie *pcie, u8 busno, u32 data; u8 byte_en; + if (busno == pcie->root_bus_nr && pcie->pcie_data->ops->rp_read_cfg) + return pcie->pcie_data->ops->rp_read_cfg(pcie, where, + size, value); + switch (size) { case 1: byte_en = 1 << (where & 3); @@ -302,6 +466,10 @@ static int _altera_pcie_cfg_write(struct altera_pcie *pcie, u8 busno, u32 shift = 8 * (where & 3); u8 byte_en; + if (busno == pcie->root_bus_nr && pcie->pcie_data->ops->rp_write_cfg) + return pcie->pcie_data->ops->rp_write_cfg(pcie, busno, + where, size, value); + switch (size) { case 1: data32 = (value & 0xff) << shift; @@ -365,7 +533,8 @@ static int altera_read_cap_word(struct altera_pcie *pcie, u8 busno, int ret; ret = _altera_pcie_cfg_read(pcie, busno, devfn, - PCIE_CAP_OFFSET + offset, sizeof(*value), + pcie->pcie_data->cap_offset + offset, + sizeof(*value), &data); *value = data; return ret; @@ -375,7 +544,8 @@ static int altera_write_cap_word(struct altera_pcie *pcie, u8 busno, unsigned int devfn, int offset, u16 value) { return _altera_pcie_cfg_write(pcie, busno, devfn, - PCIE_CAP_OFFSET + offset, sizeof(value), + pcie->pcie_data->cap_offset + offset, + sizeof(value), value); } @@ -403,7 +573,7 @@ static void altera_wait_link_retrain(struct altera_pcie *pcie) /* Wait for link is up */ start_jiffies = jiffies; for (;;) { - if (altera_pcie_link_up(pcie)) + if (pcie->pcie_data->ops->get_link_status(pcie)) break; if (time_after(jiffies, start_jiffies + LINK_UP_TIMEOUT)) { @@ -418,7 +588,7 @@ static void altera_pcie_retrain(struct altera_pcie *pcie) { u16 linkcap, linkstat, linkctl; - if (!altera_pcie_link_up(pcie)) + if (!pcie->pcie_data->ops->get_link_status(pcie)) return; /* @@ -540,12 +710,20 @@ static int altera_pcie_parse_dt(struct altera_pcie *pcie) struct device *dev = &pcie->pdev->dev; struct platform_device *pdev = pcie->pdev; struct resource *cra; + struct resource *hip; cra = platform_get_resource_byname(pdev, IORESOURCE_MEM, "Cra"); pcie->cra_base = devm_ioremap_resource(dev, cra); if (IS_ERR(pcie->cra_base)) return PTR_ERR(pcie->cra_base); + if (pcie->pcie_data->version == ALTERA_PCIE_V2) { + hip = platform_get_resource_byname(pdev, IORESOURCE_MEM, "Hip"); + pcie->hip_base = devm_ioremap_resource(&pdev->dev, hip); + if (IS_ERR(pcie->hip_base)) + return PTR_ERR(pcie->hip_base); + } + /* setup IRQ */ pcie->irq = platform_get_irq(pdev, 0); if (pcie->irq < 0) { @@ -562,6 +740,48 @@ static void altera_pcie_host_init(struct altera_pcie *pcie) altera_pcie_retrain(pcie); } +static const struct 
altera_pcie_ops altera_pcie_ops_1_0 = { + .tlp_read_pkt = tlp_read_packet, + .tlp_write_pkt = tlp_write_packet, + .get_link_status = altera_pcie_link_up, +}; + +static const struct altera_pcie_ops altera_pcie_ops_2_0 = { + .tlp_read_pkt = s10_tlp_read_packet, + .tlp_write_pkt = s10_tlp_write_packet, + .get_link_status = s10_altera_pcie_link_up, + .rp_read_cfg = s10_rp_read_cfg, + .rp_write_cfg = s10_rp_write_cfg, +}; + +static const struct altera_pcie_data altera_pcie_1_0_data = { + .ops = &altera_pcie_ops_1_0, + .cap_offset = 0x80, + .version = ALTERA_PCIE_V1, + .cfgrd0 = TLP_FMTTYPE_CFGRD0, + .cfgrd1 = TLP_FMTTYPE_CFGRD1, + .cfgwr0 = TLP_FMTTYPE_CFGWR0, + .cfgwr1 = TLP_FMTTYPE_CFGWR1, +}; + +static const struct altera_pcie_data altera_pcie_2_0_data = { + .ops = &altera_pcie_ops_2_0, + .version = ALTERA_PCIE_V2, + .cap_offset = 0x70, + .cfgrd0 = S10_TLP_FMTTYPE_CFGRD0, + .cfgrd1 = S10_TLP_FMTTYPE_CFGRD1, + .cfgwr0 = S10_TLP_FMTTYPE_CFGWR0, + .cfgwr1 = S10_TLP_FMTTYPE_CFGWR1, +}; + +static const struct of_device_id altera_pcie_of_match[] = { + {.compatible = "altr,pcie-root-port-1.0", + .data = &altera_pcie_1_0_data }, + {.compatible = "altr,pcie-root-port-2.0", + .data = &altera_pcie_2_0_data }, + {}, +}; + static int altera_pcie_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; @@ -570,6 +790,7 @@ static int altera_pcie_probe(struct platform_device *pdev) struct pci_bus *child; struct pci_host_bridge *bridge; int ret; + const struct of_device_id *match; bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); if (!bridge) @@ -578,6 +799,12 @@ static int altera_pcie_probe(struct platform_device *pdev) pcie = pci_host_bridge_priv(bridge); pcie->pdev = pdev; + match = of_match_device(altera_pcie_of_match, &pdev->dev); + if (!match) + return -ENODEV; + + pcie->pcie_data = match->data; + ret = altera_pcie_parse_dt(pcie); if (ret) { dev_err(dev, "Parsing DT failed\n"); @@ -628,11 +855,6 @@ static int altera_pcie_probe(struct platform_device *pdev) return ret; } -static const struct of_device_id altera_pcie_of_match[] = { - { .compatible = "altr,pcie-root-port-1.0", }, - {}, -}; - static struct platform_driver altera_pcie_driver = { .probe = altera_pcie_probe, .driver = { diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c index c3a088910f48..def7820cb824 100644 --- a/drivers/pci/controller/pcie-cadence-ep.c +++ b/drivers/pci/controller/pcie-cadence-ep.c @@ -396,21 +396,21 @@ static int cdns_pcie_ep_start(struct pci_epc *epc) cfg |= BIT(epf->func_no); cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg); - /* - * The PCIe links are automatically established by the controller - * once for all at powerup: the software can neither start nor stop - * those links later at runtime. - * - * Then we only have to notify the EP core that our links are already - * established. However we don't call directly pci_epc_linkup() because - * we've already locked the epc->lock. 
- */ - list_for_each_entry(epf, &epc->pci_epf, list) - pci_epf_linkup(epf); - return 0; } +static const struct pci_epc_features cdns_pcie_epc_features = { + .linkup_notifier = false, + .msi_capable = true, + .msix_capable = false, +}; + +static const struct pci_epc_features* +cdns_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) +{ + return &cdns_pcie_epc_features; +} + static const struct pci_epc_ops cdns_pcie_epc_ops = { .write_header = cdns_pcie_ep_write_header, .set_bar = cdns_pcie_ep_set_bar, @@ -421,6 +421,7 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = { .get_msi = cdns_pcie_ep_get_msi, .raise_irq = cdns_pcie_ep_raise_irq, .start = cdns_pcie_ep_start, + .get_features = cdns_pcie_ep_get_features, }; static const struct of_device_id cdns_pcie_ep_of_match[] = { diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c index 55e471c18e8d..0b6c72804e03 100644 --- a/drivers/pci/controller/pcie-mediatek.c +++ b/drivers/pci/controller/pcie-mediatek.c @@ -90,6 +90,12 @@ #define AHB2PCIE_SIZE(x) ((x) & GENMASK(4, 0)) #define PCIE_AXI_WINDOW0 0x448 #define WIN_ENABLE BIT(7) +/* + * Define PCIe to AHB window size as 2^33 to support max 8GB address space + * translate, support least 4GB DRAM size access from EP DMA(physical DRAM + * start from 0x40000000). + */ +#define PCIE2AHB_SIZE 0x21 /* PCIe V2 configuration transaction header */ #define PCIE_CFG_HEADER0 0x460 @@ -654,7 +660,6 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) struct resource *mem = &pcie->mem; const struct mtk_pcie_soc *soc = port->pcie->soc; u32 val; - size_t size; int err; /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ @@ -706,15 +711,15 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) mtk_pcie_enable_msi(port); /* Set AHB to PCIe translation windows */ - size = mem->end - mem->start; - val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size)); + val = lower_32_bits(mem->start) | + AHB2PCIE_SIZE(fls(resource_size(mem))); writel(val, port->base + PCIE_AHB_TRANS_BASE0_L); val = upper_32_bits(mem->start); writel(val, port->base + PCIE_AHB_TRANS_BASE0_H); /* Set PCIe to AXI translation memory space.*/ - val = fls(0xffffffff) | WIN_ENABLE; + val = PCIE2AHB_SIZE | WIN_ENABLE; writel(val, port->base + PCIE_AXI_WINDOW0); return 0; diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c index b8163c56a142..a5d799e2dff2 100644 --- a/drivers/pci/controller/pcie-rockchip-ep.c +++ b/drivers/pci/controller/pcie-rockchip-ep.c @@ -499,12 +499,21 @@ static int rockchip_pcie_ep_start(struct pci_epc *epc) rockchip_pcie_write(rockchip, cfg, PCIE_CORE_PHY_FUNC_CFG); - list_for_each_entry(epf, &epc->pci_epf, list) - pci_epf_linkup(epf); - return 0; } +static const struct pci_epc_features rockchip_pcie_epc_features = { + .linkup_notifier = false, + .msi_capable = true, + .msix_capable = false, +}; + +static const struct pci_epc_features* +rockchip_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) +{ + return &rockchip_pcie_epc_features; +} + static const struct pci_epc_ops rockchip_pcie_epc_ops = { .write_header = rockchip_pcie_ep_write_header, .set_bar = rockchip_pcie_ep_set_bar, @@ -515,6 +524,7 @@ static const struct pci_epc_ops rockchip_pcie_epc_ops = { .get_msi = rockchip_pcie_ep_get_msi, .raise_irq = rockchip_pcie_ep_raise_irq, .start = rockchip_pcie_ep_start, + .get_features = rockchip_pcie_ep_get_features, }; static int rockchip_pcie_parse_ep_dt(struct rockchip_pcie *rockchip, diff 
--git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index 3890812cdf87..cf6816b55b5e 100644 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -571,6 +571,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) LIST_HEAD(resources); resource_size_t offset[2] = {0}; resource_size_t membar2_offset = 0x2000, busn_start = 0; + struct pci_bus *child; /* * Shadow registers may exist in certain VMD device ids which allow @@ -698,7 +699,19 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) vmd_attach_resources(vmd); vmd_setup_dma_ops(vmd); dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain); - pci_rescan_bus(vmd->bus); + + pci_scan_child_bus(vmd->bus); + pci_assign_unassigned_bus_resources(vmd->bus); + + /* + * VMD root buses are virtual and don't return true on pci_is_pcie() + * and will fail pcie_bus_configure_settings() early. It can instead be + * run on each of the real root ports. + */ + list_for_each_entry(child, &vmd->bus->children, node) + pcie_bus_configure_settings(child); + + pci_bus_add_devices(vmd->bus); WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj, "domain"), "Can't create symlink to domain\n"); diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c index 3e86fa3c7da3..d0b91da49bf4 100644 --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c @@ -47,9 +47,8 @@ struct pci_epf_test { void *reg[6]; struct pci_epf *epf; enum pci_barno test_reg_bar; - bool linkup_notifier; - bool msix_available; struct delayed_work cmd_handler; + const struct pci_epc_features *epc_features; }; struct pci_epf_test_reg { @@ -71,11 +70,6 @@ static struct pci_epf_header test_header = { .interrupt_pin = PCI_INTERRUPT_INTA, }; -struct pci_epf_test_data { - enum pci_barno test_reg_bar; - bool linkup_notifier; -}; - static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 }; static int pci_epf_test_copy(struct pci_epf_test *epf_test) @@ -175,7 +169,7 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test) goto err_map_addr; } - memcpy(buf, src_addr, reg->size); + memcpy_fromio(buf, src_addr, reg->size); crc32 = crc32_le(~0, buf, reg->size); if (crc32 != reg->checksum) @@ -230,7 +224,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test) get_random_bytes(buf, reg->size); reg->checksum = crc32_le(~0, buf, reg->size); - memcpy(dst_addr, buf, reg->size); + memcpy_toio(dst_addr, buf, reg->size); /* * wait 1ms inorder for the write to complete. Without this delay L3 @@ -402,13 +396,15 @@ static int pci_epf_test_set_bar(struct pci_epf *epf) struct device *dev = &epf->dev; struct pci_epf_test *epf_test = epf_get_drvdata(epf); enum pci_barno test_reg_bar = epf_test->test_reg_bar; + const struct pci_epc_features *epc_features; + + epc_features = epf_test->epc_features; for (bar = BAR_0; bar <= BAR_5; bar++) { epf_bar = &epf->bar[bar]; - epf_bar->flags |= upper_32_bits(epf_bar->size) ? 
- PCI_BASE_ADDRESS_MEM_TYPE_64 : - PCI_BASE_ADDRESS_MEM_TYPE_32; + if (!!(epc_features->reserved_bar & (1 << bar))) + continue; ret = pci_epc_set_bar(epc, epf->func_no, epf_bar); if (ret) { @@ -433,9 +429,13 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf) { struct pci_epf_test *epf_test = epf_get_drvdata(epf); struct device *dev = &epf->dev; + struct pci_epf_bar *epf_bar; void *base; int bar; enum pci_barno test_reg_bar = epf_test->test_reg_bar; + const struct pci_epc_features *epc_features; + + epc_features = epf_test->epc_features; base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg), test_reg_bar); @@ -446,37 +446,69 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf) epf_test->reg[test_reg_bar] = base; for (bar = BAR_0; bar <= BAR_5; bar++) { + epf_bar = &epf->bar[bar]; if (bar == test_reg_bar) continue; + + if (!!(epc_features->reserved_bar & (1 << bar))) + continue; + base = pci_epf_alloc_space(epf, bar_size[bar], bar); if (!base) dev_err(dev, "Failed to allocate space for BAR%d\n", bar); epf_test->reg[bar] = base; + if (epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64) + bar++; } return 0; } +static void pci_epf_configure_bar(struct pci_epf *epf, + const struct pci_epc_features *epc_features) +{ + struct pci_epf_bar *epf_bar; + bool bar_fixed_64bit; + int i; + + for (i = BAR_0; i <= BAR_5; i++) { + epf_bar = &epf->bar[i]; + bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i)); + if (bar_fixed_64bit) + epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64; + if (epc_features->bar_fixed_size[i]) + bar_size[i] = epc_features->bar_fixed_size[i]; + } +} + static int pci_epf_test_bind(struct pci_epf *epf) { int ret; struct pci_epf_test *epf_test = epf_get_drvdata(epf); struct pci_epf_header *header = epf->header; + const struct pci_epc_features *epc_features; + enum pci_barno test_reg_bar = BAR_0; struct pci_epc *epc = epf->epc; struct device *dev = &epf->dev; + bool linkup_notifier = false; + bool msix_capable = false; + bool msi_capable = true; if (WARN_ON_ONCE(!epc)) return -EINVAL; - if (epc->features & EPC_FEATURE_NO_LINKUP_NOTIFIER) - epf_test->linkup_notifier = false; - else - epf_test->linkup_notifier = true; - - epf_test->msix_available = epc->features & EPC_FEATURE_MSIX_AVAILABLE; + epc_features = pci_epc_get_features(epc, epf->func_no); + if (epc_features) { + linkup_notifier = epc_features->linkup_notifier; + msix_capable = epc_features->msix_capable; + msi_capable = epc_features->msi_capable; + test_reg_bar = pci_epc_get_first_free_bar(epc_features); + pci_epf_configure_bar(epf, epc_features); + } - epf_test->test_reg_bar = EPC_FEATURE_GET_BAR(epc->features); + epf_test->test_reg_bar = test_reg_bar; + epf_test->epc_features = epc_features; ret = pci_epc_write_header(epc, epf->func_no, header); if (ret) { @@ -492,13 +524,15 @@ static int pci_epf_test_bind(struct pci_epf *epf) if (ret) return ret; - ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts); - if (ret) { - dev_err(dev, "MSI configuration failed\n"); - return ret; + if (msi_capable) { + ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts); + if (ret) { + dev_err(dev, "MSI configuration failed\n"); + return ret; + } } - if (epf_test->msix_available) { + if (msix_capable) { ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts); if (ret) { dev_err(dev, "MSI-X configuration failed\n"); @@ -506,7 +540,7 @@ static int pci_epf_test_bind(struct pci_epf *epf) } } - if (!epf_test->linkup_notifier) + if (!linkup_notifier) queue_work(kpcitest_workqueue, 
&epf_test->cmd_handler.work); return 0; @@ -523,17 +557,6 @@ static int pci_epf_test_probe(struct pci_epf *epf) { struct pci_epf_test *epf_test; struct device *dev = &epf->dev; - const struct pci_epf_device_id *match; - struct pci_epf_test_data *data; - enum pci_barno test_reg_bar = BAR_0; - bool linkup_notifier = true; - - match = pci_epf_match_device(pci_epf_test_ids, epf); - data = (struct pci_epf_test_data *)match->driver_data; - if (data) { - test_reg_bar = data->test_reg_bar; - linkup_notifier = data->linkup_notifier; - } epf_test = devm_kzalloc(dev, sizeof(*epf_test), GFP_KERNEL); if (!epf_test) @@ -541,8 +564,6 @@ static int pci_epf_test_probe(struct pci_epf *epf) epf->header = &test_header; epf_test->epf = epf; - epf_test->test_reg_bar = test_reg_bar; - epf_test->linkup_notifier = linkup_notifier; INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler); diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c index 094dcc3203b8..e4712a0f249c 100644 --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c @@ -84,6 +84,59 @@ err: EXPORT_SYMBOL_GPL(pci_epc_get); /** + * pci_epc_get_first_free_bar() - helper to get first unreserved BAR + * @epc_features: pci_epc_features structure that holds the reserved bar bitmap + * + * Invoke to get the first unreserved BAR that can be used for endpoint + * function. For any incorrect value in reserved_bar return '0'. + */ +unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features + *epc_features) +{ + int free_bar; + + if (!epc_features) + return 0; + + free_bar = ffz(epc_features->reserved_bar); + if (free_bar > 5) + return 0; + + return free_bar; +} +EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar); + +/** + * pci_epc_get_features() - get the features supported by EPC + * @epc: the features supported by *this* EPC device will be returned + * @func_no: the features supported by the EPC device specific to the + * endpoint function with func_no will be returned + * + * Invoke to get the features provided by the EPC which may be + * specific to an endpoint function. Returns pci_epc_features on success + * and NULL for any failures. + */ +const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc, + u8 func_no) +{ + const struct pci_epc_features *epc_features; + unsigned long flags; + + if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) + return NULL; + + if (!epc->ops->get_features) + return NULL; + + spin_lock_irqsave(&epc->lock, flags); + epc_features = epc->ops->get_features(epc, func_no); + spin_unlock_irqrestore(&epc->lock, flags); + + return epc_features; +} +EXPORT_SYMBOL_GPL(pci_epc_get_features); + +/** * pci_epc_stop() - stop the PCI link * @epc: the link of the EPC device that has to be stopped * diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c index 825fa24427a3..8bfdcd291196 100644 --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c @@ -131,7 +131,9 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar) epf->bar[bar].phys_addr = phys_addr; epf->bar[bar].size = size; epf->bar[bar].barno = bar; - epf->bar[bar].flags = PCI_BASE_ADDRESS_SPACE_MEMORY; + epf->bar[bar].flags |= upper_32_bits(size) ? 
+ PCI_BASE_ADDRESS_MEM_TYPE_64 : + PCI_BASE_ADDRESS_MEM_TYPE_32; return space; } diff --git a/drivers/pci/hotplug/ibmphp.h b/drivers/pci/hotplug/ibmphp.h index b89f850c3a4e..e90a4ebf6550 100644 --- a/drivers/pci/hotplug/ibmphp.h +++ b/drivers/pci/hotplug/ibmphp.h @@ -378,7 +378,6 @@ int ibmphp_add_pfmem_from_mem(struct resource_node *); struct bus_node *ibmphp_find_res_bus(u8); void ibmphp_print_test(void); /* for debugging purposes */ -void ibmphp_hpc_initvars(void); int ibmphp_hpc_readslot(struct slot *, u8, u8 *); int ibmphp_hpc_writeslot(struct slot *, u8); void ibmphp_lock_operations(void); diff --git a/drivers/pci/hotplug/ibmphp_core.c b/drivers/pci/hotplug/ibmphp_core.c index 08a58e911fc2..17124254d897 100644 --- a/drivers/pci/hotplug/ibmphp_core.c +++ b/drivers/pci/hotplug/ibmphp_core.c @@ -1277,8 +1277,6 @@ static int __init ibmphp_init(void) ibmphp_debug = debug; - ibmphp_hpc_initvars(); - for (i = 0; i < 16; i++) irqs[i] = 0; diff --git a/drivers/pci/hotplug/ibmphp_hpc.c b/drivers/pci/hotplug/ibmphp_hpc.c index 752c384cbd4c..508a62a6b5f9 100644 --- a/drivers/pci/hotplug/ibmphp_hpc.c +++ b/drivers/pci/hotplug/ibmphp_hpc.c @@ -15,13 +15,13 @@ #include <linux/wait.h> #include <linux/time.h> +#include <linux/completion.h> #include <linux/delay.h> #include <linux/module.h> #include <linux/pci.h> #include <linux/init.h> #include <linux/mutex.h> #include <linux/sched.h> -#include <linux/semaphore.h> #include <linux/kthread.h> #include "ibmphp.h" @@ -88,10 +88,10 @@ static int to_debug = 0; //---------------------------------------------------------------------------- // global variables //---------------------------------------------------------------------------- -static struct mutex sem_hpcaccess; // lock access to HPC -static struct semaphore semOperations; // lock all operations and +static DEFINE_MUTEX(sem_hpcaccess); // lock access to HPC +static DEFINE_MUTEX(operations_mutex); // lock all operations and // access to data structures -static struct semaphore sem_exit; // make sure polling thread goes away +static DECLARE_COMPLETION(exit_complete); // make sure polling thread goes away static struct task_struct *ibmphp_poll_thread; //---------------------------------------------------------------------------- // local function prototypes @@ -110,23 +110,6 @@ static int hpc_wait_ctlr_notworking(int, struct controller *, void __iomem *, u8 /*---------------------------------------------------------------------- -* Name: ibmphp_hpc_initvars -* -* Action: initialize semaphores and variables -*---------------------------------------------------------------------*/ -void __init ibmphp_hpc_initvars(void) -{ - debug("%s - Entry\n", __func__); - - mutex_init(&sem_hpcaccess); - sema_init(&semOperations, 1); - sema_init(&sem_exit, 0); - to_debug = 0; - - debug("%s - Exit\n", __func__); -} - -/*---------------------------------------------------------------------- * Name: i2c_ctrl_read * * Action: read from HPC over I2C @@ -780,7 +763,7 @@ void free_hpc_access(void) *---------------------------------------------------------------------*/ void ibmphp_lock_operations(void) { - down(&semOperations); + mutex_lock(&operations_mutex); to_debug = 1; } @@ -790,7 +773,7 @@ void ibmphp_lock_operations(void) void ibmphp_unlock_operations(void) { debug("%s - Entry\n", __func__); - up(&semOperations); + mutex_unlock(&operations_mutex); to_debug = 0; debug("%s - Exit\n", __func__); } @@ -816,7 +799,7 @@ static int poll_hpc(void *data) while (!kthread_should_stop()) { /* try to get the lock to do some kind 
of hardware access */ - down(&semOperations); + mutex_lock(&operations_mutex); switch (poll_state) { case POLL_LATCH_REGISTER: @@ -871,13 +854,13 @@ static int poll_hpc(void *data) break; case POLL_SLEEP: /* don't sleep with a lock on the hardware */ - up(&semOperations); + mutex_unlock(&operations_mutex); msleep(POLL_INTERVAL_SEC * 1000); if (kthread_should_stop()) goto out_sleep; - down(&semOperations); + mutex_lock(&operations_mutex); if (poll_count >= POLL_LATCH_CNT) { poll_count = 0; @@ -887,12 +870,12 @@ static int poll_hpc(void *data) break; } /* give up the hardware semaphore */ - up(&semOperations); + mutex_unlock(&operations_mutex); /* sleep for a short time just for good measure */ out_sleep: msleep(100); } - up(&sem_exit); + complete(&exit_complete); debug("%s - Exit\n", __func__); return 0; } @@ -1060,9 +1043,9 @@ void __exit ibmphp_hpc_stop_poll_thread(void) debug("after locking operations\n"); // wait for poll thread to exit - debug("before sem_exit down\n"); - down(&sem_exit); - debug("after sem_exit down\n"); + debug("before exit_complete down\n"); + wait_for_completion(&exit_complete); + debug("after exit_completion down\n"); // cleanup debug("before free_hpc_access\n"); @@ -1070,8 +1053,6 @@ void __exit ibmphp_hpc_stop_poll_thread(void) debug("after free_hpc_access\n"); ibmphp_unlock_operations(); debug("after unlock operations\n"); - up(&sem_exit); - debug("after sem exit up\n"); debug("%s - Exit\n", __func__); } diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c index 7dd443aea5a5..6a2365cd794e 100644 --- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c @@ -156,9 +156,9 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd, slot_ctrl |= (cmd & mask); ctrl->cmd_busy = 1; smp_mb(); + ctrl->slot_ctrl = slot_ctrl; pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl); ctrl->cmd_started = jiffies; - ctrl->slot_ctrl = slot_ctrl; /* * Controllers with the Intel CF118 and similar errata advertise @@ -736,12 +736,25 @@ void pcie_clear_hotplug_events(struct controller *ctrl) void pcie_enable_interrupt(struct controller *ctrl) { - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE); + u16 mask; + + mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE; + pcie_write_cmd(ctrl, mask, mask); } void pcie_disable_interrupt(struct controller *ctrl) { - pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE); + u16 mask; + + /* + * Mask hot-plug interrupt to prevent it triggering immediately + * when the link goes inactive (we still get PME when any of the + * enabled events is detected). Same goes with Link Layer State + * changed event which generates PME immediately when the link goes + * inactive so mask it as well. + */ + mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE; + pcie_write_cmd(ctrl, 0, mask); } /* @@ -920,3 +933,5 @@ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400, PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401, PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); +DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_HXT, 0x0401, + PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); diff --git a/drivers/pci/of.c b/drivers/pci/of.c index 4c4217d0c3f1..3d32da15c215 100644 --- a/drivers/pci/of.c +++ b/drivers/pci/of.c @@ -113,7 +113,7 @@ struct device_node *of_pci_find_child_device(struct device_node *parent, * a fake root for all functions of a multi-function * device we go down them as well. 
*/ - if (!strcmp(node->name, "multifunc-device")) { + if (of_node_name_eq(node, "multifunc-device")) { for_each_child_of_node(node, node2) { if (__of_pci_pci_compare(node2, devfn)) { of_node_put(node); diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c index 129738362d90..83fb077d0b41 100644 --- a/drivers/pci/pci-bridge-emul.c +++ b/drivers/pci/pci-bridge-emul.c @@ -24,29 +24,6 @@ #define PCI_CAP_PCIE_START PCI_BRIDGE_CONF_END #define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2) -/* - * Initialize a pci_bridge_emul structure to represent a fake PCI - * bridge configuration space. The caller needs to have initialized - * the PCI configuration space with whatever values make sense - * (typically at least vendor, device, revision), the ->ops pointer, - * and optionally ->data and ->has_pcie. - */ -void pci_bridge_emul_init(struct pci_bridge_emul *bridge) -{ - bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16; - bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE; - bridge->conf.cache_line_size = 0x10; - bridge->conf.status = PCI_STATUS_CAP_LIST; - - if (bridge->has_pcie) { - bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START; - bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP; - /* Set PCIe v2, root port, slot support */ - bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 | - PCI_EXP_FLAGS_SLOT; - } -} - struct pci_bridge_reg_behavior { /* Read-only bits */ u32 ro; @@ -284,6 +261,61 @@ const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { }; /* + * Initialize a pci_bridge_emul structure to represent a fake PCI + * bridge configuration space. The caller needs to have initialized + * the PCI configuration space with whatever values make sense + * (typically at least vendor, device, revision), the ->ops pointer, + * and optionally ->data and ->has_pcie. + */ +int pci_bridge_emul_init(struct pci_bridge_emul *bridge, + unsigned int flags) +{ + bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16; + bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE; + bridge->conf.cache_line_size = 0x10; + bridge->conf.status = PCI_STATUS_CAP_LIST; + bridge->pci_regs_behavior = kmemdup(pci_regs_behavior, + sizeof(pci_regs_behavior), + GFP_KERNEL); + if (!bridge->pci_regs_behavior) + return -ENOMEM; + + if (bridge->has_pcie) { + bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START; + bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP; + /* Set PCIe v2, root port, slot support */ + bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 | + PCI_EXP_FLAGS_SLOT; + bridge->pcie_cap_regs_behavior = + kmemdup(pcie_cap_regs_behavior, + sizeof(pcie_cap_regs_behavior), + GFP_KERNEL); + if (!bridge->pcie_cap_regs_behavior) { + kfree(bridge->pci_regs_behavior); + return -ENOMEM; + } + } + + if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) { + bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0; + bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0; + } + + return 0; +} + +/* + * Cleanup a pci_bridge_emul structure that was previously initilized + * using pci_bridge_emul_init(). + */ +void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge) +{ + if (bridge->has_pcie) + kfree(bridge->pcie_cap_regs_behavior); + kfree(bridge->pci_regs_behavior); +} + +/* * Should be called by the PCI controller driver when reading the PCI * configuration space of the fake bridge. It will call back the * ->ops->read_base or ->ops->read_pcie operations. 
@@ -312,11 +344,11 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, reg -= PCI_CAP_PCIE_START; read_op = bridge->ops->read_pcie; cfgspace = (u32 *) &bridge->pcie_conf; - behavior = pcie_cap_regs_behavior; + behavior = bridge->pcie_cap_regs_behavior; } else { read_op = bridge->ops->read_base; cfgspace = (u32 *) &bridge->conf; - behavior = pci_regs_behavior; + behavior = bridge->pci_regs_behavior; } if (read_op) @@ -383,11 +415,11 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, reg -= PCI_CAP_PCIE_START; write_op = bridge->ops->write_pcie; cfgspace = (u32 *) &bridge->pcie_conf; - behavior = pcie_cap_regs_behavior; + behavior = bridge->pcie_cap_regs_behavior; } else { write_op = bridge->ops->write_base; cfgspace = (u32 *) &bridge->conf; - behavior = pci_regs_behavior; + behavior = bridge->pci_regs_behavior; } /* Keep all bits, except the RW bits */ diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h index 9d510ccf738b..e65b1b79899d 100644 --- a/drivers/pci/pci-bridge-emul.h +++ b/drivers/pci/pci-bridge-emul.h @@ -107,15 +107,26 @@ struct pci_bridge_emul_ops { u32 old, u32 new, u32 mask); }; +struct pci_bridge_reg_behavior; + struct pci_bridge_emul { struct pci_bridge_emul_conf conf; struct pci_bridge_emul_pcie_conf pcie_conf; struct pci_bridge_emul_ops *ops; + struct pci_bridge_reg_behavior *pci_regs_behavior; + struct pci_bridge_reg_behavior *pcie_cap_regs_behavior; void *data; bool has_pcie; }; -void pci_bridge_emul_init(struct pci_bridge_emul *bridge); +enum { + PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0), +}; + +int pci_bridge_emul_init(struct pci_bridge_emul *bridge, + unsigned int flags); +void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge); + int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, int size, u32 *value); int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index 79b1610a8beb..71853befd435 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c @@ -100,7 +100,7 @@ static ssize_t new_id_store(struct device_driver *driver, const char *buf, { struct pci_driver *pdrv = to_pci_driver(driver); const struct pci_device_id *ids = pdrv->id_table; - __u32 vendor, device, subvendor = PCI_ANY_ID, + u32 vendor, device, subvendor = PCI_ANY_ID, subdevice = PCI_ANY_ID, class = 0, class_mask = 0; unsigned long driver_data = 0; int fields = 0; @@ -168,7 +168,7 @@ static ssize_t remove_id_store(struct device_driver *driver, const char *buf, { struct pci_dynid *dynid, *n; struct pci_driver *pdrv = to_pci_driver(driver); - __u32 vendor, device, subvendor = PCI_ANY_ID, + u32 vendor, device, subvendor = PCI_ANY_ID, subdevice = PCI_ANY_ID, class = 0, class_mask = 0; int fields = 0; size_t retval = -ENODEV; diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index db55acf32a7e..7c1b362f599a 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -861,7 +861,7 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state) if ((pmcsr & PCI_PM_CTRL_STATE_MASK) == PCI_D3hot && !(pmcsr & PCI_PM_CTRL_NO_SOFT_RESET)) need_restore = true; - /* Fall-through: force to D0 */ + /* Fall-through - force to D0 */ default: pmcsr = 0; break; @@ -1233,7 +1233,6 @@ static void pci_restore_pcie_state(struct pci_dev *dev) pcie_capability_write_word(dev, PCI_EXP_SLTCTL2, cap[i++]); } - static int pci_save_pcix_state(struct pci_dev *dev) { int pos; @@ -1270,6 +1269,45 @@ static void 
pci_restore_pcix_state(struct pci_dev *dev) pci_write_config_word(dev, pos + PCI_X_CMD, cap[i++]); } +static void pci_save_ltr_state(struct pci_dev *dev) +{ + int ltr; + struct pci_cap_saved_state *save_state; + u16 *cap; + + if (!pci_is_pcie(dev)) + return; + + ltr = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_LTR); + if (!ltr) + return; + + save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_LTR); + if (!save_state) { + pci_err(dev, "no suspend buffer for LTR; ASPM issues possible after resume\n"); + return; + } + + cap = (u16 *)&save_state->cap.data[0]; + pci_read_config_word(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, cap++); + pci_read_config_word(dev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, cap++); +} + +static void pci_restore_ltr_state(struct pci_dev *dev) +{ + struct pci_cap_saved_state *save_state; + int ltr; + u16 *cap; + + save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_LTR); + ltr = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_LTR); + if (!save_state || !ltr) + return; + + cap = (u16 *)&save_state->cap.data[0]; + pci_write_config_word(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, *cap++); + pci_write_config_word(dev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, *cap++); +} /** * pci_save_state - save the PCI configuration space of a device before suspending @@ -1291,6 +1329,7 @@ int pci_save_state(struct pci_dev *dev) if (i != 0) return i; + pci_save_ltr_state(dev); pci_save_dpc_state(dev); return pci_save_vc_state(dev); } @@ -1390,7 +1429,12 @@ void pci_restore_state(struct pci_dev *dev) if (!dev->state_saved) return; - /* PCI Express register must be restored first */ + /* + * Restore max latencies (in the LTR capability) before enabling + * LTR itself (in the PCIe capability). + */ + pci_restore_ltr_state(dev); + pci_restore_pcie_state(dev); pci_restore_pasid_state(dev); pci_restore_pri_state(dev); @@ -2260,7 +2304,7 @@ static pci_power_t pci_target_state(struct pci_dev *dev, bool wakeup) case PCI_D2: if (pci_no_d1d2(dev)) break; - /* else: fall through */ + /* else, fall through */ default: target_state = state; } @@ -2501,6 +2545,25 @@ void pci_config_pm_runtime_put(struct pci_dev *pdev) pm_runtime_put_sync(parent); } +static const struct dmi_system_id bridge_d3_blacklist[] = { +#ifdef CONFIG_X86 + { + /* + * Gigabyte X299 root port is not marked as hotplug capable + * which allows Linux to power manage it. However, this + * confuses the BIOS SMI handler so don't power manage root + * ports on that system. + */ + .ident = "X299 DESIGNARE EX-CF", + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."), + DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"), + }, + }, +#endif + { } +}; + /** * pci_bridge_d3_possible - Is it possible to put the bridge into D3 * @bridge: Bridge to check @@ -2546,6 +2609,9 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge) if (bridge->is_hotplug_bridge) return false; + if (dmi_check_system(bridge_d3_blacklist)) + return false; + /* * It should be safe to put PCIe ports from 2015 or newer * to D3. 
@@ -2998,6 +3064,11 @@ void pci_allocate_cap_save_buffers(struct pci_dev *dev) if (error) pci_err(dev, "unable to preallocate PCI-X save buffer\n"); + error = pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_LTR, + 2 * sizeof(u16)); + if (error) + pci_err(dev, "unable to allocate suspend buffer for LTR\n"); + pci_allocate_vc_save_buffers(dev); } @@ -5058,39 +5129,42 @@ unlock: return 0; } -/* Save and disable devices from the top of the tree down */ -static void pci_bus_save_and_disable(struct pci_bus *bus) +/* + * Save and disable devices from the top of the tree down while holding + * the @dev mutex lock for the entire tree. + */ +static void pci_bus_save_and_disable_locked(struct pci_bus *bus) { struct pci_dev *dev; list_for_each_entry(dev, &bus->devices, bus_list) { - pci_dev_lock(dev); pci_dev_save_and_disable(dev); - pci_dev_unlock(dev); if (dev->subordinate) - pci_bus_save_and_disable(dev->subordinate); + pci_bus_save_and_disable_locked(dev->subordinate); } } /* - * Restore devices from top of the tree down - parent bridges need to be - * restored before we can get to subordinate devices. + * Restore devices from top of the tree down while holding @dev mutex lock + * for the entire tree. Parent bridges need to be restored before we can + * get to subordinate devices. */ -static void pci_bus_restore(struct pci_bus *bus) +static void pci_bus_restore_locked(struct pci_bus *bus) { struct pci_dev *dev; list_for_each_entry(dev, &bus->devices, bus_list) { - pci_dev_lock(dev); pci_dev_restore(dev); - pci_dev_unlock(dev); if (dev->subordinate) - pci_bus_restore(dev->subordinate); + pci_bus_restore_locked(dev->subordinate); } } -/* Save and disable devices from the top of the tree down */ -static void pci_slot_save_and_disable(struct pci_slot *slot) +/* + * Save and disable devices from the top of the tree down while holding + * the @dev mutex lock for the entire tree. + */ +static void pci_slot_save_and_disable_locked(struct pci_slot *slot) { struct pci_dev *dev; @@ -5099,26 +5173,25 @@ static void pci_slot_save_and_disable(struct pci_slot *slot) continue; pci_dev_save_and_disable(dev); if (dev->subordinate) - pci_bus_save_and_disable(dev->subordinate); + pci_bus_save_and_disable_locked(dev->subordinate); } } /* - * Restore devices from top of the tree down - parent bridges need to be - * restored before we can get to subordinate devices. + * Restore devices from top of the tree down while holding @dev mutex lock + * for the entire tree. Parent bridges need to be restored before we can + * get to subordinate devices. 
*/ -static void pci_slot_restore(struct pci_slot *slot) +static void pci_slot_restore_locked(struct pci_slot *slot) { struct pci_dev *dev; list_for_each_entry(dev, &slot->bus->devices, bus_list) { if (!dev->slot || dev->slot != slot) continue; - pci_dev_lock(dev); pci_dev_restore(dev); - pci_dev_unlock(dev); if (dev->subordinate) - pci_bus_restore(dev->subordinate); + pci_bus_restore_locked(dev->subordinate); } } @@ -5177,17 +5250,15 @@ static int __pci_reset_slot(struct pci_slot *slot) if (rc) return rc; - pci_slot_save_and_disable(slot); - if (pci_slot_trylock(slot)) { + pci_slot_save_and_disable_locked(slot); might_sleep(); rc = pci_reset_hotplug_slot(slot->hotplug, 0); + pci_slot_restore_locked(slot); pci_slot_unlock(slot); } else rc = -EAGAIN; - pci_slot_restore(slot); - return rc; } @@ -5273,17 +5344,15 @@ static int __pci_reset_bus(struct pci_bus *bus) if (rc) return rc; - pci_bus_save_and_disable(bus); - if (pci_bus_trylock(bus)) { + pci_bus_save_and_disable_locked(bus); might_sleep(); rc = pci_bridge_secondary_bus_reset(bus->self); + pci_bus_restore_locked(bus); pci_bus_unlock(bus); } else rc = -EAGAIN; - pci_bus_restore(bus); - return rc; } @@ -6000,8 +6069,7 @@ void pci_reassigndev_resource_alignment(struct pci_dev *dev) * to enable the kernel to reassign new resource * window later on. */ - if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE && - (dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) { + if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { for (i = PCI_BRIDGE_RESOURCES; i < PCI_NUM_RESOURCES; i++) { r = &dev->resource[i]; if (!(r->flags & IORESOURCE_MEM)) diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig index 44742b2e1126..5cbdbca904ac 100644 --- a/drivers/pci/pcie/Kconfig +++ b/drivers/pci/pcie/Kconfig @@ -6,10 +6,9 @@ config PCIEPORTBUS bool "PCI Express Port Bus support" depends on PCI help - This automatically enables PCI Express Port Bus support. Users can - choose Native Hot-Plug support, Advanced Error Reporting support, - Power Management Event support and Virtual Channel support to run - on PCI Express Ports (Root or Switch). + This enables PCI Express Port Bus support. Users can then enable + support for Native Hot-Plug, Advanced Error Reporting, Power + Management Events, and Downstream Port Containment. 
# # Include service Kconfig here diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile index ab514083d5d4..f1d7bc1e5efa 100644 --- a/drivers/pci/pcie/Makefile +++ b/drivers/pci/pcie/Makefile @@ -3,6 +3,7 @@ # Makefile for PCI Express features and port driver pcieportdrv-y := portdrv_core.o portdrv_pci.o err.o +pcieportdrv-y += bw_notification.o obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c index fed29de783e0..f8fc2114ad39 100644 --- a/drivers/pci/pcie/aer.c +++ b/drivers/pci/pcie/aer.c @@ -117,7 +117,7 @@ bool pci_aer_available(void) static int ecrc_policy = ECRC_POLICY_DEFAULT; -static const char *ecrc_policy_str[] = { +static const char * const ecrc_policy_str[] = { [ECRC_POLICY_DEFAULT] = "bios", [ECRC_POLICY_OFF] = "off", [ECRC_POLICY_ON] = "on" @@ -203,11 +203,8 @@ void pcie_ecrc_get_policy(char *str) { int i; - for (i = 0; i < ARRAY_SIZE(ecrc_policy_str); i++) - if (!strncmp(str, ecrc_policy_str[i], - strlen(ecrc_policy_str[i]))) - break; - if (i >= ARRAY_SIZE(ecrc_policy_str)) + i = match_string(ecrc_policy_str, ARRAY_SIZE(ecrc_policy_str), str); + if (i < 0) return; ecrc_policy = i; diff --git a/drivers/pci/pcie/bw_notification.c b/drivers/pci/pcie/bw_notification.c new file mode 100644 index 000000000000..d2eae3b7cc0f --- /dev/null +++ b/drivers/pci/pcie/bw_notification.c @@ -0,0 +1,110 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * PCI Express Link Bandwidth Notification services driver + * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com> + * + * Copyright (C) 2019, Dell Inc + * + * The PCIe Link Bandwidth Notification provides a way to notify the + * operating system when the link width or data rate changes. This + * capability is required for all root ports and downstream ports + * supporting links wider than x1 and/or multiple link speeds. + * + * This service port driver hooks into the bandwidth notification interrupt + * and warns when links become degraded in operation. + */ + +#include "../pci.h" +#include "portdrv.h" + +static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev) +{ + int ret; + u32 lnk_cap; + + ret = pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnk_cap); + return (ret == PCIBIOS_SUCCESSFUL) && (lnk_cap & PCI_EXP_LNKCAP_LBNC); +} + +static void pcie_enable_link_bandwidth_notification(struct pci_dev *dev) +{ + u16 lnk_ctl; + + pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl); + lnk_ctl |= PCI_EXP_LNKCTL_LBMIE; + pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl); +} + +static void pcie_disable_link_bandwidth_notification(struct pci_dev *dev) +{ + u16 lnk_ctl; + + pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl); + lnk_ctl &= ~PCI_EXP_LNKCTL_LBMIE; + pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl); +} + +static irqreturn_t pcie_bw_notification_handler(int irq, void *context) +{ + struct pcie_device *srv = context; + struct pci_dev *port = srv->port; + struct pci_dev *dev; + u16 link_status, events; + int ret; + + ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status); + events = link_status & PCI_EXP_LNKSTA_LBMS; + + if (ret != PCIBIOS_SUCCESSFUL || !events) + return IRQ_NONE; + + /* + * Print status from downstream devices, not this root port or + * downstream switch port. 
+ */ + down_read(&pci_bus_sem); + list_for_each_entry(dev, &port->subordinate->devices, bus_list) + __pcie_print_link_status(dev, false); + up_read(&pci_bus_sem); + + pcie_update_link_speed(port->subordinate, link_status); + pcie_capability_write_word(port, PCI_EXP_LNKSTA, events); + return IRQ_HANDLED; +} + +static int pcie_bandwidth_notification_probe(struct pcie_device *srv) +{ + int ret; + + /* Single-width or single-speed ports do not have to support this. */ + if (!pcie_link_bandwidth_notification_supported(srv->port)) + return -ENODEV; + + ret = request_threaded_irq(srv->irq, NULL, pcie_bw_notification_handler, + IRQF_SHARED, "PCIe BW notif", srv); + if (ret) + return ret; + + pcie_enable_link_bandwidth_notification(srv->port); + + return 0; +} + +static void pcie_bandwidth_notification_remove(struct pcie_device *srv) +{ + pcie_disable_link_bandwidth_notification(srv->port); + free_irq(srv->irq, srv); +} + +static struct pcie_port_service_driver pcie_bandwidth_notification_driver = { + .name = "pcie_bw_notification", + .port_type = PCIE_ANY_PORT, + .service = PCIE_PORT_SERVICE_BWNOTIF, + .probe = pcie_bandwidth_notification_probe, + .remove = pcie_bandwidth_notification_remove, +}; + +int __init pcie_bandwidth_notification_init(void) +{ + return pcie_port_service_register(&pcie_bandwidth_notification_driver); +} diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c index e435d12e61a0..7b77754a82de 100644 --- a/drivers/pci/pcie/dpc.c +++ b/drivers/pci/pcie/dpc.c @@ -202,6 +202,28 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc) pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status); } +static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev, + struct aer_err_info *info) +{ + int pos = dev->aer_cap; + u32 status, mask, sev; + + pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status); + pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask); + status &= ~mask; + if (!status) + return 0; + + pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev); + status &= sev; + if (status) + info->severity = AER_FATAL; + else + info->severity = AER_NONFATAL; + + return 1; +} + static irqreturn_t dpc_handler(int irq, void *context) { struct aer_err_info info; @@ -229,9 +251,12 @@ static irqreturn_t dpc_handler(int irq, void *context) /* show RP PIO error detail information */ if (dpc->rp_extensions && reason == 3 && ext_reason == 0) dpc_process_rp_pio_error(dpc); - else if (reason == 0 && aer_get_device_error_info(pdev, &info)) { + else if (reason == 0 && + dpc_get_aer_uncorrect_severity(pdev, &info) && + aer_get_device_error_info(pdev, &info)) { aer_print_error(pdev, &info); pci_cleanup_aer_uncorrect_error_status(pdev); + pci_aer_clear_fatal_status(pdev); } /* We configure DPC so it only triggers on ERR_FATAL */ diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c index 0dbcf429089f..54d593d10396 100644 --- a/drivers/pci/pcie/pme.c +++ b/drivers/pci/pcie/pme.c @@ -363,6 +363,16 @@ static bool pcie_pme_check_wakeup(struct pci_bus *bus) return false; } +static void pcie_pme_disable_interrupt(struct pci_dev *port, + struct pcie_pme_service_data *data) +{ + spin_lock_irq(&data->lock); + pcie_pme_interrupt_enable(port, false); + pcie_clear_root_pme_status(port); + data->noirq = true; + spin_unlock_irq(&data->lock); +} + /** * pcie_pme_suspend - Suspend PCIe PME service device. * @srv: PCIe service device to suspend. 
@@ -387,11 +397,7 @@ static int pcie_pme_suspend(struct pcie_device *srv) return 0; } - spin_lock_irq(&data->lock); - pcie_pme_interrupt_enable(port, false); - pcie_clear_root_pme_status(port); - data->noirq = true; - spin_unlock_irq(&data->lock); + pcie_pme_disable_interrupt(port, data); synchronize_irq(srv->irq); @@ -427,34 +433,12 @@ static int pcie_pme_resume(struct pcie_device *srv) */ static void pcie_pme_remove(struct pcie_device *srv) { - pcie_pme_suspend(srv); - free_irq(srv->irq, srv); - kfree(get_service_data(srv)); -} - -static int pcie_pme_runtime_suspend(struct pcie_device *srv) -{ - struct pcie_pme_service_data *data = get_service_data(srv); - - spin_lock_irq(&data->lock); - pcie_pme_interrupt_enable(srv->port, false); - pcie_clear_root_pme_status(srv->port); - data->noirq = true; - spin_unlock_irq(&data->lock); - - return 0; -} - -static int pcie_pme_runtime_resume(struct pcie_device *srv) -{ struct pcie_pme_service_data *data = get_service_data(srv); - spin_lock_irq(&data->lock); - pcie_pme_interrupt_enable(srv->port, true); - data->noirq = false; - spin_unlock_irq(&data->lock); - - return 0; + pcie_pme_disable_interrupt(srv->port, data); + free_irq(srv->irq, srv); + cancel_work_sync(&data->work); + kfree(data); } static struct pcie_port_service_driver pcie_pme_driver = { @@ -464,8 +448,6 @@ static struct pcie_port_service_driver pcie_pme_driver = { .probe = pcie_pme_probe, .suspend = pcie_pme_suspend, - .runtime_suspend = pcie_pme_runtime_suspend, - .runtime_resume = pcie_pme_runtime_resume, .resume = pcie_pme_resume, .remove = pcie_pme_remove, }; diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h index fbbf00b0992e..1d50dc58ac40 100644 --- a/drivers/pci/pcie/portdrv.h +++ b/drivers/pci/pcie/portdrv.h @@ -20,8 +20,10 @@ #define PCIE_PORT_SERVICE_HP (1 << PCIE_PORT_SERVICE_HP_SHIFT) #define PCIE_PORT_SERVICE_DPC_SHIFT 3 /* Downstream Port Containment */ #define PCIE_PORT_SERVICE_DPC (1 << PCIE_PORT_SERVICE_DPC_SHIFT) +#define PCIE_PORT_SERVICE_BWNOTIF_SHIFT 4 /* Bandwidth notification */ +#define PCIE_PORT_SERVICE_BWNOTIF (1 << PCIE_PORT_SERVICE_BWNOTIF_SHIFT) -#define PCIE_PORT_DEVICE_MAXSERVICES 4 +#define PCIE_PORT_DEVICE_MAXSERVICES 5 #ifdef CONFIG_PCIEAER int pcie_aer_init(void); @@ -47,6 +49,8 @@ int pcie_dpc_init(void); static inline int pcie_dpc_init(void) { return 0; } #endif +int pcie_bandwidth_notification_init(void); + /* Port Type */ #define PCIE_ANY_PORT (~0) diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c index f458ac9cb70c..7d04f9d087a6 100644 --- a/drivers/pci/pcie/portdrv_core.c +++ b/drivers/pci/pcie/portdrv_core.c @@ -99,7 +99,7 @@ static int pcie_message_numbers(struct pci_dev *dev, int mask, */ static int pcie_port_enable_irq_vec(struct pci_dev *dev, int *irqs, int mask) { - int nr_entries, nvec; + int nr_entries, nvec, pcie_irq; u32 pme = 0, aer = 0, dpc = 0; /* Allocate the maximum possible number of MSI/MSI-X vectors */ @@ -135,10 +135,13 @@ static int pcie_port_enable_irq_vec(struct pci_dev *dev, int *irqs, int mask) return nr_entries; } - /* PME and hotplug share an MSI/MSI-X vector */ - if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) { - irqs[PCIE_PORT_SERVICE_PME_SHIFT] = pci_irq_vector(dev, pme); - irqs[PCIE_PORT_SERVICE_HP_SHIFT] = pci_irq_vector(dev, pme); + /* PME, hotplug and bandwidth notification share an MSI/MSI-X vector */ + if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP | + PCIE_PORT_SERVICE_BWNOTIF)) { + pcie_irq = pci_irq_vector(dev, pme); + 
irqs[PCIE_PORT_SERVICE_PME_SHIFT] = pcie_irq; + irqs[PCIE_PORT_SERVICE_HP_SHIFT] = pcie_irq; + irqs[PCIE_PORT_SERVICE_BWNOTIF_SHIFT] = pcie_irq; } if (mask & PCIE_PORT_SERVICE_AER) @@ -250,6 +253,10 @@ static int get_port_device_capability(struct pci_dev *dev) pci_aer_available() && services & PCIE_PORT_SERVICE_AER) services |= PCIE_PORT_SERVICE_DPC; + if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM || + pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) + services |= PCIE_PORT_SERVICE_BWNOTIF; + return services; } diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c index 0acca3596807..0a87091a0800 100644 --- a/drivers/pci/pcie/portdrv_pci.c +++ b/drivers/pci/pcie/portdrv_pci.c @@ -182,10 +182,12 @@ static void pcie_portdrv_err_resume(struct pci_dev *dev) /* * LINUX Device Driver Model */ -static const struct pci_device_id port_pci_ids[] = { { +static const struct pci_device_id port_pci_ids[] = { /* handle any PCI-Express port */ - PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0), - }, { /* end: all zeroes */ } + { PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0) }, + /* subtractive decode PCI-to-PCI bridge, class type is 060401h */ + { PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x01), ~0) }, + { }, }; static const struct pci_error_handlers pcie_portdrv_err_handler = { @@ -238,6 +240,7 @@ static void __init pcie_init_services(void) pcie_pme_init(); pcie_dpc_init(); pcie_hp_init(); + pcie_bandwidth_notification_init(); } static int __init pcie_portdrv_init(void) diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index 257b9f6f2ebb..2ec0df04e0dc 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -121,13 +121,13 @@ static u64 pci_size(u64 base, u64 maxbase, u64 mask) * Get the lowest of them to find the decode size, and from that * the extent. */ - size = (size & ~(size-1)) - 1; + size = size & ~(size-1); /* * base == maxbase can be valid only if the BAR has already been * programmed with all 1s. */ - if (base == maxbase && ((base | size) & mask) != mask) + if (base == maxbase && ((base | (size - 1)) & mask) != mask) return 0; return size; @@ -278,7 +278,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type, /* Above 32-bit boundary; try to reallocate */ res->flags |= IORESOURCE_UNSET; res->start = 0; - res->end = sz64; + res->end = sz64 - 1; pci_info(dev, "reg 0x%x: can't handle BAR above 4GB (bus address %#010llx)\n", pos, (unsigned long long)l64); goto out; @@ -286,7 +286,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type, } region.start = l64; - region.end = l64 + sz64; + region.end = l64 + sz64 - 1; pcibios_bus_to_resource(dev->bus, res, ®ion); pcibios_resource_to_bus(dev->bus, &inverted_region, res); @@ -348,6 +348,57 @@ static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom) } } +static void pci_read_bridge_windows(struct pci_dev *bridge) +{ + u16 io; + u32 pmem, tmp; + + pci_read_config_word(bridge, PCI_IO_BASE, &io); + if (!io) { + pci_write_config_word(bridge, PCI_IO_BASE, 0xe0f0); + pci_read_config_word(bridge, PCI_IO_BASE, &io); + pci_write_config_word(bridge, PCI_IO_BASE, 0x0); + } + if (io) + bridge->io_window = 1; + + /* + * DECchip 21050 pass 2 errata: the bridge may miss an address + * disconnect boundary by one PCI data phase. Workaround: do not + * use prefetching on this device. 
+ */ + if (bridge->vendor == PCI_VENDOR_ID_DEC && bridge->device == 0x0001) + return; + + pci_read_config_dword(bridge, PCI_PREF_MEMORY_BASE, &pmem); + if (!pmem) { + pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, + 0xffe0fff0); + pci_read_config_dword(bridge, PCI_PREF_MEMORY_BASE, &pmem); + pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, 0x0); + } + if (!pmem) + return; + + bridge->pref_window = 1; + + if ((pmem & PCI_PREF_RANGE_TYPE_MASK) == PCI_PREF_RANGE_TYPE_64) { + + /* + * Bridge claims to have a 64-bit prefetchable memory + * window; verify that the upper bits are actually + * writable. + */ + pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32, &pmem); + pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, + 0xffffffff); + pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32, &tmp); + pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, pmem); + if (tmp) + bridge->pref_64_window = 1; + } +} + static void pci_read_bridge_io(struct pci_bus *child) { struct pci_dev *dev = child->self; @@ -1728,9 +1779,6 @@ int pci_setup_device(struct pci_dev *dev) break; case PCI_HEADER_TYPE_BRIDGE: /* bridge header */ - if (class != PCI_CLASS_BRIDGE_PCI) - goto bad; - /* * The PCI-to-PCI bridge spec requires that subtractive * decoding (i.e. transparent) bridge must have programming @@ -1739,6 +1787,7 @@ int pci_setup_device(struct pci_dev *dev) pci_read_irq(dev); dev->transparent = ((dev->class & 0xff) == 1); pci_read_bases(dev, 2, PCI_ROM_ADDRESS1); + pci_read_bridge_windows(dev); set_pcie_hotplug_bridge(dev); pos = pci_find_capability(dev, PCI_CAP_ID_SSVID); if (pos) { @@ -1856,8 +1905,6 @@ static void program_hpp_type0(struct pci_dev *dev, struct hpp_type0 *hpp) pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER, hpp->latency_timer); pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl); - if (hpp->enable_serr) - pci_bctl |= PCI_BRIDGE_CTL_SERR; if (hpp->enable_perr) pci_bctl |= PCI_BRIDGE_CTL_PARITY; pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl); @@ -2071,11 +2118,8 @@ static void pci_configure_ltr(struct pci_dev *dev) { #ifdef CONFIG_PCIEASPM struct pci_host_bridge *host = pci_find_host_bridge(dev->bus); - u32 cap; struct pci_dev *bridge; - - if (!host->native_ltr) - return; + u32 cap, ctl; if (!pci_is_pcie(dev)) return; @@ -2084,22 +2128,35 @@ static void pci_configure_ltr(struct pci_dev *dev) if (!(cap & PCI_EXP_DEVCAP2_LTR)) return; - /* - * Software must not enable LTR in an Endpoint unless the Root - * Complex and all intermediate Switches indicate support for LTR. - * PCIe r3.1, sec 6.18. - */ - if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) - dev->ltr_path = 1; - else { + pcie_capability_read_dword(dev, PCI_EXP_DEVCTL2, &ctl); + if (ctl & PCI_EXP_DEVCTL2_LTR_EN) { + if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) { + dev->ltr_path = 1; + return; + } + bridge = pci_upstream_bridge(dev); if (bridge && bridge->ltr_path) dev->ltr_path = 1; + + return; } - if (dev->ltr_path) + if (!host->native_ltr) + return; + + /* + * Software must not enable LTR in an Endpoint unless the Root + * Complex and all intermediate Switches indicate support for LTR. + * PCIe r4.0, sec 6.18. 
+ */ + if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || + ((bridge = pci_upstream_bridge(dev)) && + bridge->ltr_path)) { pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, PCI_EXP_DEVCTL2_LTR_EN); + dev->ltr_path = 1; + } #endif } @@ -2129,6 +2186,24 @@ static void pci_configure_eetlp_prefix(struct pci_dev *dev) #endif } +static void pci_configure_serr(struct pci_dev *dev) +{ + u16 control; + + if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { + + /* + * A bridge will not forward ERR_ messages coming from an + * endpoint unless SERR# forwarding is enabled. + */ + pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &control); + if (!(control & PCI_BRIDGE_CTL_SERR)) { + control |= PCI_BRIDGE_CTL_SERR; + pci_write_config_word(dev, PCI_BRIDGE_CONTROL, control); + } + } +} + static void pci_configure_device(struct pci_dev *dev) { struct hotplug_params hpp; @@ -2139,6 +2214,7 @@ static void pci_configure_device(struct pci_dev *dev) pci_configure_relaxed_ordering(dev); pci_configure_ltr(dev); pci_configure_eetlp_prefix(dev); + pci_configure_serr(dev); memset(&hpp, 0, sizeof(hpp)); ret = pci_get_hp_params(dev, &hpp); diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index e2a879e93d86..a59ad09ce911 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -2139,7 +2139,7 @@ static void quirk_netmos(struct pci_dev *dev) if (dev->subsystem_vendor == PCI_VENDOR_ID_IBM && dev->subsystem_device == 0x0299) return; - /* else: fall through */ + /* else, fall through */ case PCI_DEVICE_ID_NETMOS_9735: case PCI_DEVICE_ID_NETMOS_9745: case PCI_DEVICE_ID_NETMOS_9845: @@ -4520,6 +4520,8 @@ static const struct pci_dev_acs_enabled { /* QCOM QDF2xxx root ports */ { PCI_VENDOR_ID_QCOM, 0x0400, pci_quirk_qcom_rp_acs }, { PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs }, + /* HXT SD4800 root ports. The ACS design is same as QCOM QDF2xxx */ + { PCI_VENDOR_ID_HXT, 0x0401, pci_quirk_qcom_rp_acs }, /* Intel PCH root ports */ { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs }, { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_spt_pch_acs }, diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index ed960436df5e..ec44a0f3a7ac 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c @@ -735,58 +735,21 @@ int pci_claim_bridge_resource(struct pci_dev *bridge, int i) base/limit registers must be read-only and read as 0. */ static void pci_bridge_check_ranges(struct pci_bus *bus) { - u16 io; - u32 pmem; struct pci_dev *bridge = bus->self; - struct resource *b_res; + struct resource *b_res = &bridge->resource[PCI_BRIDGE_RESOURCES]; - b_res = &bridge->resource[PCI_BRIDGE_RESOURCES]; b_res[1].flags |= IORESOURCE_MEM; - pci_read_config_word(bridge, PCI_IO_BASE, &io); - if (!io) { - pci_write_config_word(bridge, PCI_IO_BASE, 0xe0f0); - pci_read_config_word(bridge, PCI_IO_BASE, &io); - pci_write_config_word(bridge, PCI_IO_BASE, 0x0); - } - if (io) + if (bridge->io_window) b_res[0].flags |= IORESOURCE_IO; - /* DECchip 21050 pass 2 errata: the bridge may miss an address - disconnect boundary by one PCI data phase. - Workaround: do not use prefetching on this device. 
*/ - if (bridge->vendor == PCI_VENDOR_ID_DEC && bridge->device == 0x0001) - return; - - pci_read_config_dword(bridge, PCI_PREF_MEMORY_BASE, &pmem); - if (!pmem) { - pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, - 0xffe0fff0); - pci_read_config_dword(bridge, PCI_PREF_MEMORY_BASE, &pmem); - pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, 0x0); - } - if (pmem) { + if (bridge->pref_window) { b_res[2].flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH; - if ((pmem & PCI_PREF_RANGE_TYPE_MASK) == - PCI_PREF_RANGE_TYPE_64) { + if (bridge->pref_64_window) { b_res[2].flags |= IORESOURCE_MEM_64; b_res[2].flags |= PCI_PREF_RANGE_TYPE_64; } } - - /* double check if bridge does support 64 bit pref */ - if (b_res[2].flags & IORESOURCE_MEM_64) { - u32 mem_base_hi, tmp; - pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32, - &mem_base_hi); - pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, - 0xffffffff); - pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32, &tmp); - if (!tmp) - b_res[2].flags &= ~IORESOURCE_MEM_64; - pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, - mem_base_hi); - } } /* Helper function for sizing routines: find first available @@ -1223,12 +1186,12 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) if (!b) continue; - switch (dev->class >> 8) { - case PCI_CLASS_BRIDGE_CARDBUS: + switch (dev->hdr_type) { + case PCI_HEADER_TYPE_CARDBUS: pci_bus_size_cardbus(b, realloc_head); break; - case PCI_CLASS_BRIDGE_PCI: + case PCI_HEADER_TYPE_BRIDGE: default: __pci_bus_size_bridges(b, realloc_head); break; @@ -1239,12 +1202,12 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) if (pci_is_root_bus(bus)) return; - switch (bus->self->class >> 8) { - case PCI_CLASS_BRIDGE_CARDBUS: + switch (bus->self->hdr_type) { + case PCI_HEADER_TYPE_CARDBUS: /* don't size cardbuses yet. */ break; - case PCI_CLASS_BRIDGE_PCI: + case PCI_HEADER_TYPE_BRIDGE: pci_bridge_check_ranges(bus); if (bus->self->is_hotplug_bridge) { additional_io_size = pci_hotplug_io_size; @@ -1393,13 +1356,13 @@ void __pci_bus_assign_resources(const struct pci_bus *bus, __pci_bus_assign_resources(b, realloc_head, fail_head); - switch (dev->class >> 8) { - case PCI_CLASS_BRIDGE_PCI: + switch (dev->hdr_type) { + case PCI_HEADER_TYPE_BRIDGE: if (!pci_is_enabled(dev)) pci_setup_bridge(b); break; - case PCI_CLASS_BRIDGE_CARDBUS: + case PCI_HEADER_TYPE_CARDBUS: pci_setup_cardbus(b); break; |
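
Editor's note on the pci_size()/__pci_read_base() hunks above: the helper now returns the decoded region size itself (a power of two) instead of size - 1, and the callers compute the inclusive resource end as start + size - 1. A minimal user-space sketch of that arithmetic, with made-up helper names and example values (not the kernel functions themselves):

/*
 * Stand-alone sketch of the BAR sizing arithmetic: after software writes
 * all 1s to a BAR, only the address bits the device actually decodes read
 * back as writable, so the lowest writable address bit equals the region
 * size.  Helper name, mask macro, and example value are illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define BAR_MEM_ADDR_MASK  (~UINT64_C(0xf))	/* low 4 bits of a memory BAR are flags */

/* Return the decoded size, or 0 if the BAR is unimplemented. */
static uint64_t bar_size(uint64_t readback_after_all_ones)
{
	uint64_t addr = readback_after_all_ones & BAR_MEM_ADDR_MASK;

	if (!addr)
		return 0;

	/* Isolate the lowest set (i.e. lowest writable) address bit. */
	return addr & ~(addr - 1);
}

int main(void)
{
	/* A 1 MiB, 32-bit, non-prefetchable memory BAR reads back as 0xfff00000. */
	uint64_t size = bar_size(0xfff00000);

	/* Resource ends are inclusive, hence the "sz64 - 1" changes above. */
	printf("size = %#llx, end offset = %#llx\n",
	       (unsigned long long)size, (unsigned long long)(size - 1));
	return 0;
}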
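
Editor's note on pci_read_bridge_windows()/pci_bridge_check_ranges() above: the destructive probe (write a test pattern, read it back, restore the register) is moved to enumeration time and its result cached in bridge->io_window / pref_window / pref_64_window, so a later rescan only consults the cached flags and never pokes live base/limit registers. A simplified sketch of that probe-once-and-cache pattern, using stand-in accessors rather than the kernel config-space helpers:

/*
 * Sketch only: the register model and accessor names below are invented
 * for illustration; the test pattern 0xe0f0 is the one used by the patch.
 */
#include <stdbool.h>
#include <stdint.h>

struct fake_bridge {
	uint16_t io_base_reg;		/* emulated PCI_IO_BASE/PCI_IO_LIMIT pair */
	bool io_base_writable;		/* does the hardware implement the window? */
	bool io_window;			/* cached result, probed exactly once */
};

static uint16_t cfg_read16(struct fake_bridge *b) { return b->io_base_reg; }
static void cfg_write16(struct fake_bridge *b, uint16_t v)
{
	if (b->io_base_writable)
		b->io_base_reg = v;
}

/* Called once at enumeration time, before any child device is in use. */
static void read_bridge_windows(struct fake_bridge *b)
{
	uint16_t io = cfg_read16(b);

	if (!io) {
		cfg_write16(b, 0xe0f0);		/* probe: do any bits stick? */
		io = cfg_read16(b);
		cfg_write16(b, 0x0);		/* restore */
	}
	b->io_window = io != 0;
}

/* Called during (re)assignment: no config writes, only the cached flag. */
static bool bridge_has_io_window(const struct fake_bridge *b)
{
	return b->io_window;
}

int main(void)
{
	struct fake_bridge b = { .io_base_writable = true };

	read_bridge_windows(&b);
	return bridge_has_io_window(&b) ? 0 : 1;
}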
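
Editor's note on the pci_configure_ltr() rework above: the ordering is the point. An LTR enable left behind by platform firmware is honoured and recorded even when the OS does not own the feature; the OS only programs LTR itself when it owns it and the upstream path already carries it. A condensed restatement of that decision flow with a simplified device model (not struct pci_dev; all names illustrative):

#include <stdbool.h>

struct fake_dev {
	bool is_root_port;
	bool has_ltr_cap;	/* PCI_EXP_DEVCAP2_LTR */
	bool ltr_enabled;	/* PCI_EXP_DEVCTL2_LTR_EN as read from hardware */
	bool ltr_path;		/* LTR enabled on every hop up to the Root Complex */
	struct fake_dev *upstream;
};

static void configure_ltr(struct fake_dev *dev, bool host_native_ltr)
{
	if (!dev->has_ltr_cap)
		return;

	if (dev->ltr_enabled) {
		/* Platform already enabled LTR: just record the path state. */
		if (dev->is_root_port ||
		    (dev->upstream && dev->upstream->ltr_path))
			dev->ltr_path = true;
		return;
	}

	if (!host_native_ltr)
		return;

	/*
	 * PCIe r4.0, sec 6.18: only enable LTR in a device when the Root
	 * Complex and all intermediate Switches support it.
	 */
	if (dev->is_root_port ||
	    (dev->upstream && dev->upstream->ltr_path)) {
		dev->ltr_enabled = true;	/* would set PCI_EXP_DEVCTL2_LTR_EN */
		dev->ltr_path = true;
	}
}

int main(void)
{
	struct fake_dev root = { .is_root_port = true, .has_ltr_cap = true };
	struct fake_dev ep = { .has_ltr_cap = true, .upstream = &root };

	configure_ltr(&root, true);
	configure_ltr(&ep, true);	/* endpoint inherits the enabled path */
	return ep.ltr_path ? 0 : 1;
}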