author     Linus Torvalds <torvalds@linux-foundation.org>  2023-02-24 16:51:40 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2023-02-24 16:51:40 -0800
commit     90ddb3f03418cce0d83c415c0c1d470cf524ba46 (patch)
tree       b61a296898b66941cb77819230fa4df2d975a5b5 /drivers/dma
parent     8cbd92339db08b19b93d1637e5799ff2a8dddfd2 (diff)
parent     3eb5d0f26f4ea604e83ca499a72c0d33638f4765 (diff)
Merge tag 'pci-v6.3-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Rework portdrv shutdown so it disables interrupts but doesn't
disable bus mastering, since disabling bus mastering leads to hangs
on Loongson LS7A
- Add mechanism to prevent Max_Read_Request_Size (MRRS) increases,
again to avoid hardware issues on Loongson LS7A (and likely other
devices based on DesignWare IP); a short MRRS sketch follows this list
- Ignore devices with a firmware (DT or ACPI) node that says the
device is disabled
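A minimal sketch of what an MRRS cap means for an ordinary driver (this is
not the LS7A quirk itself, and example_tune_mrrs() is only an illustrative
name): request a larger Max_Read_Request_Size with the long-standing
pcie_set_readrq() helper, then re-read the effective value with
pcie_get_readrq(), since the platform may now refuse the increase.

    #include <linux/pci.h>

    /* Illustrative only (not the LS7A quirk itself): ask for a larger
     * Max_Read_Request_Size, then report the value that actually took
     * effect, since the platform may refuse the increase.
     */
    static void example_tune_mrrs(struct pci_dev *pdev)
    {
            int ret;

            ret = pcie_set_readrq(pdev, 4096);      /* request 4 KB MRRS */
            if (ret)
                    pci_warn(pdev, "failed to set MRRS: %d\n", ret);

            /* The effective value may be smaller than what was requested */
            pci_info(pdev, "effective MRRS: %d bytes\n", pcie_get_readrq(pdev));
    }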
Resource management:
- Distribute spare resources to unconfigured hotplug bridges at
boot-time (not just when hot-adding such a bridge), which makes
hot-adding devices to docks work better. Tried this in v6.1 but had
to revert for regressions, so try again
- Fix root bus issue that dropped resources that happened to end
at 0, e.g., [bus 00]
PCI device hotplug:
- Remove device locking when marking device as disconnected so this
doesn't have to wait for concurrent driver bind/unbind to complete
- Quirk more Qualcomm bridges that don't fully implement the PCIe
Slot Status 'Command Completed' bit
Power management:
- Account for _S0W of the target bridge in acpi_pci_bridge_d3() so we
don't miss hot-add notifications for USB4 docks, Thunderbolt, etc
Reset:
- Observe the required delay after reset, e.g., when resuming from
system sleep, regardless of whether a bridge can suspend to D3cold at
runtime
- Wait for secondary bus to become ready after a bridge reset
Virtualization:
- Avoid FLR on some AMD FCH AHCI adapters where it doesn't work
- Allow independent IOMMU groups for some Wangxun NICs that prevent
peer-to-peer transactions but don't advertise an ACS Capability
Error handling:
- Configure End-to-End CRC (ECRC) only if Linux owns the AER
Capability
- Remove redundant Device Control Error Reporting Enable in the AER
service driver since this is already done for all devices during
enumeration
ASPM:
- Add pci_enable_link_state() interface to allow drivers to enable
ASPM link state
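A minimal sketch of the new interface, assuming the pci_enable_link_state()
signature added by this series together with the existing PCIE_LINK_STATE_*
flags; this is roughly what the Intel VMD quirk mentioned below does for the
devices it enumerates when the BIOS has not configured ASPM.

    #include <linux/pci.h>

    /* Sketch: opt a device's link into all ASPM states the hardware
     * supports; aspm_capable_dev is a placeholder for a device whose
     * upstream link the caller is allowed to manage.
     */
    static void example_enable_aspm(struct pci_dev *aspm_capable_dev)
    {
            int ret;

            ret = pci_enable_link_state(aspm_capable_dev, PCIE_LINK_STATE_ALL);
            if (ret)
                    pci_info(aspm_capable_dev, "ASPM not enabled: %d\n", ret);
    }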
Endpoint framework:
- Move dra7xx and tegra194 linkup processing from hard IRQ to
threaded IRQ handler
- Add a separate lock for the endpoint controller's list of endpoint
function drivers to prevent deadlocks in callbacks
- Pass events from endpoint controller to endpoint function drivers
via callbacks instead of notifiers
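A hedged sketch of the callback-based event delivery, loosely modeled on the
pci-epf-test conversion in this series; the struct pci_epc_event_ops members
and the probe wiring shown here are assumptions meant to illustrate the shape
of the change, not a verified API reference.

    #include <linux/pci-epc.h>
    #include <linux/pci-epf.h>

    /* Assumed callback names, per the pci-epf-test conversion */
    static int my_epf_core_init(struct pci_epf *epf)
    {
            /* (Re)program BARs, MSI, etc. once the controller core is ready */
            return 0;
    }

    static int my_epf_link_up(struct pci_epf *epf)
    {
            /* Start whatever work needs an established link */
            return 0;
    }

    static const struct pci_epc_event_ops my_epf_event_ops = {
            .core_init = my_epf_core_init,
            .link_up   = my_epf_link_up,
    };

    static int my_epf_probe(struct pci_epf *epf)
    {
            /* Point the function driver at its ops instead of registering
             * a notifier block as before
             */
            epf->event_ops = &my_epf_event_ops;
            return 0;
    }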
Synopsys DesignWare eDMA controller driver (acked by Vinod):
- Fix CPU vs PCI address issues (a translation sketch follows this list)
- Fix source vs destination address issues
- Fix issues with interleaved transfer semantics
- Fix channel count initialization issue (issue still exists in
several other drivers)
- Clean up and improve debugfs usage so it will work on platforms
with several eDMA devices
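The CPU vs PCI address fix comes down to translating a CPU physical address
into the bus address the remote eDMA engine must emit. A minimal sketch of
that translation using pcibios_resource_to_bus(), which is the approach the
dw-edma PCIe glue in this series takes (the helper name here is illustrative):

    #include <linux/pci.h>

    /* Translate a CPU physical address into the PCI bus address the
     * eDMA engine has to use; same approach as dw-edma-pcie.c below.
     */
    static u64 example_cpu_to_pci_addr(struct pci_dev *pdev, phys_addr_t cpu_addr)
    {
            struct pci_bus_region region;
            struct resource res = {
                    .flags = IORESOURCE_MEM,
                    .start = cpu_addr,
                    .end   = cpu_addr,
            };

            pcibios_resource_to_bus(pdev->bus, &region, &res);
            return region.start;
    }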
Baikal T-1 PCIe controller driver:
- Set a 64-bit DMA mask
Freescale i.MX6 PCIe controller driver:
- Add i.MX8MM, i.MX8MQ, i.MX8MP endpoint mode DT binding and driver
support
Intel VMD host bridge driver:
- Add quirk to configure PCIe ASPM and LTR. This is normally done by
BIOS, and will be for future products
Marvell MVEBU PCIe controller driver:
- Mark this driver as broken in Kconfig since bugs prevent its daily
usage
MediaTek MT7621 PCIe controller driver:
- Delay PHY port initialization to improve boot reliability for ZBT
WE1326, ZBT WF3526-P, and some Netgear models
Qualcomm PCIe controller driver:
- Add MSM8998 DT compatible string
- Unify MSM8996 and MSM8998 clock orderings
- Add SM8350 DT binding and driver support
- Add IPQ8074 Gen3 DT binding and driver support
- Correct qcom,perst-regs in DT binding
- Add qcom_pcie_host_deinit() so the PHY is powered off and
regulators and clocks are disabled on late host-init errors
Socionext UniPhier Pro5 controller driver:
- Clean up uniphier-ep reg, clocks, resets, and their names in DT
binding
Synopsys DesignWare PCIe controller driver:
- Restrict coherent DMA mask to 32 bits for MSI, but allow controller
drivers to set a 64-bit streaming DMA mask (see the mask-split sketch
after this list)
- Add eDMA engine support in both Root Port and Endpoint controllers
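A minimal sketch of the mask split described above (example_set_dma_masks()
is an illustrative name): coherent allocations, which back the MSI target
address, stay below 4 GB, while streaming DMA used for the actual transfers
may use full 64-bit addresses.

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    static int example_set_dma_masks(struct device *dev)
    {
            int ret;

            /* 64-bit streaming DMA for the data transfers themselves */
            ret = dma_set_mask(dev, DMA_BIT_MASK(64));
            if (ret)
                    return ret;

            /* Keep coherent (MSI target) allocations below 4 GB */
            return dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
    }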
Miscellaneous:
- Remove MODULE_LICENSE from boolean drivers so they don't look like
modules, and so modprobe can correctly complain when asked to load them"
* tag 'pci-v6.3-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (86 commits)
PCI: dwc: Add Root Port and Endpoint controller eDMA engine support
PCI: bt1: Set 64-bit DMA mask
PCI: dwc: Restrict only coherent DMA mask for MSI address allocation
dmaengine: dw-edma: Prepare dw_edma_probe() for builtin callers
dmaengine: dw-edma: Depend on DW_EDMA instead of selecting it
dmaengine: dw-edma: Add mem-mapped LL-entries support
PCI: Remove MODULE_LICENSE so boolean drivers don't look like modules
PCI: hv: Drop duplicate PCI_MSI dependency
PCI/P2PDMA: Annotate RCU dereference
PCI/sysfs: Constify struct kobj_type pci_slot_ktype
PCI: hotplug: Allow marking devices as disconnected during bind/unbind
PCI: pciehp: Add Qualcomm quirk for Command Completed erratum
PCI: qcom: Add IPQ8074 Gen3 port support
dt-bindings: PCI: qcom: Add IPQ8074 Gen3 port
dt-bindings: PCI: qcom: Sort compatibles alphabetically
PCI: qcom: Fix host-init error handling
PCI: qcom: Add SM8350 support
dt-bindings: PCI: qcom: Add SM8350
dt-bindings: PCI: qcom-ep: Correct qcom,perst-regs
dt-bindings: PCI: qcom: Unify MSM8996 and MSM8998 clock order
...
Diffstat (limited to 'drivers/dma')
-rw-r--r--  drivers/dma/dw-edma/Kconfig                 5
-rw-r--r--  drivers/dma/dw-edma/dw-edma-core.c        196
-rw-r--r--  drivers/dma/dw-edma/dw-edma-core.h         10
-rw-r--r--  drivers/dma/dw-edma/dw-edma-pcie.c         56
-rw-r--r--  drivers/dma/dw-edma/dw-edma-v0-core.c     100
-rw-r--r--  drivers/dma/dw-edma/dw-edma-v0-core.h       1
-rw-r--r--  drivers/dma/dw-edma/dw-edma-v0-debugfs.c  374
-rw-r--r--  drivers/dma/dw-edma/dw-edma-v0-debugfs.h    5
8 files changed, 369 insertions, 378 deletions
diff --git a/drivers/dma/dw-edma/Kconfig b/drivers/dma/dw-edma/Kconfig index 7ff17b2db6a1..2b6f2679508d 100644 --- a/drivers/dma/dw-edma/Kconfig +++ b/drivers/dma/dw-edma/Kconfig @@ -9,11 +9,14 @@ config DW_EDMA Support the Synopsys DesignWare eDMA controller, normally implemented on endpoints SoCs. +if DW_EDMA + config DW_EDMA_PCIE tristate "Synopsys DesignWare eDMA PCIe driver" depends on PCI && PCI_MSI - select DW_EDMA help Provides a glue-logic between the Synopsys DesignWare eDMA controller and an endpoint PCIe device. This also serves as a reference design to whom desires to use this IP. + +endif # DW_EDMA diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index c54b24ff5206..1906a836f0aa 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -39,6 +39,17 @@ struct dw_edma_desc *vd2dw_edma_desc(struct virt_dma_desc *vd) return container_of(vd, struct dw_edma_desc, vd); } +static inline +u64 dw_edma_get_pci_address(struct dw_edma_chan *chan, phys_addr_t cpu_addr) +{ + struct dw_edma_chip *chip = chan->dw->chip; + + if (chip->ops->pci_address) + return chip->ops->pci_address(chip->dev, cpu_addr); + + return cpu_addr; +} + static struct dw_edma_burst *dw_edma_alloc_burst(struct dw_edma_chunk *chunk) { struct dw_edma_burst *burst; @@ -197,6 +208,24 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan) desc->chunks_alloc--; } +static void dw_edma_device_caps(struct dma_chan *dchan, + struct dma_slave_caps *caps) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + + if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) { + if (chan->dir == EDMA_DIR_READ) + caps->directions = BIT(DMA_DEV_TO_MEM); + else + caps->directions = BIT(DMA_MEM_TO_DEV); + } else { + if (chan->dir == EDMA_DIR_WRITE) + caps->directions = BIT(DMA_DEV_TO_MEM); + else + caps->directions = BIT(DMA_MEM_TO_DEV); + } +} + static int dw_edma_device_config(struct dma_chan *dchan, struct dma_slave_config *config) { @@ -327,11 +356,12 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) { struct dw_edma_chan *chan = dchan2dw_edma_chan(xfer->dchan); enum dma_transfer_direction dir = xfer->direction; - phys_addr_t src_addr, dst_addr; struct scatterlist *sg = NULL; struct dw_edma_chunk *chunk; struct dw_edma_burst *burst; struct dw_edma_desc *desc; + u64 src_addr, dst_addr; + size_t fsz = 0; u32 cnt = 0; int i; @@ -381,9 +411,9 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) if (xfer->xfer.sg.len < 1) return NULL; } else if (xfer->type == EDMA_XFER_INTERLEAVED) { - if (!xfer->xfer.il->numf) + if (!xfer->xfer.il->numf || xfer->xfer.il->frame_size < 1) return NULL; - if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0) + if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc) return NULL; } else { return NULL; @@ -405,16 +435,19 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) dst_addr = chan->config.dst_addr; } + if (dir == DMA_DEV_TO_MEM) + src_addr = dw_edma_get_pci_address(chan, (phys_addr_t)src_addr); + else + dst_addr = dw_edma_get_pci_address(chan, (phys_addr_t)dst_addr); + if (xfer->type == EDMA_XFER_CYCLIC) { cnt = xfer->xfer.cyclic.cnt; } else if (xfer->type == EDMA_XFER_SCATTER_GATHER) { cnt = xfer->xfer.sg.len; sg = xfer->xfer.sg.sgl; } else if (xfer->type == EDMA_XFER_INTERLEAVED) { - if (xfer->xfer.il->numf > 0) - cnt = xfer->xfer.il->numf; - else - cnt = xfer->xfer.il->frame_size; + cnt = xfer->xfer.il->numf * xfer->xfer.il->frame_size; + fsz = xfer->xfer.il->frame_size; } for (i = 0; i < cnt; i++) { 
@@ -436,7 +469,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) else if (xfer->type == EDMA_XFER_SCATTER_GATHER) burst->sz = sg_dma_len(sg); else if (xfer->type == EDMA_XFER_INTERLEAVED) - burst->sz = xfer->xfer.il->sgl[i].size; + burst->sz = xfer->xfer.il->sgl[i % fsz].size; chunk->ll_region.sz += burst->sz; desc->alloc_sz += burst->sz; @@ -455,6 +488,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) * and destination addresses are increased * by the same portion (data length) */ + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { + burst->dar = dst_addr; } } else { burst->dar = dst_addr; @@ -470,25 +505,24 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer) * and destination addresses are increased * by the same portion (data length) */ + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { + burst->sar = src_addr; } } if (xfer->type == EDMA_XFER_SCATTER_GATHER) { sg = sg_next(sg); - } else if (xfer->type == EDMA_XFER_INTERLEAVED && - xfer->xfer.il->frame_size > 0) { + } else if (xfer->type == EDMA_XFER_INTERLEAVED) { struct dma_interleaved_template *il = xfer->xfer.il; - struct data_chunk *dc = &il->sgl[i]; + struct data_chunk *dc = &il->sgl[i % fsz]; - if (il->src_sgl) { - src_addr += burst->sz; + src_addr += burst->sz; + if (il->src_sgl) src_addr += dmaengine_get_src_icg(il, dc); - } - if (il->dst_sgl) { - dst_addr += burst->sz; + dst_addr += burst->sz; + if (il->dst_sgl) dst_addr += dmaengine_get_dst_icg(il, dc); - } } } @@ -701,92 +735,76 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan) } } -static int dw_edma_channel_setup(struct dw_edma *dw, bool write, - u32 wr_alloc, u32 rd_alloc) +static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) { struct dw_edma_chip *chip = dw->chip; - struct dw_edma_region *dt_region; struct device *dev = chip->dev; struct dw_edma_chan *chan; struct dw_edma_irq *irq; struct dma_device *dma; - u32 alloc, off_alloc; - u32 i, j, cnt; - int err = 0; + u32 i, ch_cnt; u32 pos; - if (write) { - i = 0; - cnt = dw->wr_ch_cnt; - dma = &dw->wr_edma; - alloc = wr_alloc; - off_alloc = 0; - } else { - i = dw->wr_ch_cnt; - cnt = dw->rd_ch_cnt; - dma = &dw->rd_edma; - alloc = rd_alloc; - off_alloc = wr_alloc; - } + ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt; + dma = &dw->dma; INIT_LIST_HEAD(&dma->channels); - for (j = 0; (alloc || dw->nr_irqs == 1) && j < cnt; j++, i++) { + + for (i = 0; i < ch_cnt; i++) { chan = &dw->chan[i]; - dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL); - if (!dt_region) - return -ENOMEM; + chan->dw = dw; - chan->vc.chan.private = dt_region; + if (i < dw->wr_ch_cnt) { + chan->id = i; + chan->dir = EDMA_DIR_WRITE; + } else { + chan->id = i - dw->wr_ch_cnt; + chan->dir = EDMA_DIR_READ; + } - chan->dw = dw; - chan->id = j; - chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ; chan->configured = false; chan->request = EDMA_REQ_NONE; chan->status = EDMA_ST_IDLE; - if (write) - chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ); + if (chan->dir == EDMA_DIR_WRITE) + chan->ll_max = (chip->ll_region_wr[chan->id].sz / EDMA_LL_SZ); else - chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ); + chan->ll_max = (chip->ll_region_rd[chan->id].sz / EDMA_LL_SZ); chan->ll_max -= 1; dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n", - write ? "write" : "read", j, chan->ll_max); + chan->dir == EDMA_DIR_WRITE ? 
"write" : "read", + chan->id, chan->ll_max); if (dw->nr_irqs == 1) pos = 0; + else if (chan->dir == EDMA_DIR_WRITE) + pos = chan->id % wr_alloc; else - pos = off_alloc + (j % alloc); + pos = wr_alloc + chan->id % rd_alloc; irq = &dw->irq[pos]; - if (write) - irq->wr_mask |= BIT(j); + if (chan->dir == EDMA_DIR_WRITE) + irq->wr_mask |= BIT(chan->id); else - irq->rd_mask |= BIT(j); + irq->rd_mask |= BIT(chan->id); irq->dw = dw; memcpy(&chan->msi, &irq->msi, sizeof(chan->msi)); dev_vdbg(dev, "MSI:\t\tChannel %s[%u] addr=0x%.8x%.8x, data=0x%.8x\n", - write ? "write" : "read", j, + chan->dir == EDMA_DIR_WRITE ? "write" : "read", chan->id, chan->msi.address_hi, chan->msi.address_lo, chan->msi.data); chan->vc.desc_free = vchan_free_desc; - vchan_init(&chan->vc, dma); + chan->vc.chan.private = chan->dir == EDMA_DIR_WRITE ? + &dw->chip->dt_region_wr[chan->id] : + &dw->chip->dt_region_rd[chan->id]; - if (write) { - dt_region->paddr = chip->dt_region_wr[j].paddr; - dt_region->vaddr = chip->dt_region_wr[j].vaddr; - dt_region->sz = chip->dt_region_wr[j].sz; - } else { - dt_region->paddr = chip->dt_region_rd[j].paddr; - dt_region->vaddr = chip->dt_region_rd[j].vaddr; - dt_region->sz = chip->dt_region_rd[j].sz; - } + vchan_init(&chan->vc, dma); dw_edma_v0_core_device_config(chan); } @@ -797,16 +815,16 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma_cap_set(DMA_CYCLIC, dma->cap_mask); dma_cap_set(DMA_PRIVATE, dma->cap_mask); dma_cap_set(DMA_INTERLEAVE, dma->cap_mask); - dma->directions = BIT(write ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV); + dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; - dma->chancnt = cnt; /* Set DMA channel callbacks */ dma->dev = chip->dev; dma->device_alloc_chan_resources = dw_edma_alloc_chan_resources; dma->device_free_chan_resources = dw_edma_free_chan_resources; + dma->device_caps = dw_edma_device_caps; dma->device_config = dw_edma_device_config; dma->device_pause = dw_edma_device_pause; dma->device_resume = dw_edma_device_resume; @@ -820,9 +838,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma_set_max_seg_size(dma->dev, U32_MAX); /* Register DMA device */ - err = dma_async_device_register(dma); - - return err; + return dma_async_device_register(dma); } static inline void dw_edma_dec_irq_alloc(int *nr_irqs, u32 *alloc, u16 cnt) @@ -893,10 +909,8 @@ static int dw_edma_irq_request(struct dw_edma *dw, dw_edma_interrupt_read, IRQF_SHARED, dw->name, &dw->irq[i]); - if (err) { - dw->nr_irqs = i; - return err; - } + if (err) + goto err_irq_free; if (irq_get_msi_desc(irq)) get_cached_msi_msg(irq, &dw->irq[i].msi); @@ -905,6 +919,14 @@ static int dw_edma_irq_request(struct dw_edma *dw, dw->nr_irqs = i; } + return 0; + +err_irq_free: + for (i--; i >= 0; i--) { + irq = chip->ops->irq_vector(dev, i); + free_irq(irq, &dw->irq[i]); + } + return err; } @@ -951,7 +973,8 @@ int dw_edma_probe(struct dw_edma_chip *chip) if (!dw->chan) return -ENOMEM; - snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%d", chip->id); + snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%s", + dev_name(chip->dev)); /* Disable eDMA, only to establish the ideal initial conditions */ dw_edma_v0_core_off(dw); @@ -961,13 +984,8 @@ int dw_edma_probe(struct dw_edma_chip *chip) if (err) return err; - /* Setup write channels */ - err = dw_edma_channel_setup(dw, true, wr_alloc, rd_alloc); - if (err) 
- goto err_irq_free; - - /* Setup read channels */ - err = dw_edma_channel_setup(dw, false, wr_alloc, rd_alloc); + /* Setup write/read channels */ + err = dw_edma_channel_setup(dw, wr_alloc, rd_alloc); if (err) goto err_irq_free; @@ -993,6 +1011,10 @@ int dw_edma_remove(struct dw_edma_chip *chip) struct dw_edma *dw = chip->dw; int i; + /* Skip removal if no private data found */ + if (!dw) + return -ENODEV; + /* Disable eDMA */ dw_edma_v0_core_off(dw); @@ -1001,23 +1023,13 @@ int dw_edma_remove(struct dw_edma_chip *chip) free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]); /* Deregister eDMA device */ - dma_async_device_unregister(&dw->wr_edma); - list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels, + dma_async_device_unregister(&dw->dma); + list_for_each_entry_safe(chan, _chan, &dw->dma.channels, vc.chan.device_node) { tasklet_kill(&chan->vc.task); list_del(&chan->vc.chan.device_node); } - dma_async_device_unregister(&dw->rd_edma); - list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels, - vc.chan.device_node) { - tasklet_kill(&chan->vc.task); - list_del(&chan->vc.chan.device_node); - } - - /* Turn debugfs off */ - dw_edma_v0_core_debugfs_off(dw); - return 0; } EXPORT_SYMBOL_GPL(dw_edma_remove); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index 85df2d511907..0ab2b6dba880 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -96,12 +96,11 @@ struct dw_edma_irq { }; struct dw_edma { - char name[20]; + char name[32]; - struct dma_device wr_edma; - u16 wr_ch_cnt; + struct dma_device dma; - struct dma_device rd_edma; + u16 wr_ch_cnt; u16 rd_ch_cnt; struct dw_edma_irq *irq; @@ -112,9 +111,6 @@ struct dw_edma { raw_spinlock_t lock; /* Only for legacy */ struct dw_edma_chip *chip; -#ifdef CONFIG_DEBUG_FS - struct dentry *debugfs; -#endif /* CONFIG_DEBUG_FS */ }; struct dw_edma_sg { diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index d6b5e2463884..2b40f2b44f5e 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -95,8 +95,23 @@ static int dw_edma_pcie_irq_vector(struct device *dev, unsigned int nr) return pci_irq_vector(to_pci_dev(dev), nr); } +static u64 dw_edma_pcie_address(struct device *dev, phys_addr_t cpu_addr) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct pci_bus_region region; + struct resource res = { + .flags = IORESOURCE_MEM, + .start = cpu_addr, + .end = cpu_addr, + }; + + pcibios_resource_to_bus(pdev->bus, ®ion, &res); + return region.start; +} + static const struct dw_edma_core_ops dw_edma_pcie_core_ops = { .irq_vector = dw_edma_pcie_irq_vector, + .pci_address = dw_edma_pcie_address, }; static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev, @@ -207,7 +222,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, /* Data structure initialization */ chip->dev = dev; - chip->id = pdev->devfn; chip->mf = vsec_data.mf; chip->nr_irqs = nr_irqs; @@ -226,21 +240,21 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, struct dw_edma_block *ll_block = &vsec_data.ll_wr[i]; struct dw_edma_block *dt_block = &vsec_data.dt_wr[i]; - ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar]; - if (!ll_region->vaddr) + ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar]; + if (!ll_region->vaddr.io) return -ENOMEM; - ll_region->vaddr += ll_block->off; - ll_region->paddr = pdev->resource[ll_block->bar].start; + ll_region->vaddr.io += ll_block->off; + ll_region->paddr = pci_bus_address(pdev, ll_block->bar); 
ll_region->paddr += ll_block->off; ll_region->sz = ll_block->sz; - dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar]; - if (!dt_region->vaddr) + dt_region->vaddr.io = pcim_iomap_table(pdev)[dt_block->bar]; + if (!dt_region->vaddr.io) return -ENOMEM; - dt_region->vaddr += dt_block->off; - dt_region->paddr = pdev->resource[dt_block->bar].start; + dt_region->vaddr.io += dt_block->off; + dt_region->paddr = pci_bus_address(pdev, dt_block->bar); dt_region->paddr += dt_block->off; dt_region->sz = dt_block->sz; } @@ -251,21 +265,21 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, struct dw_edma_block *ll_block = &vsec_data.ll_rd[i]; struct dw_edma_block *dt_block = &vsec_data.dt_rd[i]; - ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar]; - if (!ll_region->vaddr) + ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar]; + if (!ll_region->vaddr.io) return -ENOMEM; - ll_region->vaddr += ll_block->off; - ll_region->paddr = pdev->resource[ll_block->bar].start; + ll_region->vaddr.io += ll_block->off; + ll_region->paddr = pci_bus_address(pdev, ll_block->bar); ll_region->paddr += ll_block->off; ll_region->sz = ll_block->sz; - dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar]; - if (!dt_region->vaddr) + dt_region->vaddr.io = pcim_iomap_table(pdev)[dt_block->bar]; + if (!dt_region->vaddr.io) return -ENOMEM; - dt_region->vaddr += dt_block->off; - dt_region->paddr = pdev->resource[dt_block->bar].start; + dt_region->vaddr.io += dt_block->off; + dt_region->paddr = pci_bus_address(pdev, dt_block->bar); dt_region->paddr += dt_block->off; dt_region->sz = dt_block->sz; } @@ -289,24 +303,24 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.ll_wr[i].bar, vsec_data.ll_wr[i].off, chip->ll_region_wr[i].sz, - chip->ll_region_wr[i].vaddr, &chip->ll_region_wr[i].paddr); + chip->ll_region_wr[i].vaddr.io, &chip->ll_region_wr[i].paddr); pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.dt_wr[i].bar, vsec_data.dt_wr[i].off, chip->dt_region_wr[i].sz, - chip->dt_region_wr[i].vaddr, &chip->dt_region_wr[i].paddr); + chip->dt_region_wr[i].vaddr.io, &chip->dt_region_wr[i].paddr); } for (i = 0; i < chip->ll_rd_cnt; i++) { pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.ll_rd[i].bar, vsec_data.ll_rd[i].off, chip->ll_region_rd[i].sz, - chip->ll_region_rd[i].vaddr, &chip->ll_region_rd[i].paddr); + chip->ll_region_rd[i].vaddr.io, &chip->ll_region_rd[i].paddr); pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", i, vsec_data.dt_rd[i].bar, vsec_data.dt_rd[i].off, chip->dt_region_rd[i].sz, - chip->dt_region_rd[i].vaddr, &chip->dt_region_rd[i].paddr); + chip->dt_region_rd[i].vaddr.io, &chip->dt_region_rd[i].paddr); } pci_dbg(pdev, "Nr. 
IRQs:\t%u\n", chip->nr_irqs); diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 77e6cfe52e0a..72e79a0c0a4e 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -8,6 +8,8 @@ #include <linux/bitfield.h> +#include <linux/io-64-nonatomic-lo-hi.h> + #include "dw-edma-core.h" #include "dw-edma-v0-core.h" #include "dw-edma-v0-regs.h" @@ -53,8 +55,6 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) SET_32(dw, rd_##name, value); \ } while (0) -#ifdef CONFIG_64BIT - #define SET_64(dw, name, value) \ writeq(value, &(__dw_regs(dw)->name)) @@ -80,8 +80,6 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) SET_64(dw, rd_##name, value); \ } while (0) -#endif /* CONFIG_64BIT */ - #define SET_COMPAT(dw, name, value) \ writel(value, &(__dw_regs(dw)->type.unroll.name)) @@ -161,11 +159,6 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, #define GET_CH_32(dw, dir, ch, name) \ readl_ch(dw, dir, ch, &(__dw_ch_regs(dw, dir, ch)->name)) -#define SET_LL_32(ll, value) \ - writel(value, ll) - -#ifdef CONFIG_64BIT - static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, u64 value, void __iomem *addr) { @@ -192,7 +185,7 @@ static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, const void __iomem *addr) { - u32 value; + u64 value; if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; @@ -222,11 +215,6 @@ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, #define GET_CH_64(dw, dir, ch, name) \ readq_ch(dw, dir, ch, &(__dw_ch_regs(dw, dir, ch)->name)) -#define SET_LL_64(ll, value) \ - writeq(value, ll) - -#endif /* CONFIG_64BIT */ - /* eDMA management callbacks */ void dw_edma_v0_core_off(struct dw_edma *dw) { @@ -298,17 +286,53 @@ u32 dw_edma_v0_core_status_abort_int(struct dw_edma *dw, enum dw_edma_dir dir) GET_RW_32(dw, dir, int_status)); } +static void dw_edma_v0_write_ll_data(struct dw_edma_chunk *chunk, int i, + u32 control, u32 size, u64 sar, u64 dar) +{ + ptrdiff_t ofs = i * sizeof(struct dw_edma_v0_lli); + + if (chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) { + struct dw_edma_v0_lli *lli = chunk->ll_region.vaddr.mem + ofs; + + lli->control = control; + lli->transfer_size = size; + lli->sar.reg = sar; + lli->dar.reg = dar; + } else { + struct dw_edma_v0_lli __iomem *lli = chunk->ll_region.vaddr.io + ofs; + + writel(control, &lli->control); + writel(size, &lli->transfer_size); + writeq(sar, &lli->sar.reg); + writeq(dar, &lli->dar.reg); + } +} + +static void dw_edma_v0_write_ll_link(struct dw_edma_chunk *chunk, + int i, u32 control, u64 pointer) +{ + ptrdiff_t ofs = i * sizeof(struct dw_edma_v0_lli); + + if (chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) { + struct dw_edma_v0_llp *llp = chunk->ll_region.vaddr.mem + ofs; + + llp->control = control; + llp->llp.reg = pointer; + } else { + struct dw_edma_v0_llp __iomem *llp = chunk->ll_region.vaddr.io + ofs; + + writel(control, &llp->control); + writeq(pointer, &llp->llp.reg); + } +} + static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) { struct dw_edma_burst *child; struct dw_edma_chan *chan = chunk->chan; - struct dw_edma_v0_lli __iomem *lli; - struct dw_edma_v0_llp __iomem *llp; u32 control = 0, i = 0; int j; - lli = chunk->ll_region.vaddr; - if (chunk->cb) control = DW_EDMA_V0_CB; @@ -320,41 +344,16 @@ static 
void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) if (!(chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL)) control |= DW_EDMA_V0_RIE; } - /* Channel control */ - SET_LL_32(&lli[i].control, control); - /* Transfer size */ - SET_LL_32(&lli[i].transfer_size, child->sz); - /* SAR */ - #ifdef CONFIG_64BIT - SET_LL_64(&lli[i].sar.reg, child->sar); - #else /* CONFIG_64BIT */ - SET_LL_32(&lli[i].sar.lsb, lower_32_bits(child->sar)); - SET_LL_32(&lli[i].sar.msb, upper_32_bits(child->sar)); - #endif /* CONFIG_64BIT */ - /* DAR */ - #ifdef CONFIG_64BIT - SET_LL_64(&lli[i].dar.reg, child->dar); - #else /* CONFIG_64BIT */ - SET_LL_32(&lli[i].dar.lsb, lower_32_bits(child->dar)); - SET_LL_32(&lli[i].dar.msb, upper_32_bits(child->dar)); - #endif /* CONFIG_64BIT */ - i++; + + dw_edma_v0_write_ll_data(chunk, i++, control, child->sz, + child->sar, child->dar); } - llp = (void __iomem *)&lli[i]; control = DW_EDMA_V0_LLP | DW_EDMA_V0_TCB; if (!chunk->cb) control |= DW_EDMA_V0_CB; - /* Channel control */ - SET_LL_32(&llp->control, control); - /* Linked list */ - #ifdef CONFIG_64BIT - SET_LL_64(&llp->llp.reg, chunk->ll_region.paddr); - #else /* CONFIG_64BIT */ - SET_LL_32(&llp->llp.lsb, lower_32_bits(chunk->ll_region.paddr)); - SET_LL_32(&llp->llp.msb, upper_32_bits(chunk->ll_region.paddr)); - #endif /* CONFIG_64BIT */ + dw_edma_v0_write_ll_link(chunk, i, control, chunk->ll_region.paddr); } void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) @@ -504,8 +503,3 @@ void dw_edma_v0_core_debugfs_on(struct dw_edma *dw) { dw_edma_v0_debugfs_on(dw); } - -void dw_edma_v0_core_debugfs_off(struct dw_edma *dw) -{ - dw_edma_v0_debugfs_off(dw); -} diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.h b/drivers/dma/dw-edma/dw-edma-v0-core.h index 75aec6d31b21..ab96a1f48080 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.h +++ b/drivers/dma/dw-edma/dw-edma-v0-core.h @@ -23,6 +23,5 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first); int dw_edma_v0_core_device_config(struct dw_edma_chan *chan); /* eDMA debug fs callbacks */ void dw_edma_v0_core_debugfs_on(struct dw_edma *dw); -void dw_edma_v0_core_debugfs_off(struct dw_edma *dw); #endif /* _DW_EDMA_V0_CORE_H */ diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 5226c9014703..0745d9e7d259 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -13,76 +13,79 @@ #include "dw-edma-v0-regs.h" #include "dw-edma-core.h" -#define REGS_ADDR(name) \ - ((void __force *)®s->name) -#define REGISTER(name) \ - { #name, REGS_ADDR(name) } - -#define WR_REGISTER(name) \ - { #name, REGS_ADDR(wr_##name) } -#define RD_REGISTER(name) \ - { #name, REGS_ADDR(rd_##name) } - -#define WR_REGISTER_LEGACY(name) \ - { #name, REGS_ADDR(type.legacy.wr_##name) } +#define REGS_ADDR(dw, name) \ + ({ \ + struct dw_edma_v0_regs __iomem *__regs = (dw)->chip->reg_base; \ + \ + (void __iomem *)&__regs->name; \ + }) + +#define REGS_CH_ADDR(dw, name, _dir, _ch) \ + ({ \ + struct dw_edma_v0_ch_regs __iomem *__ch_regs; \ + \ + if ((dw)->chip->mf == EDMA_MF_EDMA_LEGACY) \ + __ch_regs = REGS_ADDR(dw, type.legacy.ch); \ + else if (_dir == EDMA_DIR_READ) \ + __ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].rd); \ + else \ + __ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].wr); \ + \ + (void __iomem *)&__ch_regs->name; \ + }) + +#define REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, name) } + +#define CTX_REGISTER(dw, name, dir, ch) \ + { dw, #name, REGS_CH_ADDR(dw, name, dir, ch), dir, ch } + 
+#define WR_REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, wr_##name) } +#define RD_REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, rd_##name) } + +#define WR_REGISTER_LEGACY(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.legacy.wr_##name) } #define RD_REGISTER_LEGACY(name) \ - { #name, REGS_ADDR(type.legacy.rd_##name) } + { dw, #name, REGS_ADDR(dw, type.legacy.rd_##name) } -#define WR_REGISTER_UNROLL(name) \ - { #name, REGS_ADDR(type.unroll.wr_##name) } -#define RD_REGISTER_UNROLL(name) \ - { #name, REGS_ADDR(type.unroll.rd_##name) } +#define WR_REGISTER_UNROLL(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.unroll.wr_##name) } +#define RD_REGISTER_UNROLL(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.unroll.rd_##name) } #define WRITE_STR "write" #define READ_STR "read" #define CHANNEL_STR "channel" #define REGISTERS_STR "registers" -static struct dw_edma *dw; -static struct dw_edma_v0_regs __iomem *regs; - -static struct { - void __iomem *start; - void __iomem *end; -} lim[2][EDMA_V0_MAX_NR_CH]; - -struct debugfs_entries { +struct dw_edma_debugfs_entry { + struct dw_edma *dw; const char *name; - dma_addr_t *reg; + void __iomem *reg; + enum dw_edma_dir dir; + u16 ch; }; static int dw_edma_debugfs_u32_get(void *data, u64 *val) { - void __iomem *reg = (void __force __iomem *)data; + struct dw_edma_debugfs_entry *entry = data; + struct dw_edma *dw = entry->dw; + void __iomem *reg = entry->reg; + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && - reg >= (void __iomem *)®s->type.legacy.ch) { - void __iomem *ptr = ®s->type.legacy.ch; - u32 viewport_sel = 0; + reg >= REGS_ADDR(dw, type.legacy.ch)) { unsigned long flags; - u16 ch; - - for (ch = 0; ch < dw->wr_ch_cnt; ch++) - if (lim[0][ch].start >= reg && reg < lim[0][ch].end) { - ptr += (reg - lim[0][ch].start); - goto legacy_sel_wr; - } - - for (ch = 0; ch < dw->rd_ch_cnt; ch++) - if (lim[1][ch].start >= reg && reg < lim[1][ch].end) { - ptr += (reg - lim[1][ch].start); - goto legacy_sel_rd; - } - - return 0; -legacy_sel_rd: - viewport_sel = BIT(31); -legacy_sel_wr: - viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, ch); + u32 viewport_sel; + + viewport_sel = entry->dir == EDMA_DIR_READ ? 
BIT(31) : 0; + viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, entry->ch); raw_spin_lock_irqsave(&dw->lock, flags); - writel(viewport_sel, ®s->type.legacy.viewport_sel); - *val = readl(ptr); + writel(viewport_sel, REGS_ADDR(dw, type.legacy.viewport_sel)); + *val = readl(reg); raw_spin_unlock_irqrestore(&dw->lock, flags); } else { @@ -93,222 +96,197 @@ legacy_sel_wr: } DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); -static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], - int nr_entries, struct dentry *dir) +static void dw_edma_debugfs_create_x32(struct dw_edma *dw, + const struct dw_edma_debugfs_entry ini[], + int nr_entries, struct dentry *dent) { + struct dw_edma_debugfs_entry *entries; int i; + entries = devm_kcalloc(dw->chip->dev, nr_entries, sizeof(*entries), + GFP_KERNEL); + if (!entries) + return; + for (i = 0; i < nr_entries; i++) { - if (!debugfs_create_file_unsafe(entries[i].name, 0444, dir, - entries[i].reg, &fops_x32)) - break; + entries[i] = ini[i]; + + debugfs_create_file_unsafe(entries[i].name, 0444, dent, + &entries[i], &fops_x32); } } -static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, - struct dentry *dir) +static void dw_edma_debugfs_regs_ch(struct dw_edma *dw, enum dw_edma_dir dir, + u16 ch, struct dentry *dent) { - int nr_entries; - const struct debugfs_entries debugfs_regs[] = { - REGISTER(ch_control1), - REGISTER(ch_control2), - REGISTER(transfer_size), - REGISTER(sar.lsb), - REGISTER(sar.msb), - REGISTER(dar.lsb), - REGISTER(dar.msb), - REGISTER(llp.lsb), - REGISTER(llp.msb), + struct dw_edma_debugfs_entry debugfs_regs[] = { + CTX_REGISTER(dw, ch_control1, dir, ch), + CTX_REGISTER(dw, ch_control2, dir, ch), + CTX_REGISTER(dw, transfer_size, dir, ch), + CTX_REGISTER(dw, sar.lsb, dir, ch), + CTX_REGISTER(dw, sar.msb, dir, ch), + CTX_REGISTER(dw, dar.lsb, dir, ch), + CTX_REGISTER(dw, dar.msb, dir, ch), + CTX_REGISTER(dw, llp.lsb, dir, ch), + CTX_REGISTER(dw, llp.msb, dir, ch), }; + int nr_entries; nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, dent); } -static void dw_edma_debugfs_regs_wr(struct dentry *dir) +static noinline_for_stack void +dw_edma_debugfs_regs_wr(struct dw_edma *dw, struct dentry *dent) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ - WR_REGISTER(engine_en), - WR_REGISTER(doorbell), - WR_REGISTER(ch_arb_weight.lsb), - WR_REGISTER(ch_arb_weight.msb), + WR_REGISTER(dw, engine_en), + WR_REGISTER(dw, doorbell), + WR_REGISTER(dw, ch_arb_weight.lsb), + WR_REGISTER(dw, ch_arb_weight.msb), /* eDMA interrupts registers */ - WR_REGISTER(int_status), - WR_REGISTER(int_mask), - WR_REGISTER(int_clear), - WR_REGISTER(err_status), - WR_REGISTER(done_imwr.lsb), - WR_REGISTER(done_imwr.msb), - WR_REGISTER(abort_imwr.lsb), - WR_REGISTER(abort_imwr.msb), - WR_REGISTER(ch01_imwr_data), - WR_REGISTER(ch23_imwr_data), - WR_REGISTER(ch45_imwr_data), - WR_REGISTER(ch67_imwr_data), - WR_REGISTER(linked_list_err_en), + WR_REGISTER(dw, int_status), + WR_REGISTER(dw, int_mask), + WR_REGISTER(dw, int_clear), + WR_REGISTER(dw, err_status), + WR_REGISTER(dw, done_imwr.lsb), + WR_REGISTER(dw, done_imwr.msb), + WR_REGISTER(dw, abort_imwr.lsb), + WR_REGISTER(dw, abort_imwr.msb), + WR_REGISTER(dw, ch01_imwr_data), + WR_REGISTER(dw, ch23_imwr_data), + WR_REGISTER(dw, ch45_imwr_data), + WR_REGISTER(dw, 
ch67_imwr_data), + WR_REGISTER(dw, linked_list_err_en), }; - const struct debugfs_entries debugfs_unroll_regs[] = { + const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ - WR_REGISTER_UNROLL(engine_chgroup), - WR_REGISTER_UNROLL(engine_hshake_cnt.lsb), - WR_REGISTER_UNROLL(engine_hshake_cnt.msb), - WR_REGISTER_UNROLL(ch0_pwr_en), - WR_REGISTER_UNROLL(ch1_pwr_en), - WR_REGISTER_UNROLL(ch2_pwr_en), - WR_REGISTER_UNROLL(ch3_pwr_en), - WR_REGISTER_UNROLL(ch4_pwr_en), - WR_REGISTER_UNROLL(ch5_pwr_en), - WR_REGISTER_UNROLL(ch6_pwr_en), - WR_REGISTER_UNROLL(ch7_pwr_en), + WR_REGISTER_UNROLL(dw, engine_chgroup), + WR_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb), + WR_REGISTER_UNROLL(dw, engine_hshake_cnt.msb), + WR_REGISTER_UNROLL(dw, ch0_pwr_en), + WR_REGISTER_UNROLL(dw, ch1_pwr_en), + WR_REGISTER_UNROLL(dw, ch2_pwr_en), + WR_REGISTER_UNROLL(dw, ch3_pwr_en), + WR_REGISTER_UNROLL(dw, ch4_pwr_en), + WR_REGISTER_UNROLL(dw, ch5_pwr_en), + WR_REGISTER_UNROLL(dw, ch6_pwr_en), + WR_REGISTER_UNROLL(dw, ch7_pwr_en), }; - struct dentry *regs_dir, *ch_dir; + struct dentry *regs_dent, *ch_dent; int nr_entries, i; char name[16]; - regs_dir = debugfs_create_dir(WRITE_STR, dir); - if (!regs_dir) - return; + regs_dent = debugfs_create_dir(WRITE_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); - dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, - regs_dir); + dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries, + regs_dent); } for (i = 0; i < dw->wr_ch_cnt; i++) { snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); - ch_dir = debugfs_create_dir(name, regs_dir); - if (!ch_dir) - return; - - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dir); + ch_dent = debugfs_create_dir(name, regs_dent); - lim[0][i].start = ®s->type.unroll.ch[i].wr; - lim[0][i].end = ®s->type.unroll.ch[i].padding_1[0]; + dw_edma_debugfs_regs_ch(dw, EDMA_DIR_WRITE, i, ch_dent); } } -static void dw_edma_debugfs_regs_rd(struct dentry *dir) +static noinline_for_stack void dw_edma_debugfs_regs_rd(struct dw_edma *dw, + struct dentry *dent) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ - RD_REGISTER(engine_en), - RD_REGISTER(doorbell), - RD_REGISTER(ch_arb_weight.lsb), - RD_REGISTER(ch_arb_weight.msb), + RD_REGISTER(dw, engine_en), + RD_REGISTER(dw, doorbell), + RD_REGISTER(dw, ch_arb_weight.lsb), + RD_REGISTER(dw, ch_arb_weight.msb), /* eDMA interrupts registers */ - RD_REGISTER(int_status), - RD_REGISTER(int_mask), - RD_REGISTER(int_clear), - RD_REGISTER(err_status.lsb), - RD_REGISTER(err_status.msb), - RD_REGISTER(linked_list_err_en), - RD_REGISTER(done_imwr.lsb), - RD_REGISTER(done_imwr.msb), - RD_REGISTER(abort_imwr.lsb), - RD_REGISTER(abort_imwr.msb), - RD_REGISTER(ch01_imwr_data), - RD_REGISTER(ch23_imwr_data), - RD_REGISTER(ch45_imwr_data), - RD_REGISTER(ch67_imwr_data), + RD_REGISTER(dw, int_status), + RD_REGISTER(dw, int_mask), + RD_REGISTER(dw, int_clear), + RD_REGISTER(dw, err_status.lsb), + RD_REGISTER(dw, err_status.msb), + RD_REGISTER(dw, linked_list_err_en), + RD_REGISTER(dw, done_imwr.lsb), + RD_REGISTER(dw, done_imwr.msb), + RD_REGISTER(dw, abort_imwr.lsb), + RD_REGISTER(dw, abort_imwr.msb), + RD_REGISTER(dw, ch01_imwr_data), + RD_REGISTER(dw, 
ch23_imwr_data), + RD_REGISTER(dw, ch45_imwr_data), + RD_REGISTER(dw, ch67_imwr_data), }; - const struct debugfs_entries debugfs_unroll_regs[] = { + const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ - RD_REGISTER_UNROLL(engine_chgroup), - RD_REGISTER_UNROLL(engine_hshake_cnt.lsb), - RD_REGISTER_UNROLL(engine_hshake_cnt.msb), - RD_REGISTER_UNROLL(ch0_pwr_en), - RD_REGISTER_UNROLL(ch1_pwr_en), - RD_REGISTER_UNROLL(ch2_pwr_en), - RD_REGISTER_UNROLL(ch3_pwr_en), - RD_REGISTER_UNROLL(ch4_pwr_en), - RD_REGISTER_UNROLL(ch5_pwr_en), - RD_REGISTER_UNROLL(ch6_pwr_en), - RD_REGISTER_UNROLL(ch7_pwr_en), + RD_REGISTER_UNROLL(dw, engine_chgroup), + RD_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb), + RD_REGISTER_UNROLL(dw, engine_hshake_cnt.msb), + RD_REGISTER_UNROLL(dw, ch0_pwr_en), + RD_REGISTER_UNROLL(dw, ch1_pwr_en), + RD_REGISTER_UNROLL(dw, ch2_pwr_en), + RD_REGISTER_UNROLL(dw, ch3_pwr_en), + RD_REGISTER_UNROLL(dw, ch4_pwr_en), + RD_REGISTER_UNROLL(dw, ch5_pwr_en), + RD_REGISTER_UNROLL(dw, ch6_pwr_en), + RD_REGISTER_UNROLL(dw, ch7_pwr_en), }; - struct dentry *regs_dir, *ch_dir; + struct dentry *regs_dent, *ch_dent; int nr_entries, i; char name[16]; - regs_dir = debugfs_create_dir(READ_STR, dir); - if (!regs_dir) - return; + regs_dent = debugfs_create_dir(READ_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); - dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, - regs_dir); + dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries, + regs_dent); } for (i = 0; i < dw->rd_ch_cnt; i++) { snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); - ch_dir = debugfs_create_dir(name, regs_dir); - if (!ch_dir) - return; - - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dir); + ch_dent = debugfs_create_dir(name, regs_dent); - lim[1][i].start = ®s->type.unroll.ch[i].rd; - lim[1][i].end = ®s->type.unroll.ch[i].padding_2[0]; + dw_edma_debugfs_regs_ch(dw, EDMA_DIR_READ, i, ch_dent); } } -static void dw_edma_debugfs_regs(void) +static void dw_edma_debugfs_regs(struct dw_edma *dw) { - const struct debugfs_entries debugfs_regs[] = { - REGISTER(ctrl_data_arb_prior), - REGISTER(ctrl), + const struct dw_edma_debugfs_entry debugfs_regs[] = { + REGISTER(dw, ctrl_data_arb_prior), + REGISTER(dw, ctrl), }; - struct dentry *regs_dir; + struct dentry *regs_dent; int nr_entries; - regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs); - if (!regs_dir) - return; + regs_dent = debugfs_create_dir(REGISTERS_STR, dw->dma.dbg_dev_root); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); - dw_edma_debugfs_regs_wr(regs_dir); - dw_edma_debugfs_regs_rd(regs_dir); + dw_edma_debugfs_regs_wr(dw, regs_dent); + dw_edma_debugfs_regs_rd(dw, regs_dent); } -void dw_edma_v0_debugfs_on(struct dw_edma *_dw) +void dw_edma_v0_debugfs_on(struct dw_edma *dw) { - dw = _dw; - if (!dw) - return; - - regs = dw->chip->reg_base; - if (!regs) - return; - - dw->debugfs = debugfs_create_dir(dw->name, NULL); - if (!dw->debugfs) + if (!debugfs_initialized()) return; - debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf); - debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); - debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, 
&dw->rd_ch_cnt); - - dw_edma_debugfs_regs(); -} - -void dw_edma_v0_debugfs_off(struct dw_edma *_dw) -{ - dw = _dw; - if (!dw) - return; + debugfs_create_u32("mf", 0444, dw->dma.dbg_dev_root, &dw->chip->mf); + debugfs_create_u16("wr_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->wr_ch_cnt); + debugfs_create_u16("rd_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->rd_ch_cnt); - debugfs_remove_recursive(dw->debugfs); - dw->debugfs = NULL; + dw_edma_debugfs_regs(dw); } diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h index 3391b86edf5a..fb3342d97d6d 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h @@ -13,15 +13,10 @@ #ifdef CONFIG_DEBUG_FS void dw_edma_v0_debugfs_on(struct dw_edma *dw); -void dw_edma_v0_debugfs_off(struct dw_edma *dw); #else static inline void dw_edma_v0_debugfs_on(struct dw_edma *dw) { } - -static inline void dw_edma_v0_debugfs_off(struct dw_edma *dw) -{ -} #endif /* CONFIG_DEBUG_FS */ #endif /* _DW_EDMA_V0_DEBUG_FS_H */ |