ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes
DMA cache coherency handling slightly more interesting. Rather than
being able to rely upon the CPU not accessing the DMA buffer until DMA
has completed, we now must expect that the cache could be loaded with
possibly stale data from the DMA buffer.

Where DMA involves data being transferred to the device, we clean the
cache before handing it over for DMA; otherwise we invalidate the buffer
to get rid of potential writebacks. On DMA completion, if data was
transferred from the device, we invalidate the buffer to get rid of
any stale speculative prefetches.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
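
A minimal sketch of the two ownership transitions described above. This is
illustrative only, not the kernel's actual helpers; cache_clean() and
cache_invalidate() are placeholder names for whatever clean and invalidate
operations the CPU provides.

  /* CPU hands the buffer to the device */
  static void dma_buf_cpu_to_dev(const void *start, size_t size,
                                 enum dma_data_direction dir)
  {
          if (dir == DMA_FROM_DEVICE)
                  /* drop dirty lines that could be written back over DMA data */
                  cache_invalidate(start, size);
          else
                  /* push CPU writes out so the device sees them */
                  cache_clean(start, size);
  }

  /* device hands the buffer back to the CPU */
  static void dma_buf_dev_to_cpu(const void *start, size_t size,
                                 enum dma_data_direction dir)
  {
          if (dir != DMA_TO_DEVICE)
                  /* discard lines speculatively prefetched while the
                     device owned the buffer */
                  cache_invalidate(start, size);
  }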
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
|
dma_cache_maint_contiguous is now simple enough to live inside
dma_cache_maint_page, so move it there.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
|
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API. This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache coherent DMA.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
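
A rough sketch of what explicit ownership buys (names and signatures here
are illustrative, not the actual ones in arch/arm/mm/dma-mapping.c): each
streaming mapping operation becomes a call to one of two ownership-transition
hooks, and all CPU cache handling lives behind them.

  /* ownership-transition hooks; the cache policy lives in here */
  static void __dma_buf_cpu_to_dev(const void *kaddr, size_t size,
                                   enum dma_data_direction dir);
  static void __dma_buf_dev_to_cpu(const void *kaddr, size_t size,
                                   enum dma_data_direction dir);

  dma_addr_t dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                            enum dma_data_direction dir)
  {
          __dma_buf_cpu_to_dev(cpu_addr, size, dir);      /* CPU -> device */
          return virt_to_dma(dev, cpu_addr);
  }

  void dma_unmap_single(struct device *dev, dma_addr_t handle, size_t size,
                        enum dma_data_direction dir)
  {
          __dma_buf_dev_to_cpu(dma_to_virt(dev, handle), size, dir); /* device -> CPU */
  }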
|
|
On ARMv7, it is invalid to map the same physical address multiple times
with different memory types. Since system RAM is already mapped as
'memory', subsequent remapping of it must retain this attribute.
However, DMA memory maps it as 'strongly ordered'. Fix this by introducing
'pgprot_dmacoherent()', which provides the necessary page table bits for
DMA mappings.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
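
A sketch of what such a macro can look like (the exact mask and bit names
live in arch/arm/include/asm/pgtable.h and may differ): instead of switching
the remapping to strongly ordered, only the cacheability of the existing
'memory' type is changed.

  /* illustrative definition; real mask/bit names may differ */
  #define pgprot_dmacoherent(prot) \
          __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_BUFFERABLE)

The DMA remapping code can then pass something like
pgprot_dmacoherent(pgprot_kernel) rather than a strongly ordered protection
value.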
|
|
It's unnecessary; x86 doesn't do it, and ALSA doesn't require it
anymore.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
|
|
This entirely separates the DMA coherent buffer remapping code from
the allocation code, and gets rid of the duplicate copy in the !MMU
section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
|
|
IXP23xx added support for dma_alloc_coherent() on DMA-coherent
architectures via a special case inside dma_alloc_coherent() itself.
That special case is a subset of what goes on in __dma_alloc(), and
there is no reason why dma_alloc_writecombine() should not be given
the same treatment (except, maybe, that IXP23xx doesn't use it).
We can better deal with this by moving the arch_is_coherent() test
inside __dma_alloc() and killing the code duplication.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
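
A simplified sketch of the resulting structure (helper names are
illustrative, and the real __dma_alloc() also deals with the handle,
sizing, and error paths):

  static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
                           gfp_t gfp, pgprot_t prot)
  {
          struct page *page = __dma_alloc_buffer(dev, size, gfp);

          if (!page)
                  return NULL;

          *handle = page_to_dma(dev, page);

          /* coherent hardware needs no special mapping; everyone else
             gets an uncached remapping of the same pages */
          if (arch_is_coherent())
                  return page_address(page);

          return __dma_alloc_remap(page, size, gfp, prot);
  }

Both dma_alloc_coherent() and dma_alloc_writecombine() then call
__dma_alloc() with a different pgprot, and the coherent special case is
written only once.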
|
|
No point wrapping the contents of this function with #ifdef CONFIG_MMU
when we can place it and the core_initcall() entirely within the
existing conditional block.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
|
|
We effectively have three implementations of dma_free_coherent() mixed up
in the code: the incoherent MMU, coherent MMU, and noMMU versions.
The coherent MMU and noMMU versions are actually functionally identical.
The incoherent MMU version is almost the same, but with the additional
step of unmapping the secondary mapping.
Separate out this additional step into __dma_free_remap() and simplify
the resulting dma_free_coherent() code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
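
The shape of the result, heavily simplified (not the exact code: the noMMU
build compiles __dma_free_remap() away, and dma_to_page() here is just
shorthand for turning the handle back into its struct page):

  void dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                         dma_addr_t handle)
  {
          size = PAGE_ALIGN(size);

          /* only the incoherent MMU case has a second, remapped mapping */
          if (!arch_is_coherent())
                  __dma_free_remap(cpu_addr, size);

          /* common to all three cases: give the pages back */
          __dma_free_buffer(dma_to_page(dev, handle), size);
  }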
|
|
The nommu version of dma_alloc_coherent was using kmalloc/kfree to manage
the memory. dma_alloc_coherent() is expected to work with a granularity
of a page, so this is wrong. Fix it by using the helper functions now
provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
|
|
The coherent-architecture version of dma_alloc_coherent() was using
kmalloc/kfree to manage the memory. dma_alloc_coherent() is expected to
work with a
granularity of a page, so this is wrong. Fix it by using the helper
functions now provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
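
Both of the fixes above amount to the same rule: the backing memory must
come from the page allocator, a whole number of naturally aligned pages at
a time, never from the slab allocator. A minimal sketch of such a helper
(illustrative, not the exact one provided by the series):

  static struct page *__dma_alloc_buffer(struct device *dev, size_t size,
                                         gfp_t gfp)
  {
          struct page *page = alloc_pages(gfp, get_order(size));

          if (page)
                  /* coherent buffers are handed out zeroed */
                  memset(page_address(page), 0, size);
          return page;
  }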
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
|
|
We were using GFP_DMA for masks other than 0xffffffff, which is
wrong when some masks are initialized to 0xffffffffffffffff.
This caused such masks to obtain memory from the precious DMA
pool.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
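
The essence of the fix, as a sketch (field access simplified): fall back to
ZONE_DMA only when the mask genuinely cannot cover the whole of the first
4GB, so an all-ones 64-bit mask no longer lands in the DMA pool.

  u64 mask = dev->coherent_dma_mask;

  if (mask < 0xffffffffULL)       /* previously: any mask other than 0xffffffff */
          gfp |= GFP_DMA;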
|
|
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
This is a helper to be used by the DMA mapping API to handle cache
maintenance for memory identified by a page structure instead of a
virtual address. Those pages may or may not be highmem pages, and
when they're highmem pages, they may or may not be virtually mapped.
When they're not mapped, there is no L1 cache to worry about. But
even in that case the L2 cache must be processed since unmapped highmem
pages can still be L2 cached.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
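
A sketch of that logic (simplified: kmap_high_get() only returns an address
if the page already has a kernel mapping, dma_cache_maint() stands for the
existing virtual-address L1 maintenance, and outer_cache_maint() is a
placeholder for the physically indexed L2 maintenance):

  static void dma_cache_maint_page(struct page *page, unsigned long off,
                                   size_t len, enum dma_data_direction dir)
  {
          if (!PageHighMem(page)) {
                  /* lowmem is always mapped: normal L1 maintenance by VA */
                  dma_cache_maint(page_address(page) + off, len, dir);
          } else {
                  void *vaddr = kmap_high_get(page);

                  if (vaddr) {
                          dma_cache_maint(vaddr + off, len, dir);
                          kunmap_high(page);
                  }
                  /* no kernel mapping: nothing of this page is in the L1 */
          }

          /* the outer (L2) cache is physically indexed, so it must be
             maintained whether or not the page was mapped */
          outer_cache_maint(page_to_phys(page) + off, len, dir);
  }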
|
|
The current use of these macros works well when the conversion is
entirely linear. In this case, we can be assured that the following
holds true:

        __va(p + s) - s = __va(p)

However, this is not always the case, especially when there is a
non-linear conversion (eg, when there is a 3.5GB hole in memory.)
In this case, if 's' is the size of the region (eg, PAGE_SIZE) and
'p' is the final page, the above is most definitely not true.
So, we must ensure that __va() and __pa() are only used with valid
kernel direct mapped RAM addresses. This patch tweaks the code
to achieve this.
Tested-by: Charles Moschel <fred99@carolina.rr.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
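
A concrete, purely hypothetical layout showing where the identity breaks
down:

  /*
   * Hypothetical memory map, for illustration only:
   *   bank 0: PA 0x00000000..0x1fffffff, direct-mapped at VA 0xc0000000
   *   hole:   PA 0x20000000..0xdfffffff, no RAM
   *   bank 1: PA 0xe0000000..,           mapped straight after bank 0
   *
   * With p = 0x1ffff000 (the last page of bank 0) and s = PAGE_SIZE:
   *   __va(p)   = 0xdffff000   valid, inside the direct mapping
   *   p + s     = 0x20000000   inside the hole: not kernel RAM at all
   *   __va(p + s) is meaningless, so __va(p + s) - s does not give __va(p)
   */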
|
|
Rename ARM's struct vm_region so that I can introduce my own global version
for NOMMU. It's feasible that the ARM version may wish to use my global one
instead.
The NOMMU vm_region struct defines areas of the physical memory map that are
under mmap. This may include chunks of RAM or regions of memory mapped
devices, such as flash. It is also used to retain copies of file content so
that shareable private memory mappings of files can be made. As such, it may
be compatible with what is described in the banner comment for ARM's vm_region
struct.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
As per the dma_unmap_* calls, we don't touch the cache when a DMA
buffer transitions from device to CPU ownership. Presently, no
problems have been identified with speculative cache prefetching,
which is itself a new feature in later architectures. We may
have to revisit the DMA API later for these architectures anyway.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
No point having two of these; dma_map_page() can do all the work
for us.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
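
The likely shape of the consolidation (a sketch; the real inline wrapper may
differ in detail): express the single-buffer mapping in terms of the
page-based one.

  static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
                                          size_t size,
                                          enum dma_data_direction dir)
  {
          return dma_map_page(dev, virt_to_page(cpu_addr),
                              offset_in_page(cpu_addr), size, dir);
  }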
|
|
Update the ARM DMA scatter gather APIs for the scatterlist changes.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
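
An illustration of the new iteration style only (error handling omitted):
walk the possibly chained list with for_each_sg() and use the sg accessors
rather than indexing a flat array.

  int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
                 enum dma_data_direction dir)
  {
          struct scatterlist *s;
          int i;

          for_each_sg(sg, s, nents, i)
                  s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
                                                s->length, dir);
          return nents;
  }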
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|