author	Guo Ren <guoren@linux.alibaba.com>	2023-07-12 10:03:20 -0400
committer	Guo Ren <guoren@linux.alibaba.com>	2023-08-10 22:09:25 -0400
commit	1362d15ffb59db65b2df354b548b7915686cb05c (patch)
tree	ab0fdfa57a573a2c61ecf0a8d2e291f3978d5b09 /arch/csky
parent	57427df33d5f4281e5b71d4e9afddacdbc622b86 (diff)
csky: pgtable: Invalidate stale I-cache lines in update_mmu_cache
The final icache flush was done in update_mmu_cache(), which runs after set_pte_at(). Thus, when CPU0 sets the pte, the other CPUs can see it before the icache-flush broadcast happens, so their I-caches may already hold stale VIPT cache lines. Once address translation is ready for the new page, they will use the stale data from the I-cache, not the fresh data from the D-cache.

The csky instruction cache is VIPT, so it needs the original virtual address to invalidate the virtual-address-indexed entries of the cache ways. The current implementation uses a temporary mapping mechanism, kmap_atomic(), which returns a new virtual address for the invalidation; the cache lines indexed by the original virtual address may therefore still be in the I-cache. So force the I-cache invalidation in update_mmu_cache(), and prevent the dcache flush when it is an EXEC page.

This bug was detected on a 4*C860 SMP system, and this patch passes the stress test.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Diffstat (limited to 'arch/csky')
-rw-r--r--	arch/csky/abiv2/cacheflush.c	4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 9923cd24db58..500eb8f69397 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -27,11 +27,9 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
addr = (unsigned long) kmap_atomic(page);
+ icache_inv_range(address, address + PAGE_SIZE);
dcache_wb_range(addr, addr + PAGE_SIZE);
- if (vma->vm_flags & VM_EXEC)
- icache_inv_range(addr, addr + PAGE_SIZE);
-
kunmap_atomic((void *) addr);
}