Date: Tue, 21 May 96 15:45:38 PDT
From: Bob Felderman
Subject: Re: Caches and DMA with PPro
I sent a message about this a few months ago; now I have conclusive evidence.
I have a network interface board that I can access from the user-level to reduce latency. Associated with the board is a block of kmalloc'd memory in the kernel that we also export to user-level apps by using mmap().
The user code can bcopy data into this "copy_block" and program the interface board to DMA data to the card and into the network. The reverse happens on receive.
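Roughly, the user-level side looks like the sketch below. The device name, mapping length and offsets are just placeholders for illustration, not our real interface; per the driver code at the end of this message, the copy_block sits 3 pages plus the LANai memory size into the mapping.

/* Sketch of the user-level path: map the board + copy_block, stage data,
 * then kick off the DMA.  Names and sizes here are placeholders. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define MAP_LEN           (1024 * 1024)            /* placeholder: LANai memory + copy_block */
#define COPY_BLOCK_OFFSET (3 * 4096 + 256 * 1024)  /* placeholder: 3 pages + LANai memory */

int main(void)
{
    char packet[1024];
    char *base, *copy_block;
    int fd = open("/dev/mlanai0", O_RDWR);         /* placeholder device node */

    if (fd < 0)
        return 1;

    /* Map the board plus the kmalloc'd copy_block exported by the driver's mmap(). */
    base = mmap(0, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED)
        return 1;
    copy_block = base + COPY_BLOCK_OFFSET;

    /* Stage an outgoing packet in the copy_block, then tell the board
     * (board-specific register pokes, omitted) to DMA it out to the net. */
    memset(packet, 0xab, sizeof(packet));
    memcpy(copy_block, packet, sizeof(packet));

    /* On receive, the board DMAs into the copy_block and we read it here --
     * this is the read that sees stale data on the PPro. */

    munmap(base, MAP_LEN);
    close(fd);
    return 0;
}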
This works just fine on all our Pentium machines.
On our Intel PPro reference platform (Aurora system: 200MHz PPro, model SP6AXD200MT32, serial# A057560453, Orion chipset, B0 stepping), user-level programs experience packet errors that are clearly due to stale data. When I disable the system caches via the BIOS, the program works perfectly (just as it does on the Pentium).
On other systems (like Sun SBus machines) we have to explicitly tag this "copy_block" as non-cacheable, or else we see the same bad behavior.
I guess there is some problem with the memory that is aliased (mmap()ed) to user level. There does NOT seem to be any problem when I run IP packets over the board, where the kernel is manipulating kernel virtual memory before/after the DMA; I've seen no checksum errors or other errors that would indicate that caching is broken in that path.
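If anyone wants to poke at the stale-data theory on a similar setup, the bluntest check I can think of would be to have the driver write back and invalidate the caches right before the user code reads freshly DMA'd data. This is just a sketch of that hack, not something in our driver, and obviously far too heavy for real use:

/* Sledgehammer test only: WBINVD writes back and invalidates all caches.
 * It is a privileged instruction, so it has to run in the kernel (e.g. in
 * the driver, just before telling the user process a receive completed). */
static inline void flush_all_caches(void)
{
    __asm__ __volatile__("wbinvd" : : : "memory");
}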
Thoughts?
Thanks, Bob
Here are some snippets of code from the setup and mmap functions.
/* kmalloc() the copy_block */
while ((lp->copy_block_origP == NULL) && (counter-- >= 0)) {
    if (lp->copy_block_origP == NULL) {
        lp->copy_block_origP = (char *) kmalloc(COPY_BLOCK_SIZE + 2 * PAGE_SIZE, GFP_DMA);
        lp->kmalloc_block = 1;
        PRINTF(0) ("myri_hw_init: get_dma_pages failed, used kmalloc\n");
        lp->copy_block_size = COPY_BLOCK_SIZE;
    }
}
/* align the start */
if (((unsigned) lp->copy_block_origP) & ALIGNPAGE) {
    lp->copy_blockP = (int *) (((u_long) lp->copy_block_origP + ALIGNPAGE) & ~(ALIGNPAGE));
} else {
    lp->copy_blockP = (int *) lp->copy_block_origP;
}
/* "reserve" the memory */
{
    int i;
    unsigned int start = (unsigned int) lp->copy_blockP;
    unsigned int end = start + lp->copy_block_size - 1;

    for (i = MAP_NR(start); i <= MAP_NR(end); i++) {
        mem_map_reserve(i);
    }
}
/* In the mmap call, remap the pages */
if (remap_page_range(vma_get_start(vma) + (3 * PAGE_SIZE) + lanai_memory_size(unit),
                     (int) virt_to_phys(mlanai_private[minor].myriP->copy_blockP),
                     mlanai_private[minor].myriP->copy_block_size,
                     vma_get_page_prot(vma))) {
    return -EAGAIN;
}
PRINTF(4) ("mapped copy_block\n");
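For what it's worth, the only fix I can think of is to do on the PPro what we already do on SBus and mark the user mapping explicitly uncacheable before the remap. This is untested guesswork; _PAGE_PCD and the pgprot fiddling below are my assumptions about the i386 headers, not code I've actually run:

/* Possible workaround, untested: set the page-level cache-disable bit
 * (PCD) in the protection used for the user mapping, so the alias
 * bypasses the caches entirely.  Assumes asm/pgtable.h defines _PAGE_PCD
 * and that vma_get_page_prot() returns a pgprot_t we can edit. */
pgprot_t prot = vma_get_page_prot(vma);

pgprot_val(prot) |= _PAGE_PCD;  /* accesses through this mapping bypass the cache */

if (remap_page_range(vma_get_start(vma) + (3 * PAGE_SIZE) + lanai_memory_size(unit),
                     (int) virt_to_phys(mlanai_private[minor].myriP->copy_blockP),
                     mlanai_private[minor].myriP->copy_block_size,
                     prot)) {
    return -EAGAIN;
}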