From: Segher Boessenkool <>
Subject: Re: Opinion on ordering of writel vs. stores to RAM
Date: Mon, 11 Sep 2006 02:54:29 +0200
>> - writel/readl become totally ordered (including vs. memory).
>>   Basically x86-like. Expensive (very expensive even on some
>>   architectures) but also very safe.
>
> This approach will minimize driver changes, and would imply the
> removal of some existing mmiowb() and wmb() macros.
Like I tried to explain already, in my competing approach no drivers would need changes either. And you could remove those macros as well (or rather their more-verbosely-saying-what-they're-doing, preferably bus-specific, replacements) -- but you'll face the wrath of those who care about the performance of those drivers on non-x86 platforms.
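To make the cost trade-off concrete: the sort of sequence these macros guard today is the classic descriptor-plus-doorbell handoff (a sketch only; desc, dma_addr, dev->regs and DOORBELL are made-up names). The explicit wmb() is what keeps the coherent-RAM stores ahead of the MMIO write on weakly ordered architectures; a totally ordered writel() would let you delete that line, but only by effectively hiding an equally heavy barrier inside every writel() the driver does.

    /* Fill a DMA descriptor in coherent memory, then ring the doorbell.
     * Without the wmb(), a weakly ordered CPU may let the doorbell
     * write overtake the descriptor stores.
     */
    desc->addr = cpu_to_le64(dma_addr);
    desc->len  = cpu_to_le32(len);
    wmb();                            /* RAM stores before the MMIO store */
    writel(1, dev->regs + DOORBELL);  /* device now fetches the descriptor */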
> This is what mmiowb() is supposed to be, though only for writes.
> I.e. two writes from different CPUs may arrive out of order if you
> don't use mmiowb() carefully. Do you also need a mmiorb() macro or
> just a stronger mmiob()?
I'd name this barrier pci_cpu_to_cpu_barrier() -- what it is supposed to do is order I/O accesses from the same device driver to the same device, issued from different CPUs. The same driver never runs concurrently on more than one CPU right now, which is a fine model.
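The canonical pattern such a barrier serves -- sketched here with made-up register names, and with today's mmiowb() standing in where pci_cpu_to_cpu_barrier() would go -- is lock-protected device access that may run on a different CPU each time:

    /* The lock serializes the critical sections; the barrier makes
     * sure this CPU's posted MMIO writes reach the adapter before a
     * subsequent lock holder's writes can overtake them.
     */
    spin_lock(&dev->lock);
    writel(cmd, dev->regs + CMD_REG);
    writel(1, dev->regs + GO_REG);
    mmiowb();     /* pci_cpu_to_cpu_barrier() in the proposed naming */
    spin_unlock(&dev->lock);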
I include "pci_" in the name so that we can distinguish between different bus types, which after all have different ordering rules. PCI is of course a very common bus type, which explains why there is an mmiowb() and no mmiorb() -- a read barrier is simply not needed on PCI (PCI MMIO reads are _always_ slow: non-posted accesses, in PCI terminology).
> mmiowb() could be written as io_to_io_write_barrier() if we wanted
> to be extra verbose. AIUI it's the same thing as smp_wmb() but for
> MMIO space.
Except that "main-memory" ("coherent domain") accesses are always atomic as far as this ordering is concerned -- issuing a transaction and its taking effect are not separately observable. For I/O this is different -- the whole point of mmiowb() was that, although without it the device drivers _start_ their transactions in just the right order, the order in which the I/O adapters see them might differ (because there are multiple paths from different CPUs to the same I/O adapter, or whatnot).
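A sketch of that failure mode with the barrier left out (names made up again; the lock hand-off itself works fine, it is only the in-flight MMIO that goes wrong):

    /* CPU 0 */
    spin_lock(&dev->lock);
    writel(a, dev->regs + REG);    /* posted write: _starts_ first...    */
    spin_unlock(&dev->lock);       /* orders the coherent domain only;   */
                                   /* 'a' may still be in flight         */

    /* CPU 1, the next lock holder */
    spin_lock(&dev->lock);
    writel(b, dev->regs + REG);    /* ...but 'b' may take a shorter path */
                                   /* and reach the adapter before 'a'   */
    spin_unlock(&dev->lock);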
Hence my proposal of calling it pci_cpu_to_cpu_barrier() -- what it orders is accesses from separate CPUs. Oh, and it's bus-specific, of course.
Segher