Date: Fri, 28 Feb 2014
From: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Subject: Re: [PATCH 2/2] aio, mem-hotplug: Add memory barrier to aio ring page migration.
(2014/02/27 23:57), Benjamin LaHaise wrote:
> On Thu, Feb 27, 2014 at 06:40:16PM +0800, Tang Chen wrote:
>> When doing aio ring page migration, we migrate the page and update
>> ctx->ring_pages[], like the following:
>>
>> aio_migratepage()
>> |-> migrate_page_copy(new, old)
>> | ...... /* Need barrier here */
>> |-> ctx->ring_pages[idx] = new
>>
>> Actually, we need a memory barrier between these two operations.
>> Otherwise, if ctx->ring_pages[] is updated before the memory copy due to
>> compiler optimization, other processes may access the not fully
>> initialized new ring page.
>>
>> So add an smp_wmb() to synchronize them.
>
> The smp_wmb() is not needed after you added the taking of the ctx->completion_lock
> lock, since all accesses to ring_pages are then protected by the spinlock.
> Why are you adding this, then? Or have you missed adding the lock somewhere?

Please see the following thread. It's an updated patch.

https://lkml.org/lkml/2014/2/27/237

aio_read_events_ring() accesses ring_pages without taking ctx->completion_lock.
If copying the old page's data to the new page is reordered to run after the new
page has been set in ctx->ring_pages, aio_read_events_ring() may not work correctly.

So smp_wmb() and smp_rmb() are needed.
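To make the pairing concrete, here is a minimal sketch of the two sides. It is
only an illustration of the ordering being relied on: the exact placement of
smp_rmb() in the patch linked above may differ, and the reader-side lines are
paraphrased from aio_read_events_ring() rather than copied from the patch:

	/* writer side: aio_migratepage() */
	migrate_page_copy(new, old);		/* copy events into the new page */
	smp_wmb();				/* order the copy before the publish */
	ctx->ring_pages[idx] = new;		/* publish the new page */

	/* reader side: aio_read_events_ring(), paraphrased */
	page = ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE];
	smp_rmb();				/* order the pointer read before the data read */
	ev = kmap(page);
	copy_ret = copy_to_user(event + ret, ev + pos % AIO_EVENTS_PER_PAGE,
				sizeof(*ev) * avail);
	kunmap(page);

Without the pairing, the reader could, at least in principle, observe the new
page pointer before the copied event data becomes visible in that page.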

Thanks,
Yasuaki Ishimatsu

> Also, if you've changed the patch, it is customary to add a "v2" somewhere in
> the patch title so that I have some idea what version of the patch should be
> applied.
>
> -ben



>
>> Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
>> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
>> ---
>> fs/aio.c | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/fs/aio.c b/fs/aio.c
>> index 50c089c..f0ed838 100644
>> --- a/fs/aio.c
>> +++ b/fs/aio.c
>> @@ -327,6 +327,14 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
>>  	pgoff_t idx;
>>  	spin_lock_irqsave(&ctx->completion_lock, flags);
>>  	migrate_page_copy(new, old);
>> +
>> +	/*
>> +	 * Ensure memory copy is finished before updating
>> +	 * ctx->ring_pages[]. Otherwise other processes may access to
>> +	 * new ring pages which are not fully initialized.
>> +	 */
>> +	smp_wmb();
>> +
>>  	idx = old->index;
>>  	if (idx < (pgoff_t)ctx->nr_pages) {
>>  		/* And only do the move if things haven't changed */
>> --
>> 1.8.3.1
>