Subject: Re: NFSv3: Summary of recent bugfixes/updates to the NFSv3 patches...

On 1 Sep 1999, Trond Myklebust wrote:

> > Following up on my own post. After fixing three small conflicts, they do
> > apply over HJ's. However, lockd still oopses immediately when exercising
> > locks against another server. This is the problem I reported weeks ago.
>
> Are HJ's lockd patches really required on linux-2.2.12? As far as I
> can see, 'nfsd-2.2.7-1.lock.patch' is already in the kernel.

Yes, I've not applied that - sorry.

> 'nfsd-2.2.7-2.lockd.patch' should in most cases not be necessary, but
> if you do prefer to start lockd by hand rather than letting NFS do it,
> then as far as I can see, the only thing that fails is the patch to
> 'svc.c'. You need to add
>
> do_lockdctl = NULL;
>
> to 'cleanup_module(void)'.

Already done.
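
For anyone following along, the effect of that one-liner is just to clear the
lockd control hook when the module unloads, presumably so nothing can call
through a stale pointer once the module code is gone. A minimal sketch of the
idea (the hook's type, the init-side assignment, and the my_lockdctl name are
illustrative assumptions, not taken from the actual patch):

        #include <linux/module.h>

        /* Hypothetical hook target; the real standalone-lockd patch
         * installs its own handler here. */
        static int my_lockdctl(void)
        {
                return 0;
        }

        /* Hook pointer that is only valid while the module is loaded. */
        int (*do_lockdctl)(void);

        int init_module(void)
        {
                do_lockdctl = my_lockdctl;      /* install the hook on load */
                return 0;
        }

        void cleanup_module(void)
        {
                do_lockdctl = NULL;             /* the one-line fix Trond describes */
        }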

> As for the rest of HJ's 2.2.7 patches, I can't see any conflicts with
> NFSv3. Where are you seeing them?

The other two conflicts were with Alan's 2.2.13-pre1 patch, now that I
look closer:

filemap.c.rej:

***************
*** 280,296 ****
  	} while (count);
  }

- static inline void add_to_page_cache(struct page * page,
- 	struct inode * inode, unsigned long offset,
- 	struct page **hash)
- {
- 	atomic_inc(&page->count);
- 	page->flags = (page->flags & ~((1 << PG_uptodate) | (1 << PG_error))) | (1 << PG_referenced);
- 	page->offset = offset;
- 	add_page_to_inode_queue(inode, page);
- 	__add_page_to_hash_queue(page, hash);
- }
  /*
   * Try to read ahead in the file. "page_cache" is a potentially free page
   * that we could use for the cache (if it is 0 we can try to create one,
--- 280,285 ----
  	} while (count);
  }

  /*
   * Try to read ahead in the file. "page_cache" is a potentially free page
   * that we could use for the cache (if it is 0 we can try to create one,
***************
*** 1547,1567 ****

  		/* Get exclusive IO access to the page.. */
  		wait_on_page(page);
- 		set_bit(PG_locked, &page->flags);

  		/*
  		 * Do the real work.. If the writer ends up delaying the write,
  		 * the writer needs to increment the page use counts until he
  		 * is done with the page.
  		 */
- 		bytes -= copy_from_user((u8*)page_address(page) + offset, buf, bytes);
- 		status = -EFAULT;
- 		if (bytes)
- 			status = inode->i_op->updatepage(file, page, offset, bytes, sync);
- 		/* Mark it unlocked again and drop the page.. */
- 		clear_bit(PG_locked, &page->flags);
- 		wake_up(&page->wait);
  		page_cache_release(page);

  		if (status < 0)
--- 1536,1548 ----

  		/* Get exclusive IO access to the page.. */
  		wait_on_page(page);

  		/*
  		 * Do the real work.. If the writer ends up delaying the write,
  		 * the writer needs to increment the page use counts until he
  		 * is done with the page.
  		 */
+ 		status = inode->i_op->updatepage(file, page, buf, offset, bytes, sync);
  		page_cache_release(page);

  		if (status < 0)


pagemap.h.rej:

***************
*** 148,153 ****
  	__wait_on_page(page);
  }

  extern void update_vm_cache(struct inode *, unsigned long, const char *, int);

  #endif
--- 148,165 ----
  	__wait_on_page(page);
  }

+ static inline void add_to_page_cache(struct page * page,
+ 	struct inode * inode, unsigned long offset,
+ 	struct page **hash)
+ {
+ 	atomic_inc(&page->count);
+ 	page->flags &= ~((1 << PG_uptodate) | (1 << PG_error));
+ 	page->flags |= (1 << PG_referenced);
+ 	page->offset = offset;
+ 	add_page_to_inode_queue(inode, page);
+ 	__add_page_to_hash_queue(page, hash);
+ }
+ 
  extern void update_vm_cache(struct inode *, unsigned long, const char *, int);

  #endif
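
As far as I can tell, the two rejects are really two halves of the same change:
add_to_page_cache() moves out of mm/filemap.c and into pagemap.h as an inline,
with the one-line flags update split into two statements. If that reading is
right, then after reconciling the hunks by hand the helper should end up in the
header looking roughly like this (just the '+' lines from the reject above,
reassembled; not verified against the tree):

        static inline void add_to_page_cache(struct page * page,
                struct inode * inode, unsigned long offset,
                struct page **hash)
        {
                atomic_inc(&page->count);
                /* clear uptodate/error, mark the page referenced */
                page->flags &= ~((1 << PG_uptodate) | (1 << PG_error));
                page->flags |= (1 << PG_referenced);
                page->offset = offset;
                add_page_to_inode_queue(inode, page);
                __add_page_to_hash_queue(page, hash);
        }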


Even after fixing these, lockd goes down with an Oops on the first lock
attempt. I'll try backing out the standalone lockd patch and see if that
mitigates the problem. I'm not really set on using this feature, but HJ's
init scripts assume its presence, so it seemed easier to just apply it.

Steve


