Date: Thu, 22 Nov 2001 09:52:51 +0100
From: Samuel Maftoul <>
Subject: NFS problem
Hello,

I have a QFS filesystem on a Solaris 8 machine which is exported via NFS:

  qfs1  2848604160  3996672  2844607488  1%  /data/id19/inhouse

  % uname -a
  SunOS azure 5.8 Generic_108528-09 sun4u sparc SUNW,Ultra-4
QFS has an option on its directories: with setfa (set file attributes) you can switch a directory's I/O mode between direct I/O and the page cache. Here is the relevant excerpt from the QFS setfa manpage:

  -D  Specifies the direct I/O attribute be permanently set for
      this file. This means data is transferred directly between
      the user's buffer and disk. This attribute should only be
      set for large block aligned sequential I/O. The default I/O
      mode is buffered (uses the page cache). Directio will not
      be used if the file is currently memory mapped. See man
      directio(3C) for Solaris 2.6 and above for more details,
      however the SAM-FS directio attribute is permanent.

With this option we see strange performance: from a Solaris client to this Solaris server we get approximately 40 MB/s, but from a Linux client just over 1 MB/s. Without directio mode we get approximately 20 MB/s from Linux and 20 MB/s from Solaris. (I have tested on several Solaris clients and several Linux clients, with and without gigabit Ethernet links; gigabit is needed to reach the 20 MB/s figure.)
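The directio(3C) interface the manpage refers to can also be requested per file descriptor from an application. As a point of reference, here is a minimal sketch (not the actual test program; the path and the 1 MB block size are assumptions for illustration) of a Solaris reader that enables direct I/O on one descriptor and does large, aligned sequential reads, the access pattern -D is intended for:

/* Minimal sketch: sequential reads with Solaris direct I/O.
 * Assumes Solaris; directio(3C) is declared in <sys/fcntl.h>.
 * The path and 1 MB block size are illustrative only.
 */
#include <sys/types.h>
#include <sys/fcntl.h>   /* directio(), DIRECTIO_ON */
#include <fcntl.h>       /* open() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLKSZ (1024 * 1024)

int main(void)
{
    const char *path = "/data/id19/inhouse/testfile";  /* hypothetical file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Advise direct I/O on this descriptor (per-fd analogue of setfa -D). */
    if (directio(fd, DIRECTIO_ON) < 0)
        perror("directio");  /* non-fatal: reads fall back to the page cache */

    char *buf = valloc(BLKSZ);   /* page-aligned buffer, as direct I/O wants */
    if (buf == NULL) { perror("valloc"); return 1; }

    ssize_t n;
    while ((n = read(fd, buf, BLKSZ)) > 0)
        ;                        /* data would be consumed here */
    if (n < 0) perror("read");

    free(buf);
    close(fd);
    return 0;
}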
I thought that the underlying filesystem had no effect on NFS performance, and that the client is not aware of the remote (server-"local") filesystem. Am I wrong? Does anybody have an idea how to fix the problem? Is it a bug in the Linux kernel's NFS implementation?
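For context on how figures like those above are obtained: a simple timed sequential-read loop on the client is enough. Below is a minimal sketch in plain POSIX C (the default path and the 256 KB block size are assumptions, and this is not necessarily the exact test used here):

/* Minimal sketch of a client-side sequential-read throughput test.
 * Plain POSIX; the default path and block size are illustrative only.
 */
#include <sys/time.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLKSZ (256 * 1024)

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/data/id19/inhouse/testfile";
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(BLKSZ);
    if (buf == NULL) { perror("malloc"); return 1; }

    struct timeval t0, t1;
    long long total = 0;
    ssize_t n;

    gettimeofday(&t0, NULL);
    while ((n = read(fd, buf, BLKSZ)) > 0)
        total += n;               /* count bytes actually read */
    gettimeofday(&t1, NULL);
    if (n < 0) perror("read");

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%lld bytes in %.2f s = %.2f MB/s\n",
           total, secs, total / (1024.0 * 1024.0) / secs);

    free(buf);
    close(fd);
    return 0;
}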
Thanks in advance,
Sam