    Subject: NFS problem after 2.4.19-pre3, not solved
    >>>>> " " == Mario Vanoni <> writes:

    > Hi Trond, hi Andrea, hi all. In a production environment, for more
    > than 6 months, over 10 Mbit/s Ethernet, on backup_machine:
    >     mount -t nfs production_machine /mnt

    >     find `listing from production_machine` | cpio -pdm backup_machine

    > Volume ~320MB, nearly constant.

    > Average times:

    > 2.4.17-rc1aa1: 1m58s, _the_ champion !!!

    > all later kernels, e.g.:

    > 2.4.19-pre8aa2:      4m35s
    > 2.4.19-pre8-ac1:     4m00s
    > 2.4.19-pre7-rmap13a: 4m02s
    > 2.4.19-pre7:         4m35s
    > 2.4.19-pre4:         4m20s

    > the last usable was:

    > 2.4.19-pre3: 2m35s, _not_ a champion

    > Benchmarks don't always reflect production needs; <2 minutes
    > versus >4 minutes is a great difference !!!

    > Mario, not on lkml, but an active reader (and tester).
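
    A runnable form of the quoted find | cpio backup pipeline, as a
    minimal sketch with hypothetical paths (the production tree mounted
    at /mnt, a local target directory /backup; the real file listing is
    site-specific):

        # -p: copy (pass-through) the files named on stdin to /backup,
        # -d: create leading directories as needed,
        # -m: preserve modification times.
        cd /mnt && find . -depth -print | cpio -pdm /backup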


    Your case where you transfer 320MB in 1m58s is either a measurement
    error, or it involves some pretty heavy caching, since otherwise you
    would be reading at ~2.7MB/s == ~22Mbit/s over a 10Mbit line.
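
    For reference, the implied rate falls out of simple arithmetic
    (shown here with bc(1); 1m58s = 118 seconds):

        $ echo 'scale=1; 320 / 118' | bc       # MB per second
        2.7
        $ echo 'scale=1; 320 * 8 / 118' | bc   # Mbit per second
        21.6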

    4 minutes is in fact wire speed for 320MB of data over a 10Mbit
    connection. To call that 'unusable' would be a bit of an exaggeration...
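
    The floor on the transfer time is easy to check (again with bc(1);
    this ignores protocol overhead, which only pushes the time higher):

        $ echo '320 * 8 / 10' | bc    # seconds at 10 Mbit/s
        256

    256 seconds is about 4m16s, so the ~4-minute figures above are
    essentially saturating the link.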

    It may indeed be that the CTO (close-to-open cache consistency) patch
    is having an effect on the cache, but it should only do so if the
    files' mtimes, inode numbers, or NFS filehandles are changing over
    time.
    If not, then the only thing that could be causing cache invalidation
    is memory pressure and the standard Linux memory reclamation scheme.
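
    One way to test that (a minimal sketch, assuming GNU stat(1) is
    available on the NFS client; /mnt/some/file stands in for any sample
    file from the backed-up tree):

        # Record inode number and mtime (seconds since the epoch)
        # before and after a backup pass; if either changes,
        # close-to-open revalidation will discard the cached pages.
        stat -c '%i %Y %n' /mnt/some/file > /tmp/stat.before
        # ... run the backup pass ...
        stat -c '%i %Y %n' /mnt/some/file > /tmp/stat.after
        diff /tmp/stat.before /tmp/stat.after \
            && echo "unchanged: client cache should stay valid"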

