Subject: Re: DEVFSv50 and /dev/fb? (or /dev/fb/? ???)

    Terry L. Ridder writes:
    > Jurgen Botz wrote:
    > >
    > > "Anthony Barbachan" wrote:
    > > > /dev/sd{a,b,c,...} is definitely cleaner and simpler than
    > > > /dev/dsk/c0t0u0d0s0 (or whatever?!?!?!?). And EIDE devices definitely do
    > >
    > > As a system administrator with over a decade of experience, I must say that
    > > I disagree completely. The old naming scheme is problematic and ugly and
    > > the DEVFS one is practical and elegant. MHO.
    > >
    > > (I rather suspect that Anthony and some other detractors have never
    > > administered systems with multiple SCSI busses and more than a few
    > > drives.)
    >
    > I can only answer for myself, and what you suspect, at least in my case,
    > is incorrect. I have been involved with UNIX for over 20 years, from
    > working on the original AT&T version 6 source code, programming
    > applications, SysAdmin, etc. I have administered HP file servers, Sun
    > file servers, etc, with multiple SCSI busses, and hundreds of hard
    > drives. I did not like Solaris then and still do not.
    >
    > By contrast I have also administered large scale AppleShare file servers
    > with multiple SCSI controllers and nearly 64 drives.
    > Volume naming is nice. It does not matter what SCSI controller a drive
    > is on or what SCSI ID it has; move it to a new fileserver and it comes
    > up with the same name as before.
    >
    > >
    > > Personally I really like DEVFS. It solves some real problems and it does
    > > it in a scaleable, forward-looking manner. Auto-generation seems like
    > > a quick-and-dirty hack by comparison. The counter-arguments I've seen
    > > seemed to mostly refer to vague aesthetic issues. I think the aesthetics
    > > of this kind of thing flow from its functionality, and by that DEVFS is
    > > beautiful.
    > >
    >
    > As others have pointed out.
    >
    > To quote H. Peter Anvin:
    >
    > <Begin Quote>
    > Auto device generation can be a security hole, and can be done in user
    > space without hogging tons of kernel memory.
    > <End Quote>
    >
    > To quote H. Peter Anvin again:
    >
    > <Begin Quote>
    > Counter question: how much kernel memory does it take to
    > keep a million devices with all their info (atime, mtime,
    > ctime, permissions, ownership all included!)
    > <End Quote>
    >
    > Note no one from the dev_fs side has answered this question yet.

    I guess you missed my response:
    Answer: way too much! That's why a scsidev or devfs system is
    essential.

    HPA and I then took our discussion offline. For the public benefit,
    the typical RAM consumption of devfs is a few pages. The answer I had
    given publicly was a reference to the fact that with devfs (or scsidev)
    you don't need millions of inodes.
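
    To put a rough number on "way too much", here is a back-of-envelope
    sketch; the per-inode size below is only an assumed order of magnitude,
    not a figure measured from any particular kernel:

        /* Illustrative sizing only: assume each permanently-pinned device
         * inode costs on the order of 128 bytes for atime, mtime, ctime,
         * mode, uid/gid, device numbers and list linkage. */
        #include <stdio.h>

        int main(void)
        {
            const unsigned long n_devices = 1000000UL;   /* "a million devices" */
            const unsigned long bytes_per_inode = 128UL; /* assumed */
            const unsigned long page_size = 4096UL;
            unsigned long total = n_devices * bytes_per_inode;

            printf("~%lu MB (%lu pages) to pin the whole namespace\n",
                   total >> 20, total / page_size);
            return 0;
        }

    That is the cost of keeping an inode for every possible device; devfs
    only creates entries for devices that actually exist, which is why its
    typical consumption stays at a few pages.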

    > dev_fs uses too much kernel memory and by doing so inflicts a
    > performance hit.

    It uses a measly few pages. I don't think this is "too much". As I
    also discussed with HPA (quoting myself again):
    Yes, having a few extra unswappable pages can potentially hurt
    performance. On the other hand, devfs also can improve performance
    (both because of fewer device nodes and a more direct link between
    device nodes and device drivers). Which effect is more significant is
    unknown at this point.

    > Using either tar, or a C program to save and restore permissions,
    > user/group ownership, modtimes, etc is a hack.

    If needed, devfs can have real persistence to a block device.
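
    To show what that user-space "hack" amounts to, here is a minimal
    sketch of the restore step; the function and its arguments are
    hypothetical, and a real tool would read the saved values from a table
    written out beforehand:

        /* Hypothetical restore step: put back saved ownership, mode and
         * times for one device node. Error handling is collapsed into a
         * single failure return for brevity. */
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <utime.h>

        int restore_node(const char *path, uid_t uid, gid_t gid,
                         mode_t mode, time_t atime, time_t mtime)
        {
            struct utimbuf times;

            times.actime = atime;
            times.modtime = mtime;

            if (chown(path, uid, gid) != 0)
                return -1;
            if (chmod(path, mode) != 0)   /* after chown, which may clear set-id bits */
                return -1;
            if (utime(path, &times) != 0)
                return -1;
            return 0;
        }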

    > To quote Theodore:
    >
    > <Begin Quote>
    > As far as searching a list when we open a major number, again this is an
    > extremely flawed and weak argument. First of all, the vast majority of
    > systems out there will only have less than 16 major devices. A typical
    > system has less than 10 major devices. (cat /proc/devices and see!) So
    > searching the list is simply not a problem. If searching the list were
    > an issue, there are plenty of ways of solving this problem internal to
    > the kernel, without needing to make any user-visible changes --- such
    > as using a hash table.
    > <End Quote>
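
    For readers without the code in front of them, the lookup being
    described is on the order of the following sketch; the structure and
    list here are invented for illustration and do not mirror the actual
    kernel tables:

        /* Hypothetical lookup of a driver by major number at open() time.
         * With the ~10 majors of a typical system a linear scan is
         * negligible; if it ever mattered, the same table could be hashed
         * on the major with no user-visible change. */
        #include <stddef.h>

        struct file_operations;                 /* opaque here */

        struct chr_driver {
            unsigned int major;
            const char *name;
            struct file_operations *fops;
            struct chr_driver *next;
        };

        static struct chr_driver *chr_drivers;  /* head of registration list */

        struct file_operations *lookup_major(unsigned int major)
        {
            struct chr_driver *d;

            for (d = chr_drivers; d != NULL; d = d->next)
                if (d->major == major)
                    return d->fops;
            return NULL;
        }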

    And quoting my response:
    I think that the extra layer between device nodes and device drivers
    is an ugly hack. I see the extra level of indirection as unnecessary
    and as adding some (small, but avoidable) performance overhead. I also
    see it as a conceptual and administrative overhead. We now have device
    information kept in two places: in the source of each driver and in
    devices.txt, which has to be synchronised manually. Devfs avoids that
    entirely by keeping it in one place: in the driver.
    These are not "killer arguments" for it; they're just some (small)
    reasons in a long list. As I've said in the FAQ, IMHO the totality of
    these reasons does show that devfs is a good idea.
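
    To illustrate the "one place versus two places" point, here is a
    schematic contrast. The names FOO_MAJOR and foo_fops are invented, and
    the devfs_register() argument list is only indicative; the real
    prototype changed between devfs releases:

        /* Static /dev: the driver claims a major number, while the name
         * users actually open lives elsewhere, in devices.txt and in an
         * inode created by mknod, and the two must be kept in sync by hand. */
        register_chrdev(FOO_MAJOR, "foo", &foo_fops);

        /* devfs: the driver creates its own node, so the name, the default
         * permissions and the link to foo_fops all stay in one place, the
         * driver source. (Argument list is schematic.) */
        devfs_register(NULL, "foo", DEVFS_FL_DEFAULT, FOO_MAJOR, 0,
                       S_IFCHR | S_IRUSR | S_IWUSR, &foo_fops, NULL);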

    Regards,

    Richard....

    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.rutgers.edu
    Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html
