Date: 21 Nov 2001
From: Richard Gooch
Subject: Re: [PATCH] devfs v196 available

Roman Zippel writes:
> Hi,
>
> On Tue, 20 Nov 2001, Richard Gooch wrote:
>
> > Delayed events are harmless, since devfs ensures correct ordering.
>
> It's not about ordering: timing is currently unpredictable, so
> anything timing-sensitive has to be very careful about touching
> anything in devfs.

Are you talking about user-space or kernel-space? I assume you mean
user-space. And it's only a problem if you have module autoloading
enabled, *and* you are attempting a lookup of a non-existent inode.
I'd argue that if you have applications which are timing sensitive,
you shouldn't be autoloading modules anyway, you should have them
statically loaded or built into the kernel. Loading a module remains a
heavyweight operation.

> > After consideration, I've decided to dynamically grow the event buffer
> > as required, and free up space as it's not needed.
>
> You should use a slab cache and acknowledge events as soon as they
> are finished. Right now all processes wait until devfsd is completely
> finished, and then all are restarted at the same time. This is
> currently kept in check by dropping events; with a dynamic event
> queue it could become a problem.

I'm concerned about the overhead of allocating an entry. With the
current scheme, the common case is that the buffer doesn't overflow,
and hence it's usually "good enough". In addition, it's very fast,
since it takes only a few instructions to "allocate" an entry (grab a
lock, test and update some pointers).
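
To make that concrete, here's a rough sketch of that kind of
fixed-buffer "allocation" (hypothetical names and layout, not the
actual devfs code; the consumer side, which advances buf_tail, is
omitted):

#include <linux/spinlock.h>

#define BUFFER_SIZE 1024		/* illustrative size */

struct event {
	unsigned int type;
	void *data;
};

static struct event event_buffer[BUFFER_SIZE];
static unsigned int buf_head, buf_tail;	/* producer/consumer counters */
static spinlock_t buf_lock = SPIN_LOCK_UNLOCKED;

/* Grab a slot: a lock, a fullness test and a pointer update.
 * Returns NULL if the buffer is full, i.e. the event is dropped. */
static struct event *alloc_event(void)
{
	struct event *ev = NULL;
	unsigned long flags;

	spin_lock_irqsave(&buf_lock, flags);
	if (buf_head - buf_tail < BUFFER_SIZE)
		ev = &event_buffer[buf_head++ % BUFFER_SIZE];
	spin_unlock_irqrestore(&buf_lock, flags);
	return ev;
}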

If the slab cache is fast enough on allocations (for the common case),
then it has clear advantages, since it makes cleaning up very easy,
and hence it's easier to wake up waiters as events are processed.
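
Something along these lines, say (just a sketch against the 2.4 slab
API; the cache name and helpers are illustrative):

#include <linux/slab.h>

static kmem_cache_t *event_cache;

static int __init event_cache_init(void)
{
	event_cache = kmem_cache_create("devfsd_event",
					sizeof(struct event),
					0, 0, NULL, NULL);
	return event_cache ? 0 : -ENOMEM;
}

static struct event *alloc_event(void)
{
	/* SLAB_KERNEL may sleep: process context only */
	return kmem_cache_alloc(event_cache, SLAB_KERNEL);
}

/* Acknowledging an event is then just freeing it, which makes
 * per-event cleanup (and hence per-event wakeups) straightforward. */
static void free_event(struct event *ev)
{
	kmem_cache_free(event_cache, ev);
}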

Changing to a wake-per-event-processed scheme will require some
careful consideration, though, since the current devfs+devfsd behaviour
ensures there are few surprises (everyone waits until devfsd has
finished, so you know the system is in a "finished" state before
providing access). I'm not going to be rushed into changing it until
I'm satisfied that the new behaviour is reasonable. This is definitely
not a v1.1 thing.
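
If I did change it, the per-event wakeup might look roughly like this
(sketch only: the "done" flag in the event and the single-waiter
assumption are illustrative):

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(event_done_wq);

/* devfsd side: mark one event finished, wake anyone waiting on it */
static void ack_event(struct event *ev)
{
	ev->done = 1;		/* assumes a "done" flag in struct event */
	wake_up(&event_done_wq);
}

/* generator side: sleep until *this* event is processed, rather than
 * until devfsd has drained the whole queue and everyone is restarted */
static void wait_for_event(struct event *ev)
{
	wait_event(event_done_wq, ev->done);
	free_event(ev);		/* assuming a single waiter per event */
}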

> > Since devfsd has to
> > wait for a module load to complete, it's not a good idea to block
> > waiting for free space in the event buffer (potential deadlock).
>
> What do you mean?

One thought I had was to have event generators wait if the queue is
full, until devfsd consumes (at least) one event. But that's no good
if devfsd is responsible for generating those events (i.e. REGISTER
events when devfsd loads a module). Since devfsd can't process new
events while waiting for the module to finish loading, in effect
devfsd would have to wait on itself: deadlock.
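
Spelled out (with hypothetical helpers queue_is_full() and
enqueue_event()), the rejected scheme looks like this:

static DECLARE_WAIT_QUEUE_HEAD(queue_space_wq);

static void post_event(struct event *ev)
{
	/* If this runs from a module's init code, and devfsd itself is
	 * blocked waiting for that module load to complete, nobody can
	 * ever consume an event to make space: devfsd is, in effect,
	 * waiting on itself. */
	wait_event(queue_space_wq, !queue_is_full());
	enqueue_event(ev);
}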

I tried to come up with ways around this deadlock, but none of them
were pretty, so I abandoned that approach (fortunately before cutting
any code :-). So a dynamic buffer (possibly a slab cache) is probably
the only reasonable solution.

Regards,

Richard....
Permanent: rgooch@atnf.csiro.au
Current: rgooch@ras.ucalgary.ca
