Subject: Re: [1/1] connector/CBUS: new messaging subsystem. Revision number next.
From: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
On Tue, 2005-04-26 at 15:02 -0500, Dmitry Torokhov wrote:
> On 4/26/05, Evgeniy Polyakov <johnpol@2ka.mipt.ru> wrote:
> > On Tue, 26 Apr 2005 14:06:36 -0500
> > Dmitry Torokhov <dmitry.torokhov@gmail.com> wrote:
> >
> > > On 4/26/05, Evgeniy Polyakov <johnpol@2ka.mipt.ru> wrote:
> > > > On Tue, 26 Apr 2005 13:42:10 -0500
> > > > Dmitry Torokhov <dmitry.torokhov@gmail.com> wrote:
> > > > > Yes, that would work, although I would urge you to implement a message
> > > > > queue for callbacks (probably limit it to 1000 messages or so) to
> > > > > allow bursting.
> > > >
> > > > It already exists, btw, but not exactly in that way -
> > > > we have an skb queue, which can not be filled from userspace
> > > > if the pressure is so strong that the work queue can not be
> > > > scheduled. It is of course different and is influenced by other
> > > > things, but it handles bursts quite well - it was tested on both
> > > > SMP and UP machines with continuous flows of forks with a sharp
> > > > addition of new running tasks [both with a fork bomb and without],
> > > > so I think it can be called a real bursty test.
> > > >
> > >
> > > Ok, hear me out and tell me where I am wrong:
> > >
> > > By default a socket can receive at least 128 skbs with a 258-byte
> > > payload, correct? That means that a user of cn_netlink_send, if
> > > started "fresh", can send 128 average-size connector messages. If
> > > the sender does not want to wait for anything (unlike your fork
> > > tests, which do schedule), that means that 127 of those 128
> > > messages will be dropped, although netlink would deliver them in
> > > time just fine.
> > >
> > > What am I missing?
> >
> > Consider netlink_broadcast - it delivers the skb to the kernel
> > listener directly via the input callback - no queueing actually,
>
> Right, no queueing for in-kernel...
>
> But then we have the following: netlink will drop messages sent to an
> in-kernel socket only if it can not copy the skb - which is, I'd say,
> a very rare scenario. Connector, on the other hand, is guaranteed to
> drop all but the very first message sent between two schedules. That
> makes connector several orders of magnitude less reliable than the
> bare netlink protocol. And you don't see it with your fork tests
> because you do schedule when you fork.

Let's clarify that we are talking about the userspace->kernelspace
direction - the callback path is invoked only for those messages.
For such messages there is always a scheduling point after the system
call. So if we are in a situation where, after a schedule() call, the
work queue can not be scheduled, then the userspace process can not be
scheduled either, so no new message will be delivered and nothing can
be dropped.
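To illustrate the direction I mean, here is a minimal userspace sketch
of pushing one message into the kernel over a NETLINK_CONNECTOR socket.
The cb_id values are hypothetical (a real user would use a registered
idx/val pair); the point is only that the send itself is a system call,
i.e. a scheduling point for the sender:

/*
 * Minimal userspace sketch: one message to an in-kernel connector
 * listener.  The cb_id (0xdead/0xbeef) is hypothetical, not a
 * registered connector id.
 */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>

int main(void)
{
	struct sockaddr_nl addr = { .nl_family = AF_NETLINK };
	struct sockaddr_nl dest = { .nl_family = AF_NETLINK };	/* nl_pid 0: kernel */
	char buf[NLMSG_SPACE(sizeof(struct cn_msg))];
	struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
	struct cn_msg *msg = (struct cn_msg *)NLMSG_DATA(nlh);
	int s;

	s = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
	if (s < 0)
		return 1;
	bind(s, (struct sockaddr *)&addr, sizeof(addr));

	memset(buf, 0, sizeof(buf));
	nlh->nlmsg_len = NLMSG_LENGTH(sizeof(struct cn_msg));
	nlh->nlmsg_type = NLMSG_DONE;
	msg->id.idx = 0xdead;	/* hypothetical id */
	msg->id.val = 0xbeef;
	msg->len = 0;

	/* sendto() is a system call - a scheduling point for the sender */
	sendto(s, nlh, nlh->nlmsg_len, 0,
	       (struct sockaddr *)&dest, sizeof(dest));
	close(s);
	return 0;
}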

Actually you pointed to an interesting piece of code.
I've dug out an old pre-connector draft, where there is a note
that it could be worthwhile to
1. send messages to self, but the first implementation was dropped.
2. call several callbacks in parallel, although that can not happen now.

So I will think about changing that code, although it works without
unneeded drops now.
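
For reference, a rough sketch of the bounded callback queue you
suggested above - queue incoming skbs up to a limit (1000 or so) and
drain them from a work queue, so a burst survives until the work queue
runs. The names and the cap are illustrative; this is not the actual
connector code (and it uses the current workqueue API):

#include <linux/init.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

#define CBUS_QUEUE_MAX	1000	/* illustrative burst limit */

static struct sk_buff_head cbus_queue;
static struct work_struct cbus_work;

/* Work queue handler: drain whatever accumulated during the burst. */
static void cbus_process(struct work_struct *work)
{
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&cbus_queue)) != NULL) {
		/* invoke the registered callback for this message here */
		kfree_skb(skb);
	}
}

/* Netlink input path: queue the skb instead of dropping it. */
static void cbus_input(struct sk_buff *skb)
{
	if (skb_queue_len(&cbus_queue) >= CBUS_QUEUE_MAX) {
		kfree_skb(skb);		/* queue full: drop */
		return;
	}
	skb_queue_tail(&cbus_queue, skb);
	schedule_work(&cbus_work);
}

static int __init cbus_init(void)
{
	skb_queue_head_init(&cbus_queue);
	INIT_WORK(&cbus_work, cbus_process);
	return 0;
}

With this, only messages beyond the cap are dropped, instead of all but
the first message between two schedules.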

--
Evgeniy Polyakov

Crash is better than data corruption -- Arthur Grabowski