Subject: Re: kthread vs. dm-daemon
Christophe Saout wrote:
> On Sun, 2004-02-15 at 20:46, Christoph Hellwig wrote:
>
>
>>>The only reason, I guess, is that it depends on this very small
>>>dm-daemon thing:
>>>http://people.sistina.com/~thornber/dm/patches/2.6-unstable/2.6.2/2.6.2-udm1/00016.patch
>>
>>Well, actually the above code should not enter the kernel tree at all.
>>Care to rewrite dm-crypt to use Rusty's kthread code in -mm instead and
>>submit a patch to Andrew? Whenever he merges the kthread stuff to mainline
>>he could just include dm-crypt then.
>
>
> Sure I could.
>
> But kthread is currently not a full replacement for dm-daemon. kthread
> provides thread creation and destruction functions, but dm-daemon
> additionally handles the mainloop.
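
For reference, the kthread primitives amount to roughly this (a sketch
from my reading of the -mm API; everything other than the kthread_*
calls is made up, and the loop body is the part dm-daemon would still
have to supply):

#include <linux/kthread.h>
#include <linux/err.h>
#include <linux/sched.h>

static struct task_struct *task;

static int my_thread_fn(void *data)
{
	/* kthread only gives you the skeleton; the mainloop is yours */
	while (!kthread_should_stop()) {
		/* ... do one unit of work, then sleep ... */
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(HZ);
	}
	return 0;
}

static int start_daemon(void)
{
	/* creation: thread starts stopped, wake it to run */
	task = kthread_create(my_thread_fn, NULL, "my-daemon");
	if (IS_ERR(task))
		return PTR_ERR(task);
	wake_up_process(task);
	return 0;
}

static void stop_daemon(void)
{
	/* destruction: wakes the thread and waits for it to exit */
	kthread_stop(task);
}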
>
> Usually, the dm-daemon client adds some work to a list under a spinlock
> and calls dm_daemon_wake. The next time the thread runs it calls the
> client work function, which usually just grabs the whole list and
> processes it.
> The client work function can also indicate via its return value that it
> wants periodic wakeups. This is currently used in the multipath target,
> but it is not yet certain whether that will be moved to a userspace
> daemon.
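
The client side of that pattern looks roughly like this (a sketch;
apart from dm_daemon_wake, the struct, field and function names are
made up, and the work function's return convention is a guess):

#include <linux/list.h>
#include <linux/spinlock.h>

static LIST_HEAD(work_list);
static spinlock_t work_lock = SPIN_LOCK_UNLOCKED;
static struct dm_daemon kcryptd;	/* hypothetical daemon instance */

/* producer: queue one item under the lock, then kick the daemon */
static void queue_io(struct crypt_io *io)	/* hypothetical io struct */
{
	unsigned long flags;

	spin_lock_irqsave(&work_lock, flags);
	list_add_tail(&io->list, &work_list);
	spin_unlock_irqrestore(&work_lock, flags);

	dm_daemon_wake(&kcryptd);
}

/* work function: splice off the whole list in one go, then process
 * the items without holding the lock */
static unsigned do_work(void)
{
	LIST_HEAD(local);
	struct crypt_io *io, *next;

	spin_lock_irq(&work_lock);
	list_splice_init(&work_list, &local);
	spin_unlock_irq(&work_lock);

	list_for_each_entry_safe(io, next, &local, list)
		process_io(io);		/* hypothetical */

	return 0;	/* nonzero would request periodic wakeups (guessed) */
}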
>
> There seems to be a small race condition that can appear when using
> only wake_up for notification, so dm-daemon uses an additional atomic_t
> variable to make sure nothing gets missed. Just see the function
> "daemon" in dm-daemon.c.
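
The core of that loop would be something like the following
(reconstructed from your description, not copied from dm-daemon.c, so
the field names are guesses):

static int daemon(void *arg)
{
	struct dm_daemon *dd = arg;
	DECLARE_WAITQUEUE(wq, current);

	add_wait_queue(&dd->wqueue, &wq);

	while (!atomic_read(&dd->please_die)) {
		/* clear the flag *before* running the work function, so
		 * a wake that arrives while fn() runs is caught on the
		 * recheck below instead of being lost */
		atomic_set(&dd->woken, 0);
		dd->fn();

		set_current_state(TASK_INTERRUPTIBLE);
		if (!atomic_read(&dd->woken))	/* recheck, then sleep */
			schedule();
		set_current_state(TASK_RUNNING);
	}

	remove_wait_queue(&dd->wqueue, &wq);
	return 0;
}

void dm_daemon_wake(struct dm_daemon *dd)
{
	atomic_set(&dd->woken, 1);	/* set before waking */
	wake_up(&dd->wqueue);
}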
>
> Making dm-daemon use the kthread primitives would make dm-daemon a very
> small and stupid wrapper. Changing all dm targets to handle worker
> thread notification themselves would result in unnecessary code
> duplication.
>

When dm-multipath is more stable it could be using a work queue (my
patch was prematurely sent). Imagine a large number of dm-mp devices
multipathing across two fabrics and one switch failing: every dm-mp
device could be resubmitting io at the same time. That leaves dm-raid1,
dm-snap and your target. The raid1 code looks like it could also benefit
from switching. If every write for every dm-raid1 device is going through
a single dm-daemon, it could become a bottleneck. This is all assuming
the ratio of processors to DM devices on your machine makes the extra
parallelism worthwhile.
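For comparison, the per-device work queue version would be something
like this (a sketch against the current 2.6 workqueue API; the dm-mp
struct and function names are made up):

#include <linux/workqueue.h>
#include <linux/init.h>
#include <linux/errno.h>

/* hypothetical per-device state */
struct mp_device {
	struct work_struct work;
	/* ... paths, failed io list ... */
};

static struct workqueue_struct *mp_wq;

/* runs in a per-CPU worker thread, one work item per device, so a
 * fabric-wide failure fans out instead of serializing on one daemon */
static void resubmit_ios(void *data)
{
	struct mp_device *mpd = data;
	/* ... requeue the failed ios down the surviving paths ... */
}

static void mp_dev_init(struct mp_device *mpd)
{
	INIT_WORK(&mpd->work, resubmit_ios, mpd);
}

static void mp_path_failed(struct mp_device *mpd)
{
	queue_work(mp_wq, &mpd->work);
}

static int __init mp_init(void)
{
	mp_wq = create_workqueue("dm-mp");
	return mp_wq ? 0 : -ENOMEM;
}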

I guess you could also just do a thread per target instance, but maybe
this was ruled out as excessive for thousands of DM devices?

Mike Christie
mikenc@us.ibm.com