Subject: Re: SCO: "thread creation is about a thousand times faster than on native Linux"
On Wed, Aug 23, 2000 at 10:42:26PM +0400, A.N.Kuznetsov wrote:
> Hello!
>
> > The main problem Linux clone has over LWPs is the extensive memory resource
> > usage (~8.5K on x86, 16+somethingK on 64bit for the kernel stacks)
>
> Hmm... Andi, could you elaborate on this? Does an LWP not need a kernel stack?
>
> It is difficult to believe that an LWP in some OS eats fewer resources
> than a Linux process does.

Sorry, my terminology was a bit unclear. I was thinking of a user-level
thread, where many can be mapped onto one kernel "scheduling entity". IIRC
UnixWare creates such a thing, and that is surely what they were measuring.

It certainly doesn't need a kernel stack, but as Tigran noted it has other
costs, e.g. in select() and signal handling.

I think it would be possible to get the best of both worlds (give up the
kernel stack while you're blocked in poll or sigtimedwait). Thread creation
would still be a bit more costly, though.
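
Just to illustrate what "no kernel entry" means (a sketch only, certainly
not what UnixWare actually does): a user-level thread can be created
entirely in user space with the ucontext primitives, without a single
system call:

/* Minimal sketch of user-level thread creation -- no system call needed,
 * only a context structure and a stack allocated in user space.
 * Illustration only, not any particular thread library. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;

static void thread_func(void)
{
    printf("hello from a user level thread\n");
    /* returning resumes uc_link, i.e. the creating context */
}

int main(void)
{
    char *stack = malloc(64 * 1024);         /* user space thread stack */

    getcontext(&thread_ctx);                 /* start from the current context */
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = 64 * 1024;
    thread_ctx.uc_link = &main_ctx;          /* where to go when it returns */
    makecontext(&thread_ctx, thread_func, 0);

    swapcontext(&main_ctx, &thread_ctx);     /* "schedule" the new thread */
    printf("back in the creator\n");
    free(stack);
    return 0;
}

Of course the moment such a thread blocks in the kernel you need a real
kernel entity behind it again, which is where the select() and signal
handling complications come from.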

> > Also the LinuxThreads library does horrible things at pthread_create time,
> > like doing a full context switch to the manager thread and cloning from
> > there (mostly to work around signal/waitpid() problems in the kernel)
>
> I hope this is closer to the truth.
>
> But even closer to the truth is that "thread creation" is not equal
> to "LWP creation"; a thread may be created without entering the kernel at all.
> E.g. a normal multithreaded application with several thousand threads
> and a sane thread library sometimes creates only a couple of LWPs,
> because LWPs are created only when they are _required_.
> LinuxThreads is far from sane in this sense.

IMHO these applications should use cooperative threading and save themselves
a lot of locking work and wasted cycles in the programmer's brain ... @)
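
For contrast, here is roughly what each LinuxThreads pthread_create() ends
up costing on the kernel side -- a clone() that sets up a new task with its
own kernel stack. Sketch only, with simplified flags; the actual library
call looks a bit different:

/* Sketch of the clone() behind a kernel thread / LWP. Each such call
 * allocates the per-task kernel stack mentioned above. Illustration only. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (64 * 1024)

static int thread_func(void *arg)
{
    printf("hello from a freshly cloned task\n");
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);

    /* the stack grows down on x86, so pass the top of the area;
     * SIGCHLD as termination signal only so plain waitpid() works here */
    int pid = clone(thread_func, stack + STACK_SIZE,
                    CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                    NULL);
    if (pid < 0) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}

Every one of these means a new task plus the ~8.5K kernel stack from above,
which is exactly the resource cost this subthread started with.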

>
> BTW look at our scheduler, which scans the whole list of idle processes
> every time slice, even when only one process is really running and the rest
> are dead asleep. 8)

The SGI scheduler patch fixes that, IIRC; unfortunately it didn't get
merged :-/ It looks like a degenerate case of the fast-path concept (or
maybe the scan is less costly than manipulating a few counters/pointers
during scheduler operations -- I don't know).
There is other stuff like this, e.g. the loadavg computation in the timer tick.
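
For the curious: the loadavg update is a fixed-point exponential decay
driven from the timer tick. From memory -- the constants may be slightly
off -- it boils down to something like this:

/* Sketch of the kernel's fixed-point loadavg decay (constants from memory). */
#include <stdio.h>

#define FSHIFT  11                   /* bits of fixed point precision */
#define FIXED_1 (1 << FSHIFT)        /* 1.0 in fixed point */
#define EXP_1   1884                 /* 1/exp(5s/1min) in fixed point */

static unsigned long calc_load(unsigned long load, unsigned long exp,
                               unsigned long n)
{
    load *= exp;                     /* decay the old average */
    load += n * (FIXED_1 - exp);     /* mix in the current runnable count */
    return load >> FSHIFT;
}

int main(void)
{
    unsigned long avenrun = 0;
    int i;

    /* one runnable task, sampled every 5 seconds for a minute */
    for (i = 0; i < 12; i++)
        avenrun = calc_load(avenrun, EXP_1, 1 * FIXED_1);
    printf("1 min load: %lu.%02lu\n",
           avenrun >> FSHIFT, (avenrun & (FIXED_1 - 1)) * 100 / FIXED_1);
    return 0;
}

Cheap enough per tick, but like the idle scan it is done whether anything
interesting happened or not.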


-Andi
