Subject: Re: RFC: i386: kill !4KSTACKS
Jan Kiszka wrote:
> 2005/9/6, Giridhar Pemmasani <giri@lmc.cs.sunysb.edu>:
>
>>Jan Kiszka wrote:
>>
>>
>>>The only way I see is to switch stacks back on ndiswrapper API entry.
>>>But managing all those stacks correctly is challenging, as you will
>>>likely not want to create a new stack on each switching point. Rather,
>>
>>This is what I had in mind before I saw this thread here. In fact, I did
>>some work along those lines, but it is even more complicated than you
>>mentioned here: Windows uses different calling conventions (STDCALL,
>>FASTCALL, CDECL), so switching stacks by copying arguments/results gets
>>complicated. So I gave up on that approach. For x86-64 drivers we use a
>>similar approach, but there only one calling convention is involved, and
>>we don't need to switch stacks, only reshuffle arguments on the stack /
>>in registers.
>>
>>I am still hoping that Andi's approach is possible (I don't understand
>>how we can make the kernel see the current thread info from a private
>>stack).
>>
>
>
> The more I think about this, the more it becomes clear that this path
> will be too winding, especially when compared to the effort needed to
> patch 8K (or more) stacks back into the kernel as an intermediate
> workaround.
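
As a rough sketch of the calling-convention problem described above (the
macro and function names here are made up for illustration only; this is
not ndiswrapper's actual code):

/* The three Windows x86 calling conventions as gcc sees them: */
#define WIN_CDECL    __attribute__((cdecl))     /* args on the stack, caller pops them      */
#define WIN_STDCALL  __attribute__((stdcall))   /* args on the stack, callee pops them      */
#define WIN_FASTCALL __attribute__((fastcall))  /* first two args in %ecx/%edx, callee pops */

/* Hypothetical driver entry points, one per convention: */
int WIN_STDCALL  drv_add_device(void *driver_object, void *pdo);
int WIN_FASTCALL drv_io_callback(void *context, int status);
int WIN_CDECL    drv_debug_print(const char *fmt, ...);

/*
 * A stack-switching wrapper would have to know, for each entry point,
 * which convention applies, copy the arguments from the Linux stack to
 * the private stack (or into %ecx/%edx), switch %esp, call the driver,
 * switch back, and clean up according to who pops the arguments --
 * effectively one wrapper flavour per convention and argument count.
 */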

Is there a technical reason ("hard to implement" is a practical reason)
why all stacks need to be the same size?
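
On the stack-size question: as far as I can tell, the reason is that the
2.6 i386 kernel locates the per-task thread_info (and through it
"current") by masking the stack pointer, roughly as in this simplified
sketch (cf. include/asm-i386/thread_info.h; not the literal source):

#define THREAD_SIZE 8192	/* 4096 with CONFIG_4KSTACKS */

struct thread_info;		/* holds the task pointer, flags, ... */

static inline struct thread_info *current_thread_info(void)
{
	unsigned long esp;

	/* read %esp and round it down to the base of the stack block,
	 * which is where thread_info lives */
	asm("movl %%esp, %0" : "=r" (esp));
	return (struct thread_info *)(esp & ~(THREAD_SIZE - 1));
}

That only works if every task stack is exactly THREAD_SIZE bytes and
THREAD_SIZE-aligned; a stack of some other size or alignment (such as a
driver's private stack) would make current_thread_info() return garbage,
which I suspect is what Giridhar was pointing at.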

--
-bill davidsen (davidsen@tmr.com)
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me