From: Andy Lutomirski <luto@amacapital.net>
Date: Fri, 24 Jun 2016
Subject: Re: [PATCH v3 00/13] Virtually mapped stacks with guard pages (x86, core)
On Fri, Jun 24, 2016 at 10:56 AM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Fri, Jun 24, 2016 at 10:47 AM, Andy Lutomirski <luto@amacapital.net> wrote:
>>
>> FWIW, your patch is much more lenient than my approach.
>
> I hate big flag-days - they cause so much pain for everybody. The
> people who get it to work and can test it, can't test all the other
> cases (whether they be drivers or other architectures), so I'd much
> rather implement something that allows a gradual per-architecture
> change from having the thread_info on the stack into having the
> thread_info in the task_struct.
>
> Big "let's just change everything at once" patches are fine (and, in
> fact, preferable) when you can test everything in one go. So for
> something that can be statically verified (ie "patch makes no semantic
> difference, but changes calling convention or naming, so if it
> compiles it is fine"), I much prefer just getting the pain over and
> done with rather than some lingering thing.
>
> But when it's something where "oops, I broke every other architecture,
> and I can't even test it", I'd much rather do it in a way where each
> architecture can move over to the new model one by one.

Agreed.

To clarify, though: I wasn't planning on changing all arches at once.
I'm just saying that arches that switch over get a single core
definition of thread_info. That way, when someone (probably named
Peter) decides down the road to move, say, thread_info::cpu into
task_struct proper to optimize cache line layout, they won't need to
do it for every architecture.

Also, I want to give people an incentive to finally move their crap
out of struct thread_info and into struct thread_struct.
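
For concreteness, this is roughly the shape I mean, purely as a sketch:
the config symbol and field layout below are illustrative placeholders,
not a settled interface.

/* Sketch only: config symbol and fields are placeholders, not final. */

/* include/linux/thread_info.h: one shared definition for arches that opt in */
#ifdef CONFIG_THREAD_INFO_IN_TASK
struct thread_info {
	unsigned long	flags;		/* low-level flags (TIF_*) */
};
#endif

/* include/linux/sched.h */
struct task_struct {
#ifdef CONFIG_THREAD_INFO_IN_TASK
	/*
	 * Kept as the first member so a task_struct pointer can double
	 * as the thread_info pointer once nothing lives on the stack.
	 */
	struct thread_info	thread_info;
#endif
	/* ... everything else ... */
};

/* Anything arch-specific that used to hide in thread_info would move
 * into that arch's struct thread_struct (asm/processor.h) instead. */

Arches that opt in would select the config symbol and drop their private
thread_info; everyone else keeps the on-stack layout until they're ready
to convert.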

--Andy
