Date: Tue, 4 Mar 2008 00:51:48 -0600
From: Paul Jackson <pj@sgi.com>
Subject: Re: [RFC/PATCH] cpuset: cpuset irq affinities
Paul M wrote:
> I'm one such user who's been forced to add the mem_hardwall flag to
> get around the fact that exclusive and hardwall are controlled by the
> same flag. I keep meaning to send it in as a patch but haven't yet got
> round to it.
I made essentially the same mistake twice in the evolution of cpusets:
 1) overloading the cpu_exclusive flag to define sched domains, and
 2) overloading the mem_exclusive flag to define memory hardwalls.
I eventually reversed (1), with a deliberately incompatible change (and you know how I resist those ;), creating a new 'sched_load_balance' flag that controls the sched_domain partitioning, and removing any effect that the cpu_exclusive flag has on this.
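For anyone following along, here is a minimal sketch of what that looks like from userspace; the /dev/cpuset mount point and the "rt" cpuset name below are just illustrative assumptions, not anything from this thread:

    /* Sketch: drive sched domain partitioning via the
     * sched_load_balance flag files, not cpu_exclusive.
     * Assumes a /dev/cpuset mount and an existing "rt" cpuset.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static void write_flag(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f || fputs(val, f) == EOF || fclose(f) == EOF) {
                    perror(path);
                    exit(1);
            }
    }

    int main(void)
    {
            /* Keep the top cpuset from forming one big sched domain ... */
            write_flag("/dev/cpuset/sched_load_balance", "0");
            /* ... so the "rt" cpuset's CPUs get their own sched domain,
             * regardless of its cpu_exclusive setting. */
            write_flag("/dev/cpuset/rt/sched_load_balance", "1");
            return 0;
    }

The key point is that cpu_exclusive no longer enters into it: the sched domain partitioning is driven entirely by sched_load_balance.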
Perhaps the unfortunate interaction of mem_exclusive and hardwall is destined to go the same path. The audience currently using mem_exclusive for hardwall enforcement of kernel allocations might be broader, though, than the specialized real-time audience that was using cpu_exclusive for dynamic sched domain isolation, so we might not choose to break compatibility in one shot, but rather phase in your new flag first and then, perhaps in a later release, phase out the old hardwall overloading of the mem_exclusive flag.
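The userspace side of such a phase-in might look like this (again only a sketch, reusing the write_flag() helper from above; the "jail" cpuset name is made up, and I'm assuming your flag shows up as a mem_hardwall file next to mem_exclusive):

    /* During the phase-in, set the new flag explicitly rather than
     * relying on mem_exclusive's hardwall side effect: */
    write_flag("/dev/cpuset/jail/mem_hardwall", "1");
    /* mem_exclusive can then go back to meaning just "exclusive": */
    write_flag("/dev/cpuset/jail/mem_exclusive", "0");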
(My primeval mistake was including the cpu_exclusive and mem_exclusive flags in the original cpuset design; those two flags have given me nothing but temptation to commit further design errors ;).
> Also, if you're using fake numa for memory isolation (which we're
> experimenting with) then the correlation between cpu placement and
> memory placement is much much weaker, or non-existent.
That might be a good answer to my asking where the beef was.
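To make that concrete with a sketch (same write_flag() helper and made-up "jail" cpuset as above, and assuming a kernel booted with something like numa=fake=8): with synthetic node boundaries, the cpus and mems files can be set with no particular correlation between them:

    /* With fake numa, memory isolation no longer tracks cpu
     * placement -- the node boundaries are synthetic: */
    write_flag("/dev/cpuset/jail/cpus", "0-3");
    write_flag("/dev/cpuset/jail/mems", "4-7");     /* fake nodes */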
-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@sgi.com> 1.940.382.4214