Subject: Re: [PATCH 03/10] smp,cpumask: introduce on_each_cpu_cond_mask
From: Andy Lutomirski
Date: Sun, 29 Jul 2018
On Jul 29, 2018, at 5:00 AM, Rik van Riel <riel@surriel.com> wrote:

> On Sat, 2018-07-28 at 19:57 -0700, Andy Lutomirski wrote:
>> On Sat, Jul 28, 2018 at 2:53 PM, Rik van Riel <riel@surriel.com>
>> wrote:
>>> Introduce a variant of on_each_cpu_cond that iterates only over the
>>> CPUs in a cpumask, in order to avoid making callbacks for every
>>> single CPU in the system when we only need to test a subset.
>>
>> Nice.
>>
>> Although, if you want to be really fancy, you could optimize this (or
>> add a variant) that does the callback on the local CPU in parallel
>> with the remote ones.  That would give a small boost to TLB flushes.
>
> The test_func callbacks are not run remotely, but on
> the local CPU, before deciding who to send callbacks
> to.
>
> The actual IPIs are sent in parallel, if the cpumask
> allocation succeeds (it always should in many kernel
> configurations, and almost always in the rest).

What I meant is that on_each_cpu_mask does:

	smp_call_function_many(mask, func, info, wait);

	if (cpumask_test_cpu(cpu, mask)) {
		unsigned long flags;

		local_irq_save(flags);
		func(info);
		local_irq_restore(flags);
	}

So it IPIs all the remote CPUs in parallel, then waits, then does the
local work.  In principle, the local flush could be done after
triggering the IPIs but before they all finish.
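Something like the following could overlap the two (an untested
sketch: wait_for_remote_csds() is a made-up placeholder, because the
csd wait loop is private to kernel/smp.c and smp_call_function_many()
only waits when called with wait == true):

	/*
	 * Sketch only, not actual kernel code: send the IPIs without
	 * waiting, run the local callback while the remote CPUs work,
	 * then wait for the remote calls to complete.
	 */
	smp_call_function_many(mask, func, info, false);

	if (cpumask_test_cpu(cpu, mask)) {
		unsigned long flags;

		local_irq_save(flags);
		func(info);	/* overlaps with the remote calls */
		local_irq_restore(flags);
	}

	if (wait)
		wait_for_remote_csds(mask);	/* hypothetical wait step */

The last step is the sticking point; doing this for real probably
means pushing the local call into smp_call_function_many() itself,
between the IPI send and its csd_lock_wait() loop.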
> -- 
> All Rights Reversed.