Date: Thu, 23 Apr 2015
From: Paul E. McKenney
Subject: Re: Interacting with coherent memory on external devices
On Thu, Apr 23, 2015 at 09:12:38AM -0500, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Paul E. McKenney wrote:
>
> > Agreed, the use case that Jerome is thinking of differs from yours.
> > You would not (and should not) tolerate things like page faults because
> > it would destroy your worst-case response times. I believe that Jerome
> > is more interested in throughput with minimal change to existing code.
>
> As far as I know Jerome is talking about HPC loads and high-performance
> GPU processing. This is the same use case.

The difference is sensitivity to latency. You have latency-sensitive
HPC workloads, and Jerome is talking about HPC workloads that need
high throughput, but are insensitive to latency.
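As a concrete illustration of what "cannot tolerate page faults" means in
practice (a generic sketch, not taken from anyone's code in this thread),
a latency-sensitive application would typically pin its working set with
the standard mlockall() interface so that no major fault can land on the
critical path, while a throughput-oriented application would simply skip
this step and accept the occasional fault:

/*
 * Hypothetical illustration only: lock current and future mappings
 * into RAM so that later accesses cannot take a major page fault.
 * A throughput-oriented workload would omit this and let the kernel
 * manage memory more flexibly.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return EXIT_FAILURE;
	}

	/* ... low-latency work runs here without major faults ... */

	munlockall();
	return EXIT_SUCCESS;
}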

> > Let's suppose that you and Jerome were using GPGPU hardware that had
> > 32,768 hardware threads. You would want very close to 100% of the full
> > throughput out of the hardware with pretty much zero unnecessary latency.
> > In contrast, Jerome might be OK with (say) 20,000 threads worth of
> > throughput with the occasional latency hiccup.
> >
> > And yes, support for both use cases is needed.
>
> What you are proposing for High Performance Computing is reducing the
> performance these guys are trying to get. You cannot sell someone a
> Volkswagen if he needs the Ferrari.

You do need the low-latency Ferrari. But others are best served by a
high-throughput freight train.

Thanx, Paul


