Subject: RE: [Xen-devel] [PATCH 0/2] xen/blkback: Aggressively shrink page pools if a memory pressure is detected
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> SeongJae Park
> Sent: 04 December 2019 11:34
> To: konrad.wilk@oracle.com; roger.pau@citrix.com; axboe@kernel.dk
> Cc: sj38.park@gmail.com; xen-devel@lists.xenproject.org; linux-
> block@vger.kernel.org; linux-kernel@vger.kernel.org; Park, Seongjae
> <sjpark@amazon.com>
> Subject: [Xen-devel] [PATCH 0/2] xen/blkback: Aggressively shrink page
> pools if a memory pressure is detected
>
> Each `blkif` has a free pages pool for the grant mapping. The size of
> the pool starts from zero and is increased on demand while processing
> I/O requests. When the handling of the current I/O requests is finished,
> or 100 milliseconds have passed since the last I/O request was handled,
> the pool is checked and shrunk so that it does not exceed the size
> limit, `max_buffer_pages`.
>
> Therefore, guests running `blkfront` can create memory pressure in the
> guest running `blkback` by attaching an arbitrarily large number of
> block devices and inducing I/O on them.
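
(For illustration, a minimal user-space sketch of the shrinking policy
described above. All names here, i.e. page_pool, pool_grow, pool_shrink
and MAX_BUFFER_PAGES, are hypothetical stand-ins rather than the actual
xen-blkback symbols; MAX_BUFFER_PAGES stands in for the
`max_buffer_pages` limit.)

#include <stdio.h>

#define MAX_BUFFER_PAGES 1024	/* stand-in for the max_buffer_pages limit */

struct page_pool {
	size_t nr_free;		/* pages currently cached for grant mapping */
};

/* Grow the pool on demand while I/O requests are being processed. */
static void pool_grow(struct page_pool *pool, size_t nr)
{
	pool->nr_free += nr;
}

/*
 * Shrink the pool so it does not exceed the limit. Per the description
 * above, this check runs once the current requests are handled, or
 * after 100 ms have passed since the last request was handled.
 */
static void pool_shrink(struct page_pool *pool, size_t limit)
{
	while (pool->nr_free > limit)
		pool->nr_free--;	/* models freeing one cached page */
}

int main(void)
{
	struct page_pool pool = { .nr_free = 0 };

	pool_grow(&pool, 4096);		/* heavy I/O inflates the pool */
	printf("after I/O burst: %zu pages cached\n", pool.nr_free);

	pool_shrink(&pool, MAX_BUFFER_PAGES);
	printf("after shrink:    %zu pages cached\n", pool.nr_free);

	return 0;
}

(The point of the quoted description is that the pool can grow without
bound while requests are in flight and is only trimmed back to the limit
when handling pauses, so sustained I/O against many devices keeps the
backend's cached pages inflated in the meantime.)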

OOI... How do guests unilaterally cause the attachment of arbitrary numbers of PV devices?

Paul
