Subject: Re: [fuse-devel] fuse: max_background and congestion_threshold settings

On Nov 16 2016, Maxim Patlasov <mpatlasov@virtuozzo.com> wrote:
> On 11/16/2016 11:19 AM, Nikolaus Rath wrote:
>
>> Hi Maxim,
>>
>> On Nov 15 2016, Maxim Patlasov <mpatlasov@virtuozzo.com> wrote:
>>> On 11/15/2016 08:18 AM, Nikolaus Rath wrote:
>>>> Could someone explain to me the meaning of the max_background and
>>>> congestion_threshold settings of the fuse module?
>>>>
>>>> At first I assumed that max_background specifies the maximum number of
>>>> pending requests (i.e., requests that have been sent to userspace but
>>>> for which no reply was received yet). But looking at fs/fuse/dev.c, it
>>>> looks as if not every request is included in this number.
>>> fuse uses max_background for cases where the total number of
>>> simultaneous requests of a given type is not limited by some other
>>> natural means. AFAIU, these cases are: 1) async processing of direct
>>> IO; 2) read-ahead. As an example of a "natural" limitation: when a
>>> userspace process blocks on a sync direct IO read/write, the number of
>>> requests fuse consumes is limited by the number of such processes
>>> (actually their threads). In contrast, if userspace requests a 1GB
>>> direct IO read/write, it would be unreasonable to issue 1GB/128K == 8192
>>> fuse requests simultaneously. That's where max_background steps in.
>> Ah, that makes sense. Are these two cases meant as examples, or is that
>> an exhaustive list? Because I would have thought that other cases would
>> be writing of cached data (when writeback caching is enabled), and
>> asynchronous I/O from userspace...?
>
> I think that's an exhaustive list, but I may be missing something.
>
> As for writing of cached data, that definitely doesn't go through
> background requests. Here we rely on the flusher: fuse will allocate as
> many requests as the flusher wants to write back.
>
> Buffered AIO READs actually block in submit_io until fully
> processed. So it's just another example of the "natural" limitation I
> mentioned above.

Not sure I understand. What is it that's blocking? It can't be the
userspace process, because then it wouldn't be asynchronous I/O...
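
For concreteness, the kind of buffered AIO read I have in mind is roughly
the following (a minimal libaio sketch, error handling mostly omitted; the
file name is just a placeholder for something on a fuse mount):

/* buffered AIO read: file opened WITHOUT O_DIRECT, read submitted
 * through io_submit(). The question is which call blocks until the
 * data is actually there. */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    char *buf = malloc(128 * 1024);

    int fd = open("/mnt/fuse/somefile", O_RDONLY);  /* no O_DIRECT */
    if (fd < 0 || io_setup(1, &ctx) < 0) {
        perror("setup");
        return 1;
    }

    io_prep_pread(&cb, fd, buf, 128 * 1024, 0);     /* one 128K read */
    io_submit(ctx, 1, cbs);                         /* does this return early... */
    io_getevents(ctx, 1, 1, &ev, NULL);             /* ...or only block here? */

    io_destroy(ctx);
    close(fd);
    return 0;
}

If it is io_submit() that blocks until the read is done, then I can see the
"natural" limit (one in-flight request per submitting thread), but from the
caller's point of view the I/O is then hardly asynchronous.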

>> Also, I am not sure what you mean with async processing of direct
>> I/O. Shouldn't direct I/O always go directly to the file-system? If so,
>> how can it be processed asynchronously?
>
> That's a nice optimization we implemented a few years ago: given an
> incoming sync direct IO request of 1MB size, kernel fuse splits it
> into eight 128K requests and starts processing them in an async manner,
> waiting for the completion of all of them before completing the
> incoming 1MB request.

I see. But why isn't that also done for regular (non-direct) IO?
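
Just to check that I understand the splitting correctly, the pattern I
picture is something like the toy userspace program below (this is only an
illustration of the split-and-wait idea, not the actual fs/fuse code;
pthreads stand in for the kernel's async request processing):

/* toy illustration: a 1MB read is split into eight 128K pieces, all
 * issued at once, and the caller waits for all of them before
 * considering the original 1MB request complete. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (128 * 1024)
#define TOTAL (1024 * 1024)

struct chunk {
    int fd;
    char *buf;
    off_t off;
    ssize_t ret;
};

static void *read_chunk(void *arg)
{
    struct chunk *c = arg;
    c->ret = pread(c->fd, c->buf, CHUNK, c->off);  /* one "fuse request" */
    return NULL;
}

int main(int argc, char **argv)
{
    int fd = open(argc > 1 ? argv[1] : "/mnt/fuse/somefile", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char *buf = malloc(TOTAL);
    pthread_t tid[TOTAL / CHUNK];
    struct chunk c[TOTAL / CHUNK];

    /* split the 1MB request into 128K pieces and issue them all at once */
    for (int i = 0; i < TOTAL / CHUNK; i++) {
        c[i] = (struct chunk){ fd, buf + i * CHUNK, (off_t)i * CHUNK, 0 };
        pthread_create(&tid[i], NULL, read_chunk, &c[i]);
    }

    /* wait for all pieces before "completing" the original request */
    for (int i = 0; i < TOTAL / CHUNK; i++)
        pthread_join(tid[i], NULL);

    free(buf);
    close(fd);
    return 0;
}

If that picture is right, then max_background is what keeps the number of
such in-flight pieces bounded when the incoming request is much larger than
1MB (e.g. the 1GB case you mention above).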

Thanks,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«
