Subject: Re: [PATCH v4 1/2] crypto: engine - support for parallel requests
On 3/12/2020 5:26 AM, Herbert Xu wrote:
> On Mon, Mar 09, 2020 at 12:51:32AM +0200, Iuliana Prodan wrote:
>>
>> 	ret = enginectx->op.do_one_request(engine, async_req);
>> -	if (ret) {
>> -		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
>> -		goto req_err;
>> +	can_enq_more = ret;
>> +	if (can_enq_more < 0) {
>> +		dev_err(engine->dev, "Failed to do one request from queue: %d\n",
>> +			ret);
>> +		goto req_err_1;
>> +	}
>
> So this now includes the case of the hardware queue being full
> and the request needs to be queued until space opens up again.
> In this case, we should not do dev_err. So you need to be able
> to distinguish between the hardware queue being full vs. a real
> fatal error on the request (e.g., out-of-memory or some hardware
> failure).
>
There are two aspects here:
- if all requests go through crypto-engine, then when there is no space
in the hw queue do_one_request() returns 0, so in practice the
do_one_request() < 0 case does not happen;
- if there are other requests (outside the crypto API) that go to hw for
execution (in CAAM we have split key or RNG), those might fill the hw
queue after crypto-engine has reported that it still has space. This
case wasn't supported before my modifications and isn't supported by
these patches either.
That use-case could be handled by retrying to send the request to hw:
enqueue it back into crypto-engine, at the head of the queue (to keep
the order), and send it to hw again.
I've tried this, but it implies modifications in all drivers. For
example, on error a driver frees the resources of the request, so the
request would have to be mapped again.
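
For illustration only, a minimal sketch of how the engine side of such a
retry could look. It assumes a dedicated "hw queue full" return code from
do_one_request() (-ENOSPC here) and a helper like
crypto_enqueue_request_head() that puts a request back at the front of the
crypto queue; neither is part of this v4 series, so this is just the idea,
not what the patch does:

/*
 * Sketch only: distinguish a full hardware queue from a fatal error.
 * -ENOSPC as the "hw queue full" code and crypto_enqueue_request_head()
 * are assumptions here, not something the v4 patch provides.
 */
#include <linux/device.h>
#include <linux/spinlock.h>
#include <crypto/algapi.h>
#include <crypto/engine.h>

static void handle_do_one_request_ret(struct crypto_engine *engine,
				      struct crypto_async_request *async_req,
				      int ret)
{
	unsigned long flags;

	if (ret == -ENOSPC) {
		/*
		 * Hardware queue is full: put the request back at the
		 * head of the engine queue so ordering is preserved,
		 * and retry once the hardware signals a completion.
		 */
		spin_lock_irqsave(&engine->queue_lock, flags);
		crypto_enqueue_request_head(&engine->queue, async_req);
		spin_unlock_irqrestore(&engine->queue_lock, flags);
		return;
	}

	if (ret < 0)
		/* Real failure, e.g. out of memory or a hardware fault. */
		dev_err(engine->dev,
			"Failed to do one request from queue: %d\n", ret);
}

The catch, as said above, is that by the time the driver reports the
failure it has typically already unmapped/freed the request's resources,
so every driver would need changes to make the request re-submittable.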

Thanks,
Iulia


