Subject: [PATCH 5.1 026/128] crypto: lrw - don't access already-freed walk.iv
    From: Eric Biggers <ebiggers@google.com>

    commit aec286cd36eacfd797e3d5dab8d5d23c15d1bb5e upstream.

    If the user-provided IV needs to be aligned to the algorithm's
    alignmask, then skcipher_walk_virt() copies the IV into a new aligned
    buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
    if the caller unconditionally accesses walk.iv, it's a use-after-free.
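    Concretely, the buggy sequence in lrw's xor_tweak() (same names as in
    the diff below; condensed here for illustration) was:

	err = skcipher_walk_virt(&w, req, false);
	/* if err != 0, the aligned IV buffer that w.iv may point into
	 * has already been freed, so the next line reads freed memory */
	iv = (__be32 *)w.iv;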

    Fix this in the LRW template by checking the return value of
    skcipher_walk_virt().

    This bug was detected by my patches that improve testmgr to fuzz
    algorithms against their generic implementation. When the extra
    self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
    splat occurred during lrw(aes) testing.

    Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
    Cc: <stable@vger.kernel.org> # v4.20+
    Cc: Ondrej Mosnacek <omosnace@redhat.com>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    crypto/lrw.c | 4 +++-
    1 file changed, 3 insertions(+), 1 deletion(-)

    --- a/crypto/lrw.c
    +++ b/crypto/lrw.c
    @@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_request *req, bool second_pass)
     	}
     
     	err = skcipher_walk_virt(&w, req, false);
    -	iv = (__be32 *)w.iv;
    +	if (err)
    +		return err;
     
    +	iv = (__be32 *)w.iv;
     	counter[0] = be32_to_cpu(iv[3]);
     	counter[1] = be32_to_cpu(iv[2]);
     	counter[2] = be32_to_cpu(iv[1]);
