Subject: Re: [PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg


> On 8 Jul 2020, at 03:12, Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Tue, Jul 07, 2020 at 06:05:02PM -0700, Divya Indi wrote:
>> Thanks Jason.
>>
>> Appreciate your help and feedback for fixing this issue.
>>
>> Would it be possible to access the edited version of the patch?
>> If yes, please share a pointer to the same.
>
> https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=for-rc&id=f427f4d6214c183c474eeb46212d38e6c7223d6a

Hi Jason,


At first glance, this commit calls rdma_nl_multicast() whilst holding a spinlock. Since rdma_nl_multicast() takes a gfp_mask parameter, one could assume it supports an atomic context. However, rdma_nl_multicast() ends up in netlink_broadcast_filtered(). This function calls netlink_lock_table(), which calls read_unlock_irqrestore(), which in turn ends up in _raw_read_unlock_irqrestore(). And here preempt_enable() is called :-(
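
For clarity, this is the shape of the code I am talking about (a simplified sketch of ib_nl_make_request() after the commit, abbreviated and not the verbatim source):

    /* Simplified sketch of ib_nl_make_request() after the commit;
     * abbreviated, not the verbatim source. */
    spin_lock_irqsave(&ib_nl_request_lock, flags);
    ret = ib_nl_send_msg(query, gfp_mask);   /* -> rdma_nl_multicast() */
    if (ret > 0) {
            query->timeout = jiffies +
                    msecs_to_jiffies(sa_local_svc_timeout_ms);
            list_add_tail(&query->list, &ib_nl_request_list);
            ret = 0;
    }
    spin_unlock_irqrestore(&ib_nl_request_lock, flags);

Note that gfp_mask only governs the allocations inside ib_nl_send_msg(); it does nothing to keep the netlink table locking away from preempt_enable().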

Now, this could be fixed by calling rdma_nl_multicast() outside the spinlock and instead inserting the request into the timeout list in a sorted fashion.
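
Something along these lines, perhaps (an untested sketch; "pos" and the sorted insert are mine, not taken from the tree):

    struct ib_sa_query *pos;
    unsigned long flags;
    int ret;

    query->timeout = jiffies + msecs_to_jiffies(sa_local_svc_timeout_ms);

    spin_lock_irqsave(&ib_nl_request_lock, flags);
    /* Keep the list ordered on timeout; with the send moved out of
     * the lock we can no longer rely on append order. */
    list_for_each_entry(pos, &ib_nl_request_list, list)
            if (time_after(pos->timeout, query->timeout))
                    break;
    list_add_tail(&query->list, &pos->list);   /* insert before pos */
    spin_unlock_irqrestore(&ib_nl_request_lock, flags);

    ret = ib_nl_send_msg(query, gfp_mask);     /* no spinlock held now */

The request has to be queued with a valid timeout before the send, because the response may arrive and complete it before ib_nl_send_msg() even returns; nothing after the send may touch query. That ordering is the very thing the use-after-free fix was about.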

But the main problem here is that ib_nl_make_request() can be called from an atomic context, for example via the chain below (a sketch of the caller side follows it):

neigh_refresh_path() (takes a spin lock) ==>
path_rec_start() ==>
ib_sa_path_rec_get() ==>
send_mad() ==>
ib_nl_make_request() ==>
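
On the caller side that looks roughly like this (illustrative shape only, with the locking collapsed into one function; not the actual ipoib source):

    static void neigh_refresh_path(struct ipoib_neigh *neigh,
                                   struct net_device *dev)
    {
            struct ipoib_dev_priv *priv = ipoib_priv(dev);
            unsigned long flags;

            spin_lock_irqsave(&priv->lock, flags);   /* atomic from here */
            /* path_rec_start() -> ib_sa_path_rec_get(..., GFP_ATOMIC, ...)
             * -> send_mad() -> ib_nl_make_request(). GFP_ATOMIC covers
             * the allocations, but not the netlink locking. */
            path_rec_start(dev, neigh->daddr);
            spin_unlock_irqrestore(&priv->lock, flags);
    }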

Here's the stack trace (not the newest upstream, but I'm pretty sure the same problem is there):

<IRQ>
queued_spin_lock_slowpath+0xb/0xf
_raw_spin_lock_irqsave+0x46/0x48
send_mad+0x3d2/0x590 [ib_core]
? ipoib_start_xmit+0x6a0/0x6a0 [ib_ipoib]
ib_sa_path_rec_get+0x223/0x4d0 [ib_core]
? ipoib_start_xmit+0x6a0/0x6a0 [ib_ipoib]
? do_IRQ+0x59/0xe3
path_rec_start+0xa3/0x140 [ib_ipoib]
? ipoib_start_xmit+0x6a0/0x6a0 [ib_ipoib]
ipoib_start_xmit+0x2b0/0x6a0 [ib_ipoib]
dev_hard_start_xmit+0xb2/0x237
sch_direct_xmit+0x114/0x1bf
__dev_queue_xmit+0x592/0x818
? __alloc_skb+0xa1/0x289
dev_queue_xmit+0x10/0x12
arp_xmit+0x38/0xa6
arp_send_dst.part.16+0x61/0x84
arp_process+0x825/0x889
? try_to_wake_up+0x59/0x4f1
arp_rcv+0x140/0x1c9
? wake_up_worker+0x28/0x2b
? __slab_free+0x9b/0x2ba
__netif_receive_skb_core+0x401/0xb39
? dma_get_required_mask+0x28/0x31
? iommu_should_identity_map+0x52/0xdb
? iommu_no_mapping+0x4a/0xd1
__netif_receive_skb+0x18/0x59
netif_receive_skb_internal+0x45/0x119
napi_gro_receive+0xd8/0xf6
ipoib_ib_handle_rx_wc+0x1ca/0x520 [ib_ipoib]
ipoib_poll+0xcd/0x150 [ib_ipoib]
net_rx_action+0x289/0x3f4
__do_softirq+0xe1/0x2b5
do_softirq_own_stack+0x2a/0x35
</IRQ>
do_softirq+0x4d/0x6a
__local_bh_enable_ip+0x57/0x59
_raw_spin_unlock_bh+0x23/0x25
peernet2id+0x51/0x73
netlink_broadcast_filtered+0x223/0x41b
netlink_broadcast+0x1d/0x1f
rdma_nl_multicast+0x22/0x30 [ib_core]
send_mad+0x3e5/0x590 [ib_core]
? cma_bind_port+0x90/0x90 [rdma_cm]
ib_sa_path_rec_get+0x223/0x4d0 [ib_core]
? cma_bind_port+0x90/0x90 [rdma_cm]
? ring_buffer_lock_reserve+0x120/0x34d
? kmem_cache_alloc_trace+0x16f/0x1cd
rdma_resolve_route+0x287/0x810 [rdma_cm]
? cma_bind_port+0x90/0x90 [rdma_cm]
rds_rdma_cm_event_handler_cmn+0x311/0x7d0 [rds_rdma]
rds_rdma_cm_event_handler_worker+0x22/0x30 [rds_rdma]
process_one_work+0x169/0x3a6
worker_thread+0x4d/0x3e5
kthread+0x105/0x138


How shall this be attacked?


Thxs, Håkon




