From: Aleksandr Nogikh <nogikh@google.com>
Subject: [PATCH 1/2] net: store KCOV remote handle in sk_buff

Remote KCOV coverage collection enables coverage-guided fuzzing of code
that is not reachable during normal system call execution. It is
especially helpful for fuzzing networking subsystems, where packet
handling is commonly performed in separate work queues even for packets
that originate directly from user space.

Enable coverage-guided frame injection by adding a kcov_handle field to
the sk_buff structure. Initialization in __alloc_skb() ensures that no
socket buffer generated during a system call is missed.

Code of interest that performs packet processing should be annotated
with kcov_remote_start()/kcov_remote_stop(), as sketched below.
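
As a rough illustration (not part of this patch), a work queue handler
that consumes such skbs could be annotated as follows; rx_work_handler(),
next_pending_skb() and rx_process_one() are hypothetical placeholders for
a driver's own dequeue/processing logic:

#include <linux/kcov.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

/* Hypothetical driver helpers, stand-ins for the real processing code. */
struct sk_buff *next_pending_skb(void);
void rx_process_one(struct sk_buff *skb);

static void rx_work_handler(struct work_struct *work)
{
	struct sk_buff *skb;

	while ((skb = next_pending_skb()) != NULL) {
		/*
		 * Attribute the coverage collected below to the task that
		 * injected this skb (if it enabled remote KCOV collection).
		 */
		kcov_remote_start_common(skb_get_kcov_handle(skb));
		rx_process_one(skb);
		kcov_remote_stop();
	}
}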

An alternative approach would be to derive kcov_handle solely from the
device/interface that received a particular socket buffer. In that case,
however, it would be impossible to distinguish packets that originate
from normal background network processes from those intentionally
injected from user space.
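
For context, the user-space side is the existing remote KCOV interface
(Documentation/dev-tools/kcov.rst): a fuzzer enables remote coverage with
a common handle on the task that injects frames, so every skb allocated
from that task's system calls picks up the handle via kcov_common_handle().
A minimal sketch, assuming an arbitrary instance id of 0x42 and with error
handling omitted:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kcov.h>

#define COVER_SIZE (64 << 10)

static unsigned long *enable_remote_kcov(int *kcov_fd)
{
	struct kcov_remote_arg *arg;
	unsigned long *cover;
	int fd;

	fd = open("/sys/kernel/debug/kcov", O_RDWR);
	ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
	cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
		     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	arg = calloc(1, sizeof(*arg));
	arg->trace_mode = KCOV_TRACE_PC;
	arg->area_size = COVER_SIZE;
	/* 0x42 is an arbitrary instance id for the "common" subsystem. */
	arg->common_handle = kcov_remote_handle(KCOV_SUBSYSTEM_COMMON, 0x42);
	ioctl(fd, KCOV_REMOTE_ENABLE, arg);
	free(arg);

	/*
	 * skbs allocated in this task's syscalls now carry the handle, and
	 * annotated packet-processing code dumps its coverage into 'cover'.
	 */
	*kcov_fd = fd;
	return cover;
}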

Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
---
 include/linux/skbuff.h | 21 +++++++++++++++++++++
 net/core/skbuff.c      |  1 +
 2 files changed, 22 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index a828cf99c521..5639f27e05ef 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -701,6 +701,7 @@ typedef unsigned char *sk_buff_data_t;
  *	@transport_header: Transport layer header
  *	@network_header: Network layer header
  *	@mac_header: Link layer header
+ *	@kcov_handle: KCOV remote handle for remote coverage collection
  *	@tail: Tail pointer
  *	@end: End pointer
  *	@head: Head of buffer
@@ -904,6 +905,10 @@ struct sk_buff {
 	__u16			network_header;
 	__u16			mac_header;
 
+#ifdef CONFIG_KCOV
+	u64			kcov_handle;
+#endif
+
 	/* private: */
 	__u32			headers_end[0];
 	/* public: */
@@ -4605,5 +4610,21 @@ static inline void skb_reset_redirect(struct sk_buff *skb)
 #endif
 }
 
+static inline void skb_set_kcov_handle(struct sk_buff *skb, const u64 kcov_handle)
+{
+#ifdef CONFIG_KCOV
+	skb->kcov_handle = kcov_handle;
+#endif
+}
+
+static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
+{
+#ifdef CONFIG_KCOV
+	return skb->kcov_handle;
+#else
+	return 0;
+#endif
+}
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_SKBUFF_H */
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index f67631faa9aa..e7acd7d45b03 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -233,6 +233,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	skb->end = skb->tail + size;
 	skb->mac_header = (typeof(skb->mac_header))~0U;
 	skb->transport_header = (typeof(skb->transport_header))~0U;
+	skb_set_kcov_handle(skb, kcov_common_handle());
 
 	/* make sure we initialize shinfo sequentially */
 	shinfo = skb_shinfo(skb);
--
2.28.0.806.g8561365e88-goog