Subject: Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT


On 1/18/18 1:02 AM, Thomas Gleixner wrote:
> On Thu, 18 Jan 2018, Yang Shi wrote:
>> On 1/17/18 4:21 AM, Thomas Gleixner wrote:
>>> There are two things which can be done here:
>>>
>>> 1) The collected objects can be put on a global free list and work
>>> scheduled to free them piecewise.
>>
>> I don't get your point here. Freeing the objects is already done in a work:
>> free_object() -> schedule_work()
>
> But it's doing it in a loop for each freed object.
>
>> Do you mean freeing all of them in a batch outside the for loop, i.e. not
>> calling free_object() in the for loop?
>
> Something like this:
>
> diff --git a/lib/debugobjects.c b/lib/debugobjects.c
> index 2f5349c6e81a..d36940cdc658 100644
> --- a/lib/debugobjects.c
> +++ b/lib/debugobjects.c
> @@ -42,6 +42,7 @@ static struct debug_obj obj_static_pool[ODEBUG_POOL_SIZE] __initdata;
> static DEFINE_RAW_SPINLOCK(pool_lock);
>
> static HLIST_HEAD(obj_pool);
> +static HLIST_HEAD(obj_to_free);
>
> static int obj_pool_min_free = ODEBUG_POOL_SIZE;
> static int obj_pool_free = ODEBUG_POOL_SIZE;
> @@ -714,13 +715,12 @@ EXPORT_SYMBOL_GPL(debug_object_active_state);
> static void __debug_check_no_obj_freed(const void *address, unsigned long size)
> {
> unsigned long flags, oaddr, saddr, eaddr, paddr, chunks;
> - struct hlist_node *tmp;
> - HLIST_HEAD(freelist);
> struct debug_obj_descr *descr;
> enum debug_obj_state state;
> struct debug_bucket *db;
> + struct hlist_node *tmp;
> struct debug_obj *obj;
> - int cnt;
> + int cnt, freed = 0;
>
> saddr = (unsigned long) address;
> eaddr = saddr + size;
> @@ -751,21 +751,22 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
> goto repeat;
> default:
> hlist_del(&obj->node);
> - hlist_add_head(&obj->node, &freelist);
> + /* Put them on the global freelist; irqs are already disabled (db->lock) */
> + raw_spin_lock(&pool_lock);
> + hlist_add_head(&obj->node, &obj_to_free);
> + raw_spin_unlock(&pool_lock);
> + freed++;
> break;
> }
> }
> raw_spin_unlock_irqrestore(&db->lock, flags);
>
> - /* Now free them */
> - hlist_for_each_entry_safe(obj, tmp, &freelist, node) {
> - hlist_del(&obj->node);
> - free_object(obj);
> - }
> -
> if (cnt > debug_objects_maxchain)
> debug_objects_maxchain = cnt;
> }
> +
> + if (freed)
> + schedule_work(.....);
>
> The allocation side can look at the free list as well and grab objects from
> there if the pool level is low, provided that happens before the work has run.

Thanks a lot for the example. I'm working on a preliminary patch and will
send it out for review once I finish my tests.
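
For the allocation side, what I have in mind is roughly the following. This
is a sketch only, not the final patch: the helper names are made up, it
assumes the obj_to_free list and pool_lock from your diff, and it reuses the
existing obj_cache, obj_pool, obj_pool_free and debug_objects_pool_min_level
from lib/debugobjects.c.

/*
 * Sketch: work callback drains the parked objects outside db->lock,
 * and the allocation path recycles them when the pool runs low.
 */
static void drain_obj_to_free(struct work_struct *work)
{
	struct hlist_node *tmp;
	struct debug_obj *obj;
	unsigned long flags;
	HLIST_HEAD(tofree);

	/* Snapshot the list so kmem_cache_free() runs without pool_lock held */
	raw_spin_lock_irqsave(&pool_lock, flags);
	hlist_move_list(&obj_to_free, &tofree);
	raw_spin_unlock_irqrestore(&pool_lock, flags);

	hlist_for_each_entry_safe(obj, tmp, &tofree, node) {
		hlist_del(&obj->node);
		kmem_cache_free(obj_cache, obj);
	}
}

/* Allocation side: refill the pool from obj_to_free before hitting the slab */
static void fill_pool_from_free_list(void)
{
	struct debug_obj *obj;
	unsigned long flags;

	raw_spin_lock_irqsave(&pool_lock, flags);
	while (obj_pool_free < debug_objects_pool_min_level &&
	       !hlist_empty(&obj_to_free)) {
		obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
		hlist_del(&obj->node);
		hlist_add_head(&obj->node, &obj_pool);
		obj_pool_free++;
	}
	raw_spin_unlock_irqrestore(&pool_lock, flags);
}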

>
>
>>>
>>> 2) We can do a cond_resched() if not in atomic context and interrupts are
>>> enabled.
>>
>> I did try this before I went with the touch-softlockup-watchdog approach. The
>> problem is that in_atomic() can't tell whether we are in atomic context on a
>> non-preempt kernel. For a preempt kernel, it is easy.
>
> Peter, can we do anything about that?

As Peter confirmed, in_atomic() is not functional on a !PREEMPT kernel, so
shall we still go with the touch-softlockup-watchdog approach?
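
Just to make the trade-off concrete, the guard I experimented with looks
roughly like this (sketch only, the helper name is made up; the
in_atomic()/irqs_disabled() check is only meaningful when preempt_count()
is maintained, i.e. on CONFIG_PREEMPT):

/*
 * On a PREEMPT kernel we can detect atomic context and reschedule; on
 * !PREEMPT the preempt count is not maintained by spin_lock(), so the
 * check is unreliable and the only safe thing to do in the long loop
 * is to touch the softlockup watchdog.
 */
static void debug_objects_relax(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT) && !in_atomic() && !irqs_disabled())
		cond_resched();
	else
		touch_softlockup_watchdog();
}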

Thanks,
Yang

>
> Thanks,
>
> tglx
>
