 
From: Wenqiu
Date: 2017-01-22
Subject: Questions about process statistics

Hello,

Recently I noticed that early kernel versions suffered from the
scheduler timing attack described in
http://static.usenix.org/event/sec07/tech/full_papers/tsafrir/tsafrir_html/
which has since been fixed by the introduction of CFS and
nanosecond-granularity accounting.

However, the statistics exported from the kernel to /proc/stat still
appear to be updated on every tick by update_process_times(), at jiffy
granularity.
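For concreteness, here is a rough userspace sketch (my own
illustration, not kernel code) of how a monitoring tool typically
consumes these fields; the division by USER_HZ is where the jiffy
resolution becomes visible:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned long long user_j, nice_j, sys_j, idle_j;
	long user_hz = sysconf(_SC_CLK_TCK);	/* ticks/second, usually 100 */
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return 1;
	/* First line aggregates all CPUs: "cpu  user nice system idle ..."
	 * Each field is a cumulative tick count in USER_HZ units. */
	if (fscanf(f, "cpu %llu %llu %llu %llu",
		   &user_j, &nice_j, &sys_j, &idle_j) == 4)
		printf("user time: %.2f s (resolution: one tick = %.0f ms)\n",
		       (double)user_j / user_hz, 1000.0 / user_hz);
	fclose(f);
	return 0;
}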

In my view, applications that consume these statistics from userspace
would still be exposed to a time-accounting attack, since a process
can run entirely between two ticks and evade being accounted (please
correct me if I'm wrong). Is there any particular reason that
/proc/stat only provides jiffy granularity? Would it be possible to
update the statistics every time the CPU switches to another process
instead of on every tick, and to read the TSC for a more accurate
time value?
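To make the concern concrete, this is the evasion pattern I have in
mind (greatly simplified; a real attacker would calibrate against the
actual tick phase as in the paper above, and TICK_NS here just assumes
HZ=100):

#include <time.h>

#define TICK_NS 10000000L	/* one tick = 10 ms, assuming HZ=100 */

/* Busy-loop for roughly ns nanoseconds on the monotonic clock. */
static void burn(long ns)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000000000L
		 + (now.tv_nsec - start.tv_nsec) < ns);
}

int main(void)
{
	struct timespec nap = { 0, TICK_NS / 2 };

	for (;;) {
		burn(TICK_NS / 2);	/* compute for half a tick ... */
		nanosleep(&nap, NULL);	/* ... sleep across the sample */
	}
}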

Also, I noticed that the acct_rss_mem1/acct_vm_mem1 fields in
task_struct are updated on every tick, so a malicious process can
occupy a large amount of memory between two ticks without being
charged for it. Would it be possible to update the accumulated memory
every time the memory size is modified (for example, in insert_page),
by adding the previous memory size multiplied by the elapsed time
interval? I'd like to know whether that would help avoid the
time-accounting attack and give more accurate statistics.
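Something along these lines is what I mean; the struct and function
names below are made up purely for illustration, and sched_clock() is
just one possible nanosecond time source:

/* Instead of sampling the RSS on each tick, charge
 * (old size) * (time since last change) whenever the size changes. */
struct mem_account {
	unsigned long		cur_pages;	/* current RSS in pages */
	unsigned long long	last_ns;	/* time of last size change */
	unsigned long long	page_ns;	/* accumulated pages * ns */
};

/* Hypothetical hook, called from every path that changes the size
 * (e.g. insert_page), with a nanosecond timestamp such as
 * sched_clock(). */
static void mem_account_resize(struct mem_account *a,
			       unsigned long new_pages,
			       unsigned long long now_ns)
{
	a->page_ns += a->cur_pages * (now_ns - a->last_ns);
	a->cur_pages = new_pages;
	a->last_ns = now_ns;
}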

I'd appreciate it if you could answer my questions. Thanks a lot.

Best Regards,

Wenqiu

