Date: 2012-04-30
Subject: Re: High CPU usage of scheduler?
From: Dave Johansen <davejohansen@gmail.com>
On Fri, Apr 27, 2012 at 8:23 AM, Dave Johansen <davejohansen@gmail.com> wrote:
> On Thu, Apr 26, 2012 at 8:10 PM, Yong Zhang <yong.zhang0@gmail.com> wrote:
>>
>> On Thu, Apr 26, 2012 at 03:08:51PM -0700, Dave Johansen wrote:
>> > I am looking into moving an application from RHEL 5 to RHEL 6 and I
>> > noticed an unexpected increase in CPU usage. A little digging has led
>> > me to believe that the scheduler may be the culprit.
>> >
>> > I created the attached test_select_work.c file to test this out. I
>> > compiled it with the following command on RHEL 5:
>> >
>> > cc test_select_work.c -O2 -DSLEEP_TYPE=0 -Wall -Wextra -lm -lpthread
>> > -o test_select_work
>>
>> Hmm...Do both RHEL 5 and RHEL 6 have high resolution timer enabled?
>>
>> If not, could you please try to boot the one which enable high resolution
>> timer with 'highres=off' to see if things change?
>
> Yes, RHEL 6 has CONFIG_HIGH_RES_TIMERS=y. I rebooted and used the
> 'highres=off' in grub and got the following results:
>
>   ./test_select_work 1000 10000 300 4
>   time_per_iteration: min: 3130.1 us avg: 3152.2 us max: 3162.2 us
> stddev: 15.0 us
>   ./test_select_work 1000 10000 300 8
>   time_per_iteration: min: 4314.6 us avg: 4407.9 us max: 4496.3 us
> stddev: 60.6 us
>   ./test_select_work 1000 10000 300 40
>   time_per_iteration: min: 8901.7 us avg: 9056.5 us max: 9121.3 us
> stddev: 57.5 us
>
> Any other info that might be helpful?
>
> Thanks,
> Dave

I made some improvements to the program to make comparisons a bit
easier and the standard deviation a bit more meaningful. I have
attached the updated program and script I used to run it. I also made
a git repo to track the code and it's available at:
git://github.com/daveisfera/test_sleep.git

I also ran the tests on a Dell Vostro 200 (dual core CPU) with several
different OS versions. I realize that several of these will have
different patches applied and won't be "pure kernel code", but this
was the simplest way that I could run the tests on different versions
of the kernel and get comparisons. The plots are available in the
bugzilla I opened on this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=812148

All of these tests were run with the following command, using the
executable built on Ubuntu 11.10 (except on CentOS 6.2, which had gcc
available, so I re-compiled it there):
./run_test 1000 10 1000 250 8 4

The first interesting result is with Ubuntu 7.10. usleep seems to
behave as expected, but select and poll took significantly longer
than expected. With CentOS 5.6, the opposite is true, so I'm not sure
what conclusion can be drawn from those results.

I tried running the Ubuntu 8.04-4 LiveCD (because that's when the CFS
was introduced), but it wouldn't boot on the Dell Vostro 200.

CentOS 6.2 is where things start to get interesting, because a 10-15%
increase in the execution time is seen with select and poll. With
usleep, that penalty doesn't seem to be present for low numbers of
threads but seems to reach similar levels once the number of threads
is 2x the number of cores.

Ubuntu 11.10, Ubuntu 12.04, and Fedora 16 all show results basically
in line with CentOS 6.2, but with usleep always carrying a penalty
comparable to the other sleep methods. The one exception is that on
Fedora 16, poll is lower than the other sleep methods with 3 threads.

One final test that I did was to install the PPA on Ubuntu 11.10 that
has BFS instead of CFS. Info about it can be found at:
https://launchpad.net/~chogydan/+archive/ppa
It doesn't seem to suffer from this sleep issue (or at least not in
the same way), because the results for 1-3 threads are at 2 ms as
expected (basically no penalty), but at 4 threads it's about 20%
higher than the no-sleep case, and it seems to maintain more of a
penalty at higher numbers of threads. I also didn't see the system
CPU usage when running the previous test with BFS (but this might
just be an accounting difference, since CFS and BFS account for CPU
usage differently).
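
To try to rule the accounting difference in or out, one thing I'm
thinking of doing is measuring per-thread CPU time with
clock_gettime(CLOCK_THREAD_CPUTIME_ID) alongside the gettimeofday()
wall-clock measurement. A rough sketch of the idea (standalone, using
the same 1000 us select() sleep as the tests; the loop count is just
an example value, and it would need -lrt on older glibc):

/*
 * Rough sketch (not part of the attached test program): compare wall-clock
 * time vs per-thread CPU time across a loop of bare 1000 us select() sleeps.
 * If the overhead is real work in the sleep/wakeup path it should show up in
 * CLOCK_THREAD_CPUTIME_ID; if it's only an accounting difference it shouldn't.
 */
#include <stdio.h>
#include <time.h>
#include <sys/select.h>

static long long diff_us(struct timespec start, struct timespec end)
{
    return 1000000LL * (end.tv_sec - start.tv_sec) +
           (end.tv_nsec - start.tv_nsec) / 1000;
}

int main(void)
{
    struct timespec wall_start, wall_end, cpu_start, cpu_end;
    struct timeval tv;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &wall_start);
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu_start);

    for (i = 0; i < 1000; ++i) {
        tv.tv_sec = 0;
        tv.tv_usec = 1000;  /* same 1000 us sleep used in the tests */
        select(0, NULL, NULL, NULL, &tv);
    }

    clock_gettime(CLOCK_MONOTONIC, &wall_end);
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu_end);

    printf("wall: %lld us  thread cpu: %lld us\n",
           diff_us(wall_start, wall_end), diff_us(cpu_start, cpu_end));
    return 0;
}

If the extra time shows up in the thread CPU clock on CFS but not on
BFS, that would point at real work being done in the sleep/wakeup
path rather than just a difference in how the time is charged.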

I know that these results aren't 100% conclusive, but they seem to
indicate that something about the sleeping mechanism in the kernel is
using more CPU than I expect it to, and that this is the cause of the
increased CPU usage. It doesn't appear to be the scheduler itself,
because then a similar sort of penalty would have been seen with
sched_yield. It appears to me that it is something to do with the
sleep mechanism's interaction with the scheduler (particularly with
CFS more than with BFS).
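
One additional test I could run to back that up is to take the work
loop out entirely and just measure how much user/system CPU time a
loop of bare sleeps gets charged on each kernel. Something along
these lines (the 1000 us sleep and 10000 iterations are just example
values, and the select() could be swapped for poll()/usleep()/
sched_yield() to compare):

/*
 * Rough sketch: run a loop of bare sleeps with no work at all and report the
 * user/system CPU time charged to the process, so any CPU cost of the sleep
 * path shows up on its own, separate from the work loop.
 */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/time.h>

static long long tv_us(struct timeval tv)
{
    return 1000000LL * tv.tv_sec + tv.tv_usec;
}

int main(void)
{
    const int sleep_us = 1000;   /* example: same sleep time as the tests */
    const int iters = 10000;     /* example iteration count */
    struct rusage before, after;
    struct timeval tv;
    int i;

    getrusage(RUSAGE_SELF, &before);
    for (i = 0; i < iters; ++i) {
        tv.tv_sec = 0;
        tv.tv_usec = sleep_us;
        select(0, NULL, NULL, NULL, &tv);  /* swap in poll()/usleep() to compare */
    }
    getrusage(RUSAGE_SELF, &after);

    printf("user: %lld us  system: %lld us  (over %d sleeps of %d us)\n",
           tv_us(after.ru_utime) - tv_us(before.ru_utime),
           tv_us(after.ru_stime) - tv_us(before.ru_stime),
           iters, sleep_us);
    return 0;
}

If the bare sleeps are charged noticeably more CPU on the newer
kernels, that would support the sleep-path explanation without the
work loop muddying the numbers.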

Is there any more data I can gather or tests that I can run that can
help diagnose this problem?

Thanks in advance for any help,
Dave

[attached test program]

#include <float.h>
#include <math.h>
#include <poll.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/time.h>



// The different types of sleep that are supported
enum sleep_type {
SLEEP_TYPE_NONE,
SLEEP_TYPE_SELECT,
SLEEP_TYPE_POLL,
SLEEP_TYPE_USLEEP,
SLEEP_TYPE_YIELD,
};

// Function type for doing work with a sleep
typedef long long *(*work_func)(const int sleep_time, const int num_iterations, const int work_size);

// Information passed to the thread
struct thread_info {
int sleep_time;
int num_iterations;
int work_size;
work_func func;
};

// In order to make SLEEP_TYPE a run-time parameter, function pointers are
// used. The function pointer could have been to the sleep function itself,
// but that would mean an extra function call inside of the "work loop". I
// wanted to keep the measurements as tight as possible and the extra work
// being done to be as small/controlled as possible, so instead the work is
// declared as a series of macros that are called in all of the sleep
// functions. The code is a bit uglier this way, but I believe it results in
// a more accurate test.

// Fill in a buffer with random numbers (taken from latt.c by Jens Axboe <jens.axboe@oracle.com>)
#define DECLARE_WORK() \
int *buf; \
int pseed; \
int inum, bnum; \
struct timeval before, after; \
long long *diff; \
buf = calloc(work_size, sizeof(int)); \
diff = malloc(sizeof(long long)); \
gettimeofday(&before, NULL)

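// Run num_iterations cycles of the given sleep followed by filling the
// buffer with pseudo-random numbers (the "work" being timed)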
#define DO_WORK(SLEEP_FUNC) \
for (inum=0; inum<num_iterations; ++inum) { \
SLEEP_FUNC \
\
pseed = 1; \
for (bnum=0; bnum<work_size; ++bnum) { \
pseed = pseed * 1103515245 + 12345; \
buf[bnum] = (pseed / 65536) % 32768; \
} \
} \

#define FINISH_WORK() \
gettimeofday(&after, NULL); \
*diff = 1000000LL * (after.tv_sec - before.tv_sec); \
*diff += after.tv_usec - before.tv_usec; \
free(buf); \
return diff

long long *do_work_nosleep(const int sleep_time, const int num_iterations, const int work_size)
{
DECLARE_WORK();

// Let the compiler know that sleep_time isn't used in this function
(void)sleep_time;

DO_WORK();

FINISH_WORK();
}

long long *do_work_select(const int sleep_time, const int num_iterations, const int work_size)
{
struct timeval ts;
DECLARE_WORK();

DO_WORK(
ts.tv_sec = 0;
ts.tv_usec = sleep_time;
select(0, 0, 0, 0, &ts);
);

FINISH_WORK();
}

long long *do_work_poll(const int sleep_time, const int num_iterations, const int work_size)
{
struct pollfd pfd;
const int sleep_time_ms = sleep_time / 1000;
DECLARE_WORK();

pfd.fd = 0;
pfd.events = 0;

DO_WORK(
poll(&pfd, 1, sleep_time_ms);
);

FINISH_WORK();
}

long long *do_work_usleep(const int sleep_time, const int num_iterations, const int work_size)
{
DECLARE_WORK();

DO_WORK(
usleep(sleep_time);
);

FINISH_WORK();
}

long long *do_work_yield(const int sleep_time, const int num_iterations, const int work_size)
{
DECLARE_WORK();

// Let the compiler know that sleep_time isn't used in this function
(void)sleep_time;

DO_WORK(
sched_yield();
);

FINISH_WORK();
}

void *do_test(void *arg)
{
const struct thread_info *tinfo = (struct thread_info *)arg;

// Call the function to do the work
return (*tinfo->func)(tinfo->sleep_time, tinfo->num_iterations, tinfo->work_size);
}

int main(int argc, char **argv)
{
if (argc <= 6) {
printf("Usage: %s <sleep_time> <outer_iterations> <inner_iterations> <work_size> <num_threads> <sleep_type>\n", argv[0]);
printf(" outer_iterations: Number of iterations for each thread (used to calculate statistics)\n");
printf(" inner_iterations: Number of work/sleep cycles performed in each thread (used to improve consistency/observability))\n");
printf(" work_size: Number of array elements (in kb) that are filled with psuedo-random numbers\n");
printf(" num_threads: Number of threads to spawn and perform work/sleep cycles in\n");
printf(" sleep_type: 0=none 1=select 2=poll 3=usleep 4=yield\n");
return -1;
}

struct thread_info tinfo;
int outer_iterations;
int sleep_type;
int s, inum, tnum, num_threads;
pthread_attr_t attr;
pthread_t *threads;
long long *res;
long long *times;

// Get the parameters
tinfo.sleep_time = atoi(argv[1]);
outer_iterations = atoi(argv[2]);
tinfo.num_iterations = atoi(argv[3]);
tinfo.work_size = atoi(argv[4]) * 1024;
num_threads = atoi(argv[5]);
sleep_type = atoi(argv[6]);
switch (sleep_type) {
case SLEEP_TYPE_NONE: tinfo.func = &do_work_nosleep; break;
case SLEEP_TYPE_SELECT: tinfo.func = &do_work_select; break;
case SLEEP_TYPE_POLL: tinfo.func = &do_work_poll; break;
case SLEEP_TYPE_USLEEP: tinfo.func = &do_work_usleep; break;
case SLEEP_TYPE_YIELD: tinfo.func = &do_work_yield; break;
default:
printf("Invalid sleep type: %d\n", sleep_type);
return -7;
}

// Initialize the thread creation attributes
s = pthread_attr_init(&attr);
if (s != 0) {
printf("Error initializing thread attributes\n");
return -2;
}

// Allocate the memory to track the threads and their measured times
threads = calloc(num_threads, sizeof(pthread_t));
times = calloc(num_threads, sizeof(long long));
if (threads == NULL || times == NULL) {
printf("Error allocating memory to track threads\n");
return -3;
}

// Calculate the statistics of the processing
float min_time = FLT_MAX;
float max_time = -FLT_MAX;
float avg_time = 0;
float prev_avg_time = 0;
float stddev_time = 0;

// Perform the requested number of outer iterations
for (inum=0; inum<outer_iterations; ++inum) {
// Start all of the threads
for (tnum=0; tnum<num_threads; ++tnum) {
s = pthread_create(&threads[tnum], &attr, &do_test, &tinfo);

if (s != 0) {
printf("Error starting thread\n");
return -4;
}
}

// Clean up the thread creation attributes
s = pthread_attr_destroy(&attr);
if (s != 0) {
printf("Error cleaning up thread attributes\n");
return -5;
}

// Wait for all the threads to finish
for (tnum=0; tnum<num_threads; ++tnum) {
s = pthread_join(threads[tnum], (void **)(&res));

if (s != 0) {
printf("Error waiting for thread\n");
return -6;
}

// Save the time
times[tnum] = *res;

// And clean it up
free(res);
}

// Update the statistics (running mean/stddev via Welford's method, using
// the total number of samples seen so far across all outer iterations)
for (tnum=0; tnum<num_threads; ++tnum) {
if (times[tnum] < min_time)
min_time = times[tnum];
if (times[tnum] > max_time)
max_time = times[tnum];
avg_time += (times[tnum] - avg_time) / (float)((inum * num_threads) + tnum + 1);
stddev_time += (times[tnum] - prev_avg_time) * (times[tnum] - avg_time);
prev_avg_time = avg_time;
}
}

// Finish the calculation of the standard deviation
stddev_time = sqrtf(stddev_time / ((outer_iterations * num_threads) - 1));

// Print out the statistics of the times
printf("time_per_iteration: min: %.1f us avg: %.1f us max: %.1f us stddev: %.1f us\n",
min_time / tinfo.num_iterations,
avg_time / tinfo.num_iterations,
max_time / tinfo.num_iterations,
stddev_time / tinfo.num_iterations);

// Clean up the allocated threads
free(threads);

return 0;
}