Subject: Re: [1/1] CBUS: new very fast (for insert operations) message bus based on kernel connector.

Evgeniy Polyakov <johnpol@2ka.mipt.ru> wrote:
>
> I'm pleased to annouce CBUS - ultra fast (for insert operations)
> message bus.

> +static int cbus_enqueue(struct cbus_event_container *c, struct cn_msg *msg)
> +{
> +	int err;
> +	struct cbus_event *event;
> +	unsigned long flags;
> +
> +	event = kmalloc(sizeof(*event) + msg->len, GFP_ATOMIC);

Using GFP_ATOMIC is a bit lame. It would be better to require the caller
to pass in the gfp_flags. Or simply require that all callers not hold
locks and use GFP_KERNEL.
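
For example (untested sketch; the extra parameter is just illustrative):

	static int cbus_enqueue(struct cbus_event_container *c,
				struct cn_msg *msg, gfp_t gfp_flags)
	{
		struct cbus_event *event;
		unsigned long flags;

		event = kmalloc(sizeof(*event) + msg->len, gfp_flags);
		if (!event)
			return -ENOMEM;
		/* ... rest unchanged ... */
	}

Then callers which cannot sleep pass GFP_ATOMIC and everyone else passes
GFP_KERNEL.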

> +static int cbus_process(struct cbus_event_container *c, int all)
> +{
> +	struct cbus_event *event;
> +	int len, i, num;
> +
> +	if (list_empty(&c->event_list))
> +		return 0;
> +
> +	if (all)
> +		len = c->qlen;
> +	else
> +		len = 1;
> +
> +	num = 0;
> +	for (i=0; i<len; ++i) {
> +		event = cbus_dequeue(c);
> +		if (!event)
> +			continue;
> +
> +		cn_netlink_send(&event->msg, 0);
> +		num++;
> +
> +		kfree(event);
> +	}
> +
> +	return num;
> +}

It might be cleaner to pass in an item count rather than a boolean `all'
here. Either way, this looks racy: the initial list_empty() test can miss
newly queued events, because there is no locking and no memory barrier here.
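
For example (sketch only; it assumes the container has a spinlock - call it
c->event_lock - protecting event_list and qlen, which is presumably what the
`unsigned long flags' in cbus_enqueue() is for):

	unsigned long flags;

	spin_lock_irqsave(&c->event_lock, flags);
	if (list_empty(&c->event_list))
		len = 0;
	else
		len = all ? c->qlen : 1;
	spin_unlock_irqrestore(&c->event_lock, flags);
	if (!len)
		return 0;

and the dequeue itself needs to happen under the same lock.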

We conventionally code for loops as

	for (i = 0; i < len; i++)

> +static int cbus_event_thread(void *data)
> +{
> +	int i, non_empty = 0, empty = 0;
> +	struct cbus_event_container *c;
> +
> +	daemonize(cbus_name);
> +	allow_signal(SIGTERM);
> +	set_user_nice(current, 19);

Please use the kthread API for managing this thread.

Is a new kernel thread needed?
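
With the kthread API the daemonize/allow_signal/SIGTERM dance goes away and
the exit test becomes kthread_should_stop(). Roughly (sketch, not tested):

	static int cbus_event_thread(void *data)
	{
		set_user_nice(current, 19);

		while (!kthread_should_stop()) {
			/* sleep (see below), then drain the per-cpu queues */
		}
		return 0;
	}

The creation and teardown halves are sketched further down, against
cbus_init() and cbus_fini().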

> +	while (!cbus_need_exit) {
> +		if (empty || non_empty == 0 || non_empty > 10) {
> +			interruptible_sleep_on_timeout(&cbus_wait_queue, 10);

interruptible_sleep_on_timeout() is heavily deprecated and is racy without
external locking (it pretty much has to be the BKL). Use wait_event_timeout().
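
Something like this, say (cbus_have_events() is a made-up helper standing for
"some per-cpu queue is non-empty"):

	wait_event_timeout(cbus_wait_queue,
			   cbus_need_exit || cbus_have_events(), 10);

which still sleeps for at most 10 jiffies but wakes immediately when the
condition becomes true, with proper barriers and no BKL dependency. In the
kthread version sketched above the cbus_need_exit test would become
kthread_should_stop().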

> +int __devinit cbus_init(void)
> +{
> +	int i, err = 0;
> +	struct cbus_event_container *c;
> +
> +	for_each_cpu(i) {
> +		c = &per_cpu(cbus_event_list, i);
> +		cbus_init_event_container(c);
> +	}
> +
> +	init_completion(&cbus_thread_exited);
> +
> +	cbus_pid = kernel_thread(cbus_event_thread, NULL, CLONE_FS | CLONE_FILES);

Using the kthread API would clean this up.
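
With the kthread conversion sketched above, the creation side would look
something like this (untested; cbus_task replacing cbus_pid is a made-up name):

	cbus_task = kthread_run(cbus_event_thread, NULL, "%s", cbus_name);
	if (IS_ERR(cbus_task))
		return PTR_ERR(cbus_task);

kthread_run() returns a task_struct pointer, so IS_ERR()/PTR_ERR() are the
natural checks and the pid-cast problem below disappears.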

> +	if (IS_ERR((void *)cbus_pid)) {

The weird cast here might not even work at all on 64-bit architectures: it
depends on whether they sign-extend ints when casting them to pointers (they
probably do), and on cbus_pid really being an s32.

Much better to do

	if (cbus_pid < 0)

> +void __devexit cbus_fini(void)
> +{
> +	int i;
> +	struct cbus_event_container *c;
> +
> +	cbus_need_exit = 1;
> +	kill_proc(cbus_pid, SIGTERM, 0);
> +	wait_for_completion(&cbus_thread_exited);
> +
> +	for_each_cpu(i) {
> +		c = &per_cpu(cbus_event_list, i);
> +		cbus_fini_event_container(c);
> +	}
> +}

I think this is racy. What stops new events from being queued while this
function is in progress?
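
With the kthread conversion the teardown half would reduce to something like
this (again a sketch; it still assumes all producers have been stopped first,
i.e. whatever feeds cbus_enqueue() has already been unregistered):

	/* no new events can be queued past this point */
	kthread_stop(cbus_task);

	for_each_cpu(i) {
		c = &per_cpu(cbus_event_list, i);
		cbus_fini_event_container(c);
	}

kthread_stop() waits for the thread to exit, so cbus_need_exit, kill_proc()
and the completion can all go away - but the "who stops the producers"
question above still needs an answer.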
