From: Ingo Oeser <>
Subject: Re: Van Jacobson's net channels and real-time
Date: Fri, 21 Apr 2006 18:52:47 +0200
Hi David,
nice to see you getting started with it.
I'm not sure about the queue logic there.
/* Caller must have exclusive producer access to the netchannel. */
int netchannel_enqueue(struct netchannel *np, struct netchannel_buftrailer *bp)
{
	unsigned long tail;

	tail = np->netchan_tail;
	if (tail == np->netchan_head)
		return -ENOMEM;
This looks wrong, since "empty" and "full" test as the very same condition (head == tail) in your case.
struct netchannel_buftrailer *__netchannel_dequeue(struct netchannel *np)
{
	unsigned long head = np->netchan_head;
	struct netchannel_buftrailer *bp = np->netchan_queue[head];

	BUG_ON(np->netchan_tail == head);
See?
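To spell it out (my reading of your indices, with the usual wrap-around assumed):

	/* Freshly initialized channel: */
	np->netchan_tail = 0;
	np->netchan_head = 0;

	/*
	 * netchannel_enqueue() now sees tail == head and returns -ENOMEM,
	 * so the very first enqueue on an *empty* queue already fails.
	 *
	 * And if you instead let enqueue proceed on tail == head, then
	 * once the producer wraps around and catches up with the consumer
	 * the queue is completely *full* with the very same tail == head,
	 * and the BUG_ON() in __netchannel_dequeue() fires on a queue
	 * that actually holds valid buffers.
	 */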
What about something like this:
struct netchannel {
	/* This is only read/written by the writer (producer) */
	unsigned long write_ptr;
	struct netchannel_buftrailer *netchan_queue[NET_CHANNEL_ENTRIES];

	/* This is modified by both */
	atomic_t filled_entries; /* cache_line_align this? */

	/* This is only read/written by the reader (consumer) */
	unsigned long read_ptr;
};
This would prevent that ambiguity from the beginning and still let us use the full queue size (all NET_CHANNEL_ENTRIES slots).
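Just as a sketch of the enqueue/dequeue pair on top of that layout (names borrowed from your patch where possible; the modulo wrap-around and the barrier placement are my assumptions, not tested code):

int netchannel_enqueue(struct netchannel *np, struct netchannel_buftrailer *bp)
{
	/* Full now really means all NET_CHANNEL_ENTRIES slots are in use. */
	if (atomic_read(&np->filled_entries) == NET_CHANNEL_ENTRIES)
		return -ENOMEM;

	np->netchan_queue[np->write_ptr] = bp;
	np->write_ptr = (np->write_ptr + 1) % NET_CHANNEL_ENTRIES;

	smp_wmb();	/* slot must be visible before the count */
	atomic_inc(&np->filled_entries);
	return 0;
}

struct netchannel_buftrailer *__netchannel_dequeue(struct netchannel *np)
{
	struct netchannel_buftrailer *bp;

	/* Empty and full are distinct now: 0 vs. NET_CHANNEL_ENTRIES. */
	BUG_ON(atomic_read(&np->filled_entries) == 0);
	smp_rmb();	/* pairs with the smp_wmb() in enqueue */

	bp = np->netchan_queue[np->read_ptr];
	np->read_ptr = (np->read_ptr + 1) % NET_CHANNEL_ENTRIES;

	atomic_dec(&np->filled_entries);
	return bp;
}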
If cache-line bouncing because of the shared filled_entries becomes an issue, you are already receiving or sending a lot anyway.

Then you can enqueue and dequeue multiple entries and commit the counts later, with an atomic_read(), atomic_add() and atomic_sub() on filled_entries.
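Roughly like this, with a hypothetical batch helper (netchannel_enqueue_batch() is my name, not from your tree; exclusive producer access assumed as before):

int netchannel_enqueue_batch(struct netchannel *np,
			     struct netchannel_buftrailer **bps, int n)
{
	int room = NET_CHANNEL_ENTRIES - atomic_read(&np->filled_entries);
	int i;

	if (n > room)
		n = room;

	for (i = 0; i < n; i++) {
		np->netchan_queue[np->write_ptr] = bps[i];
		np->write_ptr = (np->write_ptr + 1) % NET_CHANNEL_ENTRIES;
	}

	smp_wmb();
	atomic_add(n, &np->filled_entries);	/* one shared write per batch */
	return n;
}

The consumer can do the same with atomic_sub() after draining a batch, so the shared cache line is touched once per batch instead of once per buffer.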
Maybe even cheaper with local_t instead of atomic_t later on.
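Only as a sketch of what I mean (local_t ops are per-CPU, so this is only safe if producer and consumer really share a CPU):

#include <asm/local.h>

/* Same layout as above, only the counter type changes. */
struct netchannel {
	unsigned long write_ptr;
	struct netchannel_buftrailer *netchan_queue[NET_CHANNEL_ENTRIES];

	local_t filled_entries;	/* cheap on the local CPU */

	unsigned long read_ptr;
};

static inline void netchannel_commit_produce(struct netchannel *np, long n)
{
	local_add(n, &np->filled_entries);
}

static inline void netchannel_commit_consume(struct netchannel *np, long n)
{
	local_sub(n, &np->filled_entries);
}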
But I guess the cache-line bouncing will be a non-issue, since the whole point of netchannels was to keep traffic as local to a CPU as possible, right?
Would you like to see a sample patch relative to your tree, to show you what I mean?
Regards
Ingo Oeser