 
Subject: Another fs/buffer.c patch ...
Sorry to be late with this, but I just spotted it a little while ago.
After get_more_buffer_heads() wakes up from sleeping on &buffer_wait, it
fails to call recover_reusable_buffer_heads(). So even if some async
buffers have been freed in the meantime, it may keep sleeping until some
other task happens to recover the buffer heads for it.
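
Roughly, the wait loop in get_more_buffer_heads() ends up with this shape
after the change (a simplified sketch with the allocation path elided, not
the exact source; the attached patch below is the real change):

static void get_more_buffer_heads(void)
{
	while (!unused_list) {
		/* ... try to allocate a fresh page of buffer heads;
		 * when that fails, we have to wait for old ones to be
		 * released by completing IO ...
		 */
		run_task_queue(&tq_disk);
		sleep_on(&buffer_wait);

		/* The wakeup may have come from an interrupt handler
		 * putting finished async buffer heads on reuse_list;
		 * without this call we would loop around, still find
		 * unused_list empty, and go straight back to sleep.
		 */
		recover_reusable_buffer_heads();
	}
}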

The attached patch corrects this. Only a few lines actually change, but I
moved recover_reusable_buffer_heads() above get_more_buffer_heads() so it
is defined before its new caller, which keeps the compiler happy.

Regards,
Bill

--- fs/buffer.c.old	Sun Apr 27 15:54:43 1997
+++ fs/buffer.c	Wed Jul 2 18:43:11 1997
@@ -919,6 +919,34 @@
 	wake_up(&buffer_wait);
 }
 
+/*
+ * We can't put completed temporary IO buffer_heads directly onto the
+ * unused_list when they become unlocked, since the device driver
+ * end_request routines still expect access to the buffer_head's
+ * fields after the final unlock. So, the device driver puts them on
+ * the reuse_list instead once IO completes, and we recover these to
+ * the unused_list here.
+ *
+ * The reuse_list receives buffers from interrupt routines, so we need
+ * to be IRQ-safe here (but note that interrupts only _add_ to the
+ * reuse_list, never take away. So we don't need to worry about the
+ * reuse_list magically emptying).
+ */
+static inline void recover_reusable_buffer_heads(void)
+{
+	if (reuse_list) {
+		struct buffer_head *head;
+
+		head = xchg(&reuse_list, NULL);
+
+		do {
+			struct buffer_head *bh = head;
+			head = head->b_next_free;
+			put_unused_buffer_head(bh);
+		} while (head);
+	}
+}
+
 static void get_more_buffer_heads(void)
 {
 	struct buffer_head * bh;
@@ -946,35 +974,10 @@
 		 */
 		run_task_queue(&tq_disk);
 		sleep_on(&buffer_wait);
-	}
-
-}
-
-/*
- * We can't put completed temporary IO buffer_heads directly onto the
- * unused_list when they become unlocked, since the device driver
- * end_request routines still expect access to the buffer_head's
- * fields after the final unlock. So, the device driver puts them on
- * the reuse_list instead once IO completes, and we recover these to
- * the unused_list here.
- *
- * The reuse_list receives buffers from interrupt routines, so we need
- * to be IRQ-safe here (but note that interrupts only _add_ to the
- * reuse_list, never take away. So we don't need to worry about the
- * reuse_list magically emptying).
- */
-static inline void recover_reusable_buffer_heads(void)
-{
-	if (reuse_list) {
-		struct buffer_head *head;
-
-		head = xchg(&reuse_list, NULL);
-
-		do {
-			struct buffer_head *bh = head;
-			head = head->b_next_free;
-			put_unused_buffer_head(bh);
-		} while (head);
+		/*
+		 * After we wake up, check for released async buffer heads.
+		 */
+		recover_reusable_buffer_heads();
 	}
 }

@@ -1158,6 +1161,7 @@
 		free_async_buffers(bh);
 		restore_flags(flags);
 		after_unlock_page(page);
+		wake_up(&buffer_wait);
 	}
 	++current->maj_flt;
 	return 0;