Subject: Re: Shared memory
From: Brad Pepers <ramparts@agt.net>
Date: 31 Jul 1996
> Hey,
>
> I'm working on a project where a process should maintain a huge memory
> array (about 30-40 MB).
>
> Other processes are communicating with this process via connect/bind/accept.
> Each time a connection is made, the server process is forked, but all my
> server processes have to have access to the same memory array.
>
> How do I do this?
>
> shmget can only take 16 MB segments; why? Can I change the #define in
> include/asm/shmparam.h?

I think you can increase the shared memory size to a max of 128 MB. The
linux/include/asm/shmparam.h header sets the system-wide shared memory
maximum (SHMALL) to (1 << 15) pages, so with 4 KB pages that's 128 MB.
Looking at that, I think you should be able to raise SHMMAX to, say, 64 MB,
which is large enough for what you want to do and leaves 64 MB for anything
else that uses shm.
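For what it's worth, here is a minimal sketch of the shmget/shmat route
(none of this is your code; the IPC_PRIVATE key, the 40 MB size, and the
error handling are only illustrative, and it assumes SHMMAX has already
been raised as above). The parent creates and attaches the segment once,
before any fork(), so every forked server child inherits the same mapping:

/*
 * Sketch: one large System V shared memory segment, created and
 * attached in the parent before fork(), shared by all children.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define ARRAY_BYTES (40UL * 1024 * 1024)   /* ~40 MB shared array */

int main(void)
{
    /* IPC_PRIVATE keeps the segment local to this process tree;
       children inherit the attachment across fork(). */
    int shmid = shmget(IPC_PRIVATE, ARRAY_BYTES, IPC_CREAT | 0600);
    if (shmid == -1) {
        perror("shmget");           /* fails if SHMMAX is still 16 MB */
        exit(1);
    }

    char *array = shmat(shmid, NULL, 0);
    if (array == (char *) -1) {
        perror("shmat");
        exit(1);
    }

    /* Mark the segment for removal once the last process detaches. */
    shmctl(shmid, IPC_RMID, NULL);

    memset(array, 0, ARRAY_BYTES);  /* the shared 40 MB array */

    pid_t pid = fork();
    if (pid == 0) {
        /* Child (one per accepted connection): its writes are seen
           by the parent and by every other child. */
        array[0] = 'x';
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    printf("parent sees child's write: %c\n", array[0]);

    shmdt(array);
    return 0;
}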

> I cannot use fork because it makes a new copy of the array... I have read
> about the clone call; can I use it?

You could take a look at one of the threads packages. I think there is one
using clone now (but it's likely still in beta). I wonder what the downsides
would be to using a shared mmap of a 40 MB file?
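A rough sketch of that shared-mmap alternative is below; the filename is made
up and any file pre-extended to the array size would do. MAP_SHARED is what
makes the pages visible to every process that maps the file, including
children created by fork() afterwards:

/*
 * Sketch: map a 40 MB file with MAP_SHARED before forking, so the
 * mapping is the shared array.  Filename and size are illustrative.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define ARRAY_BYTES (40UL * 1024 * 1024)

int main(void)
{
    int fd = open("/tmp/shared-array", O_RDWR | O_CREAT, 0600);
    if (fd == -1) {
        perror("open");
        exit(1);
    }

    /* Extend the file to the full array size before mapping it. */
    if (ftruncate(fd, ARRAY_BYTES) == -1) {
        perror("ftruncate");
        exit(1);
    }

    /* MAP_SHARED: writes are visible to every process mapping the
       file, including children forked after this point. */
    char *array = mmap(NULL, ARRAY_BYTES, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (array == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }

    array[0] = 'x';   /* any forked child sees this */

    munmap(array, ARRAY_BYTES);
    close(fd);
    return 0;
}

The obvious cost of the file-backed mapping is that dirty pages eventually
get written back to disk, but for a mostly read-only array that may not
matter much.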

> Please help...
>
> Thomas Bjoerk
> Denmark

======================================================================
Brad Pepers                                Proud supporter of Linux in
Ramparts Management Group Ltd.             Canada!
ramparts@agt.net
http://www.agt.net/public/ramparts         Linux rules!

