Subject: Please advise on realtime application architecture

Hi

I'm currently trying to port a realtime application from an in-house RT
kernel to something standard, preferably preempt-rt linux.

The application controls a set of i/o cards using data computed in
realtime. The cards are relatively slow:
- a single word transfer with inl()/outl() or ioread32()/iowrite32()
takes a microsecond or more;
- setting up a single i/o request requires up to several tens of these
word operations (see the sketch after this list);
- a large number of i/o operations have to be programmed on several
devices in parallel;
- in total, a significant portion of raw CPU time (20% or more) is
needed to execute the raw hardware-control operations.
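
For illustration, programming one request looks roughly like this (the
register layout and names here are hypothetical, just to show the cost):

#include <linux/kernel.h>
#include <linux/io.h>

struct io_request {
	u32 params[24];		/* hypothetical per-request setup words */
	u32 command;		/* hypothetical "go" word */
};

static void program_request(void __iomem *base, const struct io_request *req)
{
	int i;

	/* each write stalls the CPU for ~1us on the bus */
	for (i = 0; i < ARRAY_SIZE(req->params); i++)
		iowrite32(req->params[i], base + i * 4);

	/* the final write arms the operation in hardware */
	iowrite32(req->command, base + 0x100);
}

So roughly 25 word operations at ~1us each is ~25us of raw CPU time for
a single request.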

The application can generate a large number [hundreds] of i/o requests at
a time; later, however, it could "change its mind" and decide to alter
some of those, or cancel the queue altogether.

Writing all generated i/o requests to hardware immediately is not
feasible, since it would use enough CPU to make the main application miss
its deadlines. Also, writing requests that will later be changed or
cancelled is a pure waste of CPU time.

So we once implemented "late/lazy hardware control".
There was a thread dedicated to hardware control that:
- normally ran as an idle thread;
- but was raised to the highest priority whenever an i/o operation had
not yet been programmed and the hardware was due to execute it "soon".

That worked well in our previous environment where thread rescheduling was
very fast and lightweight.

And now we want to move to rt-preempt linux.

Here is a more abstract model of the situation (a rough data-structure
sketch follows the list):
- there is a stream of "requests" that have to be executed on the CPU;
- each request has a deadline; missing that deadline is absolutely
unacceptable;
- for each request, the approximate CPU time is known in advance;
normally it is between 10 and 40 microseconds;
- the number of requests is large; executing all of them consumes 20% of
raw CPU time or even more;
- deadlines are distributed more or less uniformly over time;
- requests are asynchronous, NOT periodic;
- in the common case, a request arrives long before its deadline (99% of
requests: more than 500us in advance; 70% of requests: more than 5 ms),
but in some rare cases the time between request arrival and deadline
could be 100us or less [although the lower margin could probably be
increased by some changes at a higher level];
- a request may be cancelled by the application; that may happen at any
moment up to the deadline;
- cancellation is a common case (say 30% of all requests get cancelled);
- running a cancelled request does no harm other than wasting CPU time;
- requests are preemptible by nature; however, a preempted request will
have to finish execution when resumed, even if cancelled in the meantime;
- monopolizing the CPU for request execution for more than several tens
of microseconds should be avoided when possible.
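
To make the model concrete, here is a minimal sketch of how I picture
the request objects and the pending queue (userspace C; all names are
illustrative; cancellation is just a lazily-checked flag, since running
a cancelled request is harmless):

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

struct request {
	struct timespec deadline;	/* absolute, CLOCK_MONOTONIC */
	bool cancelled;			/* set by the app, checked lazily */
	struct request *next;		/* EDF-ordered singly linked list */
	/* ... hardware programming parameters ... */
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static struct request *queue_head;	/* earliest deadline first */

static bool before(const struct timespec *a, const struct timespec *b)
{
	return a->tv_sec < b->tv_sec ||
	       (a->tv_sec == b->tv_sec && a->tv_nsec < b->tv_nsec);
}

/* O(n) sorted insert; a real implementation might use an rbtree */
static void submit_request(struct request *req)
{
	struct request **p;

	pthread_mutex_lock(&queue_lock);
	for (p = &queue_head; *p; p = &(*p)->next)
		if (before(&req->deadline, &(*p)->deadline))
			break;
	req->next = *p;
	*p = req;
	pthread_mutex_unlock(&queue_lock);
}

/* cancellation only flags the request; executing it anyway is harmless */
static void cancel_request(struct request *req)
{
	req->cancelled = true;
}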

This will likely run on x86-based industrial computers.
Dedicating a CPU core to request processing does not look attractive: we
want to make use of the 80% of the core's time not spent on request
processing, and we are not sure that all installations of the system
will have several cores.

A naive implementation might have a kernel thread running requests in
EDF order; the thread's priority would normally be low, but raised to
SCHED_FIFO 99 from an hrtimer when a deadline comes near, and lowered
back when no deadline is near.
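
The boost mechanism could look something like this (a userspace POSIX
sketch; worker_tid, MARGIN_NS and the watchdog/worker split are my own
illustrative choices, and a timerfd stands in for the hrtimer):

#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>

#define MARGIN_NS 200000	/* illustrative: boost 200us before deadline */

static pthread_t worker_tid;	/* the thread that programs the hardware */

static void set_worker_priority(int prio)
{
	struct sched_param sp = { .sched_priority = prio };

	pthread_setschedparam(worker_tid, SCHED_FIFO, &sp);
}

/* runs in a small high-priority watchdog thread; tfd is a
 * timerfd_create(CLOCK_MONOTONIC, 0) descriptor */
static void boost_when_near(int tfd, struct timespec next_deadline)
{
	struct itimerspec its = { 0 };
	uint64_t expirations;

	/* arm a one-shot timer MARGIN_NS before the deadline */
	its.it_value = next_deadline;
	its.it_value.tv_nsec -= MARGIN_NS;
	if (its.it_value.tv_nsec < 0) {
		its.it_value.tv_nsec += 1000000000;
		its.it_value.tv_sec--;
	}

	timerfd_settime(tfd, TFD_TIMER_ABSTIME, &its, NULL);

	/* read() blocks until the timer fires */
	if (read(tfd, &expirations, sizeof(expirations)) > 0)
		set_worker_priority(99);	/* deadline near: boost */

	/* the worker drops itself back to low priority once it sees no
	 * deadline within MARGIN_NS */
}

Presumably the same shape would work in-kernel with an hrtimer and
sched_setscheduler(), which is what the naive version above describes.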

However, there is a concern that, in trying not to monopolize the CPU
too much, the hrtimer and reschedule overhead will become larger than
the actual request execution time.

So I'm looking for advice on how to implement the request processing.
Maybe there are better ways than the "naive implementation" described
above? What do you think?

Many thanks for any advice.