Subject: Re: [RFC PATCH 1/9] misc: Add Surface Aggregator subsystem
On 9/23/20 6:57 PM, Greg Kroah-Hartman wrote:
> On Wed, Sep 23, 2020 at 05:15:03PM +0200, Maximilian Luz wrote:
>> +/* -- Safe counters. -------------------------------------------------------- */
>> +
>> +/**
>> + * ssh_seq_reset() - Reset/initialize sequence ID counter.
>> + * @c: The counter to reset.
>> + */
>> +static void ssh_seq_reset(struct ssh_seq_counter *c)
>> +{
>> + WRITE_ONCE(c->value, 0);
>> +}
>
> These "counters" are odd, what exactly are they?

The SEQ counter is a sequence counter for transport packets with
roll-over at U8_MAX. As far as I can tell from testing, it doesn't
strictly have to be sequential, but that's the name I gave it after
initially reverse-engineering the protocol (the Windows driver does
keep it in sequence).

This counter is basically used to ensure that data packets can be
matched up with the corresponding ACK (acknowledge) control-packet. The
main reason for it being sequential, or rather where the sequentiality
can help, is packet retransmission detection:

Imagine the EC sends us a data packet and we ACK it, but the ACK is
dropped in transit. After the retransmission timeout, the EC sends us
the same data packet again. Since the counter is sequential, we can
assume that a significant amount of time must pass before the same SEQ
number shows up again. So we can write each SEQ to a fixed-size
rotating list when we first receive it, ACK the packet, and then
ignore-and-ACK any packet with the same SEQ until we've received, let's
say, 16 packets with different SEQs.
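
The receive-side check could then look roughly like this (a sketch
only, not the actual patch code; names and the list size are made up):

#define SSH_RECENT_SEQS         16

static u16 recent_seqs[SSH_RECENT_SEQS] = {
        [0 ... SSH_RECENT_SEQS - 1] = 0xffff,   /* no valid SEQ yet */
};
static int recent_head;

/* Returns true if @seq has already been received (and ACKed) recently. */
static bool ssh_seq_is_retransmit(u8 seq)
{
        int i;

        for (i = 0; i < SSH_RECENT_SEQS; i++) {
                if (recent_seqs[i] == seq)
                        return true;    /* re-ACK, but drop the data */
        }

        /* First sighting: remember it, overwriting the oldest entry. */
        recent_seqs[recent_head] = seq;
        recent_head = (recent_head + 1) % SSH_RECENT_SEQS;

        return false;
}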

I assume that a similar mechanism is implemented on the EC side,
although I don't know the specifics.

The RQID counter is pretty much the same, except that it is used for
requests, rolls over at U16_MAX, and has some reserved IDs at the
beginning (specifically 1 to SSH_NUM_EVENTS, both inclusive) to
differentiate events from responses to requests.
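
In other words, the increment has to skip the reserved range when it
wraps, roughly like this (sketch; the helper name is made up,
SSH_NUM_EVENTS as in the patch):

static u16 ssh_rqid_advance(u16 rqid)
{
        u16 next = rqid + 1;    /* wraps at U16_MAX */

        /* RQIDs 1 to SSH_NUM_EVENTS are reserved for events. */
        if (next != 0 && next <= SSH_NUM_EVENTS)
                next = SSH_NUM_EVENTS + 1;

        return next;
}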

> They seem like a simple atomic counter, but not quite, so you have
> rolled your own pseudo-atomic variable. Are you sure that it works
> properly? If so, how?

I'm fairly sure they work as designed, at least in terms of being safe
for concurrent execution (please do point it out if they're not). The
reset function is only called on init, when no other thread can access
the counter, and the get-and-increment is handled via cmpxchg to
guarantee that each (old) value is returned only once per roll-over and
that the correct successor is stored as the new counter value.
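
For reference, the get-and-increment is essentially of this shape
(sketch, assuming the counter value is a u8; the patch may differ in
details):

static u8 ssh_seq_next(struct ssh_seq_counter *c)
{
        u8 old = READ_ONCE(c->value);
        u8 new = old + 1;       /* wraps naturally at U8_MAX */
        u8 ret;

        /* Retry until no other thread has raced us. */
        while ((ret = cmpxchg(&c->value, old, new)) != old) {
                old = ret;
                new = old + 1;
        }

        return old;
}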

There can be a problem when a new packet/request is put on hold by the
submitter (before actually being submitted) for an overly long time and
the counter rolls over at least once in the meantime: the exact same
packet (or request) ID would then be sent twice in sequence, causing
the EC to drop the second packet and the request to time out.

Note that everything else should work fine with the current
implementation. I.e. matching a request with its response will work (on
the driver side) even with two requests with the same request ID in
sequence, as the pending "set" is actually a list that's traversed in
order, so the first submitted request is matched to the first received
response and the second to the second.
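
Sketched out (struct fields and the function name are made up for
illustration):

struct ssh_request {
        struct list_head node;
        u16 rqid;
        /* ... */
};

/*
 * The pending list is kept in submission order, so the first match is
 * always the oldest pending request with that RQID.
 */
static struct ssh_request *ssh_pending_find(struct list_head *pending,
                                            u16 rqid)
{
        struct ssh_request *r;

        list_for_each_entry(r, pending, node) {
                if (r->rqid == rqid)
                        return r;
        }

        return NULL;
}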

I think that in practice, holding a packet/request this long should not
happen. My current implementation just ignores that issue. Maybe not the
best strategy, really...

> What about just using an ida/idr structure instead? Or just a simple
> atomic counter that avoids the values you can't touch, or better yet, a
> simple number with a correct lock protecting it :)

As mentioned above, I believe that concurrent execution doesn't cause
any issues due to the cmpxchg, so there's no need for a lock here.
AFAIK atomic counters don't provide the custom roll-over needed here,
so implementing this with atomic_t would come down to the same thing,
just with atomic_cmpxchg (semantically, they are the same, right?).
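
I.e. the atomic_t variant would just be the same loop (sketch):

static u8 ssh_seq_next_atomic(atomic_t *c)
{
        int old = atomic_read(c);
        int new;

        do {
                new = (old + 1) & 0xff; /* emulate the u8 roll-over */
        } while (!atomic_try_cmpxchg(c, &old, new));

        return (u8)old;
}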

As far as I can tell, a fairly easy way to fix the duplicate-value
problem itself would be to use IDAs for both; however, the IDs would
need to be kept allocated for some time after the packet/request has
completed, to avoid re-using the same ID immediately and accidentally
triggering retransmission detection on the EC. I don't have a clue how
that detection is implemented on the EC... just that it works with
roll-over counters in the Windows driver, and that the EC itself uses a
roll-over counter for its SEQ values (SEQs sent by the EC and SEQs sent
by the driver are completely independent). I also don't know how that
works in the Windows driver, for that matter.
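
For the RQID case, that would be something like the following (sketch;
the deferred release is exactly the part I don't have a good answer
for):

static DEFINE_IDA(ssh_rqid_ida);

static int ssh_rqid_alloc(void)
{
        /* Skip the reserved event IDs 1 to SSH_NUM_EVENTS. */
        return ida_alloc_range(&ssh_rqid_ida, SSH_NUM_EVENTS + 1,
                               U16_MAX, GFP_KERNEL);
}

static void ssh_rqid_release(u16 rqid)
{
        /*
         * Would have to be delayed somehow, so that the same ID is not
         * handed out again immediately and mistaken for a
         * retransmission by the EC.
         */
        ida_free(&ssh_rqid_ida, rqid);
}

Note that the IDA hands out the lowest free ID, so this would also give
up the (nice-to-have) sequentiality.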

_Theoretically_, the protocol should be able to support multiple
packets waiting for an ACK, but in testing that caused problems with
retransmission after error detection on my Surface Book 2, which is why
it's currently limited to one packet in flight. I'm not sure if MS is
considering changing that, or has changed it on newer devices, which
could influence how retransmission detection behaves; that's another
reason why I'd like to keep the behavior close to what we observe on
Windows.

We could add an "in-use" range to block allocation of new IDs. This
would be fairly cheap, still sequential, and guarantee that no ID is
used twice at the same time. On the other hand, that would also
completely block communication if just one packet is held back before
submission (something that could be avoided with IDAs).
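
Roughly like this (sketch with made-up names; completion would
clear_bit() the ID again, releasing it for re-use):

struct ssh_seq_window {
        spinlock_t lock;
        u8 next;
        DECLARE_BITMAP(in_use, 256);
};

static int ssh_seq_try_claim(struct ssh_seq_window *w, u8 *seq)
{
        int ret = -EBUSY;

        spin_lock(&w->lock);
        if (!test_bit(w->next, w->in_use)) {
                *seq = w->next++;       /* wraps at U8_MAX */
                set_bit(*seq, w->in_use);
                ret = 0;
        }
        spin_unlock(&w->lock);

        /* -EBUSY: the next ID is still held by a pending packet. */
        return ret;
}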

And the last alternative: keep it as it is. As said, this can result in
a dropped packet/request, upon which the caller is encouraged to
resubmit. In that case, I should probably also document this
somewhere... This will likely bite us though if the request throughput
gets large (and thus the time-to-roll-over small), so I think it should
be addressed properly.

In short: concurrent execution of the counter functions works, as far
as I can tell at least, and, as you can see from the long answer, I
need to spend some more time thinking about the duplicate-value
problem. If you've managed to read through this wall of text (sorry
about that) and have any ideas/preferences, please let me know.

Thanks,
Max
