Subject: Re: [PATCH 10/11] ASoC: qdsp6-dai: add gapless support


On 08/07/2020 14:32, Pierre-Louis Bossart wrote:
>
>>>> Add support to gapless playback by implementing metadata,
>>>> next_track, drain and partial drain support.
>>>>
>>>> Gapless on Q6ASM is implemented by opening 2 streams in a single asm
>>>> stream
>>>
>>> What does 'in a single asm stream' mean?
>>
>>
>> So in QDSP6 ASM (Audio Stream Manager) terminology we have something
>> called an "asm session" for each ASoC FE DAI. Each asm session can be
>> connected to multiple streams (up to 8, I think), but there will only
>> be one active stream at any time. Also, there is only a single data
>> buffer associated with each asm session.
>>
>> For the gapless use case, we can keep two streams open for one
>> asm session, allowing us to fill in data on the second stream while
>> the first stream is playing.
>
> Ah, that's interesting, thanks for the details. So you have one DMA
> transfer and the data from the previous and next track are provided in
> consecutive bytes in a ring buffer, but at the DSP level you have a
> switch that will feed data for the previous and next tracks into
> different decoders, yes?

Yes, that's true; we can drain and stop the first stream and then start
the next stream, which does the switch!
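
Just to illustrate the toggle (a toy sketch only; the printf calls stand
in for the actual q6asm stream open/run/stop commands, the helper and
struct names are made up and nothing here is the real driver code):

#include <stdio.h>

struct gapless_session {
        int cur_stream;         /* stream id currently rendering */
        int next_stream;        /* stream id being filled for the next track */
};

/* next_track hook: open the session's second stream for the new track */
static void session_next_track(struct gapless_session *s)
{
        s->next_stream = (s->cur_stream == 1) ? 2 : 1;
        printf("open stream %d for the next track\n", s->next_stream);
}

/* partial_drain hook: play out the current stream, then switch over */
static void session_partial_drain(struct gapless_session *s)
{
        printf("drain and stop stream %d\n", s->cur_stream);
        printf("start stream %d\n", s->next_stream);
        s->cur_stream = s->next_stream;
}

int main(void)
{
        struct gapless_session s = { .cur_stream = 1, .next_stream = 2 };

        session_next_track(&s);         /* userspace queued the next track */
        session_partial_drain(&s);      /* userspace drained the first track */
        return 0;
}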

>
> If that is the case, indeed the extension you suggested earlier to
> change the profile is valid. You could even change the format I guess.
>

Exactly, we did test this patchset along with the suggested extension!


> To avoid confusion I believe the capabilities would need to be extended
> so that applications know that gapless playback is supported across
> unrelated profiles/formats. The point is that you don't want a
> traditional implementation to use a capability that isn't supported in
> hardware or will lead to audio issues.
>
>>>> and toggling them on next track.
>>>
>>> It really seems to me that you have two streams at the lowest level,
>>> along with the knowledge of how many samples to remove/insert and
>>> hence could do a much better job - including gapless support between
>>> unrelated profiles and cross-fading - without the partial drain and
>>> next_track mechanism that was defined assuming a single stream/profile.
>> At the end of the day it's a single session with one data buffer but
>> with multiple streams.
>>
>> Achieving cross fade should be easy with this design.
>
> looks like it indeed.
>
>> We need those hooks for partial drain and next track to allow us to
>> switch between streams and to pass silence information to the
>> respective stream ids.
>
> right, but the key point is 'switch between streams'. That means a more
> complex/capable implementation that should be advertised as such to
> applications. This is not the default behavior assumed initially: to
> allow for minimal implementations in memory-constrained devices, we
> assumed gapless was supported with a single decoder.
>
> Maybe the right way to do this is extend the snd_compr_caps structure:
>
> /**
>  * struct snd_compr_caps - caps descriptor
>  * @codecs: pointer to array of codecs
>  * @direction: direction supported. Of type snd_compr_direction
>  * @min_fragment_size: minimum fragment supported by DSP
>  * @max_fragment_size: maximum fragment supported by DSP
>  * @min_fragments: min fragments supported by DSP
>  * @max_fragments: max fragments supported by DSP
>  * @num_codecs: number of codecs supported
>  * @reserved: reserved field
>  */
> struct snd_compr_caps {
>     __u32 num_codecs;
>     __u32 direction;
>     __u32 min_fragment_size;
>     __u32 max_fragment_size;
>     __u32 min_fragments;
>     __u32 max_fragments;
>     __u32 codecs[MAX_NUM_CODECS];
>     __u32 reserved[11];
> } __attribute__((packed, aligned(4)));
>
>
> and use a reserved field to provide info on capabilities, and filter the
> set_codec_params() addition based on this capability - i.e. return
> -ENOTSUP in 'traditional' implementations based on a single
> 'stream'/decoder instance.

Sounds good!
I will give it a go and see how it ends up!
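
Something along these lines maybe (a very rough sketch only;
SNDRV_COMPR_CAP_MULTI_STREAM and snd_compr_set_codec_params() are
made-up names for the new bits we are discussing, the reuse of
reserved[0] is just an assumption, and I've used -EOPNOTSUPP as the
in-kernel spelling of ENOTSUP):

#include <linux/errno.h>
#include <linux/string.h>
#include <sound/compress_driver.h>

/* made-up capability bit carried in one of the reserved words */
#define SNDRV_COMPR_CAP_MULTI_STREAM    (1 << 0)

static int snd_compr_set_codec_params(struct snd_compr_stream *stream,
                                      struct snd_compr_params *params)
{
        struct snd_compr_caps caps;
        int ret;

        if (!stream->ops->get_caps)
                return -ENXIO;

        memset(&caps, 0, sizeof(caps));
        ret = stream->ops->get_caps(stream, &caps);
        if (ret)
                return ret;

        /* 'traditional' single-decoder devices don't set the flag */
        if (!(caps.reserved[0] & SNDRV_COMPR_CAP_MULTI_STREAM))
                return -EOPNOTSUPP;

        /* capable device: let the driver apply params for the next track */
        return stream->ops->set_params(stream, params);
}

That way a 'traditional' single-decoder implementation never sees the
new params path at all.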


--srini
