Subject: Re: [PATCH 4/4] arm64: dts: rockchip: add video codec for rk3399
On Mon, Jan 7, 2019 at 2:30 AM Ayaka <ayaka@soulik.info> wrote:
>
> Hello Ezequiel
>
> > On Jan 7, 2019, at 1:21 AM, Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> wrote:
> >
> >> On Sun, 6 Jan 2019 at 13:16, Ayaka <ayaka@soulik.info> wrote:
> >>
> >>> On Jan 7, 2019, at 12:04 AM, Ezequiel Garcia <ezequiel@collabora.com> wrote:
> >>>
> >>> On Sun, 2019-01-06 at 23:05 +0800, Ayaka wrote:
> >>>>> On Jan 6, 2019, at 10:22 PM, Ezequiel Garcia <ezequiel@collabora.com> wrote:
> >>>>>
> >>>>> Hi Randy,
> >>>>>
> >>>>> Thanks a lot for these patches. They are really useful
> >>>>> to provide more insight into the VPU hardware.
> >>>>>
> >>>>> This change will make the vpu encoder and vpu decoder
> >>>>> completely independent; can they really work in parallel?
> >>>> As I said, it depends on the platform, but with this patch, user space would think they can work at the same time.
> >>>
> >>>
> >>> I think there is some confusion.
> >>>
> >>> The devicetree is one thing: it is a hardware representation,
> >>> a way to describe the hardware, for the kernel/bootloader to
> >>> parse.
> >>>
> >>> The userspace view will depend on the driver implementation.
> >>>
> >>> The current devicetree and driver (without your patches)
> >>> model the VPU as a single piece of hardware, exposing a decoder
> >>> and an encoder.
> >>>
> >>> The V4L driver will then create two video devices, i.e. /dev/videoX
> >>> and /dev/videoY. So userspace sees an independent view of the
> >>> devices.
> >>>
> >> I knew that; the problem is that the driver should not always create a decoder and encoder pair, as they may not both exist on some platforms, and some platforms don't even have an encoder. You can have a look at the rk3328 I posted in the first email as an example.
> >
> > That is correct. But that still doesn't tackle my question: is the
> > hardware able to run a decoding and an encoding job in parallel?
> >
> For the rk3328, yes. You see, I didn't draw them in the same box.
> > If not, then it's wrong to describe them as independent entities.
> >
> >>> However, they are internally connected, and thus we can
> >>> easily avoid two jobs running in parallel.
> >>>
> >> That is what the mpp service does in my patches, handling the relationship between the devices. And it is not easy work: a 4K decode might be blocked by a high-frame-rate encoding job or by another decoder session. The vendor kernel takes more care about this, but not in this version.
> >
> > Right. That is one way to design it. Another way is having a single
> > devicetree node for the VPU encoder/decoder "complex".
> No, you can't assume which ones are in the combo group; it can vary. You see, in the rk3328, the vdpu is paired with an avs+ decoder. That is why I use a virtual device acting as a scheduler.

First of all, thanks for all the input. Having more understanding of
the hardware and of the shortcomings of the current V4L2 APIs is really
important to let us further evolve the API and make sure that it works
for more use cases.
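
Just to check that I follow the rk3328 layout you describe above, here
is a rough sketch of the virtual-scheduler idea in DT form. Everything
in it (node names, compatibles, addresses and the combo-group property)
is made up purely for illustration and is not an existing binding:

	/* Illustrative sketch only, not a real binding. */
	vdpu: video-codec@ff350000 {
		compatible = "rockchip,rk3328-vdpu";
		reg = <0x0 0xff350000 0x0 0x400>;
	};

	avsd: video-codec@ff350400 {
		compatible = "rockchip,rk3328-avs-decoder";
		reg = <0x0 0xff350400 0x0 0x400>;
	};

	/* Virtual scheduler: the codec blocks listed in one combo group
	 * share hardware resources and must not run jobs in parallel. */
	mpp_srv: mpp-service {
		compatible = "rockchip,mpp-service";
		rockchip,combo-group = <&vdpu &avsd>;
	};

(Please correct me if that picture is wrong.)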

As for the Device Tree itself, it doesn't always describe the hardware
100%. Most of the time it carries just the information necessary to
choose and instantiate the right drivers and bind them to the right
hardware resources. The information on which hardware instances on the
SoC can work independently can of course be described in DT (e.g. by
sub-nodes of a video-codec complex, or by a set of phandles such as a
rockchip,shared-instances property), but it's also perfectly fine to
defer this kind of knowledge to the drivers themselves.
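
For illustration only, the two options could look roughly like the
sketches below. The compatibles, addresses and the
rockchip,shared-instances property are hypothetical, not existing
bindings:

	/* Illustrative sketch only, not a real binding. */

	/* Option 1: one node for the whole video-codec complex, with
	 * sub-nodes for the decoder and encoder parts. */
	vcodec: video-codec@ff9a0000 {
		compatible = "rockchip,rk3399-video-codec";
		reg = <0x0 0xff9a0000 0x0 0x800>;

		vdec {
			/* decoder-specific clocks and interrupts would go here */
		};

		vepu {
			/* encoder-specific clocks and interrupts would go here */
		};
	};

	/* Option 2: fully independent nodes, plus a phandle list telling
	 * the driver which instances share the same hardware. */
	vdec: video-decoder@ff9a0000 {
		compatible = "rockchip,rk3399-vdec";
		reg = <0x0 0xff9a0000 0x0 0x400>;
		rockchip,shared-instances = <&vepu>;
	};

	vepu: video-encoder@ff9a0400 {
		compatible = "rockchip,rk3399-vepu";
		reg = <0x0 0xff9a0400 0x0 0x400>;
		rockchip,shared-instances = <&vdec>;
	};

Either way the driver would have enough information to avoid running
jobs on the shared instances at the same time.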

Best regards,
Tomasz
