
Feedback on render branch
Hi Mark

Here are a few observations.

**VAAPI acceleration
OpenGL-HW - no deinterlacer is offered
Resolution change is OK - it would fail on old VAAPI
1080i - ticker jerky due to single-frame deinterlace
So it is inferior to the old VAAPI for deinterlacing, but superior for
resolution changes.

**VAAPI decode only
The VA deinterlacers are not being used - if you select one of them, the
output is still interlaced
Video output is corrupt after a change of resolution
1080i - ticker jerky due to interlace
So it is inferior to the old VAAPI2 for both deinterlacing and
resolution changes.

**OpenGL Slim (s/w decode)
Video output is corrupt after a change of resolution

Screen shot of corrupted video after resolution change
see https://imgur.com/LbACR9c

I have not yet taken much of a look at the code in the render branch.
I think that tracing through it during playback would help my
understanding.

I wrote my own deinterlace shader and tested it with a simple program
- I am sure it is inferior, but I am trying out various things as a
learning exercise.
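
It was nothing clever - think simple line blending, along the lines of
the sketch below (an illustration only; the uniform names are made up
and it is certainly not what the render branch does):

// Illustrative sketch only - a naive linear-blend deinterlacer; the
// uniform names are placeholders and this is not the render branch code.
uniform sampler2D s_texture0;   // the interlaced frame
uniform float     m_lineHeight; // height of one video line in texture coords
varying vec2      v_texcoord0;

void main(void)
{
    // Blend each line with the lines above and below so that the two
    // fields no longer comb against each other during motion.
    vec4 above  = texture2D(s_texture0, v_texcoord0 - vec2(0.0, m_lineHeight));
    vec4 centre = texture2D(s_texture0, v_texcoord0);
    vec4 below  = texture2D(s_texture0, v_texcoord0 + vec2(0.0, m_lineHeight));
    gl_FragColor = 0.5 * centre + 0.25 * (above + below);
}

It gets rid of the combing but softens the picture, which I assume is
why the proper deinterlacers are rather more involved.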

I suppose the main complex things that need to be done in shaders are
deinterlacing and conversion from the various YUV encodings to RGB for
display. I plan to look at the code and try to understand how those work.

At some point I hope to try integrating direct NVDEC display.

Let me know if I should look into the issues reported above, and if I
should make updates to the render branch. I don't want to step on what
you are doing.

It seems that debugging a shader is not easy - you can't do printf in
the code or trace through it with gdb to see where something is going wrong.
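
I gather the usual workaround is to write whatever value you want to
inspect into the output colour and judge it by eye, e.g. something like:

// Illustration only: a throwaway fragment shader that visualises the
// texture coordinate as a red/green ramp instead of drawing the video.
varying vec2 v_texcoord0;

void main(void)
{
    // Crude substitute for printf: dump the value you want to inspect
    // into the output colour and judge it by eye.
    gl_FragColor = vec4(v_texcoord0, 0.0, 1.0);
}

Not exactly gdb, but at least it shows whether the inputs are sane.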

Peter

_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Feedback on render branch
On Wed, Apr 3, 2019, 8:48 PM Peter Bennett <pb.mythtv@gmail.com> wrote:

> Hi Mark
>
> Here are a few observations.
>
> **VAAPI acceleration
> OpenGL-HW - no deinterlacer is offered
> Resolution change is OK - it would fail on old VAAPI
> 1080i - ticker jerky due to single-frame deinterlace
> So it is inferior to the old VAAPI for deinterlacing, but superior for
> resolution changes.
>
> **VAAPI decode only
> The VA deinterlacers are not being used - if you select one of them, the
> output is still interlaced
> Video output is corrupt after a change of resolution
> 1080i - ticker jerky due to interlace
> So it is inferior to the old VAAPI2 for both deinterlacing and
> resolution changes.
>

Hi Peter,

Thanks for testing.

With respect to deinterlacing, for the most part the deinterlacers are not
currently hooked up. I decided to wait until all of the different hardware
decoders were converted to OpenGL direct rendering before implementing the
deinterlacer settings changes I mentioned before.

> **OpenGL Slim (s/w decode)
> Video output is corrupt after a change of resolution
>
> Screen shot of corrupted video after resolution change
> see https://imgur.com/LbACR9c


That looks strangely familiar :) I'm aware of this issue (with the same
clip!) - just trying to decide how best to fix it.


> I have not yet taken much of a look at the code in the render branch.
> I think that tracing through it during playback would help my
> understanding.
>

You'll find the code has changed considerably. The VideoOutput classes have
been gutted, OpenGLVideo is now much more straightforward, most of the
video texture handling has been passed off to MythVideoTexture, and
MythOpenGLInterop (and its subclasses) handles the hardware frames.
MythRenderOpenGL has also been largely rewritten.

Apart from the deinterlacer changes, it is 90% there. I have uncommitted
changes for VDPAU (direct rendering and decode only), NVDEC direct
rendering and OpenMAX EGL direct rendering. VAAPI seems to be fine other
than deinterlacing, VideoToolbox is done, and MediaCodec is almost there.
That only leaves Windows - which may have to wait.

The render branch also has changes to allow direct display of a range of
other YUV formats - including 10/12/16-bit YUV420P and NV12 (i.e. 10-bit
video is passed through without loss or conversion).

This is all via VideoOutOpenGL/OpenGLVideo.

Including uncommitted code, the current outstanding issues/bugs are:

- the input change bug above
- MediaCodec seek delay
- MediaCodec input/resolution changes
- MediaCodec frame timing with DVD playback (probably applies to NVDEC as
well)
- VDPAU crash when seeking beyond the end of a video
- proper rendering of 10bit NV12 NVDEC/CUDA frames
- GUI freeze after rendering with NVDEC/CUDA


> I wrote my own deinterlace shader and tested it with a simple program
> - I am sure it is inferior, but I am trying out various things as a
> learning exercise.
>
> I suppose the main complex things that need to be done in shaders are
> deinterlacing and conversion from the various YUV encodings to RGB for
> display. I plan to look at the code and try to understand how those work.
>

The biggest issue is getting the data into textures in the most efficient
way. The colour space conversion is a 'simple' matrix multiplication
(simple for GLSL), while the detail behind the actual matrix is in
VideoColourSpace. There is more work to be done with HDR displays and gamma
adjustment - but that is currently beyond my understanding.
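
Purely as an illustration (this is not the MythTV shader - the names are
made up and the matrix is just a hard-coded limited-range BT.709 example
of what VideoColourSpace works out at runtime), the fragment shader side
of a plain three-plane YUV420P to RGB conversion boils down to:

// Illustration only - not the MythTV shader; names are made up and the
// matrix is a hard-coded limited-range BT.709 example of the sort of
// thing VideoColourSpace computes at runtime.
uniform sampler2D s_texture0; // Y plane
uniform sampler2D s_texture1; // U plane
uniform sampler2D s_texture2; // V plane
varying vec2      v_texcoord0;

// Offsets are folded into the last column so the whole conversion is a
// single mat4 * vec4.
const mat4 yuv_to_rgb = mat4(
     1.1644,  1.1644,  1.1644, 0.0,   // Y column
     0.0,    -0.2132,  2.1124, 0.0,   // U column
     1.7927, -0.5329,  0.0,    0.0,   // V column
    -0.9729,  0.3015, -1.1334, 1.0);  // offset column

void main(void)
{
    vec4 yuva = vec4(texture2D(s_texture0, v_texcoord0).r,
                     texture2D(s_texture1, v_texcoord0).r,
                     texture2D(s_texture2, v_texcoord0).r,
                     1.0);
    gl_FragColor = yuv_to_rgb * yuva;
}

NV12 only changes where U and V are sampled from (one interleaved
texture instead of two), and the higher bit depths mostly change how the
planes are uploaded - the multiply itself stays the same.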

> At some point I hope to try integrating direct NVDEC display.
>

As mentioned, this is almost done.

> Let me know if I should look into the issues reported above, and if I
> should make updates to the render branch. I don't want to step on what
> you are doing.
>

Feel free to get stuck in :)

> It seems that debugging a shader is not easy - you can't do printf in
> the code or trace through it with gdb to see where something is going
> wrong.
>

Yup - I work on the basis that getting something on screen is a positive.
If you have nothing to see, you have no idea where to start :)

Thanks and regards,
Mark

Re: Feedback on render branch
On 4/3/19 6:25 PM, Mark Kendall wrote:
> > At some point I hope to try integrating direct NVDEC display.
>
> As mentioned, this is almost done.
>

Hi Mark

Thanks for the reply.

It seems like you are way ahead of me. I take it you have acquired
NVIDIA hardware for testing NVDEC.

Since you are already screaming ahead with fixing what I had identified,
I will concentrate on trying to understand the OpenGL code so that some
day I can make a meaningful contribution.

Regards
Peter