Mailing List Archive

Deinterlacer settings
I'd like to propose a change to how deinterlacers are configured, as
the current setup has become unwieldy to the extent that it is almost
unmaintainable.

Proposal:

- remove existing specific, named deinterlacer selections for main and fallback.

- replace with something along the lines of the following new settings:
"Deinterlacer quality (normal)": None, Fast, Medium, High, Advanced
"Prefer GPU deinterlacers": Yes/No
"Prefer GPU driver deinterlacers": Yes/No
"Deinterlacer quality (double rate)": None, Fast, Medium, High, Advanced
"Prefer GPU deinterlacers": Yes/No
"Prefer GPU driver deinterlacers": Yes/No

I believe that gives the same flexibility as the current settings
without tying the code to specific named deinterlacers. Remember these
are per-profile settings.

Selecting 'Prefer GPU deinterlacers' would obviously select shader-based
deints over CPU-based, and 'Prefer GPU driver deinterlacers' would
prefer VDPAU/VAAPI etc. over all others. The code can then make some
informed decisions on which deinterlacer to use and at what stage in
the decode-to-presentation process, as well as falling back where
needed.

With this setup, under the hood, the deinterlacer selections for Fast,
Medium, High and Advanced would look something like:

CPU        - Onefield, Linear blend, Kernel, Yadif
OpenGL/D3D - Onefield, Linear blend, Kernel, new motion adaptive shader
VAAPI      - Bob, Weave, Motion adaptive, Motion compensated
VDPAU      - Bob, Temporal, Spatial, ???
OpenMAX    - Linedouble, Fast, Advanced, ??

Where Onefield and Bob are interchangeable as 1x and 2x versions.
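
As a rough illustration, the under-the-hood lookup might be expressed
something like this (a sketch only - the enum and array names are
invented here, not actual code):

// Sketch: each engine maps the generic quality levels onto whatever
// it actually implements; a null entry means 'no deinterlacing'.
enum DeintQuality { DEINT_NONE, DEINT_FAST, DEINT_MEDIUM, DEINT_HIGH, DEINT_ADVANCED };

static const char* kCPUDeints[] =
    { nullptr, "onefield", "linearblend", "kernel", "yadif" };
static const char* kGLSLDeints[] =
    { nullptr, "onefield", "linearblend", "kernel", "motionadaptive" };
static const char* kVAAPIDeints[] =
    { nullptr, "bob", "weave", "motion_adaptive", "motion_compensated" };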

Background

We now have a range of deinterlacer options, not just in terms of
CPU/GPU/driver based but also in terms of at what stage deinterlacing
occurs.

As an example, with VAAPI decode only (VAAPI2), deinterlacing could
occur in the decoder using either CPU or VAAPI based deinterlacers, or
at playback using the CPU or GLSL shaders (you could even use VAAPI
again at this stage). With the new VAAPI zero-copy code in the render
branch, the current setup cannot cope with a VAAPI profile that
expects to use VAAPI deinterlacer names but where OpenGLVideo is
actually presented with raw video frames that (currently) need to pass
through the GLSL shaders.

The current code is inflexible - especially with its use of strings to
explicitly identify each deinterlacer - and has started to break. A
much simpler and more flexible approach would be to use a simple
flag/enum that encapsulates the user preferences.
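
Purely as a sketch of the sort of thing I mean (names off the top of
my head, pairing with the DeintQuality enum sketched above):

// Sketch only: a quality level plus OR-able preference flags,
// stored per profile instead of explicit deinterlacer names.
enum DeintPref
{
    DEINT_PREF_NONE   = 0x0,
    DEINT_PREF_SHADER = 0x1,  // "Prefer GPU deinterlacers"
    DEINT_PREF_DRIVER = 0x2   // "Prefer GPU driver deinterlacers"
};

struct DeintUserSetting
{
    DeintQuality quality;  // None/Fast/Medium/High/Advanced
    unsigned int prefs;    // OR'd DeintPref flags
};

DeintUserSetting normal;      // "Deinterlacer quality (normal)"
DeintUserSetting doublerate;  // "Deinterlacer quality (double rate)"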

Thoughts welcome!

Regards
Mark
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Deinterlacer settings
> Message written by Mark Kendall <mark.kendall@gmail.com> on 16.02.2019 at 11:26:
>
> [...]

Mark,

I want to speak from a user's perspective.
I really like the direction of the changes you propose.

IMHO any change like "yadif 2xHW" to "high quality, CPU intensive" is always welcome, as a user may not know what "yadif" is - but "high quality, CPU intensive" will be quickly understood. This will make the app feel user friendly.

Also, I think it would be good to include the trade-off in the option names.
I mean something like this:

from
"Deinterlacer quality (normal)": None, Fast, Medium, High, Advanced

to
"Deinterlacer quality (normal)": None, LowQuality, MediumQuality, HighQuality, Custom

and in hint/help text:
"LowQuality offers the least CPU/GPU load; MediumQuality offers a trade-off between CPU/GPU load and deinterlacing quality; HighQuality offers the best deinterlacing quality regardless of CPU/GPU load; Custom allows selecting where the major deinterlacing load is allocated: CPU or GPU."

Also, maybe "normal" should be changed to "single rate"?
(as the alternative is "double rate")

And maybe "[single/double] rate" to "1x frame rate" and "2x frame rate"?

br



Re: Deinterlacer settings
On 2/16/19 5:26 AM, Mark Kendall wrote:
> - replace with something along the lines of the following new settings:
> "Deinterlacer quality (normal)": None, Fast, Medium, High, Advanced
> "Prefer GPU deinterlacers": Yes/No
> "Prefer GPU driver deinterlacers": Yes/No
> "Deinterlacer quality (double rate)": None, Fast, Medium, High, Advanced
> "Prefer GPU deinterlacers": Yes/No
> "Prefer GPU driver deinterlacers": Yes/No
> [...]

Hi Mark

Question - What are GPU deinterlacers vs GPU driver deinterlacers?

It sounds good. Just a few things to bear in mind that you may not be
aware of -

There are deinterlacers that work with the decoder in the case of
VAAPI2 and NVDEC (i.e. the video is deinterlaced before we get it from
the decoder).

In the case of OpenMAX the deinterlacer can work with the decoder or the
renderer; currently it is with the renderer.

In the case of mediacodec on the NVidia Shield, the video is automatically
deinterlaced and frame doubled in the decoder without us having any
control over it. Mediacodec on the Fire Stick presents video that is still
interlaced but tells us it is progressive; again, we have no control over it.

Peter
Re: Deinterlacer settings
On Sat, Feb 16, 2019 at 10:26:18AM +0000, Mark Kendall wrote:
> [...]
>
> I believe that gives the same flexibility as the current settings
> without tying the code to specific named deinterlacers. Remember these
> are per-profile settings.

Seems reasonable to me. I do ask that the mapping from generic
quality name to specific deint be documented somewhere more convenient
than just the source code. I think I'd also like to have the deint
method that is in use shown on the playback info screen.

David
--
David Engel
david@istwok.net
Re: Deinterlacer settings
On 16 February 2019 10:26:18 GMT+00:00, Mark Kendall <mark.kendall@gmail.com> wrote:
> [...]

Can you say more about in what way the current system is inflexible and is breaking?

It seems to me a disadvantage of what you propose is I'd have to try 20 different settings (or at least 17) to explore all possibilities. Also some changes in settings would not change which deinterlacer was selected, whereas other setting changes would select subtly different deinterlacers. That's unhelpful when swapping back and forwards between settings trying to decide which you prefer. Lastly there is at least one deinterlacer that doesn't fall into those categories: there is one that has the purpose of driving a TV's own deinterlacer and is required only for synchronisation - perhaps rarely used these days.
Re: Deinterlacer settings
On Sun, 17 Feb 2019 at 10:10, Piotr Oniszczuk <piotr.oniszczuk@gmail.com> wrote:
> Mark,
>
> I want to speak from a user's perspective.
> I really like the direction of the changes you propose.

Good :)

> IMHO any change like "yadif 2xHW" to "high quality, CPU intensive" is always welcome, as a user may not know what "yadif" is - but "high quality, CPU intensive" will be quickly understood. This will make the app feel user friendly.

Piotr,

The wording is entirely flexible - I just chose something off the top
of my head. They can of course also be adjusted in the translations.

Thanks for the feedback.
Regards
Mark
Re: Deinterlacer settings
On Sun, 17 Feb 2019 at 17:57, Peter Bennett <pb.mythtv@gmail.com> wrote:
> Hi Mark
>
> Question - What are GPU deinterlacers vs GPU driver deinterlacers?

This just differentiates between using our own GLSL shaders for
deinterlacing versus those provided by, for example, VAAPI, VDPAU,
OpenMAX etc.

As I've said before, I've always worked on the assumption that the
vendor-provided implementations are better/faster - though this isn't
always the case.

As I said to Piotr, the wording is up for grabs.

> It sounds good. Just a few things to bear in mind that you may not be
> aware of -
>
> There are deinterlacers that work with the decoder in the case of
> VAAPI2 and NVDEC (i.e. the video is deinterlaced before we get it from
> the decoder).

Yes - and this is part of the complexity we now need to deal with.

> In the case of OpenMAX the deinterlacer can work with the decoder or the
> renderer; currently it is with the renderer.

I've just started working on rendering directly to EGL images (which
are mapped to OpenGL textures) - which will require moving the
deinterlacing into the decoder (when OpenMAX is decoding).

> In the case of mediacodec on the NVidia Shield, the video is automatically
> deinterlaced and frame doubled in the decoder without us having any
> control over it. Mediacodec on the Fire Stick presents video that is still
> interlaced but tells us it is progressive; again, we have no control over it.

Yes - hadn't forgotten :) Again, more special cases we need to handle.

Thanks and regards
Mark
Re: Deinterlacer settings
On Sun, 17 Feb 2019 at 18:42, David Engel <david@istwok.net> wrote:

> Seems reasonable to me. I do ask that the mapping from generic
> quality name to specific deint be documented somewhere more convenient
> than just the source code. I think I'd also like to have the deint
> method that is in use shown on the playback info screen.

David,

Documentation is, as ever, our big weakness but I'll put something together.

I had also planned to add deinterlacing (and maybe other detail) to
the playback info screen - though I'm wary of adding too much. My
original aim when the info screen was added was to give live
performance detail - and we run the risk of ruining performance by
trying to monitor and display too much.

Thanks and regards,
Mark
Re: Deinterlacer settings
On Sun, 17 Feb 2019 at 21:23, Paul Gardiner <lists@glidos.net> wrote:
> Can you say more about in what way the current system is inflexible and is breaking?

Off the top of my head, a few examples of the code breaking down.
(N.B. Some of this is fixable in the current code, some are
long-standing issues, and some detail refers to changes that have been
implemented in the render branch - so not in master or fixes.)

- overriding the deinterlacer selection during playback currently
often fails (e.g. switching from a GL deinterlacer to software based)
- when any hardware based decoder is selected and hardware decoding
either fails or is not available for that codec, the code just gives
up. It has no idea what might be an alternative selection.
- if a hardware decoder is in use, the current implementation does not
know what to do if the requested deinterlacer is not available (e.g.
VAAPI on older drivers/boards)
- in the render branch, the concept of hardware specific VideoOutput
classes is almost gone. It will soon be all OpenGL (i.e. no
openglvaapi selection). The code detects whether direct rendering of
that hardware format is supported (and the better method where there
are different possibilities). In some cases, decoding and rendering is
supported but the frames returned to the display code are raw
video frames that need to be processed just like software decoded
frames - the concept of a 'vaapi' deinterlacer is meaningless in this
case and again the code has no way of falling back.
- there are situations (e.g. VAAPI) in which we can support hardware
decoding but rendering is not available. Again, the deinterlacer
selection breaks down but we could actually still use the vaapi
deinterlacers in the decoder - or use software based, or GLSL...
- all of these fallbacks could be handled with the current
implementation but it is a mess...

The current code uses a string to request a deinterlacer (as provided
by the VideoDisplayProfile). Using VAAPI, that string might be
'vaapidoubleratemotion_compensated'.

This encapsulates the idea that the user would like to use VAAPI
deinterlacers, double rate and 'advanced'.

Every section of the code that deals with deinterlacing has to parse
that string to 'decode' its intent. This is inefficient, error-prone
and somewhat archaic.
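
By way of a contrived example (not the actual parsing code - just an
illustration of the kind of string surgery each consumer ends up doing):

#include <QString>

// Contrived sketch of the status quo: every consumer has to re-derive
// the intent from a free-form name.
QString deint = "vaapidoubleratemotion_compensated"; // from the profile
bool doublerate = deint.contains("doublerate");
bool driver     = deint.startsWith("vaapi") || deint.startsWith("vdpau");
QString method  = deint;
method.remove("vaapi");
method.remove("doublerate"); // and every class repeats some variant of this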

Furthermore, as it stands the deinterlacing code is handled in the
decoder (and MythCodecContext), MythPlayer (heuristics about
enabling/disabling deinterlacing and double rate support), VideoOutput
(mostly software based deints), OpenGLVideo (GLSL based deinterlacers)
and the new MythOpenGLInterop classes (for hardware (VAAPI/VDPAU etc.)
deinterlacers). There is significant duplication of code, no mechanism
for consistently handling fallbacks, and it requires cross-thread
communication (e.g. decoder to player) - which may impact performance
and is just undesirable.

The commonality between all of these classes is VideoFrame - i.e. the
actual frame currently coming out of the decoder or the current frame
to be displayed.

With a couple of enums/flags added to VideoFrame, we can set the
user's preferences (1x and 2x), the currently supported deinterlacing
operations and the state of the current frame.

So at any stage in the process, the current owner of that frame can
simply detect (hopefully just by OR'ing the flags) that it needs to
deinterlace, set the state appropriately and pass the frame on. If
at any stage the requested operation is not supported, the class can
flag that that operation is not supported and pass the frame on
untouched. The next class can decide if it is the better/preferred
option and so on. If deinterlacing fails at a later stage, the flags
can be picked up on the next pass and handled appropriately.
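
Something like this is what I have in mind (member and flag names are
only indicative - not a final design, and not the real VideoFrame layout):

// Indicative sketch of the extra VideoFrame state.
enum DeintCap
{
    DEINT_CAP_NONE   = 0x0,
    DEINT_CAP_CPU    = 0x1, // software deinterlacers
    DEINT_CAP_SHADER = 0x2, // GLSL
    DEINT_CAP_DRIVER = 0x4  // VAAPI/VDPAU/OpenMAX etc.
};

struct VideoFrame
{
    // ... existing members ...
    unsigned int deint_wanted1x; // user preference, single rate (OR'd DeintCap)
    unsigned int deint_wanted2x; // user preference, double rate
    bool         deinterlaced;   // has an earlier stage already done the work?
};

// Each stage from decode to presentation then only needs:
bool ShouldDeinterlace(const VideoFrame &frame, unsigned int mycaps, bool doublerate)
{
    unsigned int wanted = doublerate ? frame.deint_wanted2x : frame.deint_wanted1x;
    return !frame.deinterlaced && (wanted & mycaps);
}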

There are still details that I haven't worked through but it seems to
be a better way of handling the current demands.

> It seems to me a disadvantage of what you propose is I'd have to try 20 different settings (or at least 17) to explore all possibilities. Also some changes in settings would not change which deinterlacer was selected, whereas other setting changes would select subtly different deinterlacers. That's unhelpful when swapping back and forwards between settings trying to decide which you prefer.

Yes - some settings changes would have no impact but the code
selecting a subtly different deinterlacer should only be in the case
of a fallback.

> Lastly there is at least one deinterlacer that doesn't fall into those categories: there is one that has the purpose of driving a TV's own deinterlacer and is required only for synchronisation - perhaps rarely used these days.

Firstly, I should point out that there is only one actual 'interlaced'
deinterlacer, which is the software based one. I'm not sure why there
is the option for OpenGL as it has never existed (I think that was my
error many years back - it actually uses 'onefield' - which is not the
same).

When I wrote the 'interlaced' deinterlacer, it only ever worked with a
couple of edge cases. It gave fantastic output on a PS3 frontend that
I had when connected with a good old fashioned scart cable. It
partially worked on other frontends when connected via scart etc but
it was essentially a toss of the coin whether it was in sync - and the
sync was lost after a skip, pause, minor a/v sync jitter etc (and
there is/was no way to work out how to get it in sync). Furthermore,
it generally needs the video classes to exactly match refresh rate,
interlaced mode and output size - otherwise the TV cannot pick up the
fields correctly. I doubt many people actually use anything
other than progressive display settings - it tends to mess with OSD
display if you use an interlaced mode, for example.

Thanks and regards
Mark
Re: Deinterlacer settings
On 19/02/2019 11:16, Mark Kendall wrote:
> When I wrote the 'interlaced' deinterlacer, it only ever worked with a
> couple of edge cases.

Eh! I wrote it... maybe you reviewed it with a few alterations IIRC :-)
Re: Deinterlacer settings
On 19/02/2019 11:16, Mark Kendall wrote:
> On Sun, 17 Feb 2019 at 21:23, Paul Gardiner <lists@glidos.net> wrote:
>> It seems to me a disadvantage of what you propose is I'd have to try 20 different settings (or at least 17) to explore all possibilities. Also some changes in settings would not change which deinterlacer was selected, whereas other setting changes would select subtly different deinterlacers. That's unhelpful when swapping back and forwards between settings trying to decide which you prefer.
>
> Yes - some settings changes would have no impact but the code
> selecting a subtly different deinterlacer should only be in the case
> of a fallback.

That's not quite what I was getting at. Say, when using vdpau, I might
set it for the highest quality, which would presumably result in
Advanced x2 being selected. Then I notice that on occasion I'm seeing
the odd dropped frame. I know that there is also Temporal x2 which I
might prefer, but how would I be able to work out what setting change
to make under your proposed scheme to get Temporal x2? I could make a
few experimental changes, but if I see very little difference, I won't
know whether it is because the results of Advanced and Temporal are
only subtly different or because the setting change I've made didn't
actually change which deinterlacer was selected.
Re: Deinterlacer settings
On Tue, Feb 19, 2019 at 11:16:00AM +0000, Mark Kendall wrote:
> - in the render branch, the concept of hardware specific VideoOutput
> classes is almost gone. It will soon be all OpenGL (i.e. no
> openglvaapi selection). The code detects whether direct rendering of
> that hardware format is supported (and the better method where there
> are different possibilities). In some cases, decoding and rendering is
> supported but the frames returned to the display code are raw
> video frames that need to be processed just like software decoded
> frames - the concept of a 'vaapi' deinterlacer is meaningless in this
> case and again the code has no way of falling back.

Mark, there is an issue with our current mediacodec/opengl
implementation. With interlaced content, one of the first few frames
after a skip (including skips for ff/rew) gets corrupted and includes
image data from two different frames. This problem reportedly doesn't
occur when using Android's native Surface format for rendering. Are
you aware of that issue and have you done anything about it? I've been
meaning to try the render branch to see if it still exists. Also, have
you done anything else in the mediacodec area yet to avoid extra copying?

David
--
David Engel
david@istwok.net
Re: Deinterlacer settings
On Tue, 19 Feb 2019 at 15:59, David Engel <david@istwok.net> wrote:
> Mark, there is an issue with our current mediacodec/opengl
> implementation. With interlaced content, one of the first few frames
> after a skip (including skips for ff/rew) gets corrupted and includes
> image data from two different frames. This problem reportedly doesn't
> occur when using Android's native Surface format for rendering. Are
> you aware of that issue and have you done anything about it? I've been
> meaning to try the render branch to see if it still exists.

I've not seen this but will have a look. That sounds like a reference
frame issue with the active deinterlacer. From our perspective, that
would only happen if using the kernel deinterlacer. Thinking about it,
we don't reset the reference frames after a seek (which we do with
VDPAU) - so there is something to look at there. If it is the
mediacodec deinterlacer doing the same, then I'm not sure how we deal
with that. Perhaps we need to flush the decoder properly (though this
should already be happening).
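
For our own temporal deinterlacers, the sort of fix I have in mind is
simply clearing the reference frames from the seek path - an untested
sketch (class and member names invented here):

#include <deque>

struct VideoFrame;

class KernelDeinterlacer
{
  public:
    void ProcessFrame(VideoFrame *frame); // uses m_refs for temporal filtering
    void SeekReset(void) { m_refs.clear(); } // call on any seek/flush, or the
                                             // first frames after the jump blend
                                             // fields from unrelated pictures
  private:
    std::deque<VideoFrame*> m_refs; // previous/current/next reference frames
};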

> Also, have
> you done anything else in the mediacodec area yet to avoid extra copying?

I've been looking at this but am still largely in the dark. The FFmpeg
mediacodec code accepts a single 'surface' as an option when it is
configured (which we obviously do not currently use). There is no
documentation on how to use it. Looking at the Kodi (or maybe mpv?)
code - there is one implementation that uses this configuration - but
the 'surface' they supply is the window ID and the implementation is
called 'embedded' - so I can only assume that this somehow renders
directly to the screen. Not ideal but could still be workable if we
can still render the OSD etc. on top. Other implementations do not use
FFmpeg. Based on a comment in the libqtav code, there is clearly the
possibility of integrating mediacodec directly into OpenGL - but that
part of the code is closed source and not available :( and it is
unclear whether FFmpeg is used.
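
From my reading of the FFmpeg headers so far, wiring one up would look
roughly like the following - untested, treat it as a sketch, and the
surface jobject itself still has to come from the Java side:

// Untested sketch based on libavutil/hwcontext_mediacodec.h.
extern "C" {
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_mediacodec.h>
#include <libavcodec/avcodec.h>
}

static bool SetMediaCodecSurface(AVCodecContext *avctx, void *surface)
{
    AVBufferRef *ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_MEDIACODEC);
    if (!ref)
        return false;
    AVHWDeviceContext *hwctx = reinterpret_cast<AVHWDeviceContext*>(ref->data);
    AVMediaCodecDeviceContext *mcctx =
        reinterpret_cast<AVMediaCodecDeviceContext*>(hwctx->hwctx);
    mcctx->surface = surface; // an android.view.Surface passed through JNI
    if (av_hwdevice_ctx_init(ref) < 0)
    {
        av_buffer_unref(&ref);
        return false;
    }
    avctx->hw_device_ctx = av_buffer_ref(ref);
    av_buffer_unref(&ref);
    return true;
}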

Regards, Mark
Re: Deinterlacer settings
On Wed, Feb 20, 2019 at 09:23:06AM +0000, Mark Kendall wrote:
> On Tue, 19 Feb 2019 at 15:59, David Engel <david@istwok.net> wrote:
> > Mark, there is an issue with our current mediacodec/opengl
> > implementation. [...]

I watched a half-hour interlaced program last night with mediacodec
and didn't notice the problem. It's possible the problem was fixed
with a Shield update. I'll keep looking for it.

> I've not seen this but will have a look. That sounds like a reference
> frame issue with the active deinterlacer. From our perspective, that
> would only happen if using the kernel deinterlacer. Thinking about it,
> we don't reset the reference frames after a seek (which we do with
> VDPAU) - so there is something to look at there. If it is the
> mediacodec deinterlacer doing the same, then I'm not sure how we deal
> with that. Perhaps we need to flush the decoder properly (though this
> should already be happening).

Our deinterlacing should not be in play at all. The frames are
already deinterlaced when we get them back from mediacodec/ffmpeg. As
I understand it, the problem occurs when we convert the returned
Android surface to an opengl texture (pardon my likely inexact
terminology). Applications which render the surface directly without
converting it to opengl don't exhibit the problem.

> > Also, have
> > you done anything else in the mediacodec area yet to avoid extra copying?
>
> I've been looking at this but am still largely in the dark. The FFmpeg
> mediacodec code accepts a single 'surface' as an option when it is
> configured (which we obviously do not currently use). There is no
> documentation on how to use it. Looking at the Kodi (or maybe mpv?)
> code - there is one implementation that uses this configuration - but
> the 'surface' they supply is the window ID and the implementation is
> called 'embedded' - so I can only assume that this somehow renders
> directly to the screen. Not ideal but could still be workable if we
> can still render the OSD etc. on top. Other implementations do not use
> FFmpeg. Based on a comment in the libqtav code, there is clearly the
> possibility of integrating mediacodec directly into OpenGL - but that
> part of the code is closed source and not available :( and it is
> unclear whether FFmpeg is used.

Aman Gupta <aman@tmm1.net> has been very helpful to us in the past
regarding mediacodec and ffmpeg. He's the main mediacodec guy at
ffmpeg. He's also one of the authors of the Channels app for Android.
He should be able to answer any questions like this that you have.

David
--
David Engel
david@istwok.net
Re: Deinterlacer settings
On Wed, Feb 20, 2019, 3:36 PM David Engel <david@istwok.net> wrote:

>
> Aman Gupta <aman@tmm1.net> has been very helpful to us in the past
> regarding mediacodec and ffmpeg. He's the main mediacodec guy at
> ffmpeg. He's also one of the authors of the Channels app for Android.
> He should be able to answer any questions like this that you have.
>

OK - thanks to the info from Peter and a few subsequent Google searches I'm
starting to see how it works.

Do we have a resident Java/QtAndroid expert?

Regards, Mark

Re: Deinterlacer settings
On 2/20/19 3:02 PM, Mark Kendall wrote:
> OK - thanks to the info from Peter and a few subsequent Google
> searches I'm starting to see how it works.
>
> Do we have a resident Java/QtAndroid expert?

I spent from 2000-2015 developing in Java but on Unix not Android.

Peter
Re: Deinterlacer settings
On Wed, Feb 20, 2019 at 03:11:10PM -0500, Peter Bennett wrote:
> On 2/20/19 3:02 PM, Mark Kendall wrote:
> > Do we have a resident Java/QtAndroid expert?
>
> I spent from 2000-2015 developing in Java but on Unix not Android.

I can fake it with Java. If you find the QtAndroid expert, I want to
speak with her/him. The ridiculously long time to resume after being
backgrounded is still a major problem I'd like to solve.

David
--
David Engel
david@istwok.net
Re: Deinterlacer settings
On 2/20/19 4:56 PM, David Engel wrote:
> The ridiculously long time to resume after being
> backgrounded is still a major problem I'd like to solve.
I am using the Fire Stick a lot and it seems consistent that if you
press the home button, mythfrontend is killed, and starting up again
takes the normal startup time. Is this what is expected in the current
version?

Peter
Re: Deinterlacer settings
On Wed, Feb 20, 2019 at 09:35:56AM -0600, David Engel wrote:
> On Wed, Feb 20, 2019 at 09:23:06AM +0000, Mark Kendall wrote:
> > On Tue, 19 Feb 2019 at 15:59, David Engel <david@istwok.net> wrote:
> > > Mark, there is an issue with our current mediacodec/opengl
> > > implementation. [...]
>
> I watched a half-hour interlaced program last night with mediacodec
> and didn't notice the problem. It's possible the problem was fixed
> with a Shield update. I'll keep looking for it.

I skipped through a 1-hour program 10 seconds at a time last night and
didn't see the problem once. It looks like it got fixed either in an
Nvidia update or in our opengl updates for v30.

David
--
David Engel
david@istwok.net
Re: Deinterlacer settings
On Thu, Feb 21, 2019 at 10:26:17AM -0500, Peter Bennett wrote:
> On 2/20/19 4:56 PM, David Engel wrote:
> > The ridiculously long time to resume after being
> > backgrounded is still a major problem I'd like to solve.
> I am using the Fire Stick a lot and it seems consistent that if you
> press the home button, mythfrontend is killed, and starting up again
> takes the normal startup time. Is this what is expected in the current
> version?

Yes, that is expected. The android:noHistory="true" entry in
AndroidManifest.xml causes the app to be closed when it loses focus,
including, I think, even when the screensaver kicks in. I have that
off in my personal builds as I still have hopes of someday resolving
the underlying problem.

David
--
David Engel
david@istwok.net